I built a Go-like Green Fiber runtime that is memory safe like Rust, with no garbage collector, and a language that transpiles to Zig to use it

Hi all,

Despite the repo name, it’s not a VM (though it does have one for debugging).

I’ve been working on this for 5 months, and I wanted to share it with you all. I have so much appreciation for this awesome language and community. I hope I can give back, contribute, and get more people using Zig by way of transpilation - it’s one of the most incredible languages I’ve ever had the pleasure of using!

Thank you all for some of the support questions you answered that were instrumental in getting the stack switching and error propagation to work for the Green Fiber runtime.

I’m curious to see where Zig async goes, and if there’s any way I can help out with that initiative.

Cheers,
Brian

7 Likes

I don’t want to be that guy, but it seems like the repository was largely written by AI (see for example fix: repair 8 broken zig test files to match current API · cuzzo/easy-vm@9845bd3 · GitHub with “Co-Authored-By: Claude Opus 4.6”).

I do have to say though that your programming language and use case of Zig looks pretty interesting. Still, I feel like the real “craft” of programming (as in coding manually instead of just wanting the result) is more valued in the Zig community.

3 Likes

I get it. I’m seeing a lot of LLM-built languages. And I understand the skepticism completely.

If you want to see a REALLY bad commit, see here: fix: restore CopyNode handler + CLI option flags lost in merge · cuzzo/easy-vm@ce39e82 · GitHub

Claude ran into a merge conflict, decided to delete half of my app, skipped the tests, and chugged along for a few more commits (never testing anything). Luckily, it’s all in history.

I wrote the basis of the language by hand (including the majority of the runtime in Zig, and a VM, as it was originally going to be more like Erlang). I couldn’t keep spending all my time working on it, so I let LLMs go wild on it. They made unbelievable progress, but turned the architecture into spaghetti and left it completely riddled with bugs. I’ve been reining them in over the past two weeks for this v0.1-pre release, and to the best of my knowledge it’s in quite decent shape - competitive with Go and Rust/Tokio in non-pathological cases.

If you see any parts of the codebase that are still architecturally awful, I’m all ears! But I think it’s honestly in good shape at this point.

It was humbling. They definitely struggle on architecture without guidance, but there is no competing with them on pumping out features.

This would’ve taken me 5 years to do by hand. Claude did the last 80% of it in 1 month; the first (and easiest) 20% took me 3 months.

Ultimately, I’m glad it exists, and there’s zero chance it would’ve ever made it this far on my own.

6 Likes

How much of what your README describes can your project actually do, right now?

1 Like

Most of it!

There’s a set of benchmarks and examples. Up to this commit: fix: generalize dupe_string to promote_return for non-string HPT retu… · cuzzo/easy-vm@9cda97c · GitHub - I believe you should be able to run all the benchmarks and examples on an x86 Linux machine.

Some of the highlights:

  • Sharding as a data sharing capability (I don’t know how to do this as easily in Rust)
    • Array/List/Pool/HashMap etc. all work with sharding out of the box for shared-nothing architectures
  • RwLock/Mutex (as :locked, :writeLocked) as other data-sharing capabilities.
    • MVCC :versioned is in the runtime but not enabled yet, because that requires more testing than I had time for.
  • Arc/Rc as ownership capabilities (:shared, :multiOwned).
  • BG/DO/CONCURRENT all work to spawn tasks concurrently.
  • Time as a tense → Futures/Promises/Streams/Observables etc all collapse to ~T
  • :soa / structure-of-arrays as a capability
  • Everything in WALKTHROUGH.md should be tested in benchmarks and examples and working
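
Roughly, in Rust terms (this is an analogy for readers coming from Rust, not what the transpiler emits): :shared ≈ Arc, :multiOwned ≈ Rc, :locked ≈ RwLock, :writeLocked ≈ Mutex. A minimal sketch of what the :shared + :locked/:writeLocked pairing corresponds to:

```rust
use std::sync::{Arc, RwLock};
use std::thread;

// :shared roughly maps to Arc, :multiOwned to Rc, :locked to RwLock,
// and :writeLocked to Mutex. This shows the Arc<RwLock<T>> combination.
fn parallel_total() -> u64 {
    let counts = Arc::new(RwLock::new(vec![0u64; 4]));

    let handles: Vec<_> = (0..4)
        .map(|i| {
            let counts = Arc::clone(&counts);
            // each spawned thread takes the write lock (≈ :writeLocked access)
            thread::spawn(move || counts.write().unwrap()[i] += 10)
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }

    // readers only need the shared read lock (≈ :locked access)
    let total = counts.read().unwrap().iter().sum();
    total
}

fn main() {
    println!("total = {}", parallel_total());
}
```

The difference in the language is that the capability lives on the type, so the access pattern doesn’t leak into every call site the way the `Arc::clone` / `.write().unwrap()` plumbing does here.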

There are two key things I wanted to achieve: separating out the implementation details of achieving concurrency, and the monomorphization (I think that’s the right word) so that you need minimal changes outside of your STRUCTS/classes to switch from, say, a sharded/shared-nothing architecture to a shared :mutex because you ended up with way more skew than you expected. The main point is that the language has everything it needs to generate skew & contention tests to show you how poorly or well a strategy could work, it’s easy to try both, and the profiler makes it pretty easy to tell if you’re spending a ton of time in contention or locks or cache misses - and easy to switch strategies to correct it.
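
To make the strategy-swap point concrete, here is the same aggregation written both ways in plain Rust (an illustrative sketch of the two strategies, not output from the transpiler) - note how the logic is identical and only the sharing wrapper differs, which is exactly what the capability annotations are meant to isolate:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Strategy 1: sharded / shared-nothing - each thread owns its shard and
// results are merged once at the end (no locks on the hot path).
fn count_sharded(n_threads: usize, per_thread: u64) -> u64 {
    let handles: Vec<_> = (0..n_threads)
        .map(|_| thread::spawn(move || (0..per_thread).sum::<u64>()))
        .collect();
    handles.into_iter().map(|h| h.join().unwrap()).sum()
}

// Strategy 2: shared mutex - all threads contend on one lock. Same result,
// very different behavior under skew and contention.
fn count_locked(n_threads: usize, per_thread: u64) -> u64 {
    let total = Arc::new(Mutex::new(0u64));
    let handles: Vec<_> = (0..n_threads)
        .map(|_| {
            let total = Arc::clone(&total);
            thread::spawn(move || {
                for i in 0..per_thread {
                    *total.lock().unwrap() += i;
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let v = *total.lock().unwrap();
    v
}

fn main() {
    assert_eq!(count_sharded(4, 1000), count_locked(4, 1000));
    println!("both strategies agree");
}
```

In the language, flipping between these is a one-line capability change on the struct; in Rust (or Go) it’s a rewrite of every touch point.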

This minimal interpreter for the language (I plan to make live-debugging as easy as Ruby’s) passed all its tests up to the commit I linked: easy-vm/examples/minivm at master · cuzzo/easy-vm · GitHub - I still haven’t figured out how to do distribution. Originally, I was going to build a VM instead of transpiling to Zig, but quickly learned it would be WAY more effort than I thought to be competitive with the BEAM.

There’s also an example of a minimal Dragonfly/Redis TCP cache and JSON server that uses FFI.

Stack smash protection lives in an LLVM machine pass and is NOT currently turned on by default - it should add about 3% overhead (not included in the benchmarking numbers). The language auto-detects stack sizes, so unless you have a reentrant function (which the language forces you to annotate), you don’t have to worry about stack smashing (theoretically - I’m sure there are some bugs you can find if you look, though several benchmarks test it). In STRICT mode, you can’t use reentrant functions in fibers (you can in threads), so you don’t have to pay the stack-smash-protection fee. FFI trampolines onto the main stack like Go, so that shouldn’t be an issue. There’s some benchmarking and testing for that, but no guarantees.
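
The reason reentrant functions need the annotation: a non-recursive call graph has a worst-case stack depth that’s computable at compile time, while a recursive one depends on runtime input, so no static bound exists. An illustrative sketch in plain Rust (not from the compiler):

```rust
// A non-recursive call chain: the maximum stack depth is fixed and knowable
// at compile time, so a fiber can be given an exactly-sized stack up front.
fn leaf(x: u64) -> u64 {
    x + 1
}
fn mid(x: u64) -> u64 {
    leaf(x) * 2
}

// A reentrant (recursive) function: stack depth scales with runtime input,
// so there is no static bound - this is the case the language forces you
// to annotate, and what stack-smash protection guards against.
fn depth(n: u64) -> u64 {
    if n == 0 { 0 } else { 1 + depth(n - 1) }
}

fn main() {
    println!("mid(1) = {}", mid(1));
    println!("depth(100) = {}", depth(100));
}
```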

Lock sorting may or may not exist (probably not). Deadlock protection is the next thing I wanted to test, but I am sidetracked on other things at the moment.
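
For context, lock sorting here means the classic deadlock-avoidance trick: always acquire multiple locks in one global order, no matter what order the caller passes them in. A Rust sketch of the idea (illustrative only, not from the runtime):

```rust
use std::sync::Mutex;

// Deadlock avoidance by lock ordering: both locks are always taken in a
// fixed global order (here: by address), so two concurrent transfers in
// opposite directions can never each hold one lock while waiting on the other.
fn transfer(from: &Mutex<i64>, to: &Mutex<i64>, amount: i64) {
    let from_addr = from as *const Mutex<i64> as usize;
    let to_addr = to as *const Mutex<i64> as usize;
    if from_addr < to_addr {
        let mut f = from.lock().unwrap();
        let mut t = to.lock().unwrap();
        *f -= amount;
        *t += amount;
    } else {
        let mut t = to.lock().unwrap();
        let mut f = from.lock().unwrap();
        *f -= amount;
        *t += amount;
    }
}

fn main() {
    let a = Mutex::new(100);
    let b = Mutex::new(0);
    transfer(&a, &b, 30);
    transfer(&b, &a, 10);
    println!("a = {}, b = {}", *a.lock().unwrap(), *b.lock().unwrap());
}
```

If the runtime ever does this automatically for the :locked capabilities, that’s where the deadlock-protection work would start.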

The Control Plane is very minimal at the moment, so any claims about it are more visionary for the v0.2 release (which, if the LLMs can keep their pace, should land by mid-June). The Control Plane can detect a stack smash for an unknown stack, and you can then configure whether all future stacks upsize (to prevent hot-split thrashing) or whether you want to just let them hot-split. It can also detect if you’re running tasks at too large a size (but you should always just use the default size, since the size is typically known). It can also detect skewed sharded workloads (but as the README says, it cannot live-balance skew yet - so it’s kind of pointless at the moment). In v0.2, tasks will be able to have a :killable capability (and priorities) so you can kill them if they use too many resources, or if you’d rather kill them than hot-split the stack.

STRICT mode compilation doesn’t exist yet. Right now, the compiler silently tracks function effects - like IO, file access, heap vs. frame allocation, etc. In v0.2 you’ll need to annotate them, won’t be able to use reentrant functions in fibers, and so on. STRICT EXTREME will force you to an HFT-level-of-C discipline (and the compiler / clear doctor should be able to walk you through how to get there) for any function marked #HOT.

The affine rules are janky. I’d been finding memory leaks in all kinds of edge cases every time I added a new example or made progress on the VM. So I’m in the middle of introducing an MIR (like Rust’s) that will hopefully finally make that more reliable (as it’s the entire story for the language). So HEAD is currently buggier than the commit I linked.

No one’s using this yet (that I know of) so I’m treating master like a dev branch. I should probably grow up on that soon.