Most of it!
There’s a set of benchmarks and examples. Up to this commit (fix: generalize dupe_string to promote_return for non-string HPT retu… · cuzzo/easy-vm@9cda97c · GitHub), you should be able to run all the benchmarks and examples on an x86 Linux machine.
Some of the highlights:
- Sharding as a data-sharing capability (I don’t know how to do this as easily in Rust)
- Array/List/Pool/HashMap etc. all work with sharding out of the box for shared-nothing architectures
- RwLock/Mutex (as :locked and :writeLocked) as additional data-sharing capabilities
- MVCC (:versioned) is in the runtime but not enabled yet, because it requires more testing than I had time for
- Arc/Rc as ownership capabilities (:shared, :multiOwned).
- BG/DO/CONCURRENT all work to spawn tasks concurrently.
- Time as a tense → Futures/Promises/Streams/Observables etc all collapse to ~T
- :soa / structure-of-arrays as a capability
- Everything in WALKTHROUGH.md should be covered by benchmarks and examples, and working
There are two key things I wanted to achieve: separating out the implementation details of achieving concurrency, and monomorphization (I think that’s the right word), so that you need minimal changes outside of your STRUCTS/classes to switch from, say, a sharded/shared-nothing architecture to a shared mutex because you ended up with way more skew than you expected, or whatever. The main point is that the language has everything it needs to generate skew and contention tests to show you how well or poorly a strategy could work; it’s easy to try both, the profiler makes it pretty easy to tell if you’re spending a ton of time in contention or locks or cache misses, and it’s easy to switch strategies to correct it.
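The strategy-swap idea can be illustrated outside the language. This Rust sketch (my own analogy, not the language’s syntax) computes the same result under a shared-mutex strategy and a sharded/shared-nothing strategy; the point is that the result is identical and the call site barely changes — only the sharing strategy does:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Strategy A — shared mutex: every thread contends on one lock.
fn mutex_count(threads: usize, per_thread: u64) -> u64 {
    let total = Arc::new(Mutex::new(0u64));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let t = Arc::clone(&total);
            thread::spawn(move || {
                for _ in 0..per_thread {
                    *t.lock().unwrap() += 1;
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let v = *total.lock().unwrap();
    v
}

// Strategy B — sharded / shared-nothing: each thread owns its shard and
// results are combined once at the end. No lock contention, but a skewed
// workload can leave shards unevenly loaded.
fn sharded_count(threads: usize, per_thread: u64) -> u64 {
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            thread::spawn(move || {
                let mut local = 0u64;
                for _ in 0..per_thread {
                    local += 1;
                }
                local
            })
        })
        .collect();
    handles.into_iter().map(|h| h.join().unwrap()).sum()
}

fn main() {
    // Same answer either way; which one is faster depends on contention/skew.
    assert_eq!(mutex_count(4, 10_000), 40_000);
    assert_eq!(sharded_count(4, 10_000), 40_000);
}
```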
This minimal interpreter for the language (I plan to make live debugging as easy as Ruby’s) did pass all its tests up to the commit I linked: easy-vm/examples/minivm at master · cuzzo/easy-vm · GitHub. I still haven’t figured out how to do distribution. Originally I was going to build a VM instead of transpiling to Zig, but I quickly learned it would be WAY more effort than I thought to be competitive with the BEAM.
There’s also an example of a minimal Dragonfly/Redis TCP cache and JSON server that uses FFI.
Stack-smash protection is implemented in an LLVM machine pass and is NOT currently turned on by default; it should add about 3% overhead (not included in the benchmarking numbers). The language auto-detects stack sizes, so unless you have a reentrant function (which the language forces you to annotate), you don’t have to worry about stack smashing (theoretically — I’m sure there are bugs you can find if you look, though several benchmarks test it). In STRICT mode, you can’t use reentrant functions in fibers (you can in threads), so you don’t have to pay the stack-smash-protection fee. FFI trampolines onto the main stack like Go, so that shouldn’t be an issue. There’s some benchmarking and testing for that, but no guarantees.
Lock sorting may or may not exist (probably not). Deadlock protection is the next thing I wanted to test, but I am sidetracked on other things at the moment.
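For reference, the technique “lock sorting” refers to is the classic deadlock-avoidance fix: every thread acquires locks in a single global order, so no cycle of waiters can form. A Rust sketch of the idea (illustrative only, not tied to this project’s code — the global order here is just the account index):

```rust
use std::sync::Mutex;
use std::thread;

// Lock sorting: always lock the lower index first. Two opposite transfers
// can then never each hold one account while waiting on the other.
fn transfer(accounts: &[Mutex<i64>], from: usize, to: usize, amount: i64) {
    assert_ne!(from, to);
    let (lo, hi) = if from < to { (from, to) } else { (to, from) };
    let mut lo_guard = accounts[lo].lock().unwrap();
    let mut hi_guard = accounts[hi].lock().unwrap();
    let (src, dst) = if from < to {
        (&mut *lo_guard, &mut *hi_guard)
    } else {
        (&mut *hi_guard, &mut *lo_guard)
    };
    *src -= amount;
    *dst += amount;
}

fn main() {
    let accounts = [Mutex::new(100), Mutex::new(100)];
    thread::scope(|s| {
        // Transfers in opposite directions — this would deadlock under
        // naive "lock `from`, then lock `to`" ordering.
        s.spawn(|| for _ in 0..1_000 { transfer(&accounts, 0, 1, 1); });
        s.spawn(|| for _ in 0..1_000 { transfer(&accounts, 1, 0, 1); });
    });
    let total: i64 = accounts.iter().map(|a| *a.lock().unwrap()).sum();
    assert_eq!(total, 200); // money conserved, and we got here: no deadlock
}
```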
The Control Plane is very minimal at the moment, so any claims about it are more visionary for the v0.2 release (which, if the LLMs can keep their pace, should land by mid-June). The Control Plane can detect a stack smash for an unknown stack, and you can then configure whether all future stacks upsize (to prevent hot-split thrashing) or just let them hot-split. It can also detect if you’re running tasks at too large a size (though you should always just use the default size, since the size is typically known), and it can detect skewed sharded workloads (but the README says it cannot live-balance skew, so that’s kind of pointless at the moment). In v0.2, tasks will be able to have a :killable capability (and priorities), so you can kill them if they use too many resources or if you’d rather kill them than hot-split the stack.
STRICT mode compilation doesn’t exist yet. Right now the compiler silently tracks function effects (IO, file access, heap vs. frame allocation, etc.); in v0.2 you’ll need to annotate them, won’t be able to use reentrant functions in fibers, and so on. STRICT EXTREME will force you into HFT-level C (and the compiler / clear doctor should be able to walk you through how to get there) for any function marked #HOT.
The affine rules are janky. I’d been finding memory leaks in all kinds of edge cases every time I added a new example or made progress on the VM. So I’m in the middle of introducing an MIR (like Rust’s) that will hopefully finally make them reliable (the affine story is the entire story for the language). So HEAD is currently buggier than the commit I linked.
No one’s using this yet (that I know of) so I’m treating master like a dev branch. I should probably grow up on that soon.