std.Io is just an interface: it depends on threads or coroutines, so you need one of those, but it doesn’t care what’s behind them. The terminology is a bit muddy, but I guess by “reactor” you mean a readiness-based event loop and by “proactor” a completion-based one. In that case it doesn’t really matter, because the final API has to be blocking, which is “proactor” by default, so it makes no difference whether you wait for readiness via epoll and then do recv(), or submit the full recv request to io_uring.
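To make the readiness path concrete, here’s a rough sketch using raw `std.posix` on Linux (error handling elided; names as they exist in current std, though they may drift between dev builds):

```zig
const std = @import("std");
const linux = std.os.linux;

// Reactor style: wait until the socket is readable, then do the recv(),
// which at that point won't block. From the caller's perspective this is
// indistinguishable from a completed recv.
fn readyThenRecv(epfd: i32, sock: std.posix.socket_t, buf: []u8) !usize {
    var ev = linux.epoll_event{
        .events = linux.EPOLL.IN,
        .data = .{ .fd = sock },
    };
    try std.posix.epoll_ctl(epfd, linux.EPOLL.CTL_ADD, sock, &ev);

    var events: [1]linux.epoll_event = undefined;
    _ = std.posix.epoll_wait(epfd, &events, -1); // wait for readiness...
    return std.posix.recv(sock, buf, 0); // ...then do the actual read.
    // Proactor equivalent: submit a full recv request (IORING_OP_RECV) to
    // io_uring and wait for its completion entry. Same result either way:
    // the caller just sees "recv completed".
}
```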
There are two things, the interface and the implementations.
The interface was designed to support single-threaded event-loop based operation, backed by stackful coroutines. So yes, it does support that. Nothing from the OS APIs is exposed directly here.
The std.Io.Evented implementation of the interface isn’t finished yet; or rather, it exists, but it’s still incomplete. The Linux version is probably the furthest along, but still far from done.
That’s where zio comes in: it’s basically another implementation of the interface, still not complete because the interface keeps growing, but already usable on all major operating systems. And yes, you can use zio’s implementation of std.Io with a single-threaded event loop.
My existing 0.15 code uses a single thread to dispatch over multiple sockets using non-blocking I/O with epoll / kqueue. It works exceptionally well, but it’s not simple to work with.
Porting to 0.16 introduces some much nicer patterns, but Io.Threaded is the only sane approach as of today within the stdlib ecosystem. I comfort myself by taking some measurements and showing that lots of heavy threads aren’t as bad as they could be, and will definitely do the job in the interim. It’s measurably better than Go already at this stage for the same solution, so that’s a plus at least.
It’s not what I want longer term, but it will do for now. I trust that it will be only a one-line change to turn threads into fibers when it’s all baked… mostly, haha.
Zio looks awesome, great work; I’m highly tempted to jump on board with this. But whether you go zio or std threads today, either way there is still a future point where a refactor is needed. It also looks like the final iteration of Io will eventually be quite different from the current one. (See the discussions on taming the vtable bloat.)
I want optimal, reusable, and not living under a cloud of a future refactor. Can’t have all 3 just yet, so I’ll stretch my timeline out instead.
I’m using the latest dev build available today: 0.16.0-dev.2368+380ea6fb5
and running on Mac and FreeBSD
As an experiment, I tried this:

- Get the basic Io.Threaded instance from juicy main, and use that as the default concurrency.
- Create an Evented IO using the Kqueue fibers as they exist in stdlib; they do seem to work already, with a few caveats.
- Do all the networking using the Threaded IO only: listen / accept / read initial payload / write responses, etc.
- Spawn the “handler” concurrently using the Kqueue fiber io instead of the threaded io. The handler just runs a loop reading events from an Io.Queue, generates a payload, and writes the payload to the socket connection it’s attached to, doing the write via the thread pool IO, obviously.
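In code, the split looks roughly like this. Treat it as pseudocode: the std.Io surface is still moving in 0.16-dev, so the method names, plus `Event`, `Connection`, and `render`, are my placeholders, not confirmed API:

```zig
const std = @import("std");

fn serve(gpa: std.mem.Allocator) !void {
    // Step 1: the basic thread-pool Io as the default concurrency.
    var threaded: std.Io.Threaded = .init(gpa);
    defer threaded.deinit();
    const net_io = threaded.io(); // all real socket work goes through this

    // Step 2: an Evented/Kqueue Io instance from stdlib (init details omitted).
    var evented: std.Io.Kqueue = undefined;
    const fiber_io = evented.io(); // handlers become cheap fibers on this

    // Steps 3+4, per accepted connection: the handler runs on the fiber Io,
    // but is handed the threaded Io for the actual writes.
    var queue: std.Io.Queue(Event) = undefined;
    _ = try fiber_io.async(handler, .{ net_io, &queue, conn });
}

fn handler(net_io: std.Io, queue: *std.Io.Queue(Event), conn: Connection) void {
    while (queue.get()) |event| { // suspends the fiber, not an OS thread
        const payload = render(event);
        conn.write(net_io, payload); // the write itself uses the thread-pool Io
    }
}
```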
It actually works, and it’s all just stdlib. It means I can comfortably have a very large number of persistent connections managed by fibers, without needing a large number of heavy threads to manage them all.
The network IO still goes through a thread pool, but that’s all fine; it’s never been the bottleneck. I’m mostly concerned with how many open persistent connections my app can handle before it all melts down. Using simple fibers here should dramatically raise the ceiling on that limit with the same hardware. Will test properly and report back on that.
(By “a few caveats” with Kqueue: you at least need to hack $ZIG_PATH/lib/std/Io/Kqueue.zig to add all the missing VTable entries so it compiles; just add them to the vtable as undefined. Then, in your app code, avoid calling ANYTHING that is @panic(“TODO”), which is a lot.)
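For reference, the hack looks roughly like this; the field names below are invented for illustration, and the real missing entries are whatever the compiler complains about in your dev build:

```zig
// Inside lib/std/Io/Kqueue.zig, illustrative only:
const vtable: std.Io.VTable = .{
    .read = read, // entries that are actually implemented stay as-is
    // ...
    .notYetImplementedOp = undefined, // hypothetical missing entry, stubbed so it compiles
};
```

Obviously calling a stubbed entry is instant undefined behaviour, which is part of why you have to stay well away from anything unimplemented.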
But for building a fiber that acts like a lightweight actor (i.e. loop on incoming messages with a correct yield, then do something useful with each message), a pure-stdlib approach is actually workable right now, if you are mad enough to try it in the lab.
Production ready today? lol, probably not… but we are not expecting that to be true yet.