Zio - async I/O framework

The thread pool size is limited: it can only use up to N threads. But unlike async, it can’t run the code in-place when there is no free thread. And unlike concurrent, it can’t fail when it reaches the N limit. In my implementation, the queue is a linked list and the tasks contain the list nodes, so it can never be full. Even if you use a constrained array-based queue, e.g. for work-stealing, there should be an unbounded overflow queue behind it.

If it can’t run the code in-place, then you need somewhere to store the arguments. You can’t use an intrusive linked list for the queue: where would you store the node? You can’t use stack memory either, since you’re about to return from the function.

I just looked at the codebase and I see that spawnBlocking is fallible because it heap-allocates memory for the queue. Looks like it does grow unbounded…? I think this is where I dip out, I don’t like the LLM stench in here, sorry.

Yes, closures need allocations; I meant that the thread count can’t grow unbounded.

I’ve just released another version. From now on, I expect the API to be more or less stable. Some minor changes could be necessary, but I don’t expect anything major.

Three big changes:

  1. None of the public APIs now accept the rt parameter. It actually made it easier for multiple runtimes within the same process to cooperate. Additionally, it’s possible to communicate with the async context from a foreign thread. You can use e.g. zio.Channel or zio.Mutex from anywhere, and it will do the right thing. Removing the parameter also makes it easier to support embedded, which is not a major goal for now, but I want to keep it in mind when making decisions.
  2. You can now use file and network I/O operations outside of the async context. If you use zio.File in a foreign thread, it will use regular blocking syscalls. I’ve done this as an easier upgrade path to Zig 0.16, because I need the equivalent of std.fs to use in a thread pool, and I can’t depend on async file I/O being efficient.

  3. There is now zio.CompletionQueue, which is similar to std.Io.Batch, but more general. It allows you to use any of the event-loop operations from a coroutine. It provides an io_uring-like API no matter which backend you use. You use cq.submit() to add more requests, and cq.wait() or cq.timedWait() to iterate over completions. It uses no allocations internally; it’s all done via intrusive linked lists, and you can scale it to as many operations as you need. You can use it as a small inner poller loop, or for more complex request handling if you don’t want coroutine-per-connection but also want to avoid callbacks.