I’m just wondering what the status is and if the core team has plans to add it back at all. My core use of Zig is to build HTTP services, and in that space async is really needed.
Hello @fenugurod, welcome to Ziggit!
From Zig Roadmap 2024:
Hi! Here’s the async status summary from the official wiki:
Thanks folks. Ah nice, was not aware of this page. I was mainly looking at the official site.
Have you considered using a library like Zig-aio, a lightweight abstraction over io_uring and coroutines?
This works, but I have a huge dislike for these kinds of approaches, i.e. not having async baked into the language: unless you’re building everything from scratch, this does not compose with other libraries.
i’m curious to hear in what ways an event loop fails to compose with other libraries! i’ve been writing a program that seems to do nothing but compose libxev with other libraries
If a library is doing some I/O, or even worse deals with readiness/completion notifications via IOCP/uring, it may not fit well with the general I/O model/design of an application.
Because if there is no support at the language level, the community will split into adopting different kinds of solutions. If all the libraries are blocking, synchronous code, then yes, it’s much easier; but once one library decides to use libxev and another decides to use libuv is when you’ll start to have issues.
For example, this is a non-issue in Go. When you fetch a package you don’t need to care whether that thing is doing async I/O or not: you assume that if it’s doing I/O, it’s async, because that’s how Go was built. Of course, Go makes tradeoffs that Zig can’t.
OTOH, baking async into a language means lifting OS-specific facilities up to the syntax level, and I’d guess that’s not a good fit for a language like Zig.
I’m going to be unnecessarily provocative:
“async” is a gigantic code smell. Zig should explicitly say “no language async now or in the future”.
“async” “works” if you are only managing network socket resources, which is precisely what Javascript/Typescript and Go were built to do.
The problem comes when your “executor” has to deal with network sockets and file descriptors and GPU events and …
At which point defining “async” gets a lot lot lot harder. Which events get priority? How does allocation work? How does release work? How do you prioritize latency vs throughput? etc.
I have a strong suspicion that “async” necessarily requires a runtime which can define what both “garbage collection” and “executor” genuinely mean.
Since Zig is specifically not dependent upon a runtime, I think Zig is going to have a deep problem defining what “async” should be.
I have been wondering this a lot lately. I am hoping that whatever may be implemented would provide me with some means to prioritize or schedule tasks. But I have no idea how one would do this without “function coloring”, since it may require having a scheduler parameter in the function’s signature, similar to allocator.
I can’t remember properly, but wasn’t there a plan to redesign the std to take an I/O interface the same way we pass allocators to functions? And if so, would that solve the issues regarding async semantics, or am I missing something?
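If that’s the plan you mean, the idea is analogous to `Allocator`: functions that do I/O take some interface value, and the caller decides what backs it (blocking syscalls, a thread pool, io_uring, an event loop). A purely hypothetical sketch, with invented names that are not an actual std API:

```zig
// Hypothetical sketch only: `Io` here is an invented placeholder type,
// not the real standard library interface.
const Io = struct {
    ctx: *anyopaque,
    readFn: *const fn (ctx: *anyopaque, buf: []u8) anyerror!usize,

    pub fn read(self: Io, buf: []u8) anyerror!usize {
        return self.readFn(self.ctx, buf);
    }
};

// A library function is written once against the interface...
fn readSome(io: Io, buf: []u8) !usize {
    return io.read(buf);
}
// ...and the application chooses the concrete implementation,
// the same way it chooses an Allocator today.
```

Whether that alone settles the async semantics is exactly the open question, but it would at least let libraries stay agnostic about the I/O backend.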
It’s understandable that you might think this, given how the keyword is used in other places. Zig’s implementation wasn’t like that at all.
It was a reified stack frame, giving the ability to return from a function call more than once. Just that. Because it was a first class object (in the sense we use that word in systems programming), you could do things like copy it to the heap and use the copy to call it again.
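A minimal sketch of what that looked like, using the pre-removal syntax (this is historical and will not compile with current Zig; it’s shown only to illustrate the reified-frame idea):

```zig
// Historical Zig syntax (since removed from the language); illustration only.
fn counter() void {
    var i: u32 = 0;
    while (true) : (i += 1) {
        suspend {} // hand control back to whoever holds the frame
    }
}

pub fn main() void {
    var frame = async counter(); // the call's stack frame, as a value
    resume frame;                // re-enter counter() where it suspended
    resume frame;
}
```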
The implementation wasn’t perfect, which is why it was regressed. It may not be possible to solve the issues with async in a way which fits the language. But I hope it is; it’s a great feature for a bunch of programming patterns, including some producer-consumer patterns which you may not associate with that keyword.
I don’t disagree with you at all about the stink which other things called async bring to languages which have that as a keyword. But Zig’s implementation wasn’t one of those.
Did I understand correctly that async in Zig meant coroutine and had nothing to do with (truly) asynchronous I/O?
Yes, specifically a ‘stackless’ coroutine, meaning that an async frame could only suspend from that frame, and not arbitrarily deep into the stack.
There is a question here: “Why stackless (async/await) for Swift?” So it seems that “stackless” is used interchangeably with “async”, but why?
And a citation from here:
and the data that is required to resume execution is stored separately from the stack. This allows for sequential code that executes asynchronously (e.g. to handle non-blocking I/O without explicit callbacks)
Why does putting the info needed to resume a coroutine on the heap make asynchronous code possible? Does it mean that stackful coroutines do not allow “async code”?
No, stackful coroutines are a great match for single-threaded concurrency; I prefer to work with them, in fact, all things being equal (they never are). There are at least two asymmetric coroutine libraries for Zig which are up to date with the language and being actively worked on/maintained, and a few more which may or may not be. One of these days I’d like to try writing something using libxev and zigcoro, which is designed to work with it. I picked up some nice patterns combining libuv with Lua coroutines while working on a DARPA project, which I expect would be readily expressed using those libraries.
As I understand the lore, Zig chose stackless for two main reasons. One is that a stack frame has a fixed size. The function call can go as deep down the stack as it needs to, but a suspend point is always from the same frame, so saving the frame for later will always take only the size of that call frame. That means it can be allocated on the stack of the calling function, and copied down and up to call and suspend it. A stackful coroutine basically requires allocation to implement yield, and that’s not something which is desirable to add to the primitive control-flow vocabulary of a language like Zig.
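To make the size point concrete, here is another historical sketch (again, removed syntax that won’t compile today):

```zig
// Historical sketch; @Frame(f) was the comptime-known frame type of f.
fn tick() void {
    suspend {}
}

fn demo() void {
    // Because @Frame(tick) had a fixed size known at compile time, the
    // caller could reserve storage for it on its own stack (or anywhere else):
    var frame: @Frame(tick) = async tick();
    resume frame;
}
// A stackful coroutine instead needs a whole runtime-sized stack of its own,
// which generally means a dynamic allocation per coroutine.
```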
The second reason is that, as much as I love stackful coroutines, they strongly color the code. Zig’s goal was to provide async in a form which solves the function coloring problem, and it came very close. That just isn’t achievable with full stackful coroutines. It might not even be possible to eliminate every tint from a language which has any async facility at all.
But the first try at it did a great job, and I do hope that good solutions to the problems discovered in the process can be found, so that we can get it back.
Is there a reason that this wasn’t called a “delimited continuation” instead of “async”?
A delimited continuation is like a stack where some arbitrary but specific amount (however many until the prompt is reached) stack frames are cut off and thingified as the delimited continuation.
So a single stack frame might be even more limited, I am not sure? (e.g. not suspending from deeper frames)
Anyhow I think delimited continuation is one of those way too fancy words that may scare people away from wanting to use it, or engage with trying to understand the thing.
Personally I prefer just describing the thing as name for the thing (maybe because I am german and we have a lot of words like that), because then you don’t have to learn so many fancy words and what they mean in what context.
And I think mathematicians often go too far down the road of turning everything into a soup of symbols and cryptic names (I sometimes wish math was more like a programming language where I can use go-to-definition instead of having to figure out myself whether the symbols are used in some usual way described somewhere)
I also think delimited continuation comes more from the racket / scheme / lisp community, which tends to be more niche and not so known.
Sometimes it is fun to use fancy words for something you have gotten to work, in some sense it’s programmers naming their babies, but I think it can get out of hand, where every simple thing gets a fancy name. (that is used differently in different language communities)
I wonder whether I would have had a quicker grasp of what a coroutine is, if it was described as alternating stacks, when I first encountered it.