My thoughts about async

Hey guys, I was going through a Zig tutorial I found on GitHub, basically a bunch of Zig source files with little coding puzzles to solve.

When I hit the async section, I noticed the functionality got removed, and that got me thinking about how I would have implemented it if I were an awesome programmer like the Zig toolchain devs (I'm an amateur :)). Would it be a good design choice if we just declared a block of code async, put all the functions we want to run simultaneously inside it, and had the block return to the outer scope once they all finish?


I think that having the capability to suspend a function, obtain its frame, and later resume it, like the previous model had, is enough for third parties to build their own async concurrency models on top. The language wouldn’t have async or await keywords like it did before, because that would be treading into the territory of picking an async model, like the JS, C#, and Rust camps on one side and Go/Erlang on the other. In my opinion, that part is too high level for Zig. Keeping to the low level of pausing and resuming functions would leave room for building rich APIs on top, be it async/await or coroutines/actors.

It’s like the treatment of strings in Zig. Not having a high-level string type and just staying at the low level of bytes lets you develop any kind of string functionality you want without clashing with a “blessed” high-level API integrated into the language.

And hey, welcome to Ziggit! :smiley_cat:


That makes sense. I have almost zero experience with multithreading, I was just thinking out loud XD.

It’s good that you brought this up for discussion! These days all languages are expected to have some type of async concurrency model, so the fact that Zig currently lacks one is an important issue for many. Zig does have great multithreading support, so I encourage you to explore that area and have some fun with threads. :^)


I like your suggestion for a few reasons, but primarily a philosophical one… excuse me while I dust off my philosophy hat… it’s kinda dusty…

Low level languages should spark our curiosity towards the inner-workings of our machines. Too much work has been done to hide the “implementation details” of concurrency and parallelism and I believe that has something to do with our state of understanding (which is fairly bleak, imo).

I hope that this is the direction they choose to go in instead of “here’s the Zig way of looking at threads”. It could be a while until specific libraries take precedence, but there’s a lot of untapped potential there too.


Language-based async, I think, generally hurts lower-level languages: it hurts when it's used, and at best it is ignored. The two I look to for this are Rust and C++.

Async (such as the coroutines/user threads C++ and Rust went for) hurts latency straight out. Latency is far less forgiving and more difficult to code for than throughput: you can’t scale your way to success. And io_uring doesn’t save you (it doesn’t even help, and has worse latency than epoll).

For throughput it’s good enough, since it lets you scale, and it provides an ecosystem for those who aren’t the best network programmers to work in.

However, that comes with a price, as Rust has learned. When async was being developed there were posts upon posts about how it would fracture the Rust network ecosystem, and after it left nightly and moved into the stable release branch, even people who had worked on it were talking about how it would destroy Rust.

And they’ve been right: you cannot pick up a Rust network library that doesn’t use async now. It sucks all the network code into it. Even libraries with a synchronous interface just use async internally and then block, so what would have been a very slim, simple network library turns into another tokio nightmare. There’s no longer any such thing as simple Rust network code.

Async, the eater of code, dominates the entire stack now, and isn’t very flexible when you try to finely tune network code or do things it didn’t envision. I remember that trying to raise the recv buffer sizes in an async app was an absolute mess: you might raise it in tokio, but your HTTP lib had to support it, and so did your TLS lib, your WebSocket lib, etc. The async model of the network stack doesn’t fit well with the model you get in userland. Another failed idea: setting certain options on the accept socket that needed to be set before the accept (this is the point where I gave up and started looking for other languages, after a few years of Rust).

You wind up with a lot of mediocre libraries and no real way to make the real winners. Great if that is your language’s goal (Go), or if the language’s value-add lies in another area (JS); bad if you are trying to be a high-performance systems language where people write their own TCP stacks (C++).

C++ has language-level coroutines and library-level async, but nobody uses them. You need really high performance requirements to put the effort into them, but not so high that you are pushing the limits of async, so the band that justifies their use is rather thin.

I hope it never comes back to the language. Add to that the issue that io_uring isn’t the best for low-latency message passing (its strength is high-throughput streaming data), yet everybody implements it and forgets about the low-latency crowd (or just tells them it’s good enough, which it never is); all the network libs get sucked into the async hole, the ones that don’t start to form an alternative stack that is incompatible with async, and the network space is split.


I am all for this. For something like OSDev this is literally just the functionality I need: I can have a driver that loops infinitely and waits for events, and the top of the loop can suspend the function and an interrupt can resume it. I don’t need anything much more complicated than that, I imagine. And I think this level of async/await (can we name it something else? Function suspension?) is perfect for a language like Zig. It fits Zig’s zen perfectly.


I think Continuation fits nicely here.

What am I missing? Why do you say this? It seems to be basically the same as C in this regard.

Is it really a continuation (copy of the entire stack)?
Or is it just a pointer to the stack?


I guess I was thinking in comparison with languages like Go, where you have practically zero access to or control over threads. But yeah, it’s pretty much what C has to offer. I find Rust’s threading model superior in that you can obtain errors or values returned from spawned threads with no need for channels, mutexes, or atomics.


That’s a really good question. I guess it depends on the implementation, now that I think of it. But if it’s just dealing with the function frames, a pointer to the frame would be simple and efficient IMO.

On the side of the user? I would be surprised if the implementation managed to do that without using those.

If it is only on the side of the user, then I don’t find it particularly interesting, because then it just seems like a nice syntactic abstraction (if it fits your needs), and I suspect that a Zig library could provide a similar interface.

Async is definitely useful on the UI side, where perceived responsiveness is way, way more important than actual performance. Zig is just too low-level for that, I think. Without garbage-collected text strings, UI code would be really hard to write. I really don’t think it’s possible to create a programming language that can cover all aspects of development anymore.


I am expecting the suspend/resume semantics to allow me to create an async/await or promise/future style interface if I want that, using an event loop implementation like libxev. Is that roughly the level it will sit at?

For UI, one can leverage Lua or even JS again by embedding those and “making it easy” to script the UI parts and have them reloaded without a complete rebuild. Depending on the UI use case, I also think doing it all in Zig might be too much, but if the underlying Zig part exposes the right primitives to render the UI, scripting would make it easy again. That is actually how I got stuck on Node.js for ten years: I was embedding V8 in the proprietary implementations I built, to make them easily extendable and let the customers/users do their specific things. I first played with Lua, which was also great. Then Node.js came along and made it so easy.

Yes, user code. I have no idea how they’re implementing it, but behind the scenes they probably are using some of those concurrency primitives.

It would have to re-implement std.Thread.spawn, allowing for functions that return something other than void. Handling unknown return value types would be challenging, but just for handling thread errors, allowing anyerror!void would suffice.

You can make a case for calling this a one-shot continuation, more specifically a one-frame one-shot continuation.

It’s not an especially powerful construct, but it has the big advantage of bounded size. The compiler can statically allocate room to store the function’s call frame, and this can be copied onto the heap and passed around, or just called later.

I’m not convinced async needs to be a part of the core language. C has libraries like neco, which offers full (stackful, one-shot) coroutines, with a built-in scheduler, and some library support for common patterns like producer/consumer. It needs a sprinkle of platform-specific ASM, but it’s a lot more powerful than what Zig used to provide.

One of my colleagues is looking into implementing a [redacted], which needs delimited continuations: these have to be capable of multi-shot, as well as one-shot, for the scheme (pun intended) to work. Zig’s older async system wouldn’t help much in implementing that, but by the same token, the absence of it isn’t getting in his way. Same basic deal, some custom ASM disguised with ordinary-looking functions that does evil things to the stack.

It’ll be interesting to see how this area of the language progresses. I’m biased toward the combination of stackful coroutines, event loops with context-invoked callbacks, and a bit of lightweight scheduling, and if there are some language primitives which could be added to make that easier and smoother, so much the better. But it’s entirely plausible to implement all of that as library code, so maybe that’s enough.


That’s my kind of programming.