A long time ago (probably in the second half of the ’90s) I saw a ~40-line implementation of coroutines for DOS in Turbo Pascal, and yes, it was exactly this: switching stacks. At that time I thoroughly studied that code (I still have it somewhere…) and thought I kind of understood everything. Now we have more or less similar stuff under various names, goroutines, fibers, greenlets, async (why on earth?!?), and I understand nothing.
I found this article very good: “Why async Rust?”
It explains why async Rust is the way it is (as of October 2023). It starts by explaining what the design space for async can be: stackless / stackful and cooperative / preemptive. This part is very interesting and should be language-agnostic. The latter part then explains why async Rust is what it is, and can be skipped if one is not interested.
I will allow myself to be equally provocative in the opposite direction:
Blocking IO is not real. It doesn’t exist and is just some smoke-and-mirrors illusion created by the UNIX process model.
Almost nothing is really blocking: you first ask the hardware to do something, and then you get notified (interrupted) when that happens (or you straight up poll the hardware once in a while).
The natural interface for IO syscalls is that you submit an IO request and then at some later point get notified about its completion. And, naturally, you can just issue a bunch of syscalls before the first one completes.
The API where you sort-of freeze while a syscall is executed is a runtime abstraction hard-coded into the kernel.
Really, the world is evented from top to bottom, and we should figure out a natural model for programming it. There are two principal obstacles in the way:
- As much as POSIX imposes the blocking-IO abstraction, there isn’t really a good way to escape it. io_uring is a much more natural expression of what actually happens, but it’s not a widely available vendor-agnostic API yet (a toy sketch of the submit/complete shape follows after this list).
- I don’t think we’ve figured out how to write low-level evented programs yet! High-level is clear — just implement cheap dynamic stacks at runtime, like Go, or pthreads, or Java. But for the low-level state machine stuff, it feels like a pretty much open problem. Rust perhaps gets quite close to what you’d want, but it still isn’t quite there.
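To make the submit-then-get-notified shape concrete, here is a minimal toy sketch in Zig. Everything in it (`ToyKernel`, `Request`, `Completion`, `submit`, `reap`) is invented for illustration; it is not io_uring or any real kernel interface, just the shape of the idea: queue requests, carry on, and harvest completions later.

```zig
const std = @import("std");

// Toy model only: a made-up "kernel" with a submission queue and a completion
// queue, to show the shape of submit-then-get-notified IO.
const Request = struct { id: u32, fd: i32 };
const Completion = struct { id: u32, result: i32 };

const ToyKernel = struct {
    pending: [8]Request = undefined,
    pending_len: usize = 0,

    // "Ask the hardware to do something" and return immediately.
    fn submit(self: *ToyKernel, req: Request) void {
        self.pending[self.pending_len] = req;
        self.pending_len += 1;
    }

    // Later, completions show up; here we just pretend everything finished.
    fn reap(self: *ToyKernel, out: []Completion) usize {
        const n = self.pending_len;
        for (self.pending[0..n], 0..) |req, i| {
            out[i] = .{ .id = req.id, .result = 0 };
        }
        self.pending_len = 0;
        return n;
    }
};

pub fn main() void {
    var kernel = ToyKernel{};

    // Issue a bunch of requests before the first one "completes".
    kernel.submit(.{ .id = 1, .fd = 3 });
    kernel.submit(.{ .id = 2, .fd = 4 });

    var done: [8]Completion = undefined;
    const n = kernel.reap(&done);
    for (done[0..n]) |c| {
        std.debug.print("request {d} completed with result {d}\n", .{ c.id, c.result });
    }
}
```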
… to me, the phrase “state machine” is like a red rag to a bull
Take a look at this. There is also a long description, but it’s in Russian.
One possible approach is event-driven state machines as a concurrency model.
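For what it’s worth, here is a minimal sketch of what I mean, in Zig. The names (`Event`, `State`, `Machine`) and the events themselves are invented for the example; in a real program the events would come from `epoll`/`poll`/io_uring or a message queue.

```zig
const std = @import("std");

// Hypothetical events and states for a tiny reader machine.
const Event = enum { readable, eof };
const State = enum { idle, reading, done };

const Machine = struct {
    state: State = .idle,

    // Feed events in, switch on (state, event), move to the next state.
    fn step(self: *Machine, event: Event) void {
        switch (self.state) {
            .idle => switch (event) {
                .readable => {
                    std.debug.print("start reading\n", .{});
                    self.state = .reading;
                },
                .eof => self.state = .done,
            },
            .reading => switch (event) {
                .readable => std.debug.print("read some more\n", .{}),
                .eof => {
                    std.debug.print("finished\n", .{});
                    self.state = .done;
                },
            },
            .done => {}, // ignore everything once finished
        }
    }
};

pub fn main() void {
    var m = Machine{};
    // In a real program these events would come from the OS event API.
    for ([_]Event{ .readable, .readable, .eof }) |e| m.step(e);
}
```

The whole concurrency model is then one loop that waits for events and feeds them into such machines.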
Aha… after reading this I got a clearer understanding of all those (as @Sze expressed it) fancy words.
If we have some tasks in the form of functions (in terms of C/Rust/Zig) and some way of transferring control from one task to another as many times as we want, then we have coroutines. Ok. If we have only two coroutines, we can only call the first one from the second and vice versa. Now, what if we have more? Then it is not quite obvious which one should call which.
- if there is some special coroutine (a scheduler), then we have fibers; every other coroutine yields only to the scheduler, and it is obliged to do so: cooperative multitasking, yep.
- if the scheduler is driven by some kind of timer then we have green threads (aka goroutines?..), preemptive multitasking.
Also, I took one more look at the DOS+Pascal coroutine implementation I mentioned and… haha! There are some examples there.
- just 2 coroutines… ok, it’s just “coroutines”
- 2 coroutines plus a special scheduling coroutine, and the first two yield to the scheduler only… fibers! (a toy sketch of this task/scheduler shape follows after this list)
- the scheduler is driven by hardware timer interrupts and forcibly preempts tasks… green threads!!!
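Here is a toy sketch of that task/scheduler shape in Zig. There is no real stack switching here, so it is not a faithful coroutine implementation; the `Task` type and the round-robin loop are just made up to show who yields to whom.

```zig
const std = @import("std");

// A "task" is anything the scheduler can repeatedly step until it is finished.
const Task = struct {
    name: []const u8,
    steps_left: u32,

    // Returning `false` plays the role of "yield to the scheduler";
    // returning `true` means the task is done. (Real fibers would switch
    // stacks here instead of returning.)
    fn step(self: *Task) bool {
        std.debug.print("{s}: working, {d} steps left\n", .{ self.name, self.steps_left });
        self.steps_left -= 1;
        return self.steps_left == 0;
    }
};

pub fn main() void {
    var tasks = [_]Task{
        .{ .name = "a", .steps_left = 2 },
        .{ .name = "b", .steps_left = 3 },
    };
    var remaining: usize = tasks.len;

    // The "special coroutine": a round-robin scheduler that every task
    // yields back to (here: by returning from `step`).
    while (remaining > 0) {
        for (&tasks) |*t| {
            if (t.steps_left == 0) continue;
            if (t.step()) remaining -= 1;
        }
    }
}
```

Drive the scheduler from a timer instead of letting tasks return voluntarily, and you have the preemptive flavour.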
The process/thread/task state diagram is more or less identical in any OS.
If a thread in Windows invokes something like POSIX `sleep`, what happens to it?
It goes to the WAITING state. The name used for this state may differ (WAITING/BLOCKED/SLEEPING), but it’s all the same.
The very word “blocking” may have a negative connotation, kinda “blocking is baaaad”, but it’s not bad at all. If you like thread-based concurrency, regardless of how many cores your CPU has, ok, use simple blocking I/O. If you do not like threads, let your single process/thread sleep/wait in `epoll_wait` or a similar OS API.
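A minimal sketch of that single-thread-sleeping-in-`epoll_wait` style, assuming a recent Zig where the wrappers live under `std.posix` and `std.os.linux` (their exact location has moved between Zig versions, so adjust accordingly). Linux-only, and watching stdin is just for illustration.

```zig
const std = @import("std");
const linux = std.os.linux;
const posix = std.posix;

// Single thread, "blocked" in epoll_wait instead of in read().
pub fn main() !void {
    const epfd = try posix.epoll_create1(0);
    defer posix.close(epfd);

    var ev = linux.epoll_event{
        .events = linux.EPOLL.IN,
        .data = .{ .fd = posix.STDIN_FILENO },
    };
    try posix.epoll_ctl(epfd, linux.EPOLL.CTL_ADD, posix.STDIN_FILENO, &ev);

    var ready: [8]linux.epoll_event = undefined;
    while (true) {
        // The thread sleeps (WAITING/BLOCKED) right here until an event arrives.
        const n = posix.epoll_wait(epfd, &ready, -1);
        for (ready[0..n]) |e| {
            std.debug.print("fd {d} is readable\n", .{e.data.fd});
        }
    }
}
```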
There is a difference between an asymmetric stackful coroutine and a delimited continuation, and neither of those things corresponds to OG Zig `async`, which is an asymmetric stackless coroutine.
@Sze is right that there’s a lot of terminology here, and it can obscure what’s going on, but the terminology reflects essential complexity: there are just a lot of ways of handling control flow once you get away from subroutines and the strict tree-shaped flow graph they provide.
To avoid adjectives, I’m calling the Zig style a ‘frame’, an asymmetric stackful coroutine just ‘coroutine’, and delimited continuation stays the way it is, because there are also continuations to be considered, although I won’t be doing so. There is a sense in which frames delimit a continuation, but that isn’t the usual sense of the term.
A delimited continuation is a reified call stack, which is also an accurate description of a coroutine. The difference is that a coroutine is one-shot, and a delimited continuation is many-shot. All this means is that you can restart a delimited continuation, but not a coroutine.
In dynamic languages (it’s not that different for systems languages, but those involve details I’m choosing to ignore), you `wrap` a function to create a coroutine, and `resume` it by calling the coroutine with its arguments. At any point in the call stack, the coroutine can `yield`, returning some data, and the coroutine can be `resume`d until the actual return of the original function call. Between `resume`s it is said to be suspended; after the return, it’s dead.
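Zig has none of these primitives built in today, but the same dance can be hand-rolled as a “frame”: hoist the function’s locals into a struct and make the resume point explicit. A minimal sketch with invented names (`CountFrame`, `next`), standing in for `wrap`/`resume`/`yield`:

```zig
const std = @import("std");

// A hand-written "frame": the locals of a counting function hoisted into a
// struct, plus an explicit resume point. Each call to `next` runs until the
// next "yield" and returns the yielded value, or null once the frame is dead.
const CountFrame = struct {
    i: u32 = 0,
    limit: u32,
    done: bool = false,

    fn next(self: *CountFrame) ?u32 {
        if (self.done) return null;
        if (self.i >= self.limit) {
            self.done = true; // the underlying "function" has returned
            return null;
        }
        const value = self.i; // "yield i"
        self.i += 1;
        return value;
    }
};

pub fn main() void {
    var frame = CountFrame{ .limit = 3 };
    while (frame.next()) |v| {
        std.debug.print("yielded {d}\n", .{v});
    }
    // After the underlying computation returns, the frame is dead:
    std.debug.assert(frame.next() == null);
}
```

Between calls to `next` the frame is “suspended”; once the underlying computation has returned, it is “dead” and further resumes just report that.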
Delimited continuations, by convention, work differently. The continuation is delimited with a primitive called `reset`, within which more control flow is defined, including one or more instances of `shift`, which returns control to the outer `reset`. So instead of creating your reified call stack by using `wrap` on a function to create a coroutine, you create it directly out of function calls. The result is something which can be called repeatedly, but otherwise functions as a coroutine does.
This is more powerful than a coroutine, but also more expensive. You can fake either one with the other, and I have yet to be convinced that a delimited continuation is really the abstraction I should want. I don’t want to make this longer than it is by going on a tangent about applications, but a colleague has a strong use case for low-level delimited continuations, so the odds of me implementing them in Zig are greater than zero.
For more than you ever wanted to know about delimited continuations, click that link. If you read that and think “gosh these sound hard to reason about”, I agree with you.
I am gawking at these “delimited/undelimited continuations” (where are plain “limited” continuations, btw?) and thinking to myself that this is a very, very oversubtle way to hide that very control flow from an application programmer.
A quote from wiki:
Continuations are the functional expression of the `GOTO` statement.
And
For example, in C, `longjmp()` can be used to jump from the middle of one function to another.
Once I used that kind of hackery; it was cool to know about it, but… I wouldn’t want to see this as a “best practice” here and there in every kind of ‘mimic-everything-from-functional-style’ language.
And another one:
While they are a sensible option in some special cases such as web programming
WHAAAAT?
Did I miss something? When did “web programming” become so very “special”?
Web coders are already absolutely misguided by all these ‘futures’ and by proclamations like ‘this-framework-is-the-coolest-one-ever’; let’s implant ‘delimited continuations’ into their brains too, why not?
IMHO, the author of Zig is absolutely right when he firmly refuses to implement “closures”, “async”, “interfaces” and so on at the syntax level. C never had them and Zig won’t either. Cheers!
The author of Zig very much did implement `async`. I get what you’re saying, but that is definitely a thing which did happen.
It’s fine for coroutines and delimited continuations to just be libraries in a Zig context. `async` may or may not come back, but those two are a bad fit for the language as a control flow primitive. It’s just a bit of asm magic needed, and they can be called as though they’re functions, that’s fine; building it in would involve making decisions which the language shouldn’t be making. I expect that coro-and-event-loop code and libraries will be a robust niche in Zig circles as the language grows more popular, whether we get another `async` or not.
Delimited continuations? If you can’t think of a reason why you need them, then you don’t. I do, as it happens, but I don’t predict a Zig delimited continuations ecosystem will ever emerge. They’re just too weird for that.
If anything, I’d like to see this first (granted that there are different uses of the word `async` floating around).
Also, I’m moving this to “Explain” instead of “Help” because this is moving towards an open ended discussion.
A bit?..
Event-driven state machines (I/O oriented) are much easier to implement (with a proper OS API (or without it) and a proper programming language) than trying to emulate those mathematically inclined fantasies… well, they (EDSMs) might be harder to use, that’s true.
On the other hand, there have been 60 years or so of trying to invent a language that would kinda simplify a programmer’s job… and what? Things went from bad to worse.
Yes, a bit.
A bit under 650 lines for seven chipsets, including comments and whitespace.
this attitude is terrible, imo. i’m not (currently) a professional programmer, but i fail to see any context in which this is a useful thing to say or to think, and a discussion about the future directions of a young language seems like an especially poor fit for the kind of “we’ve been trying and things only got worse” this espouses.
I agree, and I think it also just isn’t true (there are many niche languages that do interesting things: BQN, Red, Idris, roq, …). While plenty of old things get rediscovered over and over again, I think that is somewhat natural, because people learn through rediscovery; at least that has been true for me, and I think a good way to acquire deep understanding of something is to try and build it.
And that will involve making plenty of mistakes that you need to learn from and improve upon with later revisions. Thinking you know everything now (or that there aren’t any new ideas) would become a self-fulfilling prophecy, because it would blind you to new ideas that you haven’t considered yet and to the value those ideas provide. (Just like some assembly programmers didn’t see the value in using C, some C programmers won’t see the value in using a language with actual modules, and so on. And that can happen with every feature.)
There are things about Go I disagree with, and Zig seems more aligned with what I want from a language, but I still think Go is a good language that has evolved over the years and has created a combination of tradeoffs that makes it more interesting than, for example, C++ (which in my opinion lacks vision, clarity and focus).
Everyone will value specific languages differently, depending on whether they solve things that are interesting to that individual, but I think it is good to stay open-minded and realize that what you care about may not be what others care about.
Some want just a very ergonomic language that gives them every feature they want, and that is a legitimate want; it just is something that Zig currently doesn’t give to people who want a lot of features. Some other languages give you lots of features but are implemented in a way that makes it difficult to learn, understand, and change those implementations. My personal interest is in languages that are fairly minimal, because ideally I want to understand the implementation as well and be able to change or work on it.
I think it is good if a language can provide and give you lots of features; I just don’t want the language to take the easy route of “let’s create a bloated language implementation so that we can provide lots of features”.
Having a small core of a few features that can be used for many things is what makes the language and its implementation understandable, and I think that is important for a good language, so that many people are able to understand the language and become contributors to it.
That doesn’t mean that certain things should never become full language features, it is more about what kind of additional complexity those features add to the language and whether this cost is low enough, or the feature is important enough and aligns with the goals of the language.
While many ideas have been tried, some of those ideas have only been tried in obscure languages that not many people have used. For example, Icon’s and Unicon’s goal-directed evaluation is an interesting concept that I haven’t seen in that form in other languages (Prolog/Datalog seems conceptually most similar and still quite different).
I guess my point is we shouldn’t get so entrenched in one way of thinking that other ways are never considered again. This talk makes it seem as if maybe we tried more different things when we (as programmers) were still just playing around and figuring stuff out, instead of thinking of ourselves as people who know what they are doing (which we still often don’t):
I think it is important to keep that spirit of “what would happen if we did some things in a totally different way?” alive; the curiosity and joy it enables are already great on their own, but we may even find some really different ways to do things.
For what it’s worth, I interpreted that as a statement about language evolution in general, and more specifically the woeful complexity of async implementations in other languages (if you read @dee0xeed’s other posts in this thread, that’s a consistent theme). Not a knock on Zig specifically, or even a statement about Zig at all.
Regardless, let’s do our best to keep this conversation on a friendly basis.
Well, you have to understand that for probably 40 of those years programming language evolution was sloooooooow.
What primarily changed is that CPU, memory and storage became cheap and large. This meant that you could think about the “programming language” rather than how to make the compiler finish in less than a day and fit on a floppy.
Garbage collection, for example, was damn near a superpower for a programmer, if you could afford the memory. Prior to 1995 memory was expensive, so your superpower was limited to non-PC computers. It’s no coincidence that scripting-language adoption blew up, and Java and Javascript appeared, just as memory prices crashed.
And, even still, programming language evolution isn’t fast because it takes time for a programming language to gain adopters until it becomes something that has enough ecosystem that “normal” programmers can use it. VSCode did the world a favor by effectively forcing LSPs on everybody which cuts the amount of effort required to get support bootstrapped. However, it is still work and still takes time.
I would say that it’s really only been in the last 20 years that we have had a Cambrian explosion of programming languages that enables faster evolution. However, that evolution will never be fast, because evolution in programming languages is limited by the social aspects of the ecosystem rather than the technical issues.
To me “async” is a social code smell akin to the Gang of Four patterns. It is attempting to elevate a language deficiency workaround (Javascript’s single threadedness) to some deep concept that will be seen as stupid 20 years from now. Yes, there is a need in languages for parallelism and concurrency constructs–Javascript’s “async” ain’t it.
(Gang of Four is “Design Patterns”, a book that programmers of my hoary age couldn’t escape from even though it was obviously a gigantic code smell even back then. It was encoding C++ language deficiencies just like “async” encodes Javascript ones.)
I was not going to post anymore on this topic, but ok, paper doesn’t refuse ink.
CPU/RAM/DSD geometrical size became smaller.
The ratio of “number of bits held” to “physical size of the storage” is what became “large”.
On a punched card you can see the bits with your own eyes.
On a //substitute any storage device here// you can’t.
Exactly. Zig is very attractive to C programmers in the first place (static typing, manual memory management, etc.) and probably to C++ programmers too (can’t speak for them all, I am not a C++ programmer).
On the other hand, there are people for whom Python/JavaScript was their first programming language, and it’s quite natural that these people want to have their favorite features from those languages (“interfaces”, “closures”, class-based “OOP”, “garbage collection”, etc.) in any evolving language. Nothing’s wrong with that.
But including every thinkable and unthinkable feature mentioned above into Zig syntax would immediately drop Zig’s attractiveness to C/C++ programmers down to zero.
I am not very fond of GC, and I have a couple of solid reasons to dislike it.
- I know some Python/JavaScript programmers who don’t understand at all how GC works in these languages and, as a consequence, constantly produce RAM hogs.
- in C#/D, the GC may destroy your object (created in the main function!!!) right at the beginning, because it does not understand that this object will be needed at program termination.
C programmers perhaps. C++ has never met a feature without implementing it in several subtly incompatible ways.
I am not sure I understand this fully, but RAII (the C++ way to handle “resources”) has always confused me a lot.
From the CPU’s “point of view”, it’s always “polling”:
- it (the CPU) checks the state of its interrupt line(s) after each completed instruction.
- when executing an interrupt handler (or at least some parts of it), it pays no attention to the interrupt lines’ states.
How can you hear your alarm-clock bell?
Does your brain count seconds all through the night in the background until it reaches 1234567?
Or is it really “interrupted” by the external world?