Was playing with Zig a bit, and one of the ways to experiment was to build something that can be fast and probably parallel. So I decided to start with some functionality that is pretty useful and well understood in golang. One of the conditions was to use only what is already in Zig. Here is a first implementation of zchan: kaimanhub/zchan - Codeberg.org.
I do not know if it is useful, but it is simple. Any comments, suggestions, or contributions are welcome.
Trying to add several examples of where this lib could be used.
Just a quick side quest: Doesn't the assert here risk the put and get calls being optimized away? If not, I think I might have misunderstood std.debug.assert's doc comment and could use some education.
Since assert is a regular function, the argument is evaluated (and it indeed is in debug/safe builds), but since the expression is assumed to be true (otherwise unreachable), it seems like the whole expression is allowed to be removed.
Is there a difference whether the expression is fallible or not, or is deemed to have side effects?
I think you're right. I don't know for sure, but it could theoretically optimize away the entire expression, including the side effects of the put/get calls.
Probably a better way would be something like: const result = try q.put(io, &.{item}, 1); assert(result == 1); …
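To illustrate the difference, here is a small self-contained sketch; bumpAndCheck is a hypothetical stand-in for a side-effecting call like put/get, not part of zchan:

```zig
const std = @import("std");
const assert = std.debug.assert;

var counter: usize = 0;

// A stand-in for a side-effecting call: bumps a global and reports success.
fn bumpAndCheck() bool {
    counter += 1;
    return counter > 0;
}

pub fn main() void {
    // Pattern questioned above: the side-effecting call is the assert
    // argument, so (per the discussion) it might be dropped together with
    // the assert in unsafe release modes.
    assert(bumpAndCheck());

    // Safer pattern: perform the call unconditionally, then assert on the
    // stored result.
    const ok = bumpAndCheck();
    assert(ok);

    std.debug.print("counter = {d}\n", .{counter});
}
```

In debug/safe builds both forms behave the same; the second form just keeps the side effect outside the asserted expression.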
Released the 0.1.0 version of the zchan lib. Added support for unbuffered channels and updated to use Zig 0.15.1.
If you have any suggestions about what could be added to the lib to serve your needs, please add a comment here or create an issue on Codeberg.
I hope it will be useful for someone.
Well, it's all about "Concurrency is not Parallelism".
Almost any (kinda) modern programming language
is trying to lure you into the async/await paradigm (with MS at the tip of the spear),
but Zig (I hope) is not that kind of programming language,
it's not forcing you to do concurrency the "syntactical way".
Coroutines (renamed to goroutines in golang) were invented
~70 years ago, and I do not really understand why this renaissance
has gained such a level of popularity.
Concurrency (doing many things in turn, in small chunks, in a single OS thread)
can be achieved without special support on the side of the compiler, like this, for example.
IMHO the main thing to understand about JS/C#-style async/await is that it's just language syntax sugar which lets the compiler turn sequential code into traditional switch-case state machines. You can achieve the same thing manually without the syntax sugar, the code just looks ugly and is much harder to understand (since every "async" func is essentially a big switch-case, and local variables don't live on the stack, but in a "local function context" which needs to be passed into the function as an argument).
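To make the "big switch-case with a local function context" idea concrete, here is a minimal hand-written sketch in Zig (names are illustrative; this is what the sugar conceptually lowers to, not any real compiler output):

```zig
const std = @import("std");

const Task = struct {
    // Explicit state replaces the implicit "where am I" of an async function.
    state: enum { start, after_first_wait, after_second_wait, done } = .start,
    counter: u32 = 0, // a "local variable" of the would-be async function

    /// Resume the task; returns true once it has run to completion.
    fn step(self: *Task) bool {
        switch (self.state) {
            .start => {
                self.counter += 1;
                self.state = .after_first_wait; // suspend at "await" #1
                return false;
            },
            .after_first_wait => {
                self.counter += 10;
                self.state = .after_second_wait; // suspend at "await" #2
                return false;
            },
            .after_second_wait => {
                std.debug.print("done, counter = {d}\n", .{self.counter});
                self.state = .done;
                return true;
            },
            .done => return true,
        }
    }
};

pub fn main() void {
    var task = Task{};
    // A trivial driver standing in for an event loop: resume until finished.
    while (!task.step()) {}
}
```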
Traditional "green threading" coroutines (which are pretty much the same thing as Win3.x-style cooperative multitasking) require stack-switching, which usually requires dropping down to assembly code to perform the "context switch" and isn't supported on some VMs (like WASM) - although there is a wip stack-switching proposal for WASM: stack-switching/proposals/stack-switching/Explainer.md at main · WebAssembly/stack-switching · GitHub.
PS having said that: I'm actually also not a big fan of async/await. It's mostly useful when your entire application logic runs as part of a global event loop (that's why it is such a good fit for Javascript). For instance in web games you can't simply await a function on the "main execution path", since that would block rendering. So you'll need to "spawn" background execution paths which are decoupled from the main execution path (and await in those decoupled execution paths is fine), and you also need a whole "await vocabulary" for Promise objects (e.g. at least "await any" and "await all").
Interestingly Windows Events are pretty close to that idea without requiring the state machine code transform, e.g. you have functions like WaitForSingleObject() and WaitForMultipleObjects() - and while the wait is in progress other (OS)-threads and -processes are scheduled. The main difference to JS-style Promise objects is that Windows Events don't have a result-payload.
I think Zig needs coroutines and I really hope the compiler will bring back stackless async/await at some point. Because of the allocation pattern, closures and callbacks are pretty hard to manage. JS-style promises kind of solved the callback hell by chaining operations, but you can't really do that without allocating the closure for every operation and then dealing with deallocation. I think it's much better to just let the context be on the stack/async state. I've been working on a framework for running coroutines, very similar to the future Io interface, and I must say, memory management in user code is much easier that way.
I agree with the "look like" part, but they definitely do not behave like "normal" code.
In essence, coroutines are non-local gotos (aka longjmp in the C stdlib).
And goto statements are "considered harmful", aren't they?
A coroutine is a generalization of a subroutine,
and I would not say it's easier to learn.
From the viewpoint of a user (programmer) - maybe…
From the viewpoint of a developer of a language - hardly.
The switch-case approach is more or less suitable for parsing state machines,
but in an event-driven environment it's just nonsense.
Just take a look here or here.
An event-driven state machine is a (small) bunch of
(run-to-completion) functions (yeah, callbacks), no switch/case, lord forbid.
The goal of cooperative multitasking (within a single OS-level thread) is "fine-grained concurrency". For now, I've only seen two ways to achieve this goal:
async/await, forcing a developer to wait/hope for a library/language to support that stuff
event-driven state machines, no pains on the compiler side required, you just do what you want to
Just construct an event-driven-state-machine level on top of an event-capture engine and you will get a set of nice small callbacks. And also a set of specific machines, a tx-ing machine for example.
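For a concrete picture of the callback-style machine described above, here is a minimal Zig sketch (all names are illustrative, not from any particular library): the current state is simply a pointer to a run-to-completion handler, and dispatching an event just calls it.

```zig
const std = @import("std");

const Event = enum { start, data, stop };

const Machine = struct {
    // The current state IS the callback; no central switch over states.
    state: *const fn (*Machine, Event) void = &stateIdle,
    count: usize = 0,

    fn dispatch(self: *Machine, event: Event) void {
        self.state(self, event);
    }

    // Run-to-completion handler for the "idle" state.
    fn stateIdle(self: *Machine, event: Event) void {
        if (event == .start) {
            std.debug.print("idle -> receiving\n", .{});
            self.state = &stateReceiving;
        }
    }

    // Run-to-completion handler for the "receiving" state.
    fn stateReceiving(self: *Machine, event: Event) void {
        if (event == .data) {
            self.count += 1;
        } else if (event == .stop) {
            std.debug.print("received {d} data events\n", .{self.count});
            self.state = &stateIdle;
        }
    }
};

pub fn main() void {
    var m = Machine{};
    // A toy stand-in for an event-capture engine: feed a fixed event sequence.
    for ([_]Event{ .start, .data, .data, .stop }) |e| m.dispatch(e);
}
```

Swap the fixed event array for a real event-capture engine (sockets, timers, and so on) and you get the structure sketched above: a set of small callbacks per machine, and as many machines as you need.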