Blog post: Event Loops in Zig

Hi! I wrote a little blog post about my experience writing an event loop in Zig. I start by talking about how I initially failed to correctly implement a basic linked list (basically because my stack-allocated nodes would go out of scope after being created), then fix that, move on to a basic implementation with two allocations per event (one for the event data and one for the node), and end with the implementation I’m currently using, with 1000 pre-allocated events and nodes living in a pair of queues.

I try to aim my blog at, well, I’m not really sure, but usually not programmers, so you might find some of it aimed a little low. But! I’d love feedback if you have any. Thanks for taking a look :slight_smile:

https://ryleealanza.org/2023/06/21/The-Seamstress-Event-Loop-in-Zig.html

10 Likes

Hey! Welcome to the forum!

I liked it! May I ask who your intended audience is? It seems like it’s intended for people who are curious about C programming but don’t have a lot of experience with pointers (so probably intermediate)?

One thing you may try that I really like about Zig (although it may just be aesthetic in your case) is unlocking your mutexes with defer. I find that it really simplifies the control flow because the unlock will always run as the function exits. Again, in your case, it may not be necessary.
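Something like this (a toy sketch; Counter is just a made-up type to show the shape):

    const std = @import("std");

    const Counter = struct {
        mutex: std.Thread.Mutex = .{},
        value: u64 = 0,

        fn increment(self: *Counter) void {
            self.mutex.lock();
            // Registered right after the lock, so it runs on every
            // exit path out of the function.
            defer self.mutex.unlock();
            self.value += 1;
        }
    };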

1 Like

I guess I’m not sure… possibly “myself when I was googling around, confused about why my code wasn’t working”. E.g. I “knew” about pointers, but hadn’t truly wrestled much with them, “knew” about the stack vs. the heap, but had never actually been bitten by the difference, and so on.

For me, this post is quite good for (more or less experienced) C programmers who want to switch to Zig; I like it.

1 Like

Cool! This is great, I love these “roll your own data structure” blog posts. I don’t know what your event loop is processing, but it may be worth it to schedule asynchronous tasks instead. I could see your producer thread filling up the buffer while the worker thread waits for a file read, network transaction, etc.

1 Like

yeah, at the moment I can’t use async because it doesn’t exist on the self-hosted compiler, but I’m very much looking forward to it!

This is delightfully well-written, thanks!

It seems like there’s some minor escaping issue, where & gets escaped twice and rendered as

self.tail = &amp;new_node;

The solution I hit upon was to preallocate a pool of 1000 Node and Data objects and keep them recirculating while the program runs

I love how we solved this in TigerBeetle: rather than pre-allocating a bunch of nodes up-front (and worrying whether we pre-allocated enough nodes), we let the caller bring their own nodes with them. So, if a subsystem needs 5 entries to function, it’s on the subsystem to allocate (and free) those five nodes somewhere. Most of the time we statically know the per-subsystem requirements, so we don’t even really “allocate” these nodes, and just stash a statically-sized array somewhere.
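A rough sketch of that shape in Zig (names are hypothetical, not TigerBeetle’s actual code); the queue never allocates, it just links nodes the caller owns:

    const Node = struct {
        next: ?*Node = null,
        data: u64 = 0,
    };

    const Queue = struct {
        head: ?*Node = null,
        tail: ?*Node = null,

        // The caller owns `node`; the queue just links it in.
        fn push(self: *Queue, node: *Node) void {
            node.next = null;
            if (self.tail) |tail| {
                tail.next = node;
            } else {
                self.head = node;
            }
            self.tail = node;
        }

        fn pop(self: *Queue) ?*Node {
            const node = self.head orelse return null;
            self.head = node.next;
            if (self.head == null) self.tail = null;
            return node;
        }
    };

    // A subsystem that statically knows it needs five entries can
    // just embed them; no allocator involved.
    const Subsystem = struct {
        nodes: [5]Node = [_]Node{.{}} ** 5,
    };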

4 Likes

I guess another comment is that instead of

            @setCold(true);
            std.debug.assert(self.write_size == 0);
            std.debug.print("no nodes free!\n", .{});
            unreachable;

you could use std.debug.panic, as that’ll print the message, signal cold, and terminate execution in one package.
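That is, everything after the assert collapses into a single call:

    std.debug.panic("no nodes free!", .{});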

2 Likes

khe-khe :slight_smile:
didn’t you notice the event-driven state machines in the “Showcase” part of this forum?
(sorry for the self-advertising)

They (edsms) can be implemented in a more or less general-purpose programming language without any fancy special support (coroutines, greenlets, fibers, async/await and the like) from the compiler side, as long as the language provides a clean, direct interface to the OS API.
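A bare-bones illustration in Zig (a toy sketch, not code from the showcase posts); the whole machine is an enum plus a switch, driven by whatever events the OS hands you:

    const Event = enum { connect, data, close };

    const Conn = struct {
        state: enum { idle, open } = .idle,

        // Feed events in from poll()/epoll/whatever; the switch is
        // the state machine.
        fn handle(self: *Conn, ev: Event) void {
            switch (self.state) {
                .idle => if (ev == .connect) {
                    self.state = .open;
                },
                .open => switch (ev) {
                    .data => {}, // process the payload here
                    .close => self.state = .idle,
                    .connect => {}, // already open; ignore
                },
            }
        }
    };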

I never understood why language designers try to integrate OS responsibilities into their language.

nice article
https://www.artima.com/articles/comparing-two-high-performance-io-design-patterns

I liked this article. However, I think it’s also important to add that for a lot of programs it’s enough to be single-threaded and just loop over poll(). When deciding what should drive your application’s logic, you need to be aware of what possible events there are for it to respond to and what those responses are. How many events will you receive, and in what pattern? How long will it take to respond to them? Does it really make sense to off-load that to threads? Poll can be expensive, but so are threads. I feel like this is too often forgotten.

As an example, I have written applications that spawn windows on Linux (using Wayland). During an interactive resize they may get many hundreds of events per second, in response to which the entire window and parts of the context need to be re-rendered. Despite being CPU-rendered and single-threaded, they are all perfectly responsive on my mediocre hardware.
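For reference, the single-threaded shape is roughly this (a sketch against the Zig 0.11-era std.os API; newer releases moved these functions into std.posix, and fd is assumed to be a socket or pipe opened elsewhere):

    const std = @import("std");

    fn runLoop(fd: std.os.fd_t) !void {
        var fds = [_]std.os.pollfd{
            .{ .fd = fd, .events = std.os.POLL.IN, .revents = 0 },
        };
        while (true) {
            // Sleep until something happens; -1 means no timeout.
            _ = try std.os.poll(&fds, -1);
            if ((fds[0].revents & std.os.POLL.IN) != 0) {
                var buf: [4096]u8 = undefined;
                const n = try std.os.read(fd, &buf);
                if (n == 0) break; // peer closed
                // ... respond to the event: re-render, reply, etc. ...
            }
        }
    }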

1 Like

Exactly!

poll() is somewhat slow

I have an example of client/server stuff (in D) for

  • Linux (epoll)
  • FreeBSD (kqueue)

for reactor pattern (most Unices)

  • read won’t block
  • write won’t block

(notification about the possibility to do something without going into a SLEEP/WAIT state)

for proactor pattern (Windows, true async)

  • read complete
  • write complete

(notification about the end of an operation)
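For the curious, the reactor side looks roughly like this in Zig on Linux (a sketch against the 0.11-era std.os API; real code would also watch for EPOLL.HUP/EPOLL.ERR and use non-blocking fds):

    const std = @import("std");
    const linux = std.os.linux;

    fn reactorLoop(listen_fd: std.os.fd_t) !void {
        const epfd = try std.os.epoll_create1(0);
        defer std.os.close(epfd);

        var ev = linux.epoll_event{
            .events = linux.EPOLL.IN,
            .data = .{ .fd = listen_fd },
        };
        try std.os.epoll_ctl(epfd, linux.EPOLL.CTL_ADD, listen_fd, &ev);

        var ready: [16]linux.epoll_event = undefined;
        while (true) {
            // Readiness notification: these fds can now be read
            // without putting the thread to sleep.
            const n = std.os.epoll_wait(epfd, &ready, -1);
            for (ready[0..n]) |event| {
                _ = event.data.fd; // accept/read on it here
            }
        }
    }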

I understand libuv uses the best pattern depending on the platform: reactor on Linux (epoll) and macOS (kqueue), proactor (IOCP) on Windows. This sounds ideal for portability, but I have always wondered how much overhead there is from using libuv rather than the native mechanism. And if the answer to that is “little to no overhead”, would this be a good model to base a pure Zig event loop on?

Never put I/O logic into a library; it’s just stupid.
Instead, do I/O multiplexing at the application level.