This is an experiment that grew out of my frustration while working on the NATS client for Zig. It kind of competes with the future std.Io event loop, but I wanted to see what API I could use right now. I'll probably finish it and use it in my project until std.Io becomes ready. Feedback welcome.
This is very pedantic, and not intended to diminish what you made:
you made a wrapper around libxev that provides a similar-ish API to the future std.Io, but it doesn't compete with it, because you didn't make a runtime-agnostic interface, nor did you implement std.Io to compete with the implementations provided by the future std
Sorry, I meant to say that I realize I’m duplicating work that is being done elsewhere. The plan for this is to support the standard Io interfaces, but I’m currently still on Zig 0.14, so I’m using those reader/writer interfaces. Any code that uses those interfaces will work with my library. When I get to migrating my project to Zig 0.15, I’ll implement the new reader/writer interfaces. That’s how I plan to get things like TLS/HTTP support.
I only said that because your statement gives a completely wrong impression of what you made.
I was expecting an interface similar to future std.Io
when I found a concrete (non interface) implementation, I thought it must implement future std.Io but it doesn’t.
you’re not though, because as I said already it’s not std.Io or an implementation of it. You’ve made something separate, though related.
Well, now you've implanted an idea into my head; I'll download the dev version of Zig and try to implement the std.Io interface.
There happens to be a zio already, although it doesn't appear to be a serious and ongoing project, so I wouldn't say the name clash is important. Mainly I was misremembering the name of zig-aio, which is kind of related but not, in fact, named zio. There's also zigcoro, while I'm collecting links. The last one uses libxev. I mention these in case you might find related projects useful in writing your own.
I think “competes with” std.Io is a reasonable gloss on any of these projects, as well as yours, for the record.
Then again:
May as well skate to where the puck is heading, right?
Hm, I didn't know about zigcoro, that would probably solve my problem. Then I wouldn't have to do this at all.
Now upgraded to Zig 0.15 and implementing an API like this on Linux/Windows/Mac. The new reader/writer interfaces are fully supported. The std.Io interface is still too unstable to target.
```zig
const std = @import("std");
const zio = @import("zio");

fn echoClient(rt: *zio.Runtime, allocator: std.mem.Allocator) !void {
    // Connect to echo server using hostname
    var stream = try zio.net.tcpConnectToHost(rt, allocator, "localhost", 8080);
    defer stream.close();

    // Use buffered reader/writer
    var read_buffer: [1024]u8 = undefined;
    var write_buffer: [1024]u8 = undefined;
    var reader = stream.reader(&read_buffer);
    var writer = stream.writer(&write_buffer);

    // Send a line
    try writer.interface.writeAll("Hello, World!\n");
    try writer.interface.flush();

    // Read response line
    const response = try reader.interface.takeDelimiterExclusive('\n');
    std.debug.print("Echo: {s}\n", .{response});
}

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();

    // Initialize runtime with thread pool for DNS resolution
    var runtime = try zio.Runtime.init(gpa.allocator(), .{
        .thread_pool = .{ .enabled = true },
    });
    defer runtime.deinit();

    // Spawn coroutine
    var task = try runtime.spawn(echoClient, .{ &runtime, gpa.allocator() }, .{});
    defer task.deinit();

    // Run the event loop
    try runtime.run();
    try task.result();
}
```
I've decided to go ahead with this, as it doesn't look like std.Io will be published any time soon, and I'd really like to use async I/O in my server project. I'll maintain this, as the other similar libraries either have limited scope or use a very different paradigm. My goal is essentially the same as std.Io: I want to be able to take existing blocking code and have it run in coroutines with async I/O. The event loop is currently single-threaded; I'll probably add the option to run multiple threads, each with its own event loop, but coroutines will not migrate freely between threads as they do in e.g. Tokio or Go. When the event loop version of std.Io is released, I'll probably abandon this library, but the migration path should be easy, as the APIs are almost identical.
I'm releasing v0.1.0 today. If anyone would like to experiment with it, I'm very much looking for feedback.
Here's an example implementation of a mini Redis-like server:
I was inspired by the Tokio mini-redis example, so I developed my mini-redis a bit more. I’ll add BroadcastChannel to zio, in addition to the regular Queue, and then I’ll also implement PUB/SUB.
I just did a lot of work to add proper cancelation support. It's easy on the runtime level, but quite difficult in other parts of the code, especially the synchronization primitives: you don't want to leak semaphore permits, etc.
Everything is now cancelable. I/O operations are cancelled when the coroutine waiting on them is canceled, and your defer/errdefer still work if you properly handle error.Canceled. There is a "shield" mode if you need to do cleanup that should not be affected by cancelation. In zio, this is currently mostly used by the synchronization primitives: e.g. you need to re-lock the mutex in Condition.wait even if the waiting was canceled, but the code following it can still observe the error.Canceled error.
This mostly follows the cancelation logic of Python’s asyncio.
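Since the semantics follow asyncio, plain Python is a useful mental model for the behavior described above: cancellation surfaces as an exception the task can observe and clean up after, while a shield protects the cleanup itself from being interrupted. This is ordinary asyncio code, not zio:

```python
import asyncio

results = []

async def cleanup():
    # stand-in for "must complete" cleanup, e.g. re-locking a mutex
    await asyncio.sleep(0)
    results.append("cleaned up")

async def worker():
    try:
        await asyncio.sleep(10)  # cancelable wait
    except asyncio.CancelledError:
        # analogous to zio's "shield" mode: this cleanup is not
        # affected by the cancellation in progress
        await asyncio.shield(cleanup())
        raise  # re-raise so the caller still observes cancellation

async def main():
    task = asyncio.create_task(worker())
    await asyncio.sleep(0.01)  # let the worker start waiting
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        results.append("observed cancellation")

asyncio.run(main())
print(results)  # -> ['cleaned up', 'observed cancellation']
```

The key property, in both asyncio and the scheme described here, is that cleanup runs to completion and the cancellation error still propagates to the parent afterwards.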
I’ve started exporting API docs: Zig Documentation
Will need to work on the actual content, and also some guides.
I've benchmarked a simple ping-pong pattern, two coroutines communicating over a channel, and Zio was >3x faster than Go and Tokio (both single- and multi-threaded). This is really only measuring context-switching speed plus some synchronization primitives, but it still surprised me, because while Zio is single-threaded, it's not particularly optimized for speed; I wanted to keep the code readable for now. That makes me confident that the coroutine overhead is minimal, and if I add multi-thread scheduling, it should still be comparable to Go/Tokio, which is kind of amazing for an experiment like this.
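For reference, the pattern being measured, sketched here in Python's asyncio rather than the actual zio/Go/Tokio benchmark code, is two tasks bouncing a message back and forth over bounded queues, so each round trip forces two context switches:

```python
import asyncio
import time

async def ping(out_q, in_q, n):
    # send "ping", wait for the reply, n times
    for _ in range(n):
        await out_q.put("ping")
        await in_q.get()

async def pong(out_q, in_q, n):
    # wait for "ping", reply "pong", n times
    for _ in range(n):
        await in_q.get()
        await out_q.put("pong")

async def main(n=10_000):
    a = asyncio.Queue(maxsize=1)
    b = asyncio.Queue(maxsize=1)
    start = time.perf_counter()
    # ping sends on a / receives on b; pong does the reverse
    await asyncio.gather(ping(a, b, n), pong(b, a, n))
    elapsed = time.perf_counter() - start
    print(f"{2 * n} messages in {elapsed:.3f}s")
    return elapsed

asyncio.run(main())
```

The equivalent Go version would use two goroutines and two channels; either way the benchmark is dominated by scheduler hand-off cost, which is why it isolates context-switch overhead.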
Have you figured out a nice way to do DNS resolution when linking libc?
I have it implemented nicely in userland but only when not linking libc. When linking libc it seems like we have to go through getaddrinfo which doesn’t have a way to integrate with other I/O operations other than by using a utility thread.
According to a blog post I found, glibc doesn’t even implement getaddrinfo in a thread-safe manner. What a pain in the ass.
I took the easy route and just use a utility thread for now. I have spawnBlocking for running tasks in a secondary thread pool, so I just use that. In the future, I was planning on integrating c-ares, which has a way to plug into an existing event loop. But then, even libuv just uses a thread pool for getaddrinfo/getnameinfo, so I might keep it.
I have, but not with libc, with libanl instead.
In my recent C code (sorry!) I have a "connector" state machine, which does DNS resolution as a part of its job.
other than by using a utility thread
libanl just sends a user-defined (if I remember right) signal, which is handled absolutely uniformly in the event-driven state-machine paradigm.
cx.c (6.3 KB)
The code is attached.
I can see no trouble in translating that to Zig.
Just merged a large change: the runtime can now be multi-threaded, with each executor thread running its own event loop, without much performance degradation, because most things are still local, but wait/select and the synchronization primitives work across threads. Users can decide whether a task should inherit the same executor as its parent task, or be distributed. Once a task starts on one executor, it will finish on the same one, but that's fine for a typical server workload, where you have multiple tasks handling connections. I'm exploring how to do migration/work-stealing safely; most likely I will end up adding it, having it on by default, but with an explicit opt-out for some sections (like when interacting with the event loop directly).
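A rough model of this architecture, sketched in Python since zio's executor internals aren't shown here: each executor thread owns one event loop, tasks are distributed round-robin at spawn time, and a task never migrates off the thread it started on. Names like `Executor` and `handle` are illustrative, not zio API:

```python
import asyncio
import itertools
import threading

class Executor(threading.Thread):
    """One executor thread owning one event loop (hypothetical model)."""
    def __init__(self):
        super().__init__(daemon=True)
        self.loop = asyncio.new_event_loop()

    def run(self):
        self.loop.run_forever()

    def spawn(self, coro):
        # schedule the coroutine onto this executor's loop; it will
        # start and finish on this thread, never migrating
        return asyncio.run_coroutine_threadsafe(coro, self.loop)

async def handle(conn_id):
    await asyncio.sleep(0)  # stand-in for per-connection I/O
    return (conn_id, threading.current_thread().name)

executors = [Executor() for _ in range(2)]
for ex in executors:
    ex.start()

# round-robin distribution of tasks across executors at spawn time
rr = itertools.cycle(executors)
futures = [next(rr).spawn(handle(i)) for i in range(4)]
results = [f.result() for f in futures]
print(results)
```

Because most scheduling stays thread-local, there is no cross-thread contention on the hot path; only cross-executor wait/select needs thread-safe signalling, which matches the "most things are still local" observation above.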
And the most mind-blowing thing for me: even on a benchmark doing only coordinated I/O between a server and multiple clients, something that Go should excel at, zio is faster than both Go and Tokio. It beat them at single-threaded performance, but it's beating them at multi-threaded as well. I'm honestly not sure how that's possible.
Is the Zig userland code intended to implement DNS over TCP eventually? That was a huge pain point with musl for a long time. I'm thinking it might make sense to have a lookupDnsAlloc for that use case.
If it's out of scope for the initial implementation of std.Io, I'd be happy to take a stab at it when it's stabilized.
It already is in the Io branch; see threaded for a WIP implementation.
I've just released zio 0.4.0, and I'm pretty happy with the shape of the project now. With the new multi-threaded runtime, it actually became a pretty nice way of running non-I/O concurrent tasks that still need some synchronization. Once I have the full work-stealing scheduler done, to ensure proper load balancing, this will work significantly better than a regular thread pool.
Added
- Extended runtime to support multiple threads/executors (not full work-stealing yet)
- Added `Signal` for listening to OS signals
- Added `Notify` and `Future(T)` synchronization primitives
- Added `select()` for waiting on multiple tasks

Changed
- Added `zio.net.IpAddress` and `zio.net.UnixAddress`, matching the future `std.Io` API
- Renamed `zio.TcpListener` to `zio.net.Server`
- Renamed `zio.TcpStream` to `zio.net.Stream`
- Renamed `zio.UdpSocket` to `zio.net.Socket` (`Socket` can also be used as a low-level primitive)
- `join()` is now uncancelable; it will cancel the task if the parent task is cancelled
- `sleep()` now correctly propagates `error.Canceled`
- Internal refactoring to allow more objects (e.g. `ResetEvent`) to participate in `select()`

Fixed
- IPv6 address truncation in network operations
Wrote a little blog post about this journey: