Zio - async I/O framework based on libxev

This is an experiment that grew out of my frustration when working on the NATS client for Zig. It kind of competes with the future std.Io event loop, but I wanted to see what API I could use right now. I’ll probably finish it and use it in my project until std.Io becomes ready. Feedback welcome.

7 Likes

This is very pedantic, and not intended to diminish what you made:

you made a wrapper around libxev that provides a similar-ish API to the future std.Io, but it doesn’t compete with it, because you didn’t make a runtime-agnostic interface, nor did you implement std.Io to compete with the implementations provided by the future std

Sorry, I meant to say that I realize I’m duplicating work that is being done elsewhere. The plan for this is to support the standard Io interfaces, but I’m currently still on Zig 0.14, so I’m using those reader/writer interfaces. Any code that uses those interfaces will work with my library. When I get to migrating my project to Zig 0.15, I’ll implement the new reader/writer interfaces. That’s how I plan to get things like TLS/HTTP support.

1 Like

I only said that because your statement gives a completely wrong impression of what you made.

I was expecting an interface similar to the future std.Io; when I found a concrete (non-interface) implementation, I thought it must implement the future std.Io, but it doesn’t.

You’re not, though, because as I said already, it’s not std.Io or an implementation of it. You’ve made something separate, though related.

Well, now that you’ve planted an idea in my head, I’ll download the dev version of Zig and try to implement the std.Io interface. :slight_smile:

4 Likes

There happens to be a zio already, although it doesn’t appear to be a serious, ongoing project, so I wouldn’t say the name clash is important. Mainly I was misremembering the name of zig-aio, which is kind of related but not, in fact, named zio. There’s also zigcoro, while I’m collecting links. The last one uses libxev; I mention these in case you find related projects useful in writing your own.

I think “competes with” std.Io is a reasonable gloss on any of these projects, as well as yours, for the record.

Then again:

May as well skate to where the puck is heading, right?

3 Likes

Hm, I didn’t know about zigcoro; that would probably solve my problem. Then I wouldn’t have to do this at all.

2 Likes

Now upgraded to Zig 0.15 and implementing an API like this on Linux/Windows/Mac. The new reader/writer interfaces are fully supported. The std.Io interface is still too unstable to target.

const std = @import("std");
const zio = @import("zio");

fn echoClient(rt: *zio.Runtime, allocator: std.mem.Allocator) !void {
    // Connect to echo server using hostname
    var stream = try zio.net.tcpConnectToHost(rt, allocator, "localhost", 8080);
    defer stream.close();

    // Use buffered reader/writer
    var read_buffer: [1024]u8 = undefined;
    var write_buffer: [1024]u8 = undefined;
    var reader = stream.reader(&read_buffer);
    var writer = stream.writer(&write_buffer);

    // Send a line
    try writer.interface.writeAll("Hello, World!\n");
    try writer.interface.flush();

    // Read response line
    const response = try reader.interface.takeDelimiterExclusive('\n');
    std.debug.print("Echo: {s}\n", .{response});
}

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();

    // Initialize runtime with thread pool for DNS resolution
    var runtime = try zio.Runtime.init(gpa.allocator(), .{
        .thread_pool = .{ .enabled = true },
    });
    defer runtime.deinit();

    // Spawn coroutine
    var task = try runtime.spawn(echoClient, .{ &runtime, gpa.allocator() }, .{});
    defer task.deinit();

    // Run the event loop
    try runtime.run();
    try task.result();
}
1 Like

I’ve decided to go ahead with this, as it doesn’t look like std.Io will be published any time soon, and I’d really like to use async I/O in my server project. I’ll maintain this, as the other similar libraries either have limited scope or use a very different paradigm. My goal is essentially the same as std.Io: I want to be able to take existing blocking code and have it run in coroutines with async I/O. The event loop is currently single-threaded; I’ll probably add the option to run multiple threads, each with its own event loop, but coroutines will not migrate freely between threads, as they do in e.g. Tokio or Go. When the event loop version of std.Io is released, I’ll probably abandon this library, but the migration path should be easy, as the APIs are almost identical.

I’m releasing v0.1.0 today. If anyone would like to experiment with it, I’m very much looking for feedback.

Here is an example implementation of a mini Redis-like server:

2 Likes

I was inspired by the Tokio mini-redis example, so I developed my mini-redis a bit more. I’ll add BroadcastChannel to zio, in addition to the regular Queue, and then I’ll also implement PUB/SUB.

1 Like

I just did a lot of work to add proper cancelation support, which is easy at the runtime level but quite difficult in other parts of the code, especially the synchronization primitives, where you don’t want to leak semaphore permits, etc.

Everything is now cancelable: I/O operations are canceled when the coroutine waiting on them is canceled, and your defer/errdefer still work, provided you properly handle error.Canceled. There is a “shield” mode if you need to do cleanup that should not be affected by cancelation. In zio, this is currently mostly used by the synchronization primitives; e.g. Condition.wait needs to re-lock the mutex even if the waiting was canceled, while the code following it can still observe the error.Canceled error.

This mostly follows the cancelation logic of Python’s asyncio.
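
For illustration, here is a minimal sketch of what this looks like from a coroutine’s point of view. It reuses only the Runtime/stream/reader API from the example above; how the task actually gets canceled, and the shield API itself, are not shown in this thread, so don’t read this as the definitive zio surface.

const std = @import("std");
const zio = @import("zio");

fn worker(rt: *zio.Runtime, allocator: std.mem.Allocator) !void {
    var stream = try zio.net.tcpConnectToHost(rt, allocator, "localhost", 8080);
    // Runs even when the coroutine is canceled while blocked on I/O below.
    defer stream.close();

    var read_buffer: [1024]u8 = undefined;
    var reader = stream.reader(&read_buffer);

    // If the coroutine is canceled while this read is pending, the underlying
    // I/O operation is canceled too, the failure propagates out through `try`,
    // and the defer above still runs on the way out.
    _ = try reader.interface.takeDelimiterExclusive('\n');
}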

4 Likes

I’ve started exporting API docs: Zig Documentation

I’ll need to work on the actual content, and also write some guides.

I’ve benchmarked a simple ping-pong pattern, two coroutines communicating over a channel, and Zio was more than 3x faster than Go and Tokio (both single- and multi-threaded). This really only measures context-switching speed plus some synchronization primitives, but it still surprised me: while Zio is single-threaded, it’s not particularly optimized for speed, since I wanted to keep the code readable for now. That makes me confident that the coroutine overhead is minimal, and if I add multi-threaded scheduling, I should still be comparable to Go/Tokio, which is kind of amazing for an experiment like this.
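
The shape of the benchmark is roughly the following sketch, showing just the two coroutine bodies that would be spawned with runtime.spawn as in the earlier example. This is not actual zio code: the zio.Queue type and its push/pop names are hypothetical stand-ins for whatever channel primitive the benchmark uses.

const zio = @import("zio");

const iterations = 1_000_000;

// Hypothetical sketch: zio.Queue and its push/pop methods are assumed names,
// not confirmed zio API.
fn pinger(ping: *zio.Queue(u64), pong: *zio.Queue(u64)) !void {
    var i: u64 = 0;
    while (i < iterations) : (i += 1) {
        try ping.push(i); // wake the other coroutine
        _ = try pong.pop(); // suspend until it answers
    }
}

fn ponger(ping: *zio.Queue(u64), pong: *zio.Queue(u64)) !void {
    var i: u64 = 0;
    while (i < iterations) : (i += 1) {
        try pong.push(try ping.pop()); // echo each message back
    }
}

Each round trip is two context switches plus a channel handoff in each direction, so the loop time is dominated by scheduler and synchronization overhead rather than by any real work, which is exactly what the numbers above compare.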

1 Like

Have you figured out a nice way to do DNS resolution when linking libc?

I have it implemented nicely in userland, but only when not linking libc. When linking libc, it seems like we have to go through getaddrinfo, which doesn’t have a way to integrate with other I/O operations other than by using a utility thread.

According to a blog post I found, glibc doesn’t even implement getaddrinfo in a thread-safe manner. What a pain in the ass.

1 Like

I took the easy route and just use a utility thread for now. I have spawnBlocking for running tasks in a secondary thread pool, so I just use that. In the future, I was planning on integrating c-ares, which has a way to plug into an existing event loop. But then, even libuv just uses a thread pool for getaddrinfo/getnameinfo, so I might keep the current approach.
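
Roughly, the idea looks like the sketch below. This is hypothetical, not zio’s actual API: the thread only names spawnBlocking, so its signature and the assumption that it returns the callee’s result directly to the calling coroutine are guesses here; std.net.getAddressList is the standard-library call that ends up in libc’s getaddrinfo when linking libc.

const std = @import("std");
const zio = @import("zio");

// Hypothetical sketch: spawnBlocking's exact signature and return behavior are assumptions.
fn resolveHost(rt: *zio.Runtime, allocator: std.mem.Allocator, host: []const u8, port: u16) !std.net.Address {
    // std.net.getAddressList goes through libc's getaddrinfo when linking libc,
    // so run it on the secondary thread pool instead of blocking the event loop.
    const list = try rt.spawnBlocking(std.net.getAddressList, .{ allocator, host, port });
    defer list.deinit();
    if (list.addrs.len == 0) return error.UnknownHostName;
    return list.addrs[0];
}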