Handle clients with std.Io.Batch

How can I implement something like this with std.Io.Batch? I’m confused about its usage. This is just a draft, but it would be helpful to understand how it works.

const std = @import("std");
const assert = std.debug.assert;

pub const Event = struct {
    from: std.posix.fd_t,
    type: Type = .none,

    const Type = union(enum) {
        none,

        accept,
        close,
        // contains the length of data
        // to read afterwards
        data: u64,
    };
};

pub fn wait(poll_fds: []std.posix.pollfd) error{ PollFailed, UnknownEvent }!Event {
    _ = std.posix.poll(poll_fds, -1) catch return error.PollFailed;

    const accept_fd = poll_fds[0].fd;

    for (poll_fds) |*poll_fd| {
        if (poll_fd.revents == 0) continue;
        const revents = poll_fd.revents;
        poll_fd.revents = 0;

        assert((revents & std.posix.POLL.NVAL) == 0);

        const event_type: Event.Type = if (poll_fd.fd == accept_fd)
            .accept
        else if ((revents & std.posix.POLL.IN) != 0) blk: {
            var buf: [8]u8 = undefined;
            const n = std.posix.read(poll_fd.fd, &buf) catch 0;
            break :blk if (n != 0) .{ .data = @bitCast(buf) } else .close;
        } else if ((revents & (std.posix.POLL.ERR | std.posix.POLL.HUP)) != 0)
            .close
        else
            return error.UnknownEvent;

        return .{
            .from = poll_fd.fd,
            .type = event_type,
        };
    }

    unreachable;
}
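
For context, the loop driving `wait` would look something like this (a rough sketch; the listener setup and the per-client bookkeeping are illustrative, not final):

```zig
pub fn main() !void {
    const addr = try std.net.Address.parseIp("127.0.0.1", 8080);
    var server = try addr.listen(.{ .reuse_address = true });
    defer server.deinit();

    // Index 0 is the accept socket; in the real code, client fds
    // would be appended to / removed from a growable list.
    var poll_fds = [_]std.posix.pollfd{
        .{ .fd = server.stream.handle, .events = std.posix.POLL.IN, .revents = 0 },
    };

    while (true) {
        const event = try wait(&poll_fds);
        switch (event.type) {
            .accept => {}, // accept() the client, add its fd to poll_fds
            .data => |len| {
                _ = len; // read `len` bytes from event.from
            },
            .close => {}, // drop event.from from poll_fds
            .none => {},
        }
    }
}
```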

This code detects I/O readiness and converts it into a higher-level event.
On the other hand, std.Io.Batch is a low-level API for running multiple read/write operations concurrently.

Therefore, std.Io.Batch is not a good fit as a replacement for this code.
Instead, I would prefer using something like std.Io.async().
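
Conceptually, something along these lines (untested; the exact `io.async` signature is still in flux in the dev std, and `handleClient`/`acceptLoop` are placeholders of my own):

```zig
fn handleClient(io: std.Io, stream: std.net.Stream) void {
    _ = io;
    defer stream.close();
    // read requests / write responses on this one connection
}

fn acceptLoop(io: std.Io, server: *std.net.Server) !void {
    while (true) {
        const conn = try server.accept();
        // spawn one task per connection; in real code the future
        // has to be stored somewhere and awaited/cancelled on shutdown
        var future = io.async(handleClient, .{ io, conn.stream });
        _ = &future;
    }
}
```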

Which is 1) incorrect (you want concurrency, not asynchrony), and 2) wasteful (at least until we get stackless coroutines).

It should become more or less feasible once networking is fully migrated into Operations. However, even then it comes with caveats: you have to pre-allocate an array for all operations upfront (Batch is not resizable). You can try to work around that either by resizing it yourself (which would require cancelling all of the in-flight operations first) or by booting the least-active connections out - which has its own issue: you cannot cancel individual operations, only all of them at once.

Batch uses indexes into the storage buffer, so it can be grown; you just need to initialise the new memory correctly and add it to the unused list.

Just make sure you don’t do that while awaiting.

Isn’t this a use case for std.Io.Select? Correct me if I’m wrong - this is still new stuff for me.

This is what I meant by “resizing it yourself”. And even then, that invalidates the internal state of the batch. Look no further than std.Io.Threaded’s implementation of batchAwaitConcurrent: once the operations exceed a certain number, it allocates a poll buffer sized to the Batch storage and stores it as userdata. If you manually resize the Batch storage, the implementation won’t adjust that buffer. This means you still cannot avoid cancelling the entire batch.


Select will give similar semantics, but Batch should be more efficient.
