Zig Roadmap 2026

Just to make sure we’re on the same page: I’m assuming the two vtables you are referencing are

  1. std.Io
  2. std.Io.Writer / std.Io.Reader

Usually an application will have only one std.Io implementation, chosen at the beginning of the program in main. In such a case, each function pointer in that vtable has a restricted set of possible values of size one. This means those calls become direct calls, not virtual calls, before any optimizations are applied. If an application employs more than one implementation, then a virtual call is the price paid to avoid effectively generating two copies of the entire application. It’s a small price to pay when you consider how expensive I/O operations are in general. By comparison, Allocator is the interface that deserves more scrutiny.
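
To make that concrete, here’s a rough sketch of what choosing the implementation in main might look like. The std.Io.Threaded name and its init/io() accessors are assumptions taken from the roadmap talk, not confirmed API, and may not match what actually ships:

const std = @import("std");

pub fn main() !void {
    // Assumed API from the talk: a thread-pool-backed Io implementation.
    var threaded: std.Io.Threaded = .init(std.heap.page_allocator);
    defer threaded.deinit();

    // The one and only Io implementation for this program is chosen here,
    // so every vtable pointer downstream has exactly one possible target
    // and the calls can be devirtualized into direct calls.
    const io = threaded.io();
    try run(io);
}

fn run(io: std.Io) !void {
    _ = io; // library code sees only the interface, never the concrete type
}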

As for streaming interfaces, the key point here is that the buffer is in the interface, not the implementation. This means that the hot path of all the relevant functions does not call into the vtable. Such calls only happen when the buffer is full. For instance, take a look at writeByte:

/// Calls `drain` as many times as necessary such that `byte` is transferred.
pub fn writeByte(w: *Writer, byte: u8) Error!void {
    while (w.buffer.len - w.end == 0) {
        const n = try w.vtable.drain(w, &.{&.{byte}}, 1);
        if (n > 0) {
            w.count += 1;
            return;
        }
    } else {
        @branchHint(.likely);
        w.buffer[w.end] = byte;
        w.end += 1;
        w.count += 1;
    }
}
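
For comparison, the caller side looks roughly like this (a sketch based on the 0.15-era File.writer API as I understand it; exact names may differ between versions). The caller owns the buffer and hands it to the interface, so writes hit the in-buffer fast path and only flushing goes through the vtable:

const std = @import("std");

pub fn main() !void {
    var buf: [256]u8 = undefined;

    // The buffer lives in the generic std.Io.Writer part of File.Writer,
    // not in the file-specific implementation.
    var stdout_writer = std.fs.File.stdout().writer(&buf);
    const w: *std.Io.Writer = &stdout_writer.interface;

    try w.writeByte('!'); // in-buffer fast path, no virtual call
    try w.flush(); // drain through the vtable only when the buffer must be emptied
}
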
20 Likes

Finally I found the time to watch the video.

And I think what’s planned for Zig with the new IO is a very good implementation idea that matches my vague thoughts regarding automatic async.
I’m looking forward to seeing this in different real-world scenarios.
I’m still somewhat concerned that the new IO might let function coloring creep back in in ways we don’t yet foresee, but it’s definitely worth a try.
Let’s explore how coding with it feels and looks.

I don’t mind that it’s a breaking change.
Zig is a language in its youth and will probably see other breaking changes as it evolves. Other libs change all the time, which requires changes in my own code. A language change is just one more annoyance, so what?

3 Likes

I’m working on a blog post that recaps the new I/O stuff and I feel confident in stating that function coloring has been completely defeated by this new approach.

Post should be out in a couple of days at worst.

Here’s a relevant passage from it:

One of the main goals of Zig is to enable code reusability, which is always a touchy subject when it comes to async I/O.

The famous “What Color is Your Function?” blog post by Bob Nystrom explains very well the issues that come from the virality of async functions. In other languages it’s common to see a blocking and an async variant of the same thing (e.g. a database client), each maintained by different authors.

Zig has solved this problem since the beginning, as I previously explained in “What is Zig’s Colorblind Async / Await”. Thanks to Zig’s clever (and unorthodox) usage of async and await, a single library can work optimally in both synchronous and asynchronous mode, achieving the goal of code reusability, a property preserved with this new iteration of async I/O.

But this new approach pushes color blindness even further: previously the source code would be free from the virality of async/await, but at runtime the program was still forced to use stackless coroutines, which have a viral calling convention. Now io.async and Future.await can be used with a variety of execution models, from blocking to stackless coroutines, fully freeing you from any form of virality.
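
In code, the shape looks roughly like this (a hedged sketch: fetchOne is a made-up placeholder, and the exact io.async / Future.await signatures may still change before release):

const std = @import("std");

fn fetchOne(io: std.Io, path: []const u8) void {
    _ = io;
    _ = path;
    // a real implementation would perform its reads through `io` here
}

fn fetchBoth(io: std.Io) void {
    // The two calls may run concurrently or strictly in order, depending on
    // which Io implementation the program chose in main; the source code is
    // identical either way.
    var a = io.async(fetchOne, .{ io, "a.txt" });
    var b = io.async(fetchOne, .{ io, "b.txt" });
    a.await(io);
    b.await(io);
}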

With this last improvement Zig has completely defeated function coloring.

13 Likes

I had these initial concerns too, but after wrapping my head around the concept, I feel like my concerns were unfounded. It doesn’t seem like it will “color” functions any more or less than the current convention of passing allocators to the functions that need them.

The IO concept seems more akin to Go channels than to the coloring patterns we get from Rust, C#, TypeScript, etc. No one considers a Go function “colored” simply for accepting a chan argument, nor does accepting one force a cascade of changes through the functions that call it.
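
As a trivial sketch of that parallel (illustrative only, not real library code): a function that does I/O takes io the same way a function that allocates takes an allocator, and callers simply forward the parameter.

const std = @import("std");

fn loadGreeting(gpa: std.mem.Allocator, io: std.Io, name: []const u8) ![]u8 {
    _ = io; // a real implementation would do its reads/writes through `io`
    return std.fmt.allocPrint(gpa, "hello, {s}", .{name});
}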

3 Likes

Quite frankly, I don’t think anyone is capable of saying that right now, since there isn’t any long-term, real-world experience with it.

Sure, it has the potential to do so, but whether it actually does is something only time will tell.

I’ve played with it a bit and it feels like a real solution to colouring. At the very least it reduces the problem significantly.

2 Likes

Function coloring is part of the lore by now, and I don’t aim to taboo the entire concept. But it’s worth remembering that it is an expressive metaphor from a blog post, not, say, a well-defined concept in theoretical computer science.

The thing to avoid is the situation where someone finds a nice library and then realizes: oh, it’s from Planet Async, and their codebase doesn’t live on that planet.

Solving that problem at the interface level seems basically correct to me, so I find the whole thing quite promising. I do expect the chattering classes at the Orange Website and elsewhere to earnestly, vigorously, tediously debate whether Zig has “solved” the “function coloring problem”, which, lacking any objective criteria by which to declare it solved, will allow them to reach either conclusion as they prefer.

11 Likes

The HN thread is a complete mess; people get lost well before they come anywhere close to discussing what should even be considered function coloring.

That’s interesting, considering that async isn’t even one of those niche low-level concepts that only systems programmers know about.

Oh well, such is the current state of our industry.

11 Likes