Multidimensional arrays with zero size

When you use comptime, you tell the compiler to evaluate an expression at compile time. If you don’t, you leave the choice up to the compiler, based on the context and the types. For example, if the types flowing into an expression are comptime-only types, it has to be evaluated at compile time to some degree, but you haven’t forced the compiler to fully evaluate the result at compile time.
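
To illustrate the difference, here is a minimal sketch of how I understand it (byteSize is just a name I made up for the example):

const std = @import("std");

// type is a comptime-only type, so the call has to be resolved at
// compile time to some degree: T must be known before code generation.
fn byteSize(comptime T: type) usize {
    return @sizeOf(T);
}

pub fn main() void {
    const a = byteSize(u32); // evaluated only as far as the types force it
    const b = comptime byteSize(u32); // guaranteed fully evaluated at comptime
    std.debug.print("{} {}\n", .{ a, b });
}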

Keep in mind that @compileError works like this:

This function, when semantically analyzed, causes a compile error with the message msg.

There are several ways that code avoids being semantically checked, such as using if or switch with compile time constants, and comptime functions.
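
For instance, here is a minimal sketch of that rule (check is a made-up example): because the condition is comptime-known, the untaken branch is never semantically analyzed, so the @compileError in it does not fire:

fn check(comptime ok: bool) void {
    if (ok) {
        // taken branch: analyzed as usual
    } else {
        // for check(true) this branch is never semantically analyzed,
        // so the @compileError below does not trigger
        @compileError("only fires for check(false)");
    }
}

test "comptime-known branch skips analysis" {
    check(true); // compiles; check(false) would be a compile error
}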

So by forcing the whole expression to be evaluated at comptime, it works:

const std = @import("std");

pub fn typeDim(comptime T: type, comptime level: usize) ?usize {
    // Forcing comptime here makes the optional comptime-known, so the
    // orelse branch (and its @compileError) is only reached when the
    // recursive result is actually null.
    return comptime if (level == 0) switch (@typeInfo(T)) {
        .array => |info| info.len,
        .vector => |info| info.len,
        else => null,
    } else typeDim(std.meta.Elem(T), level - 1) orelse
        @compileError("length not comptime known");
}
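
For example (using the definition above), this should pass at compile time:

comptime {
    std.debug.assert(typeDim([3][4]u8, 0).? == 3); // outer dimension
    std.debug.assert(typeDim([3][4]u8, 1).? == 4); // inner dimension
    std.debug.assert(typeDim(u8, 0) == null); // not an array or vector
}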

I think if you don’t force the compiler, it technically has a choice: analyze the expression fully in depth, or just start generating the corresponding runtime code and only analyze what it needs along the way. However, Zig tries to be lazy/incremental about these things.

Basically, if the compiler were more eager, it could automatically check in depth that the function can be fully evaluated at comptime. But because comptime operates lazily, we reach and analyze the @compileError before the compiler could realize that the code doesn’t actually depend on runtime arguments. I think if the compiler were more eager in this case, there would be other cases where it evaluates too eagerly, so instead the programmer is required to explicitly state that the expression needs to be evaluated at comptime (to avoid the compile error).
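
For contrast, here is (as I understand it) the shape of the version that fails: the same function without the forced comptime (typeDimLazy is only a name for this sketch). The recursive call’s ?usize result is not treated as comptime-known, so the orelse is analyzed as a potential runtime branch, and the @compileError fires as soon as the function is instantiated with level > 0:

// Fails to compile for any instantiation with level > 0: the optional
// result of the recursive call is handled like a runtime value, so
// both operands of the orelse get semantically analyzed.
pub fn typeDimLazy(comptime T: type, comptime level: usize) ?usize {
    return if (level == 0) switch (@typeInfo(T)) {
        .array => |info| info.len,
        .vector => |info| info.len,
        else => null,
    } else typeDimLazy(std.meta.Elem(T), level - 1) orelse
        @compileError("length not comptime known");
}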

At least that is how I currently reason about this; I am not completely sure whether it is fully accurate.

I think the compiler is just written in a way where a plain usize is more likely to be evaluated at comptime (more deeply) by default, without asking for it, while a ?usize may default to being handed to code generation, which emits a runtime branch on the optional.

I think if you really want to know the details, you would have to look at how the compiler’s comptime interpreter is implemented.

This post is related, especially this part of it: