Runtime integer type behaves like comptime_int

When a runtime integer type with a comptime-known value is used in a context where that value is usable, it behaves like it's a comptime_int, even though it's not supposed to?

this:

const std = @import("std");

pub fn main() !void {
    const i: u128 = 5;
    const a = [_]u8{ 1, 2, 3, 4, 5, 6, 6 };
    std.debug.print("{}", .{a[i]});
}

compiles and runs

but this:

const std = @import("std");

pub fn main() !void {
    const i: u128 = 5;
    std.debug.print("{}", .{foo(i)});
}

fn foo(i: u128) u8 {
    const a = [_]u8{ 1, 2, 3, 4, 5, 6, 6 };
    return a[i];
}

gets a compile error.
I tested with a couple of integer types, including signed ones, and the behaviour is consistent.

Shouldn't Zig honour type soundness even if it knows that, in this case, it would work with the given value?

In this case, the compiler doesn’t seem to be executing the function at compile time. If you add an explicit comptime to the call it works:

    std.debug.print("{}", .{comptime foo(i)});

Yeah, my point is that when the value is known at comptime, Zig ignores the integer type I explicitly gave.

I just used a function to force the value to not be comptime-known, since Zig won't run functions at comptime unless you tell it to, or unless they take a comptime-only parameter.

What I'm asking is whether this is intentional.
If it isn't, is there any reason not to change it to honour types?
If it is intentional, why?

Ah, ok, so the question is whether this behaviour is intentional, and if so, why.

My initial reaction is that the current behavior seems reasonable. Can you come up with a possible footgun from this behavior?


You change your code at some point in a way that makes a previously comptime-known value runtime-known, and now your code doesn't compile.
Not a big footgun, more of an annoyance.
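
Roughly this kind of thing (a sketch of the hypothetical refactor):

const std = @import("std");

pub fn main() !void {
    var i: u128 = 5; // was `const`, so the value used to be comptime-known
    i += 0; // mutated somewhere, so it stays runtime-known
    const a = [_]u8{ 1, 2, 3, 4, 5, 6, 6 };
    std.debug.print("{}", .{a[i]}); // now a compile error: indexing wants a usize
}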

But my opinion is that it shouldn't compile in the first place, because I specified a type and then used it in an invalid way; I don't see a reason why this should be different at comptime.

P.S. I did check that it reports a compile error if the value can't be safely cast.
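
What I checked was along these lines (a sketch; the exact error wording may differ):

const std = @import("std");

pub fn main() !void {
    const i: i32 = -1; // comptime-known, but not representable as a usize
    const a = [_]u8{ 1, 2, 3, 4, 5, 6, 6 };
    std.debug.print("{}", .{a[i]}); // rejected at compile time
}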

a[@as(usize, @intCast(i))]

works too!

The @as there is unnecessary, since indexing always takes a usize, and yes, that compiles and runs.
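
i.e. the runtime version works with just the bare cast (a sketch):

fn foo(i: u128) u8 {
    const a = [_]u8{ 1, 2, 3, 4, 5, 6, 6 };
    return a[@intCast(i)]; // result type usize is inferred from the indexing context
}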

But my point is that if the value of i is comptime-known, then regardless of the type of i (assuming it is an integer type), the compiler basically inserts a cast for you.

I think that's bad. I'm asking if it's a bug, or if there is a justification for that behaviour.


Probably not a bug, no. I would say the justification here is that comptime is partial specialization, so everything which can be completed at comptime, is.

So the known value propagates through the indexing, and the whole thing is reduced to:

std.debug.print("6", .{});

Which is what we want.

This is ideal IMHO. For example, this doesn’t compile either:

test "won't compile either" {
    const i: usize = 12; // valid _type_, but...
    const a = [_]u8{ 1, 2, 3, 4, 5, 6, 6 };
    std.debug.print("{}", .{a[i]});
}

It doesn't fail a safety check at runtime; it says:

error: index 12 outside array of length 7
    std.debug.print("{}", .{a[i]});
                              ^

I would describe this as having the identity '5', or '12', rather than as behaving like a comptime_int. For example, it can survive compilation to become runtime-known, which a comptime_int cannot. The behavior is consistent with comptime-known information propagating and specializing code during compilation.
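
For instance, a comptime_int is rejected the moment you try to make it runtime-known at all; something like this won't compile:

test "comptime_int can't become runtime-known" {
    var x = 5; // error: variable of type 'comptime_int' must be const or comptime
    x += 1;
}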

So it’s fine that if you change your code to something which can’t be specialized away, then the rules around what types are allowed for indexing kick in. You want that to be a compile error, and I don’t see a reason why it should become one when it doesn’t have to be.


I am not talking about partial specialisation; I agree it absolutely should do that.

I don't like that it is ignoring that the type I specified can't be used the way I am using it, even if the value I provided could be if it were the right type.

But isn't that exactly how comptime_int behaves?

It’s how everything comptime-known behaves. Although I think I just found something which is a bug:

test "probably shouldn't be legal" {
    const i: isize = -3;
    std.debug.print("result is {d}\n", .{14 / i});
}

Which compiles, and evaluates to -4, despite the documentation saying:

Signed integer operands must be comptime-known and positive.

It chooses truncation instead of floor; I think that's just wrong. However, if the result were exact, I think this should compile, since every rounding choice would give the same result.
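
For comparison, spelling both options out with the builtins (a quick sketch):

const std = @import("std");

test "trunc vs floor" {
    const i: isize = -3;
    try std.testing.expectEqual(@as(isize, -4), @divTrunc(14, i));
    try std.testing.expectEqual(@as(isize, -5), @divFloor(14, i));
}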

I'm curious what your ideal outcome would be here. Should this example fail at runtime with a bounds check? The type is correct, so that's what would happen for a runtime-known usize 12.

If the answer is “no, it should ‘behave like a comptime int’ here”, then why throw a type error when a cast would deterministically succeed? What are we getting out of that behavior?

The docs say this about type coercion:

Type coercions are only allowed when it is completely unambiguous how to get from one type to another, and the transformation is guaranteed to be safe.

I claim this is definitely true of a u128 with the value 5 coerced to usize, and definitely not true of an arbitrary u128, which must therefore be cast to usize, not type-coerced.
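
To illustrate the distinction directly (a sketch; `five` and `rt` are just made-up names):

test "comptime-known u128 coerces to usize" {
    const five: u128 = 5;
    const idx: usize = five; // OK: comptime-known and representable
    _ = idx;

    var rt: u128 = 5;
    rt += 0; // keep it runtime-known
    // const bad: usize = rt; // compile error: would need @intCast
    _ = rt;
}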

As an aside, I approve of the decision to require disambiguation between floor and truncation for signed denominators, but really dislike that it uses a builtin.

That's caused me some headaches translating arithmetic to Zig, because the function call has the wrong precedence, and even once it's all figured out, it is completely illegible. The problem is real; the solution is Procrustean.
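
To give a flavour of it, a contrived sketch (the names are hypothetical): the mathematical expression (a*b - c) / d with floor semantics has to be restructured around the builtin:

fn floorDiv(a: isize, b: isize, c: isize, d: isize) isize {
    // with the proposed (hypothetical) operator this would read:
    //     return (a * b - c) /- d;
    return @divFloor(a * b - c, d);
}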

I’d like to see /% for truncation and /- for floor, I think the mnemonics are obvious and it would solve a real problem which I’ve repeatedly had.