Inclusive range without integer overflow

How would you construct an inclusive range without an overflow here?

const std = @import("std");
test {
    const num: u8 = 255;
    for (0..num + 1) |i| {
        try std.testing.expect(i < 256);
    }
}
test3.zig:4:17: error: overflow of integer type 'u8' with value '256'
    for (0..num + 1) |i| {
            ~~~~^~~

Is there something better I can do than this?

const std = @import("std");
test {
    const num: u8 = 255;
    for (0..@as(u9, num) + 1) |i| {
        try std.testing.expect(i < 256);
    }
}

Note that the i < 256 is already an overflow, as the comptime-known 256 will coerce to u8; that will probably be a compile error.
You can’t achieve exactly what you’re looking for, because the last iteration is fundamentally different from the others. From 0 to 254, you add 1 after the iteration; in iteration 255, adding 1 overflows.
Instead, you could use the overflow itself as the check.

const std = @import("std");

test {
    var i: u8 = 0;
    while (true) {
        try std.testing.expect(@as(u9, i) < 256);
        // The loop ends when the u8 counter overflows past 255.
        i, const overflow = @addWithOverflow(i, 1);
        if (overflow == 1) break;
    }
}

i is always usize when using a for loop.


Right, forgot, my bad.

I generally write inclusive ranges as:

test {
    var i: u8 = 0;
    const num: u8 = 255;
    while (true) {
        // Widen before comparing with 256, which does not fit in u8.
        try @import("std").testing.expect(@as(u9, i) < 256);
        if (i == num) break;
        i += 1;
    }
}

Until we get ranged integers as a generic solution to the overflow problem, I would probably also go with a while loop, except I would move the loop condition/increment into the continue expression. That removes the potential footgun where introducing a continue might inadvertently result in an infinite loop:

const num: u8 = 255;
var i: u8 = 0;
while (true) : ({
    if (i == num) break;
    i += 1;
}) {
    try std.testing.expect(@as(u9, i) < 256);
    // shouldProcess and process stand in for the real loop body.
    if (!shouldProcess(i)) continue;
    process(i);
}

Shouldn’t using usize for the loop counter be the right default here?
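A minimal sketch of that approach: since the for-loop capture is a usize anyway, widening num to usize before adding 1 avoids the u8 overflow without introducing an odd u9 intermediate.

```zig
const std = @import("std");

test "inclusive range via usize widening" {
    const num: u8 = 255;
    // Widen to usize before the addition; usize can hold 256, so no overflow.
    for (0..@as(usize, num) + 1) |i| {
        try std.testing.expect(i < 256);
    }
}
```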


It will only constitute a solution if ranges are specified as closed intervals, though.

If they make it into the language specified as semi-open, we’re just going to be dealing with this problem forever.


Whether the ranges are defined with inclusive or exclusive end points is irrelevant. What makes a difference for this code snippet is that the ranged integers proposal will change how arithmetic operators work and make them promote intermediate results to a wider type, meaning that num + 1 will yield a value of “an integer type that can hold a value between 0 and 256 inclusive” instead of overflowing. (I’m not sure the widening is explicitly mentioned in the proposal, but it has been brought up as a key aspect of ranged integers when discussed in compiler team chats.)


An implicit promotion would be equivalent to the explicit promotion in for (0..@as(u9, num) + 1). It’s not exactly the same as the original code: the original works only with bytes, while this version has to use a u16 (due to padding), which is an unnecessary widening in this case.

The original does not work only with bytes; the capture i is always of type usize. Only the while loops in the various replies that explicitly declare i as u8 work purely with bytes.
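This is easy to check directly; a small sketch verifying the capture’s type:

```zig
const std = @import("std");

test "range capture is usize" {
    const num: u8 = 3;
    for (0..num) |i| {
        // The bound coerces to usize, and the capture is always usize.
        try std.testing.expect(@TypeOf(i) == usize);
    }
}
```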

Yeah, but this promotion is unnecessary. Hopefully the compiler can see that i fits inside a byte and optimizes accordingly. When promoting to u9 and comparing it to 256, I think the chances of that optimization become even smaller.

Overflowing is also unnecessary; it’s just a side effect of the way CPUs usually implement math operations.
Non-promotion is also unnecessary.

It’s easy to say something is unnecessary. If you are concerned that you won’t be able to do operations that don’t promote, you are in luck: the existing wrapping and saturating operations won’t promote, and there will be an additional variant of the operators to keep the current behaviour.

The benefit of promotion is you can just write equations with less regard to the type of integers.

u9 also gets that optimisation.
The compiler would simply look at the known maximum value and use the smallest type that fits; Zig’s comptime semantics make it great at passing that information on to the backend. In fact, ranged integers should improve on this specific case.
IDK if that optimisation is done at all, though. I think it’s more likely to use the known maximum to pick smaller registers while keeping the specified type for memory, which itself could be elided for short-lived data.
Changing the type would be quite detrimental to your understanding of the code.

The problem is that overflowing is undefined behavior.

The problem with these is that they may incur additional costs. Also, neither saturating nor wrapping addition would solve the original issue: saturating addition would skip iteration 255, while wrapping addition would produce no iterations at all.
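A short sketch illustrating both failure modes at the u8 limit (+| is saturating addition, +% is wrapping addition):

```zig
const std = @import("std");

test "saturating and wrapping addition at the u8 limit" {
    const num: u8 = 255;
    // Saturating: 255 +| 1 clamps to 255, so 0..255 excludes the last value.
    try std.testing.expectEqual(@as(u8, 255), num +| 1);
    // Wrapping: 255 +% 1 wraps to 0, so 0..0 is an empty range.
    try std.testing.expectEqual(@as(u8, 0), num +% 1);

    var count: usize = 0;
    for (0..num +| 1) |_| count += 1;
    try std.testing.expectEqual(@as(usize, 255), count); // iteration 255 skipped

    count = 0;
    for (0..num +% 1) |_| count += 1;
    try std.testing.expectEqual(@as(usize, 0), count); // no iterations at all
}
```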

The compiler could do that, but we can’t know if it actually will. Compiler optimizations are finicky. In the general case, without benchmarking every line of code, the best we can do is use the weakest operations possible for a given problem, which gives the compiler the most room to optimize.

You will still be able to get the current behaviour, which has no cost in unsafe modes; it will just be moved to new operator variants.

To be clear, I stated that this is the optimisation I think the compiler would do, as opposed to it changing types under your nose.