So C has a bunch of implicit conversion rules. They’re supposed to make arithmetic expressions easy to write, and they do. But they’re a notorious source of bugs: the actual rules are quite complex, the results can be surprising, and it’s far too easy to hit undefined behavior.
Zig made a different choice: it has exactly one rule.
Type coercions are only allowed when it is completely unambiguous how to get from one type to another, and the transformation is guaranteed to be safe.
This is much better, and it can also be extremely annoying. You’ve hit the case I find most annoying: addition or subtraction between ‘peer’ signed and unsigned types.
This is legal:
fn unsignedMinus(a: usize, b: usize) usize {
    return a - b;
}
But this is not:
fn signedUnsignedPlus(a: usize, b: isize) usize {
    return a + b;
}
That’s despite the fact that both of these functions hold the same hazard: the returned value might be negative. The second one also poses a risk of overflow, but of course it’s easy to run that risk with two usize as well.
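For what it’s worth, here’s a sketch of one way to spell the mixed-sign version out by hand so it compiles (the function name is mine, and @intCast is safety-checked, so the hazards don’t go away, they just panic in safe builds):

fn signedUnsignedPlusExplicit(a: usize, b: isize) usize {
    if (b < 0) {
        // Convert the magnitude explicitly, then subtract. The original
        // hazard is still here: the result can go "negative" and overflow.
        // (Negating b also panics for minInt(isize) in safe builds.)
        return a - @as(usize, @intCast(-b));
    }
    return a + @as(usize, @intCast(b));
}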
Just because it’s annoying doesn’t mean I disagree with it. Adding more rules to make writing code more ergonomic adds back some of the complexity we’re trying to get away from.
But I have this function in probably a majority of my libraries:
inline fn cast(T: type, v: anytype) T {
    return @intCast(v);
}
I think this is central enough to doing useful things with integer values in Zig that it should probably be a builtin. Before result location semantics, @intCast used to take two arguments, and this is exactly how it worked. Writing @as(usize, @intCast(v)) is very heavyweight and ends up obscuring the equation; arithmetic bugs aren’t type conversion bugs, but they’re still bugs.
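To make the comparison concrete, here’s the same addition spelled both ways (a sketch; it assumes the cast helper above is in scope, and the conversion is still safety-checked, so a negative b panics in safe builds):

fn plusWithHelper(a: usize, b: isize) usize {
    // The arithmetic stays visible.
    return a + cast(usize, b);
}

fn plusWithoutHelper(a: usize, b: isize) usize {
    // The conversion does most of the talking.
    return a + @as(usize, @intCast(b));
}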