How do we do this actually?
var u: u6 = 34;
const i: i6 = -3;
u += i; // not possible
std.debug.print("{}\n", .{u});
The problem is the type mismatch between i6 and u6. You could throw in an @intCast, but it's going to fail at runtime when trying to cast a negative value to an unsigned type.
You could do:
const std = @import("std");
pub fn main() void {
var u: u6 = 34;
const i: i6 = -3;
if (i < 0) u -= @intCast(@abs(i)) else u += @intCast(i);
std.debug.print("{}\n", .{u});
}
What do you want your code to do in the case where the numbers really are incompatible?
Is keeping the += operator syntax important?
I'd probably be inclined to do something like
u = @intCast(@as(i7, u) + i);
maybe with a helper function based on @typeInfo and @Type that constructs the target type.
It's not pretty, but mixing number types in Zig rarely is (for mostly good reason).
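A sketch of such a helper could look like this. `WideSigned` is a hypothetical name, and the `@typeInfo` field spelling (`.int`) assumes a recent Zig version (0.14-era); older versions spell it `.Int`:

```zig
const std = @import("std");

/// Hypothetical helper: returns a signed type one bit wider than T,
/// wide enough to hold every value of both a uN and an iN.
fn WideSigned(comptime T: type) type {
    const info = @typeInfo(T).int;
    return @Type(.{ .int = .{
        .signedness = .signed,
        .bits = info.bits + 1,
    } });
}

pub fn main() void {
    var u: u6 = 34;
    const i: i6 = -3;
    // Widen to i7, add, then cast back down. This still panics at
    // runtime if the result doesn't fit in u6 (e.g. a negative sum).
    u = @intCast(@as(WideSigned(u6), u) + i);
    std.debug.print("{}\n", .{u}); // prints 31
}
```

This just automates the `@as(i7, u) + i` pattern so the widened type tracks the operand type instead of being hard-coded.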
Had this answer based on @IntegratedQuantum's comment, in a similar topic:
This seems doable!
Yes I saw some discussions in general on ziglang github and here on the forum.
All these @@@@@ do not make it easier.
But I must say (as I mentioned before here somewhere): I have never coded an overflow after a successful compilation. Extremely safe but quite unclear code.
But:
Operations on 2 values of the same type are just as unsafe as my example.
TL;DR: I would try to avoid mixing signed and unsigned math and instead heavily prefer using signed integers for nearly everything.
…and that's why I prefer signed integers for arithmetic, and unsigned only for bit twiddling or modulo math (e.g. where I specifically expect a wraparound across zero) - so it comes down to "almost always signed integers".
Now some people will say: what about indices / dimensions / sizes? For instance, an array can't have a negative size, an index can't point to a negative location in an array, and a texture object can't have negative width/height, so they should be unsigned integers!
And that's exactly where I say: no, f*ck it, those values should also be signed integers!
Because at some point you'll want to use signed integer arithmetic on those values. For instance you might want to grow/shrink a width by adding a distance value that might be positive or negative - or the same "distance manipulation" for an array index.
This means that range checks need to be ((i >= 0) and (i < size)) now, but guess what, every compiler worth its salt reduces this to a single unsigned comparison against size down in the assembler code, with no additional runtime cost since CPU registers are "sign-agnostic" anyway.
(also FWIW, I have enabled -Wsign-conversion in my C projects to essentially get the same strict behaviour as in Zig, i.e. no implicit conversion in mixed-sign expressions - because that's a typical footgun area, and this helps a lot in API design and to avoid dangerous mixed-sign expressions in the first place - but as a result of this strictness, nearly all my integers are signed now).
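The signed range check mentioned above can be sketched like this. `inBounds` and `inBoundsUnsigned` are hypothetical names; the second function spells out the single unsigned comparison the optimizer is expected to produce (valid whenever size is non-negative):

```zig
const std = @import("std");

/// The readable signed form: ((i >= 0) and (i < size)).
fn inBounds(i: i32, size: i32) bool {
    return i >= 0 and i < size;
}

/// The single-comparison form: reinterpreted as unsigned, any negative i
/// becomes a huge value and fails the `< size` test automatically.
fn inBoundsUnsigned(i: i32, size: i32) bool {
    return @as(u32, @bitCast(i)) < @as(u32, @bitCast(size));
}

pub fn main() void {
    std.debug.print("{} {}\n", .{ inBounds(-1, 10), inBounds(5, 10) }); // false true
    std.debug.print("{}\n", .{inBoundsUnsigned(-1, 10)}); // false
}
```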
We don't. One of my projects is absolutely littered with this:
i = @intCast(@as(i64, i) + op.l);
Which is updating an instruction pointer by a signed label. A totally normal thing to do. It makes me a bit sad when I look at it.
Yes, yes, this only works if i fits in u63. It does.
Andrew has recently expressed interest in making the rules around arithmetic more ergonomic, at the cost of some complexity of understanding (the current rule is dead simple, and simple is, ceteris paribus, good). Look forward to seeing where that goes.
Yes… I want deltas.
Still I cannot think of a real solution.
I remember C# is an absolute mess regarding numbers.
currently I use something like:
if (white) q = q + delta else q = q - delta;
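Assuming q and delta are both signed (here i32, an assumption for illustration), the branch can collapse into a sign flip:

```zig
const std = @import("std");

pub fn main() void {
    var q: i32 = 100;
    const delta: i32 = 7;
    const white = false;
    // With signed operands, no branching on assignment is needed:
    // negate the delta instead of choosing between two statements.
    q += if (white) delta else -delta;
    std.debug.print("{}\n", .{q}); // prints 93
}
```

With unsigned q this doesn't work, which is part of the "prefer signed for arithmetic" argument earlier in the thread.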
It seems like there are two main schools of thought: either type your integers for the operations they will be part of, or type your integers for the values they will store.
It's understandable to not want to follow the latter practice of constraining your integers as much as possible, as bit-defined integers can only crudely approximate their true bounds. This makes it slightly pointless to represent ranges like [0, 1000] as u10 outside of a packed struct, since you will still need to document the "true" bounds outside of the type. Following this same logic, your example of using signed integers for indices makes sense, as it's not like every usize index will always be valid/in-bounds anyway, and a -1 index is no more invalid than an arr.len index.
However, I think that if/when #3806 is implemented, this will all change. Once you can express integers of any range, the "proper" way of typing integers becomes unambiguous: integers should be typed with their exact lower and upper bounds.
But there is, of course, the concern of optimization. If ranged integers are implemented before the performance pitfalls of non-byte-aligned integers are fixed, I fear that the feature will be dismissed by many, leading to even more inconsistent code between those who use ranged integers and those who use bit-defined integers.
I haven't dug too deep into the difficulties of making something like a u7 as performant as a u8, but if it truly can't be done, another option would be to allow integers to have a "backing" type in addition to their range, and have integers essentially compile the same as their backing type, but with additional range assertions.
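That "backing type plus range assertions" idea can be roughly emulated in userspace today. `Ranged` is an invented name and this is only a sketch of the concept, not the proposed language feature:

```zig
const std = @import("std");

/// Hypothetical ranged integer: the value is stored in a plain i32
/// (the "backing type"), so arithmetic compiles like ordinary i32 math,
/// and the range is enforced by assertions instead of the bit width.
fn Ranged(comptime min: i32, comptime max: i32) type {
    return struct {
        value: i32,

        const Self = @This();

        fn init(v: i32) Self {
            // Range check happens here, not in the type's representation.
            std.debug.assert(v >= min and v <= max);
            return .{ .value = v };
        }

        fn add(self: Self, delta: i32) Self {
            return init(self.value + delta);
        }
    };
}

pub fn main() void {
    const Percent = Ranged(0, 100);
    var p = Percent.init(40);
    p = p.add(25);
    std.debug.print("{}\n", .{p.value}); // prints 65
}
```

A real language feature could move those assertions into safety-checked builds only, the same way integer overflow checks work now.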
Currently I am porting my experimental Rust chess program to Zig.
I use packed unions for Square, PieceType and Piece.
But thinking about making everything a raw i32.
Or at least have a copy for checking how that works CPU wise.
I have no clue about the performance implications of using u6, u3, u4 instead of raw ints.
I actually also wonder about any performance implications of using "odd-width" unsigned integers (with wraparound). I'm using such odd-width integers in my emulators quite a bit and haven't noticed anything odd.
Specifically for unsigned integers with wraparound I would not expect any performance problems, since the math is exactly the same compared to the next higher "regular-width" integer (e.g. u20 should be exactly the same code as u32). The upper bits 20 to 31 would be modified but should be ignored.
One could even argue that storing a u20 to memory in an unpacked struct shouldn't require masking out the upper bits, i.e. the u20 could just be a "subview" of a full u32.
Once overflow checks need to happen it's different of course; I think those can be done more efficiently on "regular-width" integers.
const.epsq = Square.fromU6(@as(u6, @intCast(@as(i8, @intCast(m.from)) + Direction.NORTH.relative_dir(C).toI8())));
I encountered this code somewhere which is basically
+8 or -8
Quite complicated for a simple math operation!
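A small helper can hide that cast chain. `addOffset` is an invented name, and the types here (u6 square index, i8 offset, mirroring the snippet above) are assumptions for illustration:

```zig
const std = @import("std");

/// Hypothetical helper: add a signed offset to a small unsigned value.
/// Widening to i8 mirrors the original snippet; note that i8 can itself
/// overflow for extreme inputs (63 + 127), where a wider intermediate
/// type would be safer. Panics if the result doesn't fit in u6.
fn addOffset(base: u6, offset: i8) u6 {
    return @intCast(@as(i8, base) + offset);
}

pub fn main() void {
    const from: u6 = 12;
    std.debug.print("{}\n", .{addOffset(from, 8)}); // prints 20
    std.debug.print("{}\n", .{addOffset(from, -8)}); // prints 4
}
```

The call site then reads as plain "square plus direction" instead of three nested casts.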
Would introducing an @implicitMath {} syntactic sugar block for wrapping mixed-type math-heavy code be a good or bad idea?
Control over that would greatly simplify things for the user.
In some cases I would definitely like it.
I think it would be a very heavy task for the Zig authors, with lots of side effects and possible bugs for the user. I doubt that something like that will ever make it into this very explicit language.
Anyway: I have not enough knowledge to say something really useful about it.