This is probably supposed to be a different topic… and I know I am going to get a bloody nose for this…
I do understand the value of restricting implicit casts by sign, size and so on, and the impact implicit conversions can have when you assume things work and they simply don't because of an overflow, a sign bit or similar somewhere. And I love the fact that Zig is very consistent with its builtin functions (the casing is somewhat off sometimes, but the @… prefix is a consistent place to start looking).
But, and here comes the bloody nose: the @…cast builtins are verbose, and it often takes a few of them chained together to get where you want to be (when doing arithmetic).
This means that calculations get long-winded very quickly… and the only way to break them up is to introduce multiple lines/variables, which again adds to the labels involved and, sadly, most of the time breaks down the clear intent of the calculation instead of contributing to it.
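To illustrate (a made-up example with invented names, assuming Zig 0.11+ builtin syntax), even a trivial index calculation ends up spread over several lines:

```zig
const std = @import("std");

// Hypothetical example: the formula y * width + x, spread over several
// lines because each conversion step needs its own cast or variable.
fn pixelOffset(x: i32, y: i32, width: u16) usize {
    const w: i32 = width; // u16 widens to i32 implicitly, this one is free
    const idx: i32 = y * w + x;
    return @intCast(idx); // i32 -> usize needs an explicit cast (asserts idx >= 0 in safe builds)
}

pub fn main() void {
    std.debug.print("{}\n", .{pixelOffset(3, 2, 320)});
}
```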
Again, I do understand the rationale… but it makes things tough (unless there is something I miss), to the point where one ponders how primitive casting/basic math got so hard.
I guess it is again a case of “it is what it is”… but unless I am missing some magic, this is one of the issues that, as a newcomer, makes my code harder to read.
Is the right way here to build one’s own inline functions with “generic” types, or what are the options?
Sorry - this was a general reply - not to @ericlang specifically.
Agree for math. Instead of some simple formula you end up with a bunch of lines full of @builtins. And there, instead of the targeted clarity of Zig, we get obscurity.
Even the simplest of loops can already require truncates.
Curious how other people do / solve that.
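For instance (a minimal made-up loop, assuming Zig 0.11+ builtin syntax), even the loop index forces a cast:

```zig
const std = @import("std");

pub fn main() void {
    const items = [_]u8{ 10, 20, 30 };
    var sum: u16 = 0;
    for (items, 0..) |item, i| {
        // `i` is a usize; using it in u16 math needs an explicit cast,
        // even in a loop this trivial.
        const weight: u16 = @intCast(i + 1);
        sum += @as(u16, item) * weight;
    }
    std.debug.print("{}\n", .{sum}); // 10*1 + 20*2 + 30*3 = 140
}
```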
Yeah probably a split is needed for this…
It’s nice but it’s two lines instead of one which feels unnecessary for such a simple function. Can you make it shorter while still keeping it on a single line?
Readable, but not great indeed. It would be nice if the Zig compiler had a flag to let you ignore most casts between similar numeric types, but I don’t foresee that happening. However, maybe something like an operator-and-cast could be implemented somehow, possibly as a function:
const a: u32 = 10;
const b: i64 = 20;
const c = @addCast(u32, a, b);
Not sure things would get much better now that I’ve typed an example out, and it’s likely similar proposals have been made that I haven’t seen.
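For what it’s worth, something in that spirit can already be sketched in userspace (hypothetical helper, assuming Zig 0.11+; the i128 widening is a simplification that won’t cover every operand type):

```zig
const std = @import("std");

/// Sketch of the @addCast idea as a plain function: widen both
/// operands to a common type, add, then cast the result to T.
fn addCast(comptime T: type, a: anytype, b: anytype) T {
    const wa: i128 = a; // mixed sign/width operands both coerce to i128
    const wb: i128 = b;
    return @intCast(wa + wb); // asserts the sum fits in T in safe builds
}

pub fn main() void {
    const a: u32 = 10;
    const b: i64 = 20;
    const c = addCast(u32, a, b);
    std.debug.print("{}\n", .{c}); // 30
}
```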
If Ada-style ranged types make it into the language, you could probably cut down on a significant amount of mid-math casts that have to do with integer sizing, because math functions and builtins could define what their valid input and output ranges are and the compiler could check if that fits in the return type.
e.g.
@mod(a, b) would have return type @Int(0, b), so if b fits in a u16 then so does @mod(a, b); currently the type system has no way to express that.
I didn’t bother to find an example that actually needs it, I just wanted to type it out to get a feel for a possible way of doing it. Replace add with whatever operator would cause issues there, maybe shiftLeft or mod like in the post I responded to, and you’ll see what I mean.
Fully agree, this is a big part of my blog post about writing home computer emulators in Zig:
I think the problem shows up in situations like:
mixed integer/float expressions
mixed sign integer expressions
mixed width integer expressions
Some of those are ‘footgun areas’ in C too, which also have warnings at higher warning levels, so it makes sense that Zig is less sloppy in general (e.g. MSVC has warnings for implicit float/int conversions, and gcc/clang have warnings for implicit sign conversion, after enabling -Wsign-conversion).
In my emulator I was mostly wrestling with mixed-width integer expressions (Zig requires casting on narrowing conversions - which on its own also makes sense).
In that blog post I tried to explore some directions for how Zig’s mixed-width expressions might require less casting without giving up correctness (e.g. not accidentally losing bits during narrowing). It basically comes down to:
expression results could be narrowed to the smallest type required to hold the result, if the expression allows the compiler to figure this out (e.g. the type of x in an expression const x = y & 15; would be u4, since that’s guaranteed to be enough to hold the result, and not whatever type y is)
expression items could be promoted to the widest type used in the expression
…this sounds like a contradiction though (first promote to the widest type, then narrow to the smallest type that can hold the result?)
I would still require explicit casting between int/float and signed/unsigned though.
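To make the narrowing bullet concrete (current behavior vs. the idea above; assuming Zig 0.11+ builtin syntax):

```zig
const std = @import("std");

pub fn main() void {
    const y: u32 = 0x1234;
    // Today the type of `y & 15` is u32 (the operand type), so storing
    // it in a u4 needs a cast even though the mask guarantees it fits:
    const x: u4 = @intCast(y & 15);
    // Under the proposal, `y & 15` itself would already have type u4
    // and the cast would disappear.
    std.debug.print("{}\n", .{x}); // 0x1234 & 15 == 4
}
```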
PS: it’s also not like Zig is the only language with this problem; Rust is the same (I think even slightly worse). At least now I understand why C has integer promotion, which IMHO is the right idea if it had a couple of restrictions, like not promoting across signs, and if it promoted to the natural word size (64 bits) instead of being stuck on 32-bit ints.
It seems like it’s only necessary to use the @intCast function when using the expression as a return value? Maybe also as an argument? Either way, it seems like a userspace function like modCast(T, a, b) would be better than a ton of extra builtin functions.
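A userspace modCast along those lines might look like this (hypothetical helper, assuming Zig 0.11+; b is assumed nonzero):

```zig
const std = @import("std");

/// Sketch of the modCast(T, a, b) idea: compute @mod in the operands'
/// type, then cast the (provably smaller) result down to T.
fn modCast(comptime T: type, a: anytype, b: @TypeOf(a)) T {
    return @intCast(@mod(a, b)); // asserts the result fits in T in safe builds
}

pub fn main() void {
    const a: u64 = 100;
    const b: u64 = 7;
    const r = modCast(u16, a, b);
    std.debug.print("{}\n", .{r}); // 100 mod 7 == 2
}
```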
The builtin casting functions used to take the type as the first parameter, but this was changed in (I believe) 0.11.0 to the current form. I think this was largely to take advantage of Result Location Semantics and remove a parameter the compiler can infer. As this conversation has illustrated, I think the end result has been mixed: it has introduced the need for the intermediate steps discussed above.
It could be argued that this is a good thing. One liners are notorious for being clever and hard to understand. With Zig’s goal of being more explicit, I find that splitting this over a few lines makes it more readable and understandable.
So looking at the table under this section of Documentation - The Zig Programming Language, it seems like const c: u16 = x treats x the same as @as(u16, x), which makes sense based on what I was observing. It seems strange to me that a return x in a function that returns u16 wouldn’t also treat x the same way, though. To be fair, it isn’t documented to behave like that, but it’s still not what I would expect. If the continuation of an expression is the return of a function, it seems natural for the result location semantics of the expression to follow those of the function.
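A minimal check one can compile to compare the two forms (behavior may differ across Zig versions; in the versions I’m aware of, a plain return does coerce, so the surprise presumably involves a more complex expression):

```zig
const std = @import("std");

fn widen(x: u8) u16 {
    return x; // coerces u8 -> u16 via the function's return type
}

pub fn main() void {
    const x: u8 = 200;
    const c: u16 = x; // same implicit widening as @as(u16, x)
    std.debug.print("{} {}\n", .{ c, widen(x) });
}
```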
I was thinking about eliminating @floatFromInt and allowing automatic coercion of integers to floats, while still requiring @intFromFloat to go the other way. This would also apply when mixing float and int operations (resulting in floating point values).
Rationale here is that pretty much all floating point operations are understood to round, so an explicit cast is redundant.
On the other hand, there’s an argument to be made against this. While coercing an f32 to an f64 does not require a cast, going the other way does require a @floatCast, because it can result in loss of precision. That is precisely what also happens when coercing an integer to a float. So it’s pretty hard to justify one of those without the other.
Maybe it could work with integers with small enough bit sizes. For instance, f64 supports the entire integer range of i32 (and u32), so it seems reasonable to support implicit coercion of i32 to f64 - no information can be lost. For 32 bit floats that would allow operating on integers up to 24 bits - pretty reasonable if you’re in pixel space.
It’s not the whole story, but I think that could go a long way towards making game logic code less tedious without compromising the “no information lost by accident” design goal of the numeric type system.
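As a sketch of the difference this would make (made-up pixel-space example, assuming Zig 0.11+ builtins):

```zig
const std = @import("std");

pub fn main() void {
    const px: i32 = 640;
    const scale: f32 = 1.5;
    // Today, mixing int and float requires an explicit two-step cast:
    const w = @as(f32, @floatFromInt(px)) * scale;
    // Under the proposal, `px * scale` could coerce px implicitly, at
    // least where no information can be lost (e.g. any i32 into an f64,
    // or integers up to 24 bits into an f32).
    std.debug.print("{d}\n", .{w}); // w == 960
}
```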