Ah I see, I haven’t seen them sold as anything but named integers so I must’ve missed that hype. Certainly many ways to skin this cat, for sure.
yes! that is the point: guarding the callsites.
I don’t think heavyFunction has necessarily been called… you are telling the compiler something about the output of heavyFunction, then swallowing that output. The compiler is free not to call it if it doesn’t change the semantics of the program, e.g. if heavyFunction is pure and the mode is ReleaseFast. Whether that actually happens is another question.
But is guarding the callsite so bad?
const builtin = @import("builtin");

inline fn debug() bool {
    return comptime builtin.mode == .Debug; // to taste
}

if (debug()) assert(…);
Personally, I rather like the explicitness of this and I don’t find it too onerous to type out.
This is a view I am gradually starting to agree with: that indices are signed numbers. The argument that they are unsigned because their range is “never negative” is not very convincing, since the arithmetic we need for indices is addition and subtraction, including with negative offsets, and there is almost no need for bitwise operations.
The current requirement for indices to be usize directly hinders for loops over descending ranges. I believe this actually indicates a problem with the paradigm of expressing indices as usize, rather than that descending range loops are unworthy of a native expression.
As for meeting the requirement that ‘the index is never negative’, I hope that can be addressed with ranged integers.
I disagree. I rarely if ever use signed integers. Zig also has saturating and wrapping operators, unlike other languages. Something like ranged ints is what I could get behind, though.
For backwards iteration, this is a pattern from C that adapts well to Zig:
const std = @import("std");

fn loopBackwards(comptime T: type, slice: []T) void {
    for (0..slice.len) |i| {
        const item = slice[slice.len - i - 1];
        std.log.debug("item: {any}", .{item});
    }
}
However, for anything else I find the explicitness of the while loop a benefit. My only problem, as regards formatting, is the leaky scope issue (when you don’t want to come up with new aliases for i every time). Ideally zig fmt would use this style by default, instead of breaking it up onto many lines:
{var i: usize = 10; while (i > 0) : (i -= 1) {
// loop
}}
I rarely if ever use signed integers
I’m also strictly in the ‘almost-always-signed’ camp - but only after enforcing sign-conversion warnings in my C code, and every time I see a new API (like Vulkan) use unsigned types for things like texture dimensions or buffer sizes I roll my eyes, because by now we really should know better.
Unsigned integers should only be used for bit twiddling and modulo-math, but not for anything else (it’s also arguable whether unsigned overflow should even be an error instead of always wrapping around).
The problem with unsigned integers for dimensions, sizes, indices is that quite often I want to use signed integers to manipulate them. For instance I might have a universal ‘size-change’ value which can increase or decrease a size, or a ‘distance’ value which can also be positive or negative, same with an offset on an array index, it often makes sense that such an offset is signed, e.g. even when valid sizes are always positive, valid distances/deltas are not.
In C without the signed/unsigned conversion warning this isn’t much of a problem, because narrow values in an expression are promoted to signed int anyway, so using unsigned values but then modifying them by a signed amount ‘just works’ (but is a footgun). In more modern languages such things require casting, but the better solution is actually to not mix signed and unsigned types in the same expression. And that’s why ‘almost-always-signed’ makes a lot more sense than ‘almost-always-unsigned’ (besides, the distance from unsigned 5 to unsigned 3 is still -2, and not an ‘underflow error’).
Using unsigned for sizes was important in the 32-bit era (2 GB vs 4 GB started to matter at some point). Today, losing the top-most bit to the sign isn’t a problem, both 63 and 64 bits are ‘incredibly huge’.
Performance-wise, using a signed integer as array index isn’t a problem either: the ((i >= 0) && (i < size)) range check is optimized to a single unsigned (i < size) check in the compiler output, since down on the machine-code level integers are sign-agnostic. Signed vs unsigned only matters in one specific situation: when a ‘narrow’ value needs to be widened to more bits (e.g. whether to replicate the sign bit or extend with zero bits).
I completely understand this point; I used to have the same thought.
There are several reasons that changed my mind, one of which is seeing @floooh’s discussion and thinking it makes sense.
Another reason is consistency with [*]T. I noticed that an index can actually be seen as a kind of multi-pointer that uses an integer instead. A multi-pointer natively supports adding or subtracting a signed number, and the result of subtracting two multi-pointers is also a signed number rather than an unsigned one.
Perhaps this is not the only reasonable model. I think one possibly more reasonable approach is to design an independent integer type for indexing, which is unsigned at the underlying level but natively supports arithmetic with a signed offset type.
Pointer offsets can be tricky with pointer provenance, but on most targets you don’t really have to care about that, aside from possible compiler optimization passes.
There are C APIs such as fseek that do the same thing. I don’t think that’s a good API though. On the platform I’m working on, the fseek equivalent is:
pub const whence_t = enum(u8) {
    set,
    forward,
    backward,
    from_end,
};

pub const seek_error_t = error{
    unexpected,
};

extern fn seek(file: file_t, offset: u64, whence: whence_t) seek_error_t!u64;
(I did not want to have this API at all, but I’m implementing libc for this platform as well and this function is useful for porting existing programs… though I may emulate this completely in libc later.)
I’m not saying I don’t ever use signed integers, they are useful for some formulas, but it’s pretty rare that I even have to use one. Having had to work with Java in the past has also made me more sure that not having unsigned integers is bad.
I actually wonder how the planned ranged-integer-types feature comes into the signed-vs-unsigned discussion, because then I could have a signed integer type for valid texture dimensions that is in the range [1..max_i32], and that could be signed - or at least compatible with ranged types that might have the range [-max_i32 .. max_i32].
AFAIK with ranged ints the only case where you need to care about this is when you are truncating to a shorter range, and that would be an explicit @intCast or @truncate. Aside from that, it would be as if you were using bigints.
There’s some good insight from Matthew in the proposal https://github.com/ziglang/zig/issues/3806#issuecomment-1597701162 I personally would love to see ranged ints tried even if it doesn’t work out.
Isn’t it interesting that the feedback of the often dismissed peanut gallery is largely the same as that of a developer that’s written a whole game and used zig extensively?
IME the ‘peanut gallery’ likes to latch on to a single topic (a while ago there was no HN thread about Zig without some peeps complaining about unused variables being errors; at least this seems to have died down). It’s the same with WASM: each time it comes up, the main issue seems to be that WASM doesn’t have direct DOM access (usually raised by people who obviously never actually used WASM in real-world projects).
Hello Andrew, I’m not sure where the line of “who actually used Zig” is, or if you’d be interested in reading my feedback too. I am the author of a few Zig OSS projects like Ava, tokamak, fridge, napigen, and I’m also using Zig in our startup, where we have our core backend implemented in it.
So far, I have been using Zig since Jul 31, 2022 (when I had hand injury and I’ve decided to re-implement some CSS parsing from Rust to Zig and I was able to do that single-handed in a week). I think that alone is a huge testament - sure, it was rewrite, but I knew next to nothing about Zig at that point.
What I love:
- auto-formatting based on trailing comma
- being able to introspect & construct new types imperatively, raise compile errors, …
- payload-free errors (being able to upcast to anyerror is way more useful to me than being able to pass payload occasionally)
- anytype duck-typing
- lazy-compiled functions
Things I don’t like:
- having to come up with unique identifiers because something is already in the scope. It’s frustrating and easily breaks my focus entirely. It also makes refactoring harder, as I can’t just move whole portions of code around because there might now be a name clash.
- math casts (the code is too noisy, and I am reluctant to touch such parts)
- compiler forcing me to change const/var when I want to try out something (noted by others too)
- comptime not able to see docblocks (there’s some github issue with reasoning about what people could misuse this for but the truth is that this is the only reason why I can’t easily generate openapi.json along with the respective doc comments)
- no varargs for printing/logging
- weird dependency loops (hopefully fixed already?)
- unclear how @setEvalBranchQuota() works, and if there’s a reliable way to make it a multiple of some N so that people don’t have to set it to a million
- unused vars (autofix helps, but it’s still noisy and this holy war against warnings is just weird)
- files as structs - I understand it makes the compiler simpler, but I don’t see how it could be useful in my everyday code, and whenever I did use it, I’ve regretted it later.
- not being able to call (self: *T) methods on some owned value that is on the stack (builder pattern)
I really like all your arguments. I think they are well thought through and insightful, and I absolutely agree with you on a lot of your points. Namely, I think Zig should really implement:
- ranged integers/floats, because they improve type safety, explicitness, and DX a lot.
- distinct types with method support. This would be huge; I dream of being able to do this:
pub const Counter = alias(i8) {
    pub const init: Counter = 0;

    pub fn increment(c: Counter) Counter {
        return c +% 1;
    }

    pub fn decrement(c: Counter) Counter {
        return c -% 1;
    }
};

test Counter {
    const counter: Counter = .init;
    try std.testing.expect(counter.increment() == 1);
    try std.testing.expect(counter.increment().decrement() == 0);
}
This isn’t the best example obviously, but in certain cases I think this would be really nice. Distinct or Alias types in general would be huge in terms of usability, readability and DX. I know this looks a lot like non-exhaustive enums, but I think they have really poor DX, and add a level of friction I don’t think makes sense
- I also think that numeric types and their aliases should be able to “overload” operators. I most certainly don’t ever want to see something like std::cout << "foo" << std::endl;, I’m not a psycho, but I’m sure there’s a good balance hiding underneath all the controversy, something that would just feel right with numeric types, without going all in and allowing it everywhere.
- I also believe that assert should be treated differently. I’m not a fan of macros in general, but I do think assertions are useful. I’m probably in the minority, but I really like builtins, I like how they stick out, and @assert() would personally be very convenient.
- I also really don’t like anytype. I’m happy that Zig is moving away from it in the std, and I do understand why it’s here, but it feels suboptimal.
Having said that, I don’t disagree with the rest of your points, but I also want to share some of my feedback around using Zig. You are a game developer, and I work in embedded for the defense industry, so our perspectives are different. I’d like to say that most of the other things that you dislike or find annoying as a gamedev (which are totally valid, there’s no denying that) are actually a godsend for people like me on the other side, where correctness is actually the most important aspect. My understanding is that in gamedev iterative development is key: being able to move fast, break fast, and recover quickly is important, and part of the process.
And I would agree that Zig doesn’t really favor that style, especially in math-heavy / cast-heavy workflows. But in my case, I’ve single-handedly rewritten a firmware at work from C++/Python to pure native C, which is now about 38k loc, with a lot of quite complex and rigid/constrained parts, because the hardware is really pushed to its limit. The reason I did that rewrite to C was to fix correctness and architectural mistakes, while also improving performance, legality (since technically we are only allowed to ship binaries because of export control), compile time, and portability. And secretly because I was planning a rewrite in Zig long term, because I knew this is the language that would give me the best chance at delivering the absolute best firmware possible, without losing my sanity or my ability to sleep at night.
And although I’m not done (I’m well into 16k loc currently), I have to say most of the things you point out as annoying, the things that get in the way, are actually really beneficial and helpful for cases where code correctness is central.
I hope you don’t take this as a dismissal of your opinion and the good points you bring. I’m just trying to share that Zig might indeed not be the “best” for gamedev, but that all the things that make it not the best at gamedev are actually really useful for code where correctness matters a lot more than “agility”.