When you convert a negative value to an unsigned integer type, C reduces it modulo one more than the maximum value of that type (2^32 for a 32-bit unsigned integer), so -n becomes 2^32 - n.
Since unsigned arithmetic in C also wraps around on overflow rather than trapping, adding such a "negative" unsigned number (such as -5, which becomes 2^32 - 5) to a positive unsigned number (such as 10) still gives a result that makes sense (such as 5).
You can imitate this behaviour by using ~@as(u32, 0) (which is equivalent to (uint32_t)-1 in C) together with the wrapping addition/subtraction operators +% and -%.
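A minimal sketch of what that might look like (the u32 width and the concrete values here are just for illustration):

```zig
const std = @import("std");

test "imitating C's unsigned wrap-around" {
    // ~@as(u32, 0) sets all 32 bits, i.e. the same value as (uint32_t)-1 in C.
    const minus_one: u32 = ~@as(u32, 0);
    try std.testing.expectEqual(@as(u32, 0xFFFF_FFFF), minus_one);

    // "Adding -5" in C's unsigned arithmetic is really adding 2^32 - 5 and letting
    // it wrap, which the wrapping operators +% and -% reproduce.
    const minus_five: u32 = @as(u32, 0) -% 5;
    const ten: u32 = 10;
    try std.testing.expectEqual(@as(u32, 5), ten +% minus_five);
}
```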
The problem is that the enum is anonymous, so it’s essentially the same as #define MZ_DEFAULT_COMPRESSION -1. In C, the size and signedness of enums are implementation-defined (except that the type must be able to represent all enum values). Had the enum been defined like typedef enum mz_compression_levels { ... } mz_compression_levels and had the function taken mz_compression_levels level_and_flags, Zig would have no problem using the translated code.
C and Zig have fundamentally different ideas about implicit conversions, so there’s no way around this without changing the original C header. You will have to use @bitCast(MZ_DEFAULT_COMPRESSION) in Zig whenever you pass that enum constant where an unsigned integer is expected.
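Roughly like this; note this is only a sketch with hypothetical stand-ins (mz_compress_sketch, and the assumption that the translated constant ends up as a signed 32-bit value), and it uses the single-argument @bitCast from Zig 0.11+ (older versions write @bitCast(u32, x)):

```zig
const std = @import("std");

// Hypothetical stand-ins for the translated constant and for a C function taking flags.
const MZ_DEFAULT_COMPRESSION: i32 = -1;

fn mz_compress_sketch(level_and_flags: u32) u32 {
    // Placeholder body; the real function lives in the C library.
    return level_and_flags;
}

test "passing the -1 constant where a u32 is expected" {
    // Reinterpret the bits of the signed -1 as an unsigned all-ones value.
    const flags: u32 = @bitCast(MZ_DEFAULT_COMPRESSION);
    try std.testing.expectEqual(@as(u32, 0xFFFF_FFFF), mz_compress_sketch(flags));
}
```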
Using -1 is usually a shortcut for ‘set all bits to 1’ without having to worry about the actual width of the integer type, and the underlying type of an enum in C is implementation-defined anyway.
Some compilers (like GCC and Clang) have a -Wsign-conversion flag to warn on implicit signed/unsigned conversions, and at least in my own C projects I have that warning enabled (unfortunately it isn’t part of any of the common warning sets like -Wall or -Wextra).
Both 0 and 1 are comptime_int, so the result of 0 -% 1 is a comptime_int. A comptime_int doesn’t underflow, so the expression simply evaluates to -1. Only then is an attempt made to coerce it into a u32.
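A short way to see that, still assuming u32 as the target type:

```zig
const std = @import("std");

test "give one operand a type before wrapping subtraction" {
    // 0 -% 1 on its own stays a comptime_int and evaluates to -1, which can't
    // be assigned to a u32. Typing one operand makes the subtraction happen in u32.
    const wrapped: u32 = @as(u32, 0) -% 1;
    try std.testing.expectEqual(@as(u32, std.math.maxInt(u32)), wrapped);
}
```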
Has worked for me.
I think the issue is that 0 is a comptime_int by default, and Zig can’t do a bitwise NOT on a comptime_int because it doesn’t have a specific number of bits.
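A quick illustration of that, again assuming u32:

```zig
const std = @import("std");

test "bitwise NOT needs a fixed-width integer" {
    // const x = ~0; // rejected: a comptime_int has no fixed number of bits to flip
    const x = ~@as(u32, 0); // fine: the operand is known to be 32 bits wide
    try std.testing.expectEqual(@as(u32, 0xFFFF_FFFF), x);
}
```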
I don’t know, I think it depends. If x is a bitmask, I would argue that expressing it with a bitwise operation communicates intent more clearly.
In the general case I think it’s pretty subjective. Personally, when both take the same amount of space, I prefer writing it out myself instead of calling a function, because I can verify the code is correct just by looking at it instead of jumping to the function definition.