For the following line of C code:

int e = u.i.se & 0x7fff;

where u.i.se is a uint16_t, is this the equivalent Zig code?

var e: i16 = @bitCast(u.i.se & 0x7fff);
It depends: what do you want the result type to be?

const e: c_int = u.i.se & 0x7fff;

should work. The rule is that if the integer type you want the result to be can represent all possible values of the input, you don't need a cast. In the first example, e is of type int in C; the corresponding Zig type is c_int, and no cast is needed because c_int can represent every possible value of a u16.
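A minimal sketch of the no-cast version (the struct here is a hypothetical stand-in for the C union's u.i.se field, just to make the snippet self-contained):

```zig
const std = @import("std");

// Hypothetical layout standing in for the C union's inner struct.
const Inner = struct { se: u16 };

pub fn main() void {
    const i = Inner{ .se = 0xFFFF };
    // No cast needed: every u16 value fits in c_int (at least 32 bits
    // on common targets), so the widening coercion is implicit.
    const e: c_int = i.se & 0x7fff;
    std.debug.print("{d}\n", .{e}); // 32767
}
```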
If you want to convert to i16 keeping the same bits, then what you wrote works.

Edit: I should mention that for math we usually use @intCast for conversion from u16 to i16. That is used when you are positive the value can be represented by an i16, i.e. when the top bit is not set; 1000 0000 0000 0000 with @intCast would be an error, for example. I assumed you just want to move the bits, since it seems like you are doing bit manipulation, in which case @bitCast reinterprets the same bits as a signed integer.
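To make the @bitCast/@intCast distinction concrete, here is a small sketch. Note that in your particular line the mask 0x7fff already clears bit 15, so the masked value always fits in an i16 and both builtins would give the same result there:

```zig
const std = @import("std");

pub fn main() void {
    const hi: u16 = 0x8000; // sign bit set
    const lo: u16 = hi & 0x7fff; // masking clears bit 15 -> 0x0000

    // @bitCast reinterprets the bits: 0x8000 read as i16 is -32768.
    const as_bits: i16 = @bitCast(hi);
    std.debug.print("{d}\n", .{as_bits}); // -32768

    // @intCast preserves the value; it is checked in safe builds and
    // would trip on 0x8000, but after masking the value always fits.
    const as_val: i16 = @intCast(lo);
    std.debug.print("{d}\n", .{as_val}); // 0
}
```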