Compile-time numeric literal calculation

This code works:

const x: f32 = 1 + 0.1;
_ = try stdout.print("{d}\n", .{x});

But why does this code produce a compile-time error?

const x: f32 = 0.1 + 1;
_ = try stdout.print("{d}\n", .{x});
error: float value 0.100000 cannot be coerced to type 'comptime_int'

I have no idea why this is, but if I had to guess, I'd say that in the first example the plus operator adds 1 (an int) to 0.1 (a float), which results in no loss of precision, since a float can represent that int exactly. In the second example, adding an int to the float would force the float to be rounded to an integer, hence the error. You can probably avoid this by writing 1.0 + 1.0. Why exactly 0.1 + 1 isn't coerced to the wider float type, especially when x is declared as f32, I don't know. It could be a feature to force the user to be more explicit with 1.0 + 1.0, or it could be a bug. ¯\_(ツ)_/¯
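For what it's worth, here is a minimal self-contained sketch of that workaround (the `main` wrapper and `stdout` setup are my additions so the snippet compiles standalone). Writing both operands as float literals keeps the whole expression in `comptime_float`, so no integer coercion is ever attempted:

```zig
const std = @import("std");

pub fn main() !void {
    const stdout = std.io.getStdOut().writer();

    // Both operands are comptime_float literals, so the sum stays a
    // comptime_float and coerces cleanly to the declared f32.
    const x: f32 = 0.1 + 1.0;
    _ = try stdout.print("{d}\n", .{x});
}
```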


This is a bug in how peer type resolution works in the bootstrap compiler.
I just tested it with the self-hosted compiler, and both versions work as expected.
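If you are stuck on a compiler where the operand order trips peer type resolution, one sketch of a workaround (assuming the same `stdout` setup as the snippets above) is to pin the result type explicitly with `@as`, so the integer literal coerces to `f32` regardless of which side it appears on:

```zig
const std = @import("std");

pub fn main() !void {
    const stdout = std.io.getStdOut().writer();

    // @as coerces the float literal to f32 up front; the comptime_int 1
    // then coerces to f32 as well, independent of operand order.
    const x: f32 = @as(f32, 0.1) + 1;
    _ = try stdout.print("{d}\n", .{x});
}
```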