This is (or will be) relevant: Disallow NaN and inf for `comptime_float` (ziglang/zig issue #21205)
There appears to be no reason given for doing this.
Why burden the language with nonstandard floating point behavior? What’s gained?
I think it comes from a desire for comptime to be independent of the build machine's characteristics, or to behave like an “ideal computer”. Kind of like how comptime has bigint implemented, comptime can also have a software floating-point implementation, or whatever that's called.
We’ve been down that road, the result was IEEE 754.
Every visible characteristic of floating-point calculations has to be accounted for. There’s no Platonic implementation, very much unlike bigints: every arbitrary-width integer library either gives the same results as any other, or it has a bug.
Can’t do that with real numbers, because infinite precision is impossible on finite computers. So that leaves floats. Now there are two options: comply with IEEE 754, or invent your own weird floating-point a couple decades after everyone else stopped making that mistake.
So what’s the goal? Do we want every Zig compiler to get the same results for floating-point calculations at comptime? I think so, hard to call the language standard without that. So. How to get that? Either use 754, or write a deviant specification with as much detail as 754, and everyone has to implement it identically.
So, one sensible choice, in other words. Which does not involve changing the result of `n / 0.0`.
No reasoning was presented for going off the reservation like this. That concerns me. Numeric stability is a useful feature in a programming language!
I agree.
There could be a true analogue to `comptime_int` for floats: `comptime_real`, which would have an exact symbolic representation that is only fully resolved when it needs to be coerced into a runtime type. This would actually be doable even though, as you mention, “infinite precision is impossible on finite computers”, since this only applies when the ‘infinite’ precision isn’t representable with a finite amount of information. Storing all decimal digits of 1/3 is impossible, but storing ‘1’, ‘/’, and ‘3’ isn’t.
I assume that those who designed Zig considered having something like `comptime_real` at first, but due to either the difficulty of implementation or the compile-time cost, they settled for `comptime_float` instead.
However, it feels like `comptime_float` is still clinging to the idea that it is an analogue to `comptime_int` for real number types, and so we get these odd rules that try to pretend that `comptime_float` isn’t basically just an `f128` that can coerce to smaller float types.
Also, one flaw with `comptime_float` that I haven’t seen mentioned is that the results of arithmetic with `comptime_float` can change when larger float types are added to the language. For example, let’s say you divide `1` by a very large `comptime_float`, giving you `0`, then multiply it by the same large `comptime_float`, resulting in `0`. You store this result and use it to dictate the logic of your program. Now, if Zig decided to add `f256` to the language, which should be a non-breaking change, the logic of your program could fundamentally change.
Unless Zig defines truly unchanging semantics for `comptime_float`, it should at least describe how to use `comptime_float` in a way that won’t change when new floats are added, unless you presume that even, say, 40 years from now, there won’t be demand for a larger float type. Of course, these long-term guarantees are not an actual concern right now (there are a million larger breaking changes that will be made before 1.0), but it’s worth thinking about before the language is set in stone.