The inconsistency of "floatMin" can be a footgun

This is an unfortunate problem that affects many programming languages, C++ being another one of them. The issue is that floatMin returns the smallest positive normal number for the given floating-point type. If this were just a matter of convention, I wouldn’t mind, but it’s literally inconsistent with other things already in the system.

Here are two examples to show this:

First, notice the difference between minimum integer and minimum float:

const std = @import("std");

const i = std.math.minInt(i32);
const f = std.math.floatMin(f32);
std.debug.print("Min int: {}\nMin float: {}\n", .{ i, f });

This prints the following:

Min int: -2147483648
Min float: 1.17549435e-38

As you can see, minInt returns the most negative number that the integer can represent. Meanwhile, floatMin returns the smallest positive value that it can represent. This seems like a pretty clear inconsistency to me.

Second, this means that @min will behave differently than expected in many cases:

const std = @import("std");

const x: f32 = -1.0;
const y = std.math.floatMin(f32);
const z = @min(x, y);
std.debug.print("Result of @min compare: {}\n", .{z});

This prints:

Result of @min compare: -1.0e+00

So in other words, @min means “the more negative of the two arguments” while floatMin means “the smallest positive value the type can represent”. This bothers me because it means min does not have a consistent interpretation even among operations on floating-point numbers.

I think floatMin should be renamed to floatSmallest so that Zig doesn’t repeat the same mistake C++ made. I genuinely believe that Zig is trying to have fewer footguns than the C family of languages - this seems like one gigantic footgun.



Probably should be called “floatEpsilon”. And probably needs “floatEpsilonDenormal”, as well.


Unfortunately, that name is already taken:

/// Returns the machine epsilon of floating point type T.
pub inline fn floatEps(comptime T: type) T {
    return reconstructFloat(T, -floatFractionalBits(T), mantissaOne(T));
}

I suggest floatSmallest because of the comment above the function:

/// Returns the smallest normal number representable in floating point type T.
pub inline fn floatMin(comptime T: type) T {
    return reconstructFloat(T, floatExponentMin(T), mantissaOne(T));
}

As a general programming term, just epsilon is as ambiguous and overloaded as minimum value in this context and could mean multiple things. For example, double.Epsilon in C# returns the smallest subnormal number while DBL_EPSILON in C or Number.EPSILON in JavaScript returns the machine epsilon (the difference between 1 and the smallest representable number greater than 1).
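A small Zig sketch (assuming the current std.math names, including floatTrueMin and floatEps) makes the two possible meanings of “epsilon” concrete:

```zig
const std = @import("std");

pub fn main() void {
    // Machine epsilon: the gap between 1.0 and the next representable f64,
    // matching C's DBL_EPSILON (~2.22e-16).
    std.debug.print("floatEps(f64):     {}\n", .{std.math.floatEps(f64)});
    // Smallest positive subnormal, matching C#'s double.Epsilon (~5e-324).
    std.debug.print("floatTrueMin(f64): {}\n", .{std.math.floatTrueMin(f64)});
}
```

The two values differ by hundreds of orders of magnitude, so a name that could plausibly mean either is a real hazard.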

I agree with the view that minimum means “closest to negative infinity” and that smallest means “lowest magnitude”, so I would suggest the following renamings:

  • floatTrueMin(f64) → smallestSubnormal(f64)
  • floatMin(f64) → smallestNormal(f64)
  • floatMax(f64) → largestNormal(f64)
  • floatEps(f64) → machineEpsilon(f64)

This communicates intent more precisely, and because subnormal, normal and machine epsilon don’t make sense in the context of integers we can also drop the “float” prefix.


I agree that your wording is precise. Personally, though, I’m in favor of having a floatMin and floatMax, because that’s what programmers are going to reach for when they need the most negative or most positive representable number.

I think of the case where someone wants to provide an initial value to a function that finds the minimum number of a floating point range.

So let’s say I want to use the built-in @min for my implementation. Well, I’d think to myself: “to make sure that the initial value always loses to whatever is in my array, I’ll set the initial value to max.”

The problem is, doing it the other way around (finding the max and initializing to min) doesn’t work, because floatMin is a tiny positive number rather than the most negative one.

For me, it’s the existence of builtin functions whose names imply the opposite meaning that really sells the issue. At the very least, I think that inconsistency should be addressed.
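A minimal sketch of that pairwise-name trap (the findMin/findMaxBroken helpers are made up for illustration; they assume std.math.floatMin/floatMax from a recent Zig):

```zig
const std = @import("std");

// Initializing a running minimum with floatMax works: every input is <= it.
fn findMin(values: []const f32) f32 {
    var min = std.math.floatMax(f32);
    for (values) |v| min = @min(min, v);
    return min;
}

// The "pairwise matched" version is broken: floatMin(f32) is a tiny
// *positive* number, so every all-negative input loses to the initial value.
fn findMaxBroken(values: []const f32) f32 {
    var max = std.math.floatMin(f32);
    for (values) |v| max = @max(max, v);
    return max;
}

pub fn main() void {
    const negatives = [_]f32{ -3.0, -1.5, -20.0 };
    // One would expect -1.5, but this prints floatMin(f32) instead.
    std.debug.print("{}\n", .{findMaxBroken(&negatives)});
}
```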

I agree that floatMin is a footgun, however I have a tangential point:
I think you should use negative infinity for your example.

Language Reference: Float Literals

const std = @import("std");

const negative_inf = -std.math.inf(f32);

pub fn findMax(values: []const f32) f32 {
    var max = negative_inf;
    for (values) |v| {
        max = @max(max, v);
    }
    return max;
}

pub fn example(values: []const f32) void {
    std.debug.print("findMax({any}): {}\n", .{ values, findMax(values) });
}

pub fn main() !void {
    example(&.{});
    example(&.{ -20, -40, 4, 2232323.2323, -0.0000012121, -565656.265 });
}

findMax({  }): -inf
findMax({ -2.0e+01, -4.0e+01, 4.0e+00, 2.23232325e+06, -1.21209995e-06, -5.6565625e+05 }): 2.23232325e+06

I like negative infinity because if you use it to reduce an empty collection to its maximum, you still get negative infinity back, whereas with a finite sentinel you might think there had been an element at the most negative value.
It also makes it easy to convert the sentinel into an explicit error, if negative infinity isn’t expected in later parts of the program.
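For example, the sentinel check could be turned into an explicit error at the call site. A sketch (the wrapper and its error name are made up; it assumes -inf never appears as a legitimate input):

```zig
const std = @import("std");

fn findMax(values: []const f32) f32 {
    var max = -std.math.inf(f32);
    for (values) |v| max = @max(max, v);
    return max;
}

// Hypothetical wrapper: treat the -inf sentinel as an explicit error.
fn findMaxChecked(values: []const f32) error{EmptyInput}!f32 {
    const max = findMax(values);
    if (max == -std.math.inf(f32)) return error.EmptyInput;
    return max;
}

pub fn main() void {
    if (findMaxChecked(&.{})) |max| {
        std.debug.print("max = {}\n", .{max});
    } else |err| {
        std.debug.print("error: {}\n", .{err});
    }
}
```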


I take your point about negative infinity - I think it’s a good one. Still, the ambiguity between operations called min and initial values called min (which can mean different things) remains my main contention. Otherwise, yes, this is a workaround for the case of generic initial values.

That said, we still need to make it clear to people using min/max algorithms that this pitfall exists. In the average case, it’s really easy to pairwise-match names in one’s head and end up with the wrong result.
