This is an unfortunate problem that affects many programming languages… C++ being another one of them. The issue is that floatMin returns the **smallest positive normal value for the given floating-point type**. If this were just a matter of convention, I wouldn’t have an issue with it, but it’s literally inconsistent with other things already in the standard library.

Here are two examples to show this:

First, notice the difference between minimum integer and minimum float:

```
const i = std.math.minInt(i32);
const f = std.math.floatMin(f32);
std.debug.print("Min int: {}\nMin float: {}\n", .{i, f});
```

This prints the following:

```
Min int: -2147483648
Min float: 1.17549435e-38
```

As you can see, minInt returns the most negative number the integer type can represent, while floatMin returns the smallest positive (normal) value the float type can represent. This seems like a pretty clear inconsistency to me.

Second, this means that @min will behave differently than expected in many cases…

```
const x: f32 = -1.0;
const y = std.math.floatMin(f32);
const z = @min(x, y);
std.debug.print("Result of @min compare: {}\n", .{ z });
```

This prints:

```
Result of @min compare: -1.0e+00
```

So in other words, @min means “the more negative of the two arguments”, while floatMin means “smallest positive representable value”. This bothers me because it means “min” does not have a consistent interpretation even among operations on floating-point numbers.

I think floatMin should be renamed to floatSmallest so Zig doesn’t repeat the same mistake C++ made. I genuinely believe that Zig is trying to have fewer footguns than the C family of languages; this seems like one gigantic footgun.

Thoughts?