Without the - page_size the reason would be simple:
If n is also a usize then you wouldn’t be able to test for n > maxInt(usize), since maxInt(usize) + 1 is either zero or undefined (it has overflowed).
With the - page_size that explanation doesn’t make sense anymore though, and I guess > vs >= wouldn’t matter.
But apart from that, on a 64-bit system this check is more theoretical in nature anyway, I guess, since no CPU uses the full 64-bit address range. For instance, x86-64 CPUs typically only have a 48-bit virtual address range, and AArch64 between 48 and 56 bits (all according to Wikipedia: 64-bit computing - Wikipedia).
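The point above about n > maxInt(usize) being untestable can be seen on a scaled-down stand-in for usize (8 bits here, purely for illustration):

```python
# M plays the role of maxInt(usize) on a miniature 8-bit "usize".
# No representable n can exceed M, so the test n > M is vacuous, and
# computing M + 1 wraps to zero in unsigned arithmetic (in Zig it
# would be safety-checked illegal behavior instead of wrapping).
M = (1 << 8) - 1

print(all(n <= M for n in range(M + 1)))  # → True: n > M is never true
print((M + 1) & M)                        # → 0: maxInt + 1 wraps to zero
```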
The next allocation happens after page_size bytes (for alignment reasons).
The area of page_size - 1 bytes must be usable, which means that n’s maximum value must be: maxInt(usize) - (page_size - 1)
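That bound can be illustrated on a scaled-down usize, assuming the allocator rounds n up with the usual (n + page_size - 1) & ~(page_size - 1) trick (an assumption for illustration, not quoted from the allocator):

```python
# Round n up to a multiple of the page size on a miniature 8-bit
# "usize" to see where the computation would wrap. M and P are
# scaled-down stand-ins for maxInt(usize) and page_size.
M = (1 << 8) - 1
P = 16

def aligned_len(n):
    # The addition wraps like unsigned usize arithmetic.
    return ((n + P - 1) & M) & ~(P - 1)

print(aligned_len(M - (P - 1)))      # → 240: largest n that still fits
print(aligned_len(M - (P - 1) + 1))  # → 0: wrapped around, overflow
```

With M = 255 and P = 16, the largest safe n is 240 = maxInt - (page_size - 1); one byte more and the rounding wraps past zero.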
The following if statements are all correct:
if (! (n <= maxInt(usize) - (page_size - 1))) return null;
if (n > maxInt(usize) - (page_size - 1)) return null;
if (n > maxInt(usize) - page_size + 1) return null;
if (n - 1 > maxInt(usize) - page_size) return null;
if (n >= maxInt(usize) - page_size) return null;
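Equivalences like these are easy to get wrong, so a brute-force check over every value of a scaled-down usize (8 bits, with an assumed page_size of 16) can verify which forms really reject the same values:

```python
# Brute-force the guard variants on a miniature 8-bit "usize".
# M and P are scaled-down stand-ins for maxInt(usize) and page_size.
M = (1 << 8) - 1
P = 16

def c1(n): return not (n <= M - (P - 1))
def c2(n): return n > M - (P - 1)
def c3(n): return n > M - P + 1
def c4(n): return ((n - 1) & M) > M - P  # n - 1 wraps like unsigned usize
def c5(n): return n >= M - P

# The first three are equivalent everywhere; c4 matches them for n >= 1
# (n = 0 wraps in the subtraction); c5 rejects two extra values.
diffs = [n for n in range(M + 1) if c5(n) != c2(n)]
print(diffs)  # → [239, 240]
```

On this stand-in the last form rejects n = 239 and n = 240 even though they are representable after rounding, which matches the correction in the follow-up below.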
You are right that this makes no sense. My reasoning was flawed.
By reversing the condition you get what is allowed: n < maxInt(usize) - page_size
That means that an entire page is reserved.
The actual allocated size is increased to a multiple of page_size for alignment reasons, but the maximum increase is page_size - 1.
Perhaps there is another reason for that condition that I cannot see; otherwise you are right about the equality in the condition.
OK, I see now: since the idea is to bail early, we check that we have enough bytes to begin with. Maybe we would also want to consider the requested alignment, because it’s one more constraint that could lead to returning early…
Just thinking out loud, I’m mostly navigating code here and there for now…
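One way to read that suggestion, as a purely hypothetical sketch (the function name, the bound, and the 8-bit scaling are all assumptions, not the allocator's actual code):

```python
# Hypothetical early-bail that also accounts for a requested alignment:
# a larger alignment shrinks the usable range further, so it can reject
# a request that the page_size check alone would let through.
M = (1 << 8) - 1   # 8-bit stand-in for maxInt(usize)

def bail_early(n, alignment):
    # Reject if rounding n up to `alignment` could wrap around.
    return n > M - (alignment - 1)

print(bail_early(240, 16))  # → False: still representable
print(bail_early(240, 32))  # → True: stricter alignment rejects earlier
```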