Why u32/i32 and not u32/s32?

In u32/s32, u means “unsigned (integer implied)” and s means “signed (integer implied)”; these are antonyms, and that is consistent. In u32/i32, u again means “unsigned (integer)”, but i means just “integer”, which is a bit inhomogeneous: the two prefixes describe different things.

Of course, I can always redefine this with const s32 = i32;, but still, why i32 and not s32? Following Rust? :slight_smile:

An integer in math can be positive or negative, which justifies the i prefix. Also, “natural number” is a better name for positive numbers. n32/i32 would be the best :slight_smile:
I believe that i comes from C int and u comes from C unsigned.


Yes, but integers in a computer may be signed or unsigned (and that distinction matters a lot to a programmer, though not to a mathematician), which justifies u32/s32. :slight_smile: It’s not my “invention”, as you guessed: the Linux kernel uses such typedefs.

// redef-i32.zig
pub const s32 = i32;
// s32.zig
const std = @import("std");
const int = @import("redef-i32.zig");

pub fn main() void {
    const n: int.s32 = -7;
    std.debug.print("n = {}\n", .{n});
}

int.s32 reads pretty well: “integer”, “signed”, “32 bits wide”.

… or even like this, as a kind of joke:

// integer-types.zig
pub const @"без-знака" = struct {
    pub const @"16-бит" = u16;
    pub const @"32-бит" = u32;
};

pub const @"со-знаком" = struct {
    pub const @"16-бит" = i16;
    pub const @"32-бит" = i32;
};
and then

// s32-b.zig
const std = @import("std");
const @"целое" = @import("integer-types.zig");

pub fn main() void {
    const n: @"целое".@"со-знаком".@"32-бит" = -7;
    std.debug.print("n = {}\n", .{n});
}

LOL. :rofl:

  • “целое” - “integer”
  • “без знака” - “unsigned”
  • “со знаком” - “signed”


I can’t tell you how many codebases I’ve read that use s… for ‘string’.

I don’t know if this is the real answer, but it could be based on C: int vs. unsigned.


I assume it is because that is what LLVM uses (which is probably based on C like you said).


Linux picked the less common convention, but the whole u16 / u32 / i16 etc. thing has a really old history in C. If you needed integer types of a defined width, you used to have to do some platform-specific macro trickery to get them in a portable way.

That’s why the later standard went with the more verbose uint8_t naming: so it wouldn’t interfere with the shorter versions that were already in heavy use in existing code.

So yeah, that’s why LLVM is like that as well.
