What's up with `@alignOf(u128)`?

On Zig 0.9, `@alignOf(u128)` is 16. On 0.10 and 0.11, it is 8 instead. Looking at "make the alignment of >= 128 bit integer types 16 bytes to prevent failure to properly align u128 when using 16-byte cmpxchg" (ziglang/zig#2987), it sounds like 16 was considered the explicitly right answer a while ago.

So what's the long-term plan here? Is 8 the final answer, or is it a bug, and the alignment will grow back to 16 at some point in the future?

EDIT: I'm not an expert, but it sort of feels like 16 should be the right answer here: "i128 / u128 are not compatible with C's definition" (rust-lang/rust#54341).



```
$ cat main.zig
const std = @import("std");
pub fn main() void {
    std.debug.print("{}\n", .{@alignOf(u128)});
    std.debug.print("{}\n", .{@alignOf(extern struct { u128 })});
}

$ zig run main.zig
```

Alignment is supposed to express the requirements for efficient loads and stores of a given type. LLVM reports 8 for the alignment of i128 on some targets (it differs depending on CPU, OS, and ABI). That means LLVM is communicating that the CPU only needs 8-byte alignment in order to do efficient loads and stores of this type on this system. As far as I'm aware, that is indeed accurate.

C ABI compatibility is a different concern, which is why `extern struct` reports a different value: it ensures that your code adheres to the C ABI of the code it interfaces with. However, 128-bit integers don't actually need more than 8-byte alignment as far as the CPU is concerned.


Does LLVM also consider optimal alignment for SIMD loads/stores when reporting 8?

For example: Why should data be aligned to 16 bytes for SSE instructions? - Intel Community

A stricter "higher resolution" alignment of 16 can always be downscaled to a "lower resolution" 8-byte alignment, but once conceded to 8 it cannot always be upscaled.

Erring on the side of higher resolution, as well as C ABI alignment, seems like it would be most compatible and future proof.


Indeed, when talking about an ABI, decisions can be carved in stone and difficult to update. However, the alignment of 128-bit integers in Zig (outside of an extern struct) is not part of any ABI, so the compiler is free to change it at any time without changing the semantics of the language. So there is no need to err on any side of the fence - if 16 is better we can switch to 16 at any time without breaking anyone’s code; if 8 is better we can keep it at 8.

With that said, if there’s an argument to make that 16 is the better number for x86_64, then let us make that change.
