How could MultiArrayList.bytes’ alignment result in a dependency loop that wouldn’t already be a loop in another aspect?
Zig’s comptime semantics make type resolution harder. Consider the definition const A = struct { ptr: *align(@sizeOf(A)) u32 };. Many programming languages would be able to look at (their equivalent of) this code and resolve that (assuming pointers are 8 bytes) A has a size of 8 bytes, and that ptr therefore has type *align(8) u32. This is because these languages can determine that ptr is a pointer without evaluating the @sizeOf(A) expression. Unfortunately, this doesn’t work in Zig, because types don’t have a fixed grammar: comptime evaluation means that they are specified as arbitrary expressions which can do basically anything. As a result, the only reasonable choice Zig can make is to evaluate the type expression in full. So in this instance, to determine the types of the fields of A, the compiler needs to know how big A is, which triggers a dependency loop. This is pretty much exactly the situation MultiArrayList finds itself in (sometimes).
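To make the contrast concrete, here is a small sketch: a self-referential field type that resolves fine because it never needs the struct’s layout, next to the looping case from above (exact compiler diagnostics vary by version):

```zig
// Fine: an optional pointer to Ok has a known size and alignment
// regardless of Ok's own layout, so no loop occurs.
const Ok = struct { next: ?*Ok };

// Dependency loop: resolving A's field types requires evaluating
// @sizeOf(A), which requires A's field types to already be resolved.
const A = struct { ptr: *align(@sizeOf(A)) u32 };
```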
The question then becomes why this used to work. In short, this worked because the compiler had accumulated various hacks and special cases to make this example (and many others like it) work. Unfortunately, these special cases came with unacceptable downsides. For instance, they made the compiler behave differently depending on the order in which code was semantically analyzed, which is something Zig cannot allow since it completely breaks incremental compilation (and some other stuff). These special cases could also lead to really confusing compile errors, were a frequent source of compiler crashes, and all in all were kind of fundamentally broken. Sacrificing the abilities they gave us is obviously unfortunate, but in this case the loss is small enough that it definitely looks to be worth it for all of the positive effects.
How does accessing a field (with a runtime-allowed type) of a comptime-only type at runtime semantically work?
Let’s say you have the type const Foo = struct { a: comptime_int, b: u32 };, and a global constant const val: Foo = .{ .a = 123, .b = 456 };. Even though Foo is a comptime-only type, the Zig compiler will actually emit the value val into the final binary anyway! It does so by pretending that “primitive” comptime-only types—in this case comptime_int—are zero-width types (like void), and just lowering everything else. So, the compiler will lower val into constant memory as just a single u32 with value 456. Then, when you do &ptr.b on a *const Foo to get a *const u32, the compiler continues to pretend that comptime_int is zero-width, and gives back the address of the field b under that assumption (today that’d be the same address as ptr itself, although of course Zig doesn’t guarantee that since Foo is a normal struct). At that point, you have a valid pointer to a u32, and can just load from it as usual!
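Put together, the scheme described above looks something like this (a sketch under the new semantics, where *const Foo is treated as a runtime type; whether it compiles depends on your compiler version, and the field address is an implementation detail):

```zig
const Foo = struct { a: comptime_int, b: u32 };
const val: Foo = .{ .a = 123, .b = 456 };

// `a` is treated as zero-width, so `val` is lowered into constant
// memory as just a single u32 holding 456.
fn loadB(ptr: *const Foo) u32 {
    const b_ptr: *const u32 = &ptr.b; // an ordinary runtime pointer
    return b_ptr.*; // loads 456
}
```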
This “pretend comptime-only primitives are void” trick is all there really is to it: it lets you nest comptime-only types, have slices of them, etc., all with just one rule. We actually already had all of this logic in the compiler—it’s necessary even without the changes I made here, because if you take &val.b at comptime, that needs to be a valid pointer at runtime!—so this wasn’t so much an intentional choice as just something which naturally popped out when I changed the compiler so that it considered *const Foo a runtime type.
Oh, and the final piece of the puzzle is that x.y means (&x.y).*; that is, it first takes a pointer, and then dereferences it. So if you have ptr: *const Foo as above, then you can just do ptr.b at runtime (no &) and get a u32 out.
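Continuing the Foo example from above, that desugaring means the two access forms below are equivalent (again just a sketch of the described semantics):

```zig
const Foo = struct { a: comptime_int, b: u32 };

fn getB(ptr: *const Foo) u32 {
    // `ptr.b` means `(&ptr.b).*`: first compute the field pointer
    // (with `a` treated as zero-width), then dereference it.
    return ptr.b;
}
```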