Memory allocation after deinit results in trace trap

Hi. I’m using zig 0.13.0 and ran into something weird while exploring.

I don’t know if this works as intended, but shouldn’t the try return an error instead of the program crashing with a trace trap?

% zig run ./src/main.zig
zsh: trace trap  zig run ./src/main.zig
const std = @import("std");

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    const allocator = gpa.allocator();
    _ = gpa.deinit(); // deliberately no defer: deinit runs before the allocation below
    const memory = try allocator.create(u8);
    allocator.destroy(memory);
}

Running the same code, I am getting:

General protection exception (no address available)
/home/din/zig/0.13.0/lib/std/heap/general_purpose_allocator.zig:524:91: 0x107c90f in allocSlot (test)
            if (self.cur_buckets[bucket_index] == null or self.cur_buckets[bucket_index].?.alloc_cursor == slot_count) {
                                                                                          ^
/home/din/zig/0.13.0/lib/std/heap/general_purpose_allocator.zig:1015:44: 0x106b445 in allocInner (test)
            const slot = try self.allocSlot(new_size_class, ret_addr);
                                           ^
/home/din/zig/0.13.0/lib/std/heap/general_purpose_allocator.zig:974:30: 0x103b014 in alloc (test)
            return allocInner(self, len, @as(Allocator.Log2Align, @intCast(log2_ptr_align)), ret_addr) catch return null;
                             ^
/home/din/zig/0.13.0/lib/std/mem/Allocator.zig:86:29: 0x103d0a4 in allocBytesWithAlignment__anon_4008 (test)
    return self.vtable.alloc(self.ptr, len, ptr_align, ret_addr);
                            ^
/home/din/zig/0.13.0/lib/std/mem/Allocator.zig:105:62: 0x1037b58 in create__anon_3490 (test)
    const ptr: *T = @ptrCast(try self.allocBytesWithAlignment(@alignOf(T), @sizeOf(T), @returnAddress()));
                                                             ^
/home/din/test.zig:7:40: 0x10379ac in main (test)
    const memory = try allocator.create(u8);
                                       ^
/home/din/zig/0.13.0/lib/std/start.zig:524:37: 0x1037885 in posixCallMainAndExit (test)
            const result = root.main() catch |err| {
                                    ^
/home/din/zig/0.13.0/lib/std/start.zig:266:5: 0x10373a1 in _start (test)
    asm volatile (switch (native_arch) {
    ^
???:?:?: 0x0 in ??? (???)
fish: Job 1, 'zig run test.zig' terminated by signal SIGABRT (Abort)

EDIT:
When calling const memory = try allocator.create(u8); the allocator had already been de-initialized by the previous statement.

You forgot to add defer in front of _ = gpa.deinit();

I usually check for leaks:

defer if (gpa.deinit() == .leak) {
    @panic("Memory leak has occurred!");
};
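
For completeness, a fixed version of the snippet from the original post would just defer the deinit (and optionally check the result for leaks), like this:

const std = @import("std");

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    const allocator = gpa.allocator();
    // Deferring deinit means it runs after every allocation below,
    // and the .leak check panics if something was never freed.
    defer if (gpa.deinit() == .leak) {
        @panic("Memory leak has occurred!");
    };

    const memory = try allocator.create(u8);
    allocator.destroy(memory);
}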

I’m not testing for memory leaks here. I’m testing allocation after the gpa was deinitialized.

Interesting, what system did you test it on?

Once you deinit the gpa, you can’t allocate with it anymore.

I know, and I’m expecting to see an error, but I see a trace trap instead.


Debian GNU/Linux 12 (bookworm) 6.1.106-3 (2024-08-26) x86_64

You are right. I tested it on x86_64 and it actually returned an error message with a trace.

But when I ran it on aarch64 it just hung. And on my Mac (ARM) it returns a trace trap.

I guess it could be an ARM-specific problem?


Looks like Apple OSs use SIGTRAP, which is normally a debugger-only signal on Linux, to indicate an unhandled exception:


No. Using the general purpose allocator after deinit is undefined behavior. It does not check whether it is initialized or not, so it can’t return an error. You’re supposed to just not do this.
In the deinit method, you’ll see that it sets itself to undefined, to help catch bugs like these. When you use it afterwards, it ends up indexing with a very high value (0xAAAA…), hence the trap.
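
You can see that fill pattern for yourself; in a Debug build, undefined memory is typically filled with 0xAA bytes. This is just an illustration (reading undefined memory is itself undefined behaviour), not guaranteed output:

const std = @import("std");

pub fn main() void {
    // Debug builds fill undefined memory with 0xAA bytes, which is where
    // the huge 0xAAAA... bucket index in the crash comes from.
    var bytes: [8]u8 = undefined;
    _ = &bytes; // silence "local variable is never mutated"
    std.debug.print("{any}\n", .{bytes}); // typically { 170, 170, ... } in Debug
}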


I understand it shouldn’t be done like this. But it doesn’t help to catch bugs, since you can’t debug something when there is no trace or even an error message on one architecture and it hangs forever on another. My question is: why does it rely on undefined behaviour when this is clearly an error? And why do I see at least some output on x86_64, but not on some other architectures? What if I simply forget to add the defer?

It trapped. You can inspect the data at that point and you’ll see a bunch of 0xAA, the hallmark of using an uninitialized value.
The variation between platforms is what happens when you have undefined behavior.
Even if all else fails, at the very least you know that that specific piece of code is to blame. And it consistently fails, even if the method of failure varies across platforms. If the allocator didn’t set itself to undefined, you’d get silently working code that sometimes breaks: a classic Heisenbug, the worst thing that can happen in a program.

It’s clearly an error for you, the human, not for the program. How can the allocator know whether the index it’s trying to use is an uninitialized value or a legitimate index? If the allocator kept track of its state for you, you’d have to pay for that, in the form of, e.g., a boolean that it checks before every operation. It doesn’t make sense to pay this cost.

We could add this boolean only in debug builds, and then assert that it holds the correct value at every function call, but the result would be the same: you’d get a trap somewhere because the assertion failed, not an error return. The best we could do is display a nice error message if the boolean isn’t holding the value we expected. It’s probably not worth the effort. Just set it to undefined, and the user will figure it out when they see a bunch of 0xAA.
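
To make that trade-off concrete, here is a hypothetical sketch, not something the standard library provides, of a wrapper that pays for exactly that boolean so misuse panics with a readable message instead of trapping on 0xAA-filled state:

const std = @import("std");

// Hypothetical wrapper, for illustration only: one extra bool per allocator,
// checked on every call, in exchange for a clear panic message on misuse.
const CheckedGpa = struct {
    gpa: std.heap.GeneralPurposeAllocator(.{}) = .{},
    alive: bool = true,

    fn create(self: *CheckedGpa, comptime T: type) !*T {
        if (!self.alive) @panic("CheckedGpa used after deinit");
        return self.gpa.allocator().create(T);
    }

    fn destroy(self: *CheckedGpa, ptr: anytype) void {
        if (!self.alive) @panic("CheckedGpa used after deinit");
        self.gpa.allocator().destroy(ptr);
    }

    fn deinit(self: *CheckedGpa) std.heap.Check {
        self.alive = false;
        return self.gpa.deinit();
    }
};

pub fn main() !void {
    var checked = CheckedGpa{};
    _ = checked.deinit();
    const memory = try checked.create(u8); // panics with a clear message, not a trap
    checked.destroy(memory);
}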


Thanks for your time and explanation