How can you allocate big chunks using FixedBufferAllocator?

I have declared a FixedBufferAllocator this way:

 var buffer: [81920000]u8 = undefined;
 var fba = std.heap.FixedBufferAllocator.init(&buffer);
 const allocator = fba.allocator();

And then, later on, used it this way:

pub fn boolGenerate(allocator: std.mem.Allocator, random: std.rand.Random, string_length: u16, num_strings: u32) ![][]bool {
    var stringArray = try allocator.alloc([]bool, num_strings);
    for (0..num_strings) |i| {
        stringArray[i] = try allocator.alloc(bool, string_length);
        for (0..string_length) |j| {
            stringArray[i][j] = random.boolean();
        }
    }
    return stringArray;
}

This crashes, dumping a core. The buffer size should be enough for 40000 * 512 boolean strings; it's 4 times as big as needed. Maybe booleans need bigger sizes?

You are overflowing your stack by allocating such a large array on the stack, and that is the reason for the segmentation fault. The code itself is fine; if you lower your buffer size or allocate it on the heap somewhere, it'll work.

You can also work around the stack size limit by running something like ulimit -s unlimited before executing your program, but I think reducing the buffer size (or moving it off the stack) from within the program is the better idea.
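One way to move the buffer off the stack is to back the FixedBufferAllocator with heap memory instead of a local array. A minimal sketch (the 81920000 size is taken from the post above; everything else is illustrative):

```zig
const std = @import("std");

pub fn main() !void {
    // Allocate the backing buffer on the heap instead of the stack;
    // 81920000 bytes is far too big for a stack frame.
    const buffer = try std.heap.page_allocator.alloc(u8, 81920000);
    defer std.heap.page_allocator.free(buffer);

    // FixedBufferAllocator only needs a []u8 slice; it doesn't care
    // where the backing memory lives.
    var fba = std.heap.FixedBufferAllocator.init(buffer);
    const allocator = fba.allocator();

    const strings = try allocator.alloc([]bool, 10);
    std.debug.print("allocated {d} string slots from the fixed buffer\n", .{strings.len});
}
```

This keeps the fast bump-pointer behavior of the FixedBufferAllocator while paying for the big allocation only once, at startup.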


I used a marginally bigger buffer size without an issue; here it is: energy-ga-icsoft-2023/code/zig/src/chromosome_generator.zig at main · JJ/energy-ga-icsoft-2023 · GitHub
In that case I was effectively allocating u8s. It didn't dump core.

It depends on your default stack size limit, on how the compiler lays out the stack, and on what your program is actually doing.

For me the default value is 8192K, which is only slightly over your buffer size, and under that limit it segfaults. If I lift the stack size limit, it works with 40k x 512 boolean strings as expected.

For you it may have worked before because the stars aligned: the compiler laid the stack out differently, with different alignments, or your program happened not to smash through the limit. Here it does not. 81920000 bytes for a single stack object is dangerously close to the default limit of 8388608, so you are playing with fire either way.

  1. Make a single allocation from std.heap.page_allocator.
  2. Use one of the std.bit_set types.
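For option 2, a minimal sketch of what a bit-set-backed string could look like (the 512 bit length comes from the thread; the rest is illustrative):

```zig
const std = @import("std");

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    // One DynamicBitSet packs 512 booleans into 64 bytes of storage
    // instead of 512 bytes of []bool.
    var bits = try std.DynamicBitSet.initEmpty(allocator, 512);
    defer bits.deinit();

    bits.set(3);
    std.debug.print("bit 3 set: {}\n", .{bits.isSet(3)});
}
```

With 40000 strings of 512 bits each, this cuts the data from roughly 20 MB of bools down to about 2.5 MB.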

Note that you can convert a two-dimensional array to a single dimension by using x + y * width as the index.

Example: Idiomatic way to allocate memory for bitfields - #7 by dimdin
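The flattening trick can be sketched like this, combining it with option 1 (sizes are shrunk for illustration; it assumes the same std.rand API as the original code):

```zig
const std = @import("std");

pub fn main() !void {
    const width: usize = 512; // string_length
    const height: usize = 4; // num_strings, small here for illustration

    // One flat allocation instead of num_strings + 1 separate ones.
    const flat = try std.heap.page_allocator.alloc(bool, width * height);
    defer std.heap.page_allocator.free(flat);

    var prng = std.rand.DefaultPrng.init(42);
    const random = prng.random();

    for (0..height) |y| {
        for (0..width) |x| {
            // Element (x, y) lives at index x + y * width.
            flat[x + y * width] = random.boolean();
        }
    }
    std.debug.print("filled {d} booleans\n", .{flat.len});
}
```

Besides avoiding per-row allocations, the flat layout is contiguous in memory, which tends to be friendlier to the cache.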


It’s ten times as big, as a matter of fact. I’ll try that. Thanks!

I wanted to try this allocator hoping it would be faster; in fact, that was what I was benchmarking.

Tried this (on a different computer); it still segfaults. Maybe compiling without optimizations will give us a bit more information?

Compiling with debug options gives you this:

[1]    83612 illegal hardware instruction  zig-out/bin/bool_chromosome_generator

Because of the stack overflow, the code jumps to garbage, triggering the illegal instruction. You can increase the stack size with Build.Step.Compile.stack_size.
Whether you crash or not is not directly related to the size of the buffer: if you allocate a huge buffer but never write beyond the stack limit, you won't crash. So if you called this code with less data, or if the stack size were bigger, you might not get a crash. Note that the default stack size on Linux is 10x bigger than on Windows, which could explain why it didn't crash before.
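The stack_size knob can be set in build.zig; a sketch, assuming `exe` is the Build.Step.Compile step for the program (the name matches the binary from the crash output above, the 128 MiB value is illustrative):

```zig
// build.zig fragment: raise the main-thread stack size so the large
// stack-allocated buffer fits.
const exe = b.addExecutable(.{
    .name = "bool_chromosome_generator",
    .root_source_file = .{ .path = "src/main.zig" },
    .target = target,
    .optimize = optimize,
});
exe.stack_size = 128 * 1024 * 1024; // 128 MiB, comfortably above 81920000 bytes
```

This only affects the binary built by this step; ulimit-style changes are not needed at run time.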