What are my alternatives here for an allocator? It was pointed out to me that the page allocator essentially reserves chunks of memory in fixed sizes, which I gather is way too slow and cumbersome for this example. So my other choices would be FixedBufferAllocator, ArenaAllocator, and GeneralPurposeAllocator. I think I want to avoid the ArenaAllocator for now, since this is a primitive example. There is also std.mem.Allocator, which I don't have much information on. I would be leaning towards the GPA, but the FixedBufferAllocator might also be useful in this case. I also think I read somewhere that the GPA is still slow, but I haven't confirmed that. Some clarification here would be helpful.
If you haven’t seen it already, this section of the Language Reference is very good for understanding memory and allocator usage in Zig.
Specifically, regarding FixedBufferAllocator, the use case is basically any time you know in advance the maximum amount of memory required. You create a buffer (usually an array of u8) and then use that as a backing store for the FixedBufferAllocator.
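A minimal sketch of that pattern, assuming the std.heap API from roughly Zig 0.11–0.13 (buffer size and allocation sizes here are arbitrary examples):

```zig
const std = @import("std");

pub fn main() !void {
    // Backing store: a fixed, stack-allocated byte buffer.
    var buffer: [1024]u8 = undefined;
    var fba = std.heap.FixedBufferAllocator.init(&buffer);
    const allocator = fba.allocator();

    // All allocations come out of `buffer`; asking for more than
    // fits in the remaining space returns error.OutOfMemory.
    const slice = try allocator.alloc(u8, 64);
    defer allocator.free(slice);

    std.debug.print("allocated {d} bytes\n", .{slice.len});
}
```

Because the backing array lives on the stack, nothing here ever touches the heap or the OS.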
ArenaAllocator is really good when you have a task (maybe the whole program) that has a finite lifespan, like, say, parsing some code into an AST. Once you initialize the ArenaAllocator you can use it for any data structures (e.g. ArrayList) your task may need, without worrying about freeing the memory they use; the ArenaAllocator will free everything at once when its deinit method runs. Aside from making memory management trivial in these cases, it can also provide a significant performance boost by avoiding constant allocation / de-allocation during task execution. But beware: if you have a long-running task, like a web server for example, an ArenaAllocator can gobble up a lot of memory as time goes on. Another thing on ArenaAllocator is that I've frequently seen it initialized with std.heap.page_allocator as its backing allocator; I think this is safe because the arena will manage the requesting of memory pages from the OS in an efficient manner for you.
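The arena-over-page_allocator pattern described above might look like this sketch (assuming the managed std.ArrayList API from roughly Zig 0.11–0.13):

```zig
const std = @import("std");

pub fn main() !void {
    // The arena wraps a backing allocator; page_allocator is a common choice.
    var arena = std.heap.ArenaAllocator.init(std.heap.page_allocator);
    // One deinit frees every allocation made through the arena at once.
    defer arena.deinit();
    const allocator = arena.allocator();

    var list = std.ArrayList(u32).init(allocator);
    try list.append(1);
    try list.append(2);
    // No list.deinit() needed: arena.deinit() reclaims it all.
    std.debug.print("items: {d}\n", .{list.items.len});
}
```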
The GPA is supposed to be a good middle-ground allocator, as its name suggests. Last time I tried it, it was indeed slower when compared to using other allocators, but that was quite a while ago so this may have changed. It does offer added features like memory leak detection.
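A sketch of the GPA with its leak detection, assuming roughly Zig 0.12–0.13 where deinit returns a Check enum (on 0.11 it returned a bool instead):

```zig
const std = @import("std");

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    // In safety-enabled modes, deinit reports whether anything leaked.
    defer {
        const status = gpa.deinit();
        if (status == .leak) std.debug.print("memory leaked!\n", .{});
    }
    const allocator = gpa.allocator();

    const data = try allocator.alloc(u8, 100);
    // Comment this line out to see the leak detection fire.
    defer allocator.free(data);
}
```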
I read here on ZigLearn that FixedBufferAllocator would be useful for writing a kernel. I didn't see it mentioned in the docs, so I wasn't sure if that was correct. Also, what other uses would there be for the FBA besides writing a kernel?
In theory (I haven't actually benchmarked it), FixedBufferAllocator (FBA) should be the fastest allocator. Think about it: it uses a pre-allocated byte buffer as its backing memory, so all allocations and frees that it performs happen within that already-allocated space, removing the overhead of asking the OS for more memory or giving it back. And if the buffer is a stack-allocated array, you also entirely avoid heap allocation overhead. So it should be blazingly fast compared to the other allocators that require heap allocation or OS calls.
Okay. So I will go with the FBA for now. I'm thinking the GPA might be too complex for this example. I am also trying to implement an allocator, so I will be looking into that later as well.
Hello. FixedBufferAllocator allocates exactly one memory region. Is this region a single buffer, as if we were doing an allocBuffer, or is it a pool of addresses from which we can draw allocations? I need clarification, please.
I refer to this post: The Curious Case of a Memory Leak in a Zig program | Krut's Blog
FixedBufferAllocator is a bump allocator. It’s not a pool, so allocations can have different sizes. However, deallocations need to happen in the reverse order of allocation (first allocation, last deallocation). If you deallocate something out of this order, it will be a no-op, so memory will leak, at least until the buffer itself gets deallocated.
FixedBufferAllocator is commonly used with a stack allocated array. In typical use, you can deallocate out of order or simply not care about deallocations. When the buffer goes out of scope, everything will be freed.
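The bump-allocator behavior described above can be seen in a small sketch (assuming roughly Zig 0.11–0.13; the sizes are arbitrary):

```zig
const std = @import("std");

pub fn main() !void {
    var buffer: [256]u8 = undefined;
    var fba = std.heap.FixedBufferAllocator.init(&buffer);
    const allocator = fba.allocator();

    const first = try allocator.alloc(u8, 16);
    const second = try allocator.alloc(u8, 16);

    // Freeing out of order is a no-op for the bump allocator:
    allocator.free(first); // not the last allocation: space is not reclaimed
    allocator.free(second); // last allocation: this space is reclaimed

    // Either way, everything is trivially "freed" once `buffer`
    // goes out of scope.
}
```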
The GPA should basically be your default unless you know you need (or can use) something else. The GPA will catch bugs for you in modes with safety enabled, and it is the most future-proof: once "implement a fast general purpose allocator" (ziglang/zig issue #12484 on GitHub) is addressed, it will be a good choice for release modes too (this is also why I think the GPA should be used in example code).
Note, though, that for Zig code that conforms with the ‘pass an allocator for anything that needs to allocate’ convention, in my experience it’s basically never necessary to choose an allocator beyond main (except for the odd ArenaAllocator wrapping a passed in allocator here and there). And when writing a library, you basically don’t have to worry about choosing an allocator at all–that’s for the user of your library to decide.
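The "pass an allocator" convention might be sketched like this; `repeat` is a hypothetical library function, and the std APIs assumed are from roughly Zig 0.11–0.13:

```zig
const std = @import("std");

// A library function takes a std.mem.Allocator parameter
// instead of choosing an allocator itself.
fn repeat(allocator: std.mem.Allocator, s: []const u8, n: usize) ![]u8 {
    const out = try allocator.alloc(u8, s.len * n);
    var i: usize = 0;
    while (i < n) : (i += 1) {
        @memcpy(out[i * s.len ..][0..s.len], s);
    }
    return out;
}

pub fn main() !void {
    // The choice of allocator is made once, in main.
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    const msg = try repeat(allocator, "ab", 3);
    defer allocator.free(msg);
    std.debug.print("{s}\n", .{msg}); // prints "ababab"
}
```

The library code never cares whether the caller handed it a GPA, an arena, or an FBA.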
I had a thought about the page allocator. It was brought up earlier that it would be inefficient for many cases, since it essentially asks the OS to reserve chunks of memory in fixed sizes. However, I was wondering if this inefficiency applies only to the initial allocation in the program. Subsequent accesses to already-allocated memory may be much faster. This might have implications for using the page allocator for initialization purposes. I.e., if you are just allocating the memory a single time at startup, then the slowdown would not have any significant impact on the program.
I wanted to mention this because I am thinking of writing a series of examples showing how to use each allocator, and showing in which scenarios you would want to choose one over another. I wasn't sure in what situations the page allocator might be preferred. But it might be useful for reserving larger chunks of memory when initializing a program, or loading configuration files. It would depend on how fast the allocator performs after the memory is already allocated. So there might also be an opportunity there for running some benchmarks, if it hasn't already been done.
The page allocator is inefficient because it makes a system call for every allocation and deallocation, and it also uses an entire page for each allocation. A page is usually a minimum of 4 KiB, so you would be wasting most of that space if you aren't working in 4 KiB increments. Note that this is due in part to the fact that the page allocator is also one of the simplest possible allocators - the entire implementation is only 109 lines long.
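A sketch of the usage pattern where those costs matter least: a few large, one-time allocations at startup (assuming roughly Zig 0.11–0.13; sizes are arbitrary):

```zig
const std = @import("std");

pub fn main() !void {
    const allocator = std.heap.page_allocator;

    // Even a tiny request costs a system call and consumes at least
    // one whole page (commonly 4 KiB) of memory - a poor fit.
    const small = try allocator.alloc(u8, 8);
    defer allocator.free(small);

    // A large, one-time allocation at startup is a much better fit,
    // e.g. as a backing buffer for an arena or FixedBufferAllocator.
    const big = try allocator.alloc(u8, 1024 * 1024);
    defer allocator.free(big);

    std.debug.print("reserved {d} bytes\n", .{big.len});
}
```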