Crashing globally is exactly my intention. Otherwise I’d need to handle the error, revert all allocations and other state changes, and code some fallback behavior, like reducing render distance or whatever. This is not worth the extra effort.
I don’t exclusively use a single global allocator. I have a couple of functions that use an `ArenaAllocator`, and each thread also has a global stack-like allocator that gets used for temporary allocations whose size is fixed but only known at runtime. I also plan to use `MemoryPool`s more often, which would probably include tasks.
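For context, a `MemoryPool`-backed task allocation would look roughly like this — a minimal sketch using `std.heap.MemoryPool`; the `Task` struct and its field are made up for illustration:

```zig
const std = @import("std");

const Task = struct {
    priority: f32, // hypothetical field, just for the sketch
};

test "MemoryPool sketch" {
    var pool = std.heap.MemoryPool(Task).init(std.testing.allocator);
    defer pool.deinit();

    const task = try pool.create(); // O(1), reuses previously freed slots
    task.* = .{ .priority = 1.0 };
    pool.destroy(task); // returns the slot to the pool instead of the OS
}
```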
The threadpool owns the task, but the task is responsible for cleaning up its own memory. At the end of the `task.run()` method it deallocates its memory. It also has a separate `task.cleanup()` method, which gets called by the thread pool if `task.isStillNeeded()` returns false.
The `task.run()` method also gives its result to a list in `mesh_storage` to be sent to the GPU and stored.
Basically the task itself is responsible for managing its memory and the result of its execution.
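To make that contract concrete, here is a minimal sketch; only the three method names come from the actual design, the struct body and the dispatch function are made up:

```zig
const Task = struct {
    still_needed: bool = true,

    fn isStillNeeded(self: *const Task) bool {
        return self.still_needed;
    }

    fn run(self: *Task) void {
        // In the real code: hand the result to a list in mesh_storage,
        // then deallocate this task's own memory.
        _ = self;
    }

    fn cleanup(self: *Task) void {
        // In the real code: free this task's memory without producing a result.
        _ = self;
    }
};

// The thread pool's side of the contract (sketch):
fn dispatch(task: *Task) void {
    if (task.isStillNeeded()) {
        task.run(); // run() frees the task afterwards
    } else {
        task.cleanup();
    }
}
```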
So the code bit you sent, `schedule(allocator: ...)`, isn’t really applicable. It would create the illusion that the memory is somehow owned by the caller, or discarded at the end of the function. Instead the memory is supposed to be shared with other threads, and the calling thread will probably never see it again.
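To illustrate the difference in implied ownership — both signatures are sketches, not my actual code:

```zig
const std = @import("std");

const Task = struct {};

// The suggested shape: taking an allocator signals that the caller
// controls the memory's lifetime.
fn scheduleWithAllocator(allocator: std.mem.Allocator) !*Task {
    return try allocator.create(Task); // caller would have to destroy this
}

// The actual shape: the task manages its own memory and will free
// itself in run()/cleanup(); the caller just hands it off.
fn schedule(task: *Task) void {
    _ = task; // enqueue into the thread pool; caller never sees it again
}
```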
On preallocating and reusing memory
Overall you seem to be advocating for allocating as much memory as needed upfront and then using that memory for memory pools, circular buffers and arenas.
While I agree that this is generally a good measure to reduce the number of potentially failing allocations, it doesn’t work for everything. There is, for example, no good upper estimate for mesh memory: it depends not only on render distance (configurable at runtime), but also on where you are in the world. If you are in a large cave system, mesh memory can easily quadruple.
So my question remains: What should I do in these cases? I don’t want to implement fallback behavior for all cases. That would make my code more complicated, for the gain of having a game that gets uglier (missing/LODed meshes or wrong lighting data) instead of crashing.
On mixing allocators
You both seem to agree that using different allocators inside the same function is bad.
But I think often having multiple allocators is the best solution. Let’s say you have a function that takes some data, processes it (processing needs extra memory), and returns the final result using an allocator that was passed in. Using the passed-in allocator for the internal allocations as well is not a good idea in my opinion:
```zig
fn process(allocator: std.mem.Allocator, data: []const u8) ![]u8 {
    // Temporary working memory, freed before the function returns.
    // (Element type and sizes are placeholders.)
    const internalData = try allocator.alloc(u8, data.len);
    defer allocator.free(internalData);
    // processing...

    // The result, which the caller is expected to free.
    const returnData = try allocator.alloc(u8, data.len);
    // more processing...
    return returnData;
}
```
By using the same allocator for both allocations we have introduced a potential performance bottleneck (if the allocator is slow, such as a GPA) or potential memory fragmentation (if the allocator is an Arena, where freeing the temporary allocation doesn’t actually reclaim its memory).
Instead it would be much better to allocate the temporary data with a stack-fallback allocator (such as `std.heap.stackFallback`) or another stack-based allocator. For example, in my game I’d do something like this:
```zig
...
const internalData = try main.stackAllocator.alloc(...);
defer main.stackAllocator.free(internalData);
...
```
Where `main.stackAllocator` is a thread-local stack-emulating allocator (which also falls back to the `globalAllocator` when the allocation is too big). This makes it much faster than the GPA, while not having the fragmentation problem of the Arena. And additionally it basically cannot fail unless the system runs out of memory.
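For comparison, the standard library’s `std.heap.stackFallback` implements the same idea on a per-call basis: a fixed stack buffer with a fallback allocator for oversized allocations. A minimal sketch, not my actual allocator:

```zig
const std = @import("std");

fn processTemp(fallback: std.mem.Allocator) !void {
    // 4 KiB scratch space on the stack; anything that doesn't fit
    // falls through to `fallback`.
    var sfa = std.heap.stackFallback(4096, fallback);
    const allocator = sfa.get();

    const small = try allocator.alloc(u8, 256); // served from the stack buffer
    defer allocator.free(small);

    const big = try allocator.alloc(u8, 1 << 20); // too big, uses `fallback`
    defer allocator.free(big);
}
```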