Measure, don’t guess.
If they are so easily exchangeable, it should be easy to measure them.
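To put numbers behind that, a rough sketch like the following could time two allocators against each other (a crude micro-benchmark, assuming a recent Zig std; `GeneralPurposeAllocator` has been renamed in newer std versions, so adjust to yours):

```zig
const std = @import("std");

// measure one allocator: n create/destroy cycles, return elapsed ns
fn bench(allocator: std.mem.Allocator, n: usize) !u64 {
    var timer = try std.time.Timer.start();
    var i: usize = 0;
    while (i < n) : (i += 1) {
        const p = try allocator.create(i32);
        p.* = 1;
        allocator.destroy(p); // no-op for an arena, a real free for gpa
    }
    return timer.read();
}

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();

    var arena = std.heap.ArenaAllocator.init(std.heap.page_allocator);
    defer arena.deinit();

    std.debug.print("gpa:   {d} ns\n", .{try bench(gpa.allocator(), 100_000)});
    std.debug.print("arena: {d} ns\n", .{try bench(arena.allocator(), 100_000)});
}
```

Swapping the allocator means changing one line, which is exactly what makes this kind of comparison cheap to do.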
One of the strengths of Zig and manual memory management is that you can choose among many different allocators and even use multiple instances. It is on the programmer to know when to use what; that has been discussed a lot previously, so I won’t repeat it here. See some of these topics:
- Choosing an Allocator
- Should I mix use `std.heap.c_allocator` with other std.heap allocators when linking libc?
- Need clarification regarding allocators
- Newcomer: Getting into Allocators
It seems you intend to use some Rust-style lifetime analysis / constraints. For that it may make sense to adapt the chosen allocator so that you can group multiple allocations into the same lifetime, for example by using an arena allocator where appropriate.
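A sketch of that grouping with `std.heap.ArenaAllocator` (the `buildReport` function and its strings are invented for illustration): every allocation whose lifetime ends at the same point goes through the arena, and a single `deinit` releases the whole group.

```zig
const std = @import("std");

// All temporaries share one lifetime, so they share one arena;
// a single deinit releases the whole group at once.
fn buildReport(backing: std.mem.Allocator) ![]u8 {
    var arena = std.heap.ArenaAllocator.init(backing);
    defer arena.deinit(); // frees every temporary below in one call
    const a = arena.allocator();

    const header = try std.fmt.allocPrint(a, "report v{d}\n", .{1});
    const body = try std.fmt.allocPrint(a, "{s}lines: {d}\n", .{ header, 42 });

    // only the final result outlives the arena, so copy it out
    return backing.dupe(u8, body);
}
```

The caller owns the returned slice and frees it with `backing`; everything else never needs an individual free.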
Beyond that I don’t think there is much that can be said generally, except that it depends on how the code is structured and on the resulting variable lifetimes. I think that to tweak performance in that area you will need a detailed understanding of the allocators you use anyway.
I would prefer:
```zig
var x = try _zinc_alloc(i32);
defer _zinc_dealloc(x);
x.* = 10;
```
While transforming every variable on its own is probably the easier translation, I would expect it to result in slower code. Personally I would try to have more analysis in between and group multiple variables (with similar lifetimes) to share one memory allocation. The style of using new/free for every single thing seems terrible in practice, especially if you pick an allocator that can’t mitigate the performance loss caused by that individual-object thinking.
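As a hypothetical sketch of that grouping (the `Locals` struct and its field values are made up for the example): variables with the same lifetime share one struct allocation instead of one `create` per variable, turning several alloc/free pairs into one.

```zig
const std = @import("std");

// instead of allocator.create(i32) plus allocator.create(f64),
// bundle same-lifetime variables into a single allocation
const Locals = struct { x: i32, y: f64 };

fn useLocals(allocator: std.mem.Allocator) !f64 {
    const locals = try allocator.create(Locals); // one allocation
    defer allocator.destroy(locals); // one deallocation
    locals.x = 10;
    locals.y = 3.5;
    return @as(f64, @floatFromInt(locals.x)) + locals.y;
}
```

The compiler-side analysis would emit such a struct per group of variables it proves to have the same lifetime.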
And if you do lifetime analysis, you should already have some way to do such grouping. Further, if you use RAII (I am not a huge fan, but that is off-topic), you could use nested arenas. The benefit of arenas is that you can collapse many deallocations into a single arena deallocation.
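A minimal sketch of nested arenas (the `demo` function is invented for illustration): the inner arena draws its memory from the outer one, so a scoped (RAII-like) group of allocations collapses into one `deinit`, while the outer `deinit` frees whatever remains.

```zig
const std = @import("std");

fn demo(backing: std.mem.Allocator) !i32 {
    var outer = std.heap.ArenaAllocator.init(backing);
    defer outer.deinit(); // frees everything allocated below in one call

    const long_lived = try outer.allocator().create(i32);
    long_lived.* = 1;

    // inner arena models a shorter, nested lifetime (e.g. a scope)
    var inner = std.heap.ArenaAllocator.init(outer.allocator());
    const short_lived = try inner.allocator().create(i32);
    short_lived.* = 2;
    inner.deinit(); // collapses all short-lived allocations at once

    return long_lived.*; // still valid: it lives in the outer arena
}
```

However many short-lived objects the inner scope creates, tearing the scope down is a single call rather than one free per object.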