This is a nice thought/hope, but I haven’t found any languages that do this for ownership and borrowing safety without GC or copy-on-write. Do you know of any? If not, it falls into the category of possible future research, unfortunately.
This can be addressed by allowing the fixed buffer to grow: allocate a new fixed buffer and attach it with a linked list (the new size can be the same as the previous one, smaller or larger, or determined by some heuristic). This is how ArenaAllocator works behind the scenes. If the sum of all allocations can't be reliably predicted, it's better to start with some reasonable size and grow as needed. The nice thing is that if excess memory ends up allocated, it can be retained for future use, partially retained, or fully released on reset.
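To make the growth concrete, here's a minimal C sketch of a chained arena. The names (`arena_alloc`, etc.), the doubling growth factor, and the "keep the newest block on reset" policy are all my assumptions, not taken from any particular implementation; alignment and malloc-failure handling are omitted for brevity:

```c
#include <assert.h>
#include <stdlib.h>

// One block in the chain; a new, larger block is linked in
// when the current one fills up.
typedef struct Block {
    struct Block *prev;
    size_t cap, used;
    unsigned char data[];
} Block;

typedef struct { Block *cur; } Arena;

static Block *block_new(Block *prev, size_t cap) {
    Block *b = malloc(sizeof(Block) + cap);
    b->prev = prev; b->cap = cap; b->used = 0;
    return b;
}

void arena_init(Arena *a, size_t cap) { a->cur = block_new(NULL, cap); }

void *arena_alloc(Arena *a, size_t n) {
    if (a->cur->used + n > a->cur->cap) {
        // Grow: link in a bigger block (doubling heuristic, assumed).
        size_t cap = a->cur->cap * 2 > n ? a->cur->cap * 2 : n;
        a->cur = block_new(a->cur, cap);
    }
    void *p = a->cur->data + a->cur->used;
    a->cur->used += n;
    return p;
}

// Partial retention on reset: keep the newest (largest) block for
// future use, free the older ones in the chain.
void arena_reset(Arena *a) {
    Block *b = a->cur->prev;
    while (b) { Block *prev = b->prev; free(b); b = prev; }
    a->cur->prev = NULL;
    a->cur->used = 0;
}
```

`arena_reset` is where the "retain / partially retain / full reset" choice lives; freeing every block instead would be the full reset.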
IMO, the real downside of this type of allocation strategy is dealing with data structures like ArrayList that require reallocation when they need to grow. Since ArenaAllocator and FixedBufferAllocator don't support out-of-order frees, this creates wasted gaps.
There are usually two solutions I have in mind for this: 1. follow the rule that "the user's memory is a resource shared with others and should be used with care", and set up a memory budget for dynamic data structures so they can be initialized with a hard upper capacity; 2. implement an allocator like ArenaAllocator but with a free list, which gives you both the benefits of a regular ArenaAllocator and a more flexible allocation pattern. This type of allocator has some problems, such as fragmentation caused by small allocations, list traversal cost, and extra space overhead for tracking free space, but they can all be mitigated by a more sophisticated implementation, e.g. categorizing allocations by size, or pointer compression for specific groups of allocations so that pointer addresses don't always have to be a full 64 or 32 bits.
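For solution two, here's a minimal C sketch of the free-list idea for a single size class (effectively a pool carved from one backing buffer). A fuller implementation would keep one such list per size class to limit fragmentation, as mentioned above; all names and the chunk size are illustrative:

```c
#include <assert.h>
#include <stddef.h>

#define CHUNK 64  // one size class; a real allocator would have several

// Freed chunks are threaded into an intrusive singly linked list,
// so the list costs no extra memory beyond the chunks themselves.
typedef struct Free { struct Free *next; } Free;

typedef struct {
    unsigned char *buf;
    size_t cap, used;   // bump region not yet carved into chunks
    Free *free_list;    // chunks returned out of order
} Pool;

void pool_init(Pool *p, void *buf, size_t cap) {
    p->buf = buf; p->cap = cap; p->used = 0; p->free_list = NULL;
}

void *pool_alloc(Pool *p) {
    if (p->free_list) {                 // reuse a freed chunk first
        Free *f = p->free_list;
        p->free_list = f->next;
        return f;
    }
    if (p->used + CHUNK > p->cap) return NULL;   // budget exhausted
    void *r = p->buf + p->used;
    p->used += CHUNK;
    return r;
}

void pool_free(Pool *p, void *ptr) {    // out-of-order free: push on list
    Free *f = ptr;
    f->next = p->free_list;
    p->free_list = f;
}
```

Note this also composes with solution one: the backing buffer's size is the hard memory budget.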
Even if I opt for solution two, it's still worth always thinking about solution one.
If the user's machine has some sort of persistent secondary storage, you can do on-demand de-allocation and page out allocated memory that isn't needed immediately. I learnt the overall idea of this system and how it works from Handmade Hero, but I've never implemented one myself. Unless I'm building an editor-like application that lets users keep adding more and more data, though, I'm probably in a pretty bad spot by the time I need to reach for a solution like this.
I've been getting good mileage out of option 3: an Allocator for arrays, and then an arena for everything else. If memory needs to leave the subsystem, the allocator covers that as well.
It’s nice, because you can retain the arrays and the arena, so everything heats up, and if the code has unbounded bloat risk it’s still not a lot of surface area to set maximum capacities.
It’s low complexity, gives a bounded lifetime with a handful of frees, and it encourages good practices.
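The split can be sketched like this in C: arrays grow through a reallocating allocator (plain `realloc` here as a stand-in), while everything else lands in an arena, so teardown is a handful of frees. This is just my reading of the scheme under those assumptions, not anyone's actual code:

```c
#include <assert.h>
#include <stdlib.h>

// Fixed arena for the "everything else" allocations; no gaps are
// wasted in it because nothing inside ever needs to be reallocated.
typedef struct { unsigned char *buf; size_t cap, used; } Arena;

// Growable arrays live outside the arena and use realloc.
typedef struct { int *items; size_t len, cap; } IntArray;

void *arena_alloc(Arena *a, size_t n) {
    if (a->used + n > a->cap) return NULL;   // hard capacity, by design
    void *p = a->buf + a->used;
    a->used += n;
    return p;
}

void array_push(IntArray *xs, int v) {
    if (xs->len == xs->cap) {
        xs->cap = xs->cap ? xs->cap * 2 : 8;
        xs->items = realloc(xs->items, xs->cap * sizeof(int));
    }
    xs->items[xs->len++] = v;
}

// Bounded lifetime: one free per array, then reset (or free) the arena.
void subsystem_deinit(Arena *a, IntArray *xs) {
    free(xs->items);
    xs->items = NULL; xs->len = xs->cap = 0;
    a->used = 0;   // arena retained for reuse, or freed as one block
}
```

The arrays can be retained across runs too, by keeping their capacity and just resetting `len`.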
Yeah, I forgot about this as well, because I did something somewhat similar where I mainly have two categories of arena allocator: one for allocations with a known lifetime and a fixed or predictable upper bound, and one for allocations with an unknown lifetime and unpredictable size. You essentially isolate allocations by lifetime and size characteristics so that one doesn't wreak havoc on the others.
I doubled down on option two for my recent project (a data editor for editing data in assets of a closed-source sound engine). The majority of the data structures in each of these assets are modeled as dynamic-size arrays or dynamic-size maps. In some situations, data structures modeled as dynamic-size arrays themselves have fields that need to be modeled as dynamic-size arrays, which is a nesting situation. The sound engine's spec doesn't set a hard limit on how much data one can add to those fields. And since the project is an editor, it needs to let users freely add, delete, and change anything they want. I spent a good chunk of time with option three, but it wasn't a well-fitted solution overall.
That article rubs me the wrong way. He may be right that PAs aren't "everything" to take into account, but there are many real-world examples of engineering that have used formal methods successfully to bridge the gap between math and code: CompCert, seL4, HACL*, the CakeML stack, Project Everest, the Rust borrow checker (which descends directly from substructural type theory research nobody thought was "practical" in 1995). These all represent decades of patient, really unglamorous work by people who took the math seriously and cared about the metal.
Some of his arguments are just out of date with what’s possible in 2026. I think we’re going to see an uptick in the math meets metal style of programming. If he wanted to express an opinion that he doesn’t want to use Coq or whatever and doesn’t like people saying he should, cool, but I wouldn’t take his arguments as gospel by any means.
I thought it was excellent, personally.
In my read, the author was addressing the topics you mention, not dismissing them.
Yeah, sometimes an opaque handle (like an ID) is better suited than handing out a raw pointer. Using handles comes with its own problems, mostly around managing the handles themselves, but those can usually be addressed during implementation. Another thing: if your architecture is designed to follow the "input → output" pattern, you can centralize/collapse and batch all raw-pointer accesses within a system. In most cases, you provide the necessary input to that system and let it perform the operation on the data referenced by those pointers, instead of asking the system to hand out raw pointers and performing operations elsewhere.
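A rough C sketch of what I mean: generation-checked handles plus the "input → output" pattern. All names here (`SoundSystem`, `sound_set_volume`, the slot count) are made up for illustration; the point is that the caller passes a handle and an operation in, and no raw pointer ever leaves the system:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define MAX_SOUNDS 64  // arbitrary slot count for the sketch

// A handle is an index plus a generation counter, so a stale handle
// is detected after its slot has been freed or reused.
typedef struct { uint32_t index, gen; } Handle;

typedef struct {
    float    volume[MAX_SOUNDS];
    uint32_t gen[MAX_SOUNDS];
    int      live[MAX_SOUNDS];
} SoundSystem;

Handle sound_create(SoundSystem *s) {
    for (uint32_t i = 0; i < MAX_SOUNDS; i++) {
        if (!s->live[i]) {
            s->live[i] = 1;
            s->volume[i] = 1.0f;
            return (Handle){ i, s->gen[i] };
        }
    }
    return (Handle){ 0, UINT32_MAX };   // out of slots
}

void sound_destroy(SoundSystem *s, Handle h) {
    if (h.index < MAX_SOUNDS && s->gen[h.index] == h.gen) {
        s->live[h.index] = 0;
        s->gen[h.index]++;              // invalidates outstanding handles
    }
}

// Caller provides input (handle + value); the system touches the
// underlying memory itself and just reports success or failure.
int sound_set_volume(SoundSystem *s, Handle h, float v) {
    if (h.index >= MAX_SOUNDS || s->gen[h.index] != h.gen || !s->live[h.index])
        return 0;                       // stale or invalid handle
    s->volume[h.index] = v;
    return 1;
}
```

The handle-management problems mentioned above mostly live in `sound_destroy` and the generation bookkeeping, and they stay contained inside the system.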