I see your point.
I added this to stay consistent with the APIs in the Zig stdlib; when in doubt, I err on the side of consistency.
But I see this function as a micro-optimization for when you've reserved a large capacity up front and are then inserting an item on every iteration of a tight loop. In that case you get rid of the per-insert capacity checks entirely.
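For illustration, here is roughly what that pattern looks like with Zig's stdlib hash map (a minimal sketch; the counts and values are made up, and the exact API has shifted a bit between Zig versions):

```zig
const std = @import("std");

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    var map = std.AutoHashMap(u32, u32).init(allocator);
    defer map.deinit();

    // Pay for the allocation and the capacity check once, up front.
    try map.ensureTotalCapacity(1000);

    // Tight loop: each insert skips the capacity check entirely.
    var i: u32 = 0;
    while (i < 1000) : (i += 1) {
        map.putAssumeCapacity(i, i * 2);
    }
}
```

Part of the appeal is that `putAssumeCapacity` returns `void` rather than an error union, so the hot loop carries no grow branch and no error path.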
A few more points on this:
- If people really care, they should benchmark and prove out the performance for their workload.
- One could argue that any hash-based data structure is not super cache-friendly to begin with, but hash tables are among the best data structures ever invented, and like everything else in life it's all about trade-offs.
Just my 2 cents, but I do see your point.