SUMMARY: How far should I trust the compiler to determine that comptime-known values passed to runtime function parameters will produce (alternate) optimized function calls that prune out unnecessary branches/logic? Should it be considered better practice to provide library functions that always take runtime parameters and trust the compiler to fix it, or to force function parameters to be comptime if I always want the comptime optimizations to occur?
Hi, I’ve been writing a number of my own tools/libraries in Zig, and one thing that has consistently bothered me is whether or not I should be forcing certain function parameters to be declared as ‘comptime’.
For example, if I expect 99.9% of users to know a MyOptions parameter at compile time, I might simply force the parameter to be comptime:
```zig
pub fn myOperation(comptime options: MyOptions, value: anytype) void {
    switch (options.mode) {
        .MODE_1 => {
            // comptime-selected algorithm using 'MODE_1';
            // this branch should not exist when option 'MODE_2' is provided
        },
        .MODE_2 => {
            // comptime-selected algorithm using 'MODE_2';
            // this branch should not exist when option 'MODE_1' is provided
        },
    }
}
```
Written like this, depending on what ‘mode’ the user provides as an option, the compiler is expected to prune the branches that never apply at each call site (maybe not in ReleaseSmall, but that’s fine).
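To make the call-site behavior concrete, here is a self-contained sketch of that pattern; the `Mode` enum and `MyOptions` struct are hypothetical stand-ins, since the real definitions aren’t shown:

```zig
const std = @import("std");

// Hypothetical definitions for illustration:
const Mode = enum { MODE_1, MODE_2 };
const MyOptions = struct { mode: Mode };

pub fn myOperation(comptime options: MyOptions, value: anytype) void {
    switch (options.mode) {
        .MODE_1 => std.debug.print("mode 1: {}\n", .{value}),
        .MODE_2 => std.debug.print("mode 2: {}\n", .{value}),
    }
}

test "each comptime options value gets its own instantiation" {
    // Because `options` is comptime, this call instantiates a copy of
    // myOperation in which the .MODE_2 branch does not exist at all.
    myOperation(.{ .mode = .MODE_1 }, @as(u32, 42));
}
```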
But this puts a hard constraint on how an end user can use the code: they MUST know the options at compile time, and maybe they wanted to call my library function with a runtime MyOptions.
The solution as a library developer is to also provide a version that accepts runtime parameters, but shipping 2 separate functions that do the same thing, except that one takes comptime parameters and one takes runtime parameters, seems like a bad/bloated pattern, even if I expect only 0.01% of users to want to use it that way.
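The one way I can imagine keeping both entry points without literally duplicating the logic is to write a single runtime implementation and make the comptime-facing function a forced-inline wrapper, so the optimizer at least sees a constant `options`. A sketch (the names and types here are hypothetical):

```zig
const std = @import("std");

const Mode = enum { MODE_1, MODE_2 };
const MyOptions = struct { mode: Mode };

// Single runtime implementation; no duplicated logic.
pub fn myOperationRuntime(options: MyOptions, value: anytype) void {
    switch (options.mode) {
        .MODE_1 => std.debug.print("mode 1: {}\n", .{value}),
        .MODE_2 => std.debug.print("mode 2: {}\n", .{value}),
    }
}

// Thin comptime wrapper: forcing the call to inline hands the optimizer
// a constant `options`, so it has the chance to eliminate dead branches.
pub fn myOperation(comptime options: MyOptions, value: anytype) void {
    @call(.always_inline, myOperationRuntime, .{ options, value });
}
```

This still relies on the optimizer doing the pruning after inlining rather than guaranteeing it the way a comptime switch does, which is part of what I’m unsure about.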
That means the ‘better’ choice seems like it might just be to provide the runtime-options version as the ONLY option:
```zig
pub fn myOperation(options: MyOptions, value: anytype) void {
    switch (options.mode) {
        .MODE_1 => {
            // probably reached via a jump table on the integer value of options.mode,
            // but *might* be pruned away if options is comptime-known to be 'MODE_2'?
        },
        .MODE_2 => {
            // probably reached via a jump table on the integer value of options.mode,
            // but *might* be pruned away if options is comptime-known to be 'MODE_1'?
        },
    }
}
```
It is not obvious in this small example, but what if comptime pruning of unused branches is significantly better/faster/smaller than the runtime alternative, for whatever reason?
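For completeness, the one middle ground I’m aware of is `inline else` in a switch, which lets the runtime-parameter function fan out into comptime-specialized branches. A sketch, again with hypothetical types:

```zig
const std = @import("std");

const Mode = enum { MODE_1, MODE_2 };
const MyOptions = struct { mode: Mode };

// Specialized implementation; `mode` is a comptime parameter.
fn impl(comptime mode: Mode, value: anytype) void {
    switch (mode) {
        .MODE_1 => std.debug.print("mode 1: {}\n", .{value}),
        .MODE_2 => std.debug.print("mode 2: {}\n", .{value}),
    }
}

// Public API takes runtime options, but `inline else` unrolls one prong
// per enum tag, and the captured `mode` is comptime-known in each prong.
pub fn myOperation(options: MyOptions, value: anytype) void {
    switch (options.mode) {
        inline else => |mode| impl(mode, value),
    }
}
```

But this guarantees specialization at the cost of emitting every branch plus a runtime dispatch, so it doesn’t really tell me whether I can trust the optimizer to do the same thing with the plain runtime version.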
My question then is this: If there is clearly a benefit to the compiler pruning branches based on comptime-known values, but I still want to allow end users to call the same function with runtime values without writing a runtime version with duplicated code, how far should I trust the compiler to determine that comptime-known values passed to runtime function parameters will produce (alternate) optimized function calls that prune out unnecessary branches/logic? And should it be considered better practice to provide library functions that always take runtime parameters and trust the compiler to fix it, or to force function parameters to be comptime if I always want the comptime optimizations to occur?