In my code I have a comptime struct that I use to store various info about different types. I use the following function to find the matching entry:
fn indexOf(comptime self: *@This(), comptime T: type) ?usize {
    return inline for (self.types.entries[0..self.types.len], 0..) |td, index| {
        if (td.Type == T) {
            break index;
        }
    } else null;
}
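For context, the entries live in something shaped roughly like this (a simplified sketch; the names and the capacity are placeholders, and the real entry struct carries more per-type info than just the type itself):

const TypeData = struct {
    Type: type,
    // ...whatever other info is stored for each type...
};

const TypeTable = struct {
    entries: [64]TypeData, // 64 is an arbitrary placeholder capacity
    len: usize,
};

// The struct that declares indexOf() has a field along the lines of:
//     types: TypeTable,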
Because it’s called so frequently, it eats up the eval branch quota. I have to give @setEvalBranchQuota()
a large number like 5000 just to get relatively simple code to compile. I noticed that if I unroll the loop, I don’t need to set the quota as high:
fn indexOf(comptime self: *@This(), comptime T: type) ?usize {
    comptime var i = 0;
    const e = self.types.entries;
    const l = self.types.len;
    return while (i < l) : (i += 8) {
        if (i + 0 < l and e[i + 0].Type == T) break i + 0;
        if (i + 1 < l and e[i + 1].Type == T) break i + 1;
        if (i + 2 < l and e[i + 2].Type == T) break i + 2;
        if (i + 3 < l and e[i + 3].Type == T) break i + 3;
        if (i + 4 < l and e[i + 4].Type == T) break i + 4;
        if (i + 5 < l and e[i + 5].Type == T) break i + 5;
        if (i + 6 < l and e[i + 6].Type == T) break i + 6;
        if (i + 7 < l and e[i + 7].Type == T) break i + 7;
    } else null;
}
Unrolling like this seems like a pointless exercise, though: the compiler is actually doing more work here, not less. Is the quota really just there in case you write something that loops forever by mistake? If you know your code is correct, is it okay to set it to a really high value?
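For reference, here is a self-contained toy that mirrors the situation (not my real code, just the shape of it): the loop lives in a helper, and the quota has to be raised in the comptime scope at the call site before the helper runs.

// Toy stand-in for the real lookup: a comptime loop long enough to blow
// past the default quota of 1000 backwards branches.
fn slowSum(comptime n: usize) usize {
    comptime var i: usize = 0;
    comptime var total: usize = 0;
    inline while (i < n) : (i += 1) {
        total += i;
    }
    return total;
}

comptime {
    // Without this line, compilation fails once the loop needs more than
    // the default 1000 backwards branches.
    @setEvalBranchQuota(5000);
    _ = slowSum(2000);
}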