When calling functions like `std.mem.indexOfScalar(comptime T: type, slice: []const T, value: T)`, or when declaring variables that need to match the type of another, do you typically find yourself hardcoding the type `T`, or using `@TypeOf(value)`?

```zig
// Hardcoding T
_ = std.mem.indexOfScalar(u32, slice, value);
const tmp: u32 = undefined;

// Using @TypeOf
_ = std.mem.indexOfScalar(@TypeOf(value), slice, value);
const tmp2: @TypeOf(value) = undefined;

// Doing stuff
_ = tmp + value; // if the type is a primitive like u32
value.foo(tmp); // if the type is a struct with methods
```
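For context, here is a compilable version of the snippet using `std.mem.indexOfScalar` (the `std.mem` function with the signature shown), assuming `slice` is a `[]const u32` and `value` is a `u32` — neither is shown in the original:

```zig
const std = @import("std");

pub fn main() void {
    const slice: []const u32 = &.{ 1, 2, 3 };
    const value: u32 = 2;

    // Hardcoding T
    const i = std.mem.indexOfScalar(u32, slice, value);

    // Using @TypeOf: this call keeps compiling if value's type changes
    const j = std.mem.indexOfScalar(@TypeOf(value), slice, value);

    std.debug.print("{any} {any}\n", .{ i, j });
}
```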
There are pros and cons to each, and I’m sure there are strong arguments for either approach depending on the scenario. That said, what’s your default/preferred approach until there’s a compelling reason to deviate?
Some of my thoughts:
- Readability:
  - Hardcoding is easier to read and understand.
  - `@TypeOf` requires finding a concrete declaration or using an LSP tooltip to know exactly what the type is.
- Ease of refactoring:
  - Hardcoding is probably more LOC to change. If the type is a “common” one like `u8`/`usize` and not a custom userland type, then grep becomes a less powerful refactoring tool. Using compiler errors to address one block at a time is slow but sure, though it could be accelerated with AI if the developer so chooses. Keen to learn if there are tools out there that make this refactor easier.
  - Assuming the refactor doesn’t change the abstractions built on `@TypeOf`, refactors should be trivially easy. To be fair, that can be a pretty strong assumption, and I think it’s a bad idea to make it in complex programs. For small & simple libraries, however, I think it’s feasible for humans to verify that the abstractions using `@TypeOf` are airtight.
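A minimal sketch of the property described above: one concrete declaration drives every downstream type, so the refactor is a one-line change (the names here are hypothetical):

```zig
const std = @import("std");

// The single concrete declaration; changing `u32` here is the entire refactor.
const seed: u32 = 42;

pub fn main() void {
    // Downstream declarations derive their types from `seed`, so they need no edits.
    const doubled: @TypeOf(seed) = seed * 2;
    const buf: [2]@TypeOf(seed) = .{ seed, doubled };
    std.debug.print("{any}\n", .{buf});
}
```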
- Build option:
  - A use-case that hits close to home is that many math libraries/applications can choose between single- and double-precision floats at build time. I’ll admit my gut response was that `@TypeOf` is the superior choice, but after a little thought I think the correct Zig approach is to just use a comptime type, i.e.

    ```zig
    const f_build = if (config.double_fp) f64 else f32;
    ```
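For completeness, a sketch of how that build option might be wired through `std.Build` options, assuming a `-Ddouble_fp` flag and an options module named `config` (both names are hypothetical, and the exact `std.Build` API varies across Zig versions):

```zig
// build.zig (fragment): expose a boolean option to source code
const opts = b.addOptions();
opts.addOption(bool, "double_fp", b.option(bool, "double_fp", "use f64 floats") orelse false);
exe.root_module.addOptions("config", opts);

// elsewhere, e.g. main.zig: pick the float type once at comptime
const config = @import("config");
const f_build = if (config.double_fp) f64 else f32;

fn scale(x: f_build, k: f_build) f_build {
    return x * k;
}
```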
Or perhaps I need to fundamentally change how I’m writing my code to not have to deal with this conundrum?
Slightly related discussion: `comptime T: type` arguments feel redundant at the callsite