@TypeOf vs. hardcoding type in comptime type arguments

When calling functions like std.mem.indexOfScalar(comptime T: type, slice: []const T, value: T), or when declaring a variable whose type has to match that of another value, do you typically find yourself hardcoding the type T, or using @TypeOf(value)?

// Hardcoding T
_ = indexOfScalar(u32, slice, value);
var tmp: u32 = undefined;

// Using @TypeOf
_ = indexOfScalar(@TypeOf(value), slice, value);
var tmp: @TypeOf(value) = undefined;

// Doing stuff
_ = tmp + value; // If type is a primitive like u32
value.foo(tmp); // If type is a struct with methods
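
For concreteness, here is a self-contained version of the same comparison (using std.mem.indexOfScalar; nothing here is load-bearing, it just shows the two spellings side by side):

const std = @import("std");

test "hardcoded type vs @TypeOf at the callsite" {
    const slice = [_]u32{ 1, 2, 3 };
    const value: u32 = 2;

    // Hardcoding the element type.
    const a = std.mem.indexOfScalar(u32, &slice, value);

    // Deriving the element type from the value.
    const b = std.mem.indexOfScalar(@TypeOf(value), &slice, value);

    try std.testing.expectEqual(a, b);
}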

There are pros/cons for each, and I’m sure there are strong arguments for either approach depending on the scenario. That being said, what’s your default/preferred approach until there’s a compelling reason to do so otherwise?

Some of my thoughts:

  • Readability:
    • Hardcoding is easier to read and understand.
    • With @TypeOf you have to track down a concrete declaration or hover for an LSP tooltip to know exactly what the type is.
  • Ease to refactor:
    • Hardcoding means more LOC to change. If the type is a “common” one like u8/usize rather than a custom userland type, grep also becomes a less powerful refactoring tool. Chasing compiler errors one block at a time is slow but sure (and could be accelerated with AI if the developer so chooses). Keen to learn if there are tools out there that make this kind of refactor easier.
    • Assuming the refactor doesn’t change the relationships that @TypeOf is capturing, the refactor should be trivially easy. To be fair, that can be a pretty strong assumption, and I think it’s a bad one to make in complex programs. For small & simple libraries, however, it seems feasible for a human to verify that the abstractions built on @TypeOf are airtight.
  • Build option:
    • A use-case that hits close to home: many math libraries/applications can be built with either single- or double-precision floats. I’ll admit my gut response was that @TypeOf is the superior choice here, but after a little thought I think the correct Zig approach is to just use a comptime type alias (sketched more fully right after this list), i.e.
      const f_build = if (config.double_fp) f64 else f32;
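
To make that concrete, here is roughly what I have in mind, assuming the build option is exposed to the program as an options module named config (the exact build.zig wiring varies between Zig versions):

// config is assumed to be an options module created in build.zig
// (b.addOptions() plus addOption(bool, "double_fp", ...)).
const config = @import("config");

// One comptime alias; everything below names f_build, so switching
// precision is a single build flag rather than a refactor.
pub const f_build = if (config.double_fp) f64 else f32;

pub fn lerp(a: f_build, b: f_build, t: f_build) f_build {
    return a + (b - a) * t;
}

With the alias in place, call sites neither hardcode f32/f64 nor reach for @TypeOf; they just name f_build.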

Or perhaps I need to fundamentally change how I’m writing my code to not have to deal with this conundrum?

Slightly related discussion: `comptime T: type` arguments feel redundant at the callsite


I hardcode the type all the way; I don’t recall ever having used @TypeOf in this situation. Realistically, refactoring is a non-issue: it’s rare for the type of something to change while all the functions being called on it remain valid. I would rather have clearer source code all the time and spend an extra second per usage when that rare refactor does come up.


I personally lean towards @TypeOf, or, if I need the type multiple times, I might give the type of the value an alias. This can help prevent related bugs later if I ever change the value’s type.
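
Roughly what I mean by an alias, with made-up names and assuming a numeric value:

const std = @import("std");

fn frobnicate(value: anytype) @TypeOf(value) {
    // Alias the type once; every declaration below refers to T, so if
    // the caller's type changes, nothing in this body needs editing.
    const T = @TypeOf(value);
    const doubled: T = value + value;
    const result: T = doubled + value;
    return result;
}

test "the alias keeps the declarations in sync" {
    try std.testing.expectEqual(@as(u32, 21), frobnicate(@as(u32, 7)));
}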

It might not be very relevant, but I feel the principle is the same. The Linux kernel coding style asks that allocations with kmalloc be written as sizeof(*p) rather than sizeof(struct S).
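
A made-up Zig sketch of the same idea, deriving the allocated type from the destination rather than naming it twice:

const std = @import("std");

test "allocate based on the destination's own type, sizeof(*p) style" {
    const S = struct { x: u32, y: u32 };

    var p: *S = undefined;
    // Like sizeof(*p): the allocated type is spelled in terms of p itself,
    // so if p's declaration ever changes, this line stays correct.
    p = try std.testing.allocator.create(@TypeOf(p.*));
    defer std.testing.allocator.destroy(p);

    p.* = .{ .x = 1, .y = 2 };
    try std.testing.expectEqual(@as(u32, 1), p.x);
}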

Thanks all for the perspectives so far.

In the few refactors I’ve had to do, I sometimes wondered whether I made the wrong choice and should have abstracted with @TypeOf. But now that I have a few occurrences of @TypeOf sprinkled through the codebase, I agree that it is starting to hurt readability. Thank you for your opinion!

Having taken a very brief look at those standards, I agree that they’re not very relevant here. I think the style guide’s choice is a consequence of C’s limited type checking.

// C refactor: p's type changed, but the allocation still names the old struct
struct new_type *p = NULL;
p = kmalloc(sizeof(struct old_type), ...); // compiles without warning; silently allocates the wrong size

// Zig refactor: the same mistake is caught at compile time
var p: []new_type = undefined;
p = try arena.alloc(old_type, ...); // compile error: expected []new_type, found []old_type