I’ve found it handy to have a function for use in unit tests that:

- Asserts that its argument is non-null
- Returns the unwrapped value in the optional
This was my first implementation of this function:
```zig
pub fn expectNonNull(T: type, val: ?T) !T {
    return val orelse {
        std.debug.print("val was null\n", .{});
        return error.TestExpectedNonNull;
    };
}
```
This worked! But it was verbose, requiring me to name the type each time:
```zig
const foo = try expectNonNull(Foo, maybe_foo);
```
I dipped my toe a little further into the generic waters and came up with a second implementation:
```zig
fn ExpectNonNullReturnType(comptime T: type) type {
    const type_info = @typeInfo(T);
    return switch (type_info) {
        .optional => type_info.optional.child,
        else => @compileError("Can only call expectNonNull() with optional."),
    };
}

pub fn expectNonNull(val: anytype) !ExpectNonNullReturnType(@TypeOf(val)) {
    return val orelse {
        std.debug.print("val was null\n", .{});
        return error.TestExpectedNonNull;
    };
}
```
This second version doesn’t require the explicit type argument. But if I pass a value for `val` that isn’t an optional, it fails (at compile time) with my custom error rather than a normal type error.
The second implementation also doesn’t work with the literal `null`, e.g. `expectNonNull(null)`, whereas the first implementation does.
Is there a better alternative to these two implementations of expectNonNull()?
While both of the last comments work for the assertion, they don’t give you back the value as in the first post. To get it back I’d do something like you proposed, with two small additions:
`comptime` in front of `T: type` is redundant, as `type` parameters are always `comptime`.
Regarding making it work with `null`: it does work if you write e.g. `try expectNonNull(@as(?void, null))`, but I don’t see why you would want to test `null` literals in the first place.
You can also add the case to OptInnerType:
```zig
fn OptInnerType(T: type) type {
    return switch (@typeInfo(T)) {
        .optional => |opt| opt.child,
        .null => void, // .null is part of `typeInfo`
        else => @compileError("Expect an optional type"),
    };
}
```
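Putting the pieces together, a sketch of `expectNonNull` on top of this `OptInnerType` (the `error.TestExpectedNonNull` name is carried over from the first post; the `null`-literal branch is my assumption about how one might wire it up):

```zig
const std = @import("std");

fn OptInnerType(T: type) type {
    return switch (@typeInfo(T)) {
        .optional => |opt| opt.child,
        .null => void, // the literal null unwraps to a void "value"
        else => @compileError("Expect an optional type"),
    };
}

pub fn expectNonNull(val: anytype) !OptInnerType(@TypeOf(val)) {
    if (@typeInfo(@TypeOf(val)) == .null) {
        // The literal null has its own type; the condition is
        // comptime-known, so only one branch is analyzed.
        return error.TestExpectedNonNull;
    } else {
        return val orelse error.TestExpectedNonNull;
    }
}

test "unwraps non-null and errors on null" {
    const some: ?u8 = 7;
    try std.testing.expectEqual(@as(u8, 7), try expectNonNull(some));
    try std.testing.expectError(error.TestExpectedNonNull, expectNonNull(@as(?u8, null)));
    try std.testing.expectError(error.TestExpectedNonNull, expectNonNull(null));
}
```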
I don’t think the extra prints add anything useful; just use `-freference-trace` to see the line number where it failed, so you can distinguish it from other test failures. If you need them anyway for some reason, you could replace `error.TestExpectedNonNull` with a function call that prints something and then returns the error.
Putting the `orelse` inside a function doesn’t seem worth it to me; it adds more complexity for not much benefit. Having the `orelse` right in the test also makes it more readable, because you directly understand the control flow without needing to look up what your custom function does.
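For comparison, the inline version reads like this (a sketch with a hypothetical `maybe_foo`):

```zig
const std = @import("std");

test "inline orelse instead of a helper" {
    const maybe_foo: ?u32 = 42;
    // Unwrap in place; a null value fails the test directly.
    const foo = maybe_foo orelse return error.TestExpectedNonNull;
    try std.testing.expectEqual(@as(u32, 42), foo);
}
```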
Also, with the `testing.expect` functions I have the expectation that they return `!void`, because the other ones do. So if you still want to make this unwrapping into a function, I would call it something like `unwrap`, `tryUnwrap`, or `testUnwrap`; but as said, I think using `orelse` directly is preferable.
Another thing to think about is whether you actually want a function that is meant to be used in test code only, or whether you just want a normal function that can fail and happens to be used in a test. Not every function needs to be designed for test usage; I think it is fine to use “normal” functions that return errors and let that error be the test failure (without fancy error messages).