To test that a function returns roughly the expected value, the standard library's

```zig
inline fn expectApproxEqAbs(expected: anytype, actual: anytype, tolerance: anytype) !void
```

looks promising. On master (0.12), however, I run into something I don't understand with the following simple case (`test.zig`):
```zig
const std = @import("std");

fn thrice(number: f32) f32 {
    return 3.0 * number;
}

test "Expecting this to compile and to pass" {
    try std.testing.expectApproxEqAbs(3.0, thrice(1.000001), 0.01);
}
```
That is, `$ zig test test.zig` fails to compile:

```
test.zig:8:50: error: unable to resolve comptime value
    try std.testing.expectApproxEqAbs(3.0, thrice(1.000001), 0.01);
                                           ~~~~~~^~~~~~~~~~
test.zig:8:50: note: value being casted to 'comptime_float' must be comptime-known
```
Wouldn't the function's return type (`f32`) make the second argument's *type* comptime-known? Why must its *value* also be comptime-known?
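The distinction seems to be between a comptime-known *type* and a comptime-known *value*. Here is a minimal sketch of my own (not part of `test.zig`) that, as far as I can tell, reproduces the underlying coercion rule outside of `std.testing`:

```zig
fn runtimeFloat() f32 {
    return 1.5;
}

test "float coercion is one-way" {
    // Fine: the comptime_float literal 3.0 coerces to the runtime type f32.
    const a: f32 = 3.0;
    _ = a;

    // Does not compile if uncommented: a runtime f32 cannot be coerced to
    // comptime_float, because a comptime_float's value (not just its type)
    // must be known at compile time.
    // const b: comptime_float = runtimeFloat();
}
```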
Switch the first two arguments (`testworks.zig`) and the test passes:
```zig
const std = @import("std");

fn thrice(number: f32) f32 {
    return 3.0 * number;
}

test "Expecting this to compile and to pass" {
    try std.testing.expectApproxEqAbs(thrice(1.000001), 3.0, 0.01);
}
```
```
$ zig test testworks.zig
All 1 tests passed.
```
Although the value 3.0 is the expected result within the specified tolerance, why does it have to be passed as `actual` for the test to work?
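For completeness, a workaround sketch from my own experimentation (not from the files above): pinning the literal's type with the `@as` builtin seems to make the original argument order compile as well, since `expected` is then a typed `f32` value rather than a `comptime_float`:

```zig
// Assumed workaround: give the expected literal a concrete runtime type
// up front, so no argument needs to coerce to comptime_float.
try std.testing.expectApproxEqAbs(@as(f32, 3.0), thrice(1.000001), 0.01);
```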