Hi! I’m currently rewriting the entire build system of my C library, and I want to use Zig for that. I’d also like to use the Zig build system as an easy way to set up a nice, easy-to-use testing suite.
The library in question is a simple one, comparable in spirit to Zig’s std, although with a much more limited scope. I want to collect tips and tricks from the community on how to do this. So far, from reading and researching, I’ve understood that to get “pretty printing” for the tests I would need to write a custom test runner, so as not to interfere with the way the default runner uses stdout. I’ve also come to the understanding that for pretty printing I would probably need to write a custom writer too.
I know this is a very broad and open discussion, because the truth is I’m not sure exactly what I want out of it. If you have experience building a static C library with Zig, or writing a custom test runner, or if you have suggestions on how I might do pretty printing, or can shed any light on how to test C code well and make good use of the tools provided by the std, I would really appreciate any response. Thanks, everyone.
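For reference, the kind of build.zig I have in mind looks roughly like this. This is only a sketch: the library name, file paths, and C flags are placeholders I made up, and the API shown is the one from recent Zig versions (around 0.12/0.13), which changes between releases.

```zig
// build.zig -- minimal sketch for a static C library plus a Zig test step.
// "mylib", the file paths, and the C flags are all placeholders.
const std = @import("std");

pub fn build(b: *std.Build) void {
    const target = b.standardTargetOptions(.{});
    const optimize = b.standardOptimizeOption(.{});

    // The static C library itself.
    const lib = b.addStaticLibrary(.{
        .name = "mylib",
        .target = target,
        .optimize = optimize,
    });
    lib.addCSourceFiles(.{
        .files = &.{"src/mylib.c"},
        .flags = &.{ "-std=c99", "-Wall" },
    });
    lib.addIncludePath(b.path("include"));
    lib.linkLibC();
    b.installArtifact(lib);

    // A Zig test binary that links against the C library.
    const tests = b.addTest(.{
        .root_source_file = b.path("test/tests.zig"),
        .target = target,
        .optimize = optimize,
    });
    tests.linkLibrary(lib);
    tests.addIncludePath(b.path("include"));

    const run_tests = b.addRunArtifact(tests);
    const test_step = b.step("test", "Run the test suite");
    test_step.dependOn(&run_tests.step);
}
```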
No. I also thought that was the case and wrote a test runner once. The builder interferes with stdout when running the tests, and you cannot change that with a custom runner.
Definitely not: for “pretty printing” of test failures you need your own expect functions, not another writer (where “pretty printing” means a different way of printing).
OK, so if I understand correctly, I don’t need to write a custom runner, but I do need to write my own expect functions for my data types. Do I invoke @panic() myself, or is there another way? As for the pretty printing part, I just want some nicely formatted output showing the test case, the arguments, and the expectations. Nothing fancy, just regular common practice.
If you remove the ! inside xor and run zig test xor.zig again, you get:
Test [1/1] test.xor... expected true, found false
Test [1/1] test.xor... FAIL (TestExpectedEqual)
/home/din/zig/lib/std/testing.zig:93:17: 0x1028c52 in expectEqualInner__anon_993 (test)
return error.TestExpectedEqual;
^
/home/din/zig/lib/xor.zig:18:9: 0x1028e08 in test.xor (test)
0 passed; 0 skipped; 1 failed.
error: the following test command failed with exit code 1:
/home/din/.cache/zig/o/da9ca675ced29bd7204b3983afafc330/test
You don’t need anything special to use the test facility.
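And to answer the @panic() question above: no, you don’t call @panic() yourself. A custom expect function just returns an error; the test runner catches it, marks the test as failed, and keeps running the remaining tests. A minimal sketch (expectPositive is a made-up helper, not a std function):

```zig
const std = @import("std");

// Hypothetical custom expect: print a formatted message, then
// return an error instead of calling @panic(). The test runner
// catches the error and reports the test as failed.
fn expectPositive(value: i64) !void {
    if (value > 0) return;
    std.debug.print("expected a positive value, found {d}\n", .{value});
    return error.TestUnexpectedValue;
}

test "expectPositive" {
    try expectPositive(42);
}
```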
Thanks for the response, but I had another idea in mind. I like getting visual feedback: what I would like is for tests to report their name and OK when they pass, or KO when they fail, with some detailed but formatted output. The error you just showed doesn’t provide the information I care about; functions like expectEqualStrings or expectEqualSlices are closer to what I expect. But maybe that means zig test is not a good fit, and I should just build an additional program myself, spawning processes to execute individual tests?
When zig test runs under a continuous integration process (i.e. when stdout is not a terminal), it does display the test name and OK:
> zig test xor.zig 2>&1 | /bin/cat -
1/1 test.xor... OK
All 1 tests passed.
But to get KO instead of FAIL you need your own test_runner.
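A minimal custom runner can iterate over the built-in list of test functions itself. This is only a sketch: the field names (name, func) and the error.SkipZigTest convention match recent Zig versions, so check against the version you’re on.

```zig
// test_runner.zig -- minimal sketch of a custom test runner that
// prints OK/KO per test instead of the default output.
const std = @import("std");
const builtin = @import("builtin");

pub fn main() void {
    var ok: usize = 0;
    var ko: usize = 0;
    for (builtin.test_functions) |t| {
        if (t.func()) |_| {
            ok += 1;
            std.debug.print("{s}... OK\n", .{t.name});
        } else |err| {
            // Tests signal "skip" with this well-known error.
            if (err == error.SkipZigTest) continue;
            ko += 1;
            std.debug.print("{s}... KO ({s})\n", .{ t.name, @errorName(err) });
        }
    }
    std.debug.print("{d} OK, {d} KO\n", .{ ok, ko });
}
```

You then point zig test at it with the --test-runner flag (or the test_runner field of b.addTest in build.zig); again, flag and field names are per recent Zig versions.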
It compares boolean values; that’s why it prints the failing test’s name and expected true, found false. If some of the std expect functions don’t suit your needs, you can copy them from the std lib and change them, no big deal.
An example of a custom expect function:
// `Decimal` is the library's own type; `std` is assumed to be
// imported at the top of the file.
fn expectEqual(expected: Decimal, actual: Decimal) !void {
    if (actual.isNaN() and !expected.isNaN()) {
        std.debug.print(
            \\
            \\------------------------
            \\ expected {}
            \\ found NaN ({!})
            \\------------------------
            \\
        , .{ expected, actual.unwrap() });
        return error.TestExpectedEqual;
    }
    if (actual.identical(expected))
        return;
    std.debug.print(
        \\
        \\------------------------
        \\ expected {}
        \\ found {}
        \\------------------------
        \\
    , .{ expected, actual });
    return error.TestExpectedEqual;
}