The test runner is just as lazy as the compiler. See “How do I get zig build to Run all the tests?”
This means I have to know the intricacies of conditional compilation / lazy compilation to even begin to reason about which tests are “detected”.
The observability of tests is poor. Even with --summary all and --summary new it is still difficult for me to know which tests have run. This has resulted in me consciously and subconsciously distrusting the test runner. For example, here is the output of the test runner:
jeff@jeff-debian:~/repos/zecm$ zig build test --summary all
Build Summary: 3/3 steps succeeded
test cached
└─ run test cached
   └─ zig test Debug native cached 10ms MaxRSS:37M
jeff@jeff-debian:~/repos/zecm$ zig build test --summary new
Build Summary: 3/3 steps succeeded
So when I create a new file and add tests to it, I immediately ask myself: did my test even run? It did not (as seen by --summary new), but then I ask myself: did I accidentally run --summary new twice? My small project’s tests only take a few ms to run anyway.
This makes me want to write tests to assert that my tests have run!
Philosophically:
Conditional compilation is hidden from the programmer; it is not “readable from the code”. This is antithetical to tests, which I think should be “human auditable” (most people do not write code to test their tests; they look at their test code very carefully and make sure “success” is printed at the end).
In conclusion, I propose the following:
Testing should be as eager as possible. It should be harder for me to blacklist a test than to include one. Currently, it is too hard to include a test for a novice like myself. It requires (see the sketch after this list):
put addTest in my build.zig
@import the file in my root.zig
reference declarations to thwart conditional compilation
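For illustration, here is roughly what those three steps look like. This is a minimal sketch: the file names are placeholders, and the build API shown is the Zig 0.13-era std.Build interface, which may differ in other versions.

// build.zig (step 1: register the test compilation and a `test` step)
const std = @import("std");

pub fn build(b: *std.Build) void {
    const tests = b.addTest(.{
        .root_source_file = b.path("src/root.zig"),
        .target = b.standardTargetOptions(.{}),
        .optimize = b.standardOptimizeOption(.{}),
    });
    const run_tests = b.addRunArtifact(tests);
    const test_step = b.step("test", "Run unit tests");
    test_step.dependOn(&run_tests.step);
}

// src/root.zig (steps 2 and 3)
const std = @import("std");

test {
    const my_file = @import("my_new_file.zig"); // step 2: import the file
    std.testing.refAllDecls(my_file); // step 3: reference its declarations
}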
Perhaps the test runner should accept entire directories of files? I just think that I should be able to look at whatever is configuring my test runner and immediately know, from that configuration, which files it is going to run.
The first thing that came to my mind was to essentially make refAllDecls(@This()) implicit for test running. A quick search on GitHub shows that there is already a proposal for this. I have no idea what the Zig team’s sentiment is on that proposal, but it is still open at least.
Zig hasn’t quite reached the stage where there are nice hand-holding tutorials for everything (although Ziggit has a nice Docs section). I think that’s a good outcome to aim for; programming is difficult enough that learning should be as easy as humanly possible.
But I think that testing Zig should work the same as the rest of Zig: lazy. There’s a one-liner to force execution of all reachable tests; that seems good and sufficient to me.
I agree with this. There should be a zig build test --summary tap flag to print test output in TAP format, at least. I don’t think this would be hugely difficult to add, it’s just that the last release saw major changes to how builds of every sort are reported, and it might be best for that to stabilize before anyone tries to add it.
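For anyone unfamiliar, TAP (“Test Anything Protocol”) is a small line-oriented text format, so a two-test run might be reported roughly like this (the test names here are made up for illustration):

TAP version 13
1..2
ok 1 - parser handles empty input
not ok 2 - parser rejects oversized header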
But about this:
I subscribe to the school of thought which says that until a test has failed, at least once, it does not qualify as a test. As a matter of course, I always write a test to fail, observe that it has, and change the test to pass, in that order. I’m not otherwise especially influenced by TDD doctrine, but I’ve learned that this avoids an entire host of problems which will otherwise reliably arise. It tells you: you have a test, the test is being run, the value that the test produces, and what it looks like if the test fails later. Well worth it!
Even if Zig had TAP output, you must admit, this is an easier and faster way to see that the test exists and is being run, compared to scanning a growing list of OKs and making sure the one you added is also included.
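For concreteness, that workflow in Zig might look like the following; the add function and the values are purely hypothetical:

const std = @import("std");

fn add(a: i32, b: i32) i32 {
    return a + b;
}

test "add" {
    // First, assert a deliberately wrong value and confirm the runner
    // reports a failure for this specific test:
    //   try std.testing.expectEqual(@as(i32, 5), add(2, 2));
    // Then flip it to the correct value and watch it pass:
    try std.testing.expectEqual(@as(i32, 4), add(2, 2));
}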
I’m not sure how feasible this actually is. A few examples to consider:
Deprecation is typically handled by doing something like this:
pub const MAX_PATH_BYTES = @compileError("deprecated; renamed to max_path_bytes");
With implicit refAllDecls, the file containing this would fail to compile during testing.
Something may be defined for some platforms but not for others, and again it might be a compile error on certain platforms only if it is referenced. Here’s an example:
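Something along these lines; this is a hypothetical sketch, not the actual standard-library source:

const builtin = @import("builtin");

// Defined only where the target provides it; merely referencing it on any
// other platform is a compile error.
pub const PATH_MAX = switch (builtin.os.tag) {
    .linux, .macos => 4096,
    else => @compileError("PATH_MAX is not defined for this target"),
};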
Again, implicit refAllDecls would mean this file would be unable to compile during tests on platforms that don’t define PATH_MAX. This particular example is unlikely to cause a problem in practice, but consider something like:
const builtin = @import("builtin");

pub const Foo = if (builtin.os.tag == .windows)
    Bar // some Windows-only implementation
else
    @compileError("Foo is windows-only");
Namespaces like std.os.linux and std.os.windows expect functions within them to only be referenced when targeting the relevant platform. An implicit refAllDecls would mean that these sorts of namespaces would cause compile errors, too.
Note: If you add an explicit
test {
    std.testing.refAllDecls(@This());
}
to any of the above examples, you can confirm that they would give compile errors.
Not obviously so: it says “refAllDecls”, and it does that. But blowing up with compile errors isn’t really the most useful semantics. Without some way to introspect a type at comptime and determine that referencing a decl will terminate compilation, it’s difficult to provide something more useful here.
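For context, refAllDecls is itself only a few lines; a simplified sketch of the standard-library helper:

const std = @import("std");
const builtin = @import("builtin");

pub fn refAllDecls(comptime T: type) void {
    if (!builtin.is_test) return;
    inline for (comptime std.meta.declarations(T)) |decl| {
        // Taking the address forces the declaration to be analyzed, so a
        // decl defined as @compileError(...) fires right here; there is no
        // comptime introspection that could detect and skip it first.
        _ = &@field(T, decl.name);
    }
}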