Testing: disable memory leak report when tests fail

I tend to have a good number of (unit) test cases. While adding new functionality to the code base, or changing something, one or more tests fail. Which is why I have all the tests in the first place. However, often two-thirds (the majority) of the detailed traces/messages printed to the console are caused by memory leaks, and I need to scroll back two pages to find the error trace I’m actually interested in. The memory leaks occurred because a “testing.expectXXX(” call failed, so the leaks are to be expected. I’m a little hesitant to add “defer xxx” and “errdefer xxx” just to suppress the memory leak messages. Sometimes executing “deinit()” just causes new errors, because the code is not ready yet.

Question: Is it possible that:

  • If a test succeeds without error, possible memory leaks are reported, because that means something is not working yet.
  • If a test fails, only the trace related to that error is printed, and memory leak reports are suppressed (disabled).

The traces come from the DebugAllocator, which is what std.testing.allocator is an instance of; it just outputs them via the logging API.
The test runner does override the log function: it tracks the number of error logs and uses its own testing.log_level to determine whether a message should be printed.
You can change testing.log_level, though it resets to .warn before every test.
Unfortunately, the leak traces use log level err, so they can’t be disabled that way.
Fortunately, you can write your own test runner to do whatever you want; it’s just a normal Zig program.
I recommend copying the current one from zig_lib_dir/compiler/test_runner.zig and changing it to capture the log output, which you can then print after the test, depending on your conditions.

You can get the zig_lib_dir via zig env.
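A minimal sketch of that idea, assuming a recent Zig version. The exact shape of std_options, the logFn signature, and builtin.test_functions have all changed between releases, so compare against the real zig_lib_dir/compiler/test_runner.zig; the runner below also skips details the real one handles (progress reporting, skip results, resetting the testing allocator):

```zig
// test_runner.zig (sketch): buffer log output per test, print it only
// when the test passed, suppress it when the test failed.
const std = @import("std");
const builtin = @import("builtin");

var log_buffer: [64 * 1024]u8 = undefined;
var log_len: usize = 0;

// Route all std.log output (including leak traces) through our buffer.
pub const std_options: std.Options = .{ .logFn = bufferedLog };

fn bufferedLog(
    comptime level: std.log.Level,
    comptime scope: @TypeOf(.enum_literal),
    comptime format: []const u8,
    args: anytype,
) void {
    _ = scope;
    // Append the message instead of printing it immediately; drop it if full.
    const msg = std.fmt.bufPrint(
        log_buffer[log_len..],
        "[" ++ comptime level.asText() ++ "] " ++ format ++ "\n",
        args,
    ) catch return;
    log_len += msg.len;
}

pub fn main() !void {
    for (builtin.test_functions) |t| {
        log_len = 0;
        if (t.func()) {
            // Test passed: leak traces are now the interesting signal.
            std.debug.print("{s}", .{log_buffer[0..log_len]});
        } else |err| {
            // Test failed: report only the failure, skip the leak chatter.
            std.debug.print("{s} failed: {s}\n", .{ t.name, @errorName(err) });
        }
    }
}
```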


I suggest getting over the hesitation. Making a test not leak when it returns an error is the same thing as making a function not leak when it returns an error, because a test is just a function.
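Concretely, the same defer discipline you would use in any function keeps a test leak-free even when an expectation fails and returns early:

```zig
const std = @import("std");

test "no leak even when the expect fails" {
    const gpa = std.testing.allocator;
    const buf = try gpa.alloc(u8, 16);
    // defer runs on the error return path of a failed expect, too,
    // so the leak detector stays quiet whether the test passes or fails.
    defer gpa.free(buf);
    try std.testing.expectEqual(@as(usize, 16), buf.len);
}
```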

It takes some getting used to, but it’s essential for writing stable and reliable Zig code. The leak detection is part of the test suite for a reason: turning off alarms because they keep going off is normalization of deviance.

Once you’ve done that, you could write some checkAllAllocationFailures tests for your main code. The tests might not be the only functions which leak.
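A sketch of what such a test looks like; std.testing.checkAllAllocationFailures re-runs the function, making each allocation fail in turn, and reports leaks on every failure path (parseNumbers is a made-up function under test, and ArrayList/tokenizer names vary slightly between Zig versions):

```zig
const std = @import("std");

fn parseNumbers(gpa: std.mem.Allocator, input: []const u8) !void {
    var list = std.ArrayList(u32).init(gpa);
    // Without this deinit, the induced-OOM paths would leak.
    defer list.deinit();
    var it = std.mem.tokenizeScalar(u8, input, ' ');
    while (it.next()) |tok| {
        try list.append(try std.fmt.parseInt(u32, tok, 10));
    }
}

test "parseNumbers survives allocation failure" {
    // First argument of parseNumbers is the allocator; the rest go in extra_args.
    try std.testing.checkAllAllocationFailures(
        std.testing.allocator,
        parseNumbers,
        .{"1 2 3"},
    );
}
```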


You are right, I don’t want “normalization of deviance”. I definitely do not want to disable memory leak detection. zig test should only be successful when all tests (incl. memory leak detection) are successful. But I want better focus, not distraction. I want to focus first on getting the business logic right, and only then (second) make sure that every possible error is properly handled and no memory leaks occur. checkAllAllocationFailures is a good hint for that, thank you. But until I get there, the two pages of memory leak error messages per test are distracting me while I’m still trying to get the business logic right.


Then just use an arena allocator until you are ready to deal with the memory errors.
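For example, backing an ArenaAllocator with std.testing.allocator means a single deinit frees everything at once, so half-finished tests don’t drown you in leak traces:

```zig
const std = @import("std");

test "business logic first, cleanup later" {
    var arena = std.heap.ArenaAllocator.init(std.testing.allocator);
    // One line of cleanup; no per-allocation defer bookkeeping needed yet.
    defer arena.deinit();
    const a = arena.allocator();

    const nums = try a.alloc(u32, 100);
    try std.testing.expectEqual(@as(usize, 100), nums.len);
    // No individual free() calls: the arena releases everything in deinit,
    // even when an expect above fails and the test returns early.
}
```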


I think this is an acceptable answer to the immediate problem of “here’s a bunch of tests, they cover important things but the tests weren’t written to handle leaks, what’s the fastest way to get around that problem”.

The larger problem is how to write tests which don’t leak, just as a matter of course.

So here’s a secret which fixes many problems with testing before they happen: never write a passing test, ever, no exceptions. If a test hasn’t failed at least once, what’s it testing? Is it a test? Is zig build test even finding it? Is the test logic backward, so I’m passing a test which should fail?
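In practice that can be as simple as writing the expectation wrong on purpose, watching it fail, then correcting it (add is a made-up function under test here):

```zig
const std = @import("std");

fn add(a: i32, b: i32) i32 {
    return a + b;
}

test "add: written to fail first" {
    // Step 1: deliberately wrong expected value. Running this once and seeing
    // it fail proves the test is compiled, reached, and checking the right thing:
    //     try std.testing.expectEqual(@as(i32, 5), add(2, 2));
    // Step 2: only after observing the failure, fix the expectation.
    try std.testing.expectEqual(@as(i32, 4), add(2, 2));
}
```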

I don’t do test driven development, usually, and certainly not in a rigid way. But this is a precept of TDD which I’ve found very useful indeed. ohsnap is designed to make fail-first the short path; this also happens to be the easier way to write the library, but if it were the opposite, it would still fail first.

If a test starts out failing, then any test leaks are visible right on the spot, when one is already thinking about the test rather than the business logic. So it’s pretty straightforward to fix the leaks once you see them. Since every modification to a test block (barring updating the value of existing tests which start failing) should involve an additional try statement, which will by policy fail, any later changes which introduce leaks can also be handled by the same policy.

Fail-first has caught so many mistakes over the years that I’ll never do testing any other way. It’s especially valuable in Zig because of lazy compilation: just because you wrote a test doesn’t mean that test gets executed; the test program has to actually reach it.
