How to enable more logging and disable caching with "zig build test"?

I notice that when I try to run tests for a project I got from GitHub, the zig build test output contains… well, nothing:

C:\Users\janne\dev\ziglua>zig version
0.12.0-dev.1861+412999621
C:\Users\janne\dev\ziglua>zig build -Dversion=luau test

C:\Users\janne\dev\ziglua>

How can I get zig to print a log of all the test names it ran? Also, how can I disable the “test cache” that tries very hard not to run my tests?

I use tests heavily while developing, and I always want to run everything and be 100% sure that the tests actually did run. A log and no cache is a good way to ensure that. Alas, the defaults seem to be the opposite of what I’d like.

1 Like

You can pass a test_runner option to your b.addTest call. Then you can specify your own runner. I’m using: Custom Zig Test Runner · GitHub
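
Roughly like this in build.zig (a minimal sketch against this 0.12.0-dev API; src/main.zig and test_runner.zig are placeholder paths, and the exact type of the option has changed between Zig versions):

const std = @import("std");

pub fn build(b: *std.Build) void {
    const tests = b.addTest(.{
        .root_source_file = .{ .path = "src/main.zig" },
        // Path to your own runner; it replaces the default test runner.
        .test_runner = "test_runner.zig",
    });
    const run_tests = b.addRunArtifact(tests);
    const test_step = b.step("test", "Run unit tests");
    test_step.dependOn(&run_tests.step);
}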

3 Likes

I believe adding .has_side_effects = true to the run unit tests step prevents caching.
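
Something like this in build.zig (a minimal sketch; src/main.zig is a placeholder path):

const std = @import("std");

pub fn build(b: *std.Build) void {
    const unit_tests = b.addTest(.{
        .root_source_file = .{ .path = "src/main.zig" },
    });
    const run_unit_tests = b.addRunArtifact(unit_tests);
    // Marking the run step as having side effects makes the build system
    // re-run the tests even when the compiled test binary is cached.
    run_unit_tests.has_side_effects = true;
    const test_step = b.step("test", "Run unit tests");
    test_step.dependOn(&run_unit_tests.step);
}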

1 Like

Thanks, I’ll try that. I find the default a little weird; it seems better suited for some kind of CI environment than for a local developer.

The custom runner seems very useful too. But frankly, why is there no test output by default now? How can I be sure that any tests were executed if nothing gets printed? IMO the current behavior is just… weird.

3 Likes

This is in the long tradition of Unix commands, which print no output in the default (successful) case.

You can always use the exit code to decide to print something on your own. For example, this will only print the message if the tests pass:

$ zig test module.zig && echo "All's well that ends well"
All 4 tests passed.
All's well that ends well

I know it’s the tradition in Unix tools but IME it’s not the tradition in test suite frameworks. Furthermore, even Unix tools tend to support a verbose flag to turn on helpful diagnostics. Right now Zig tests produce so little output that my first instinct is to suspect that nothing actually ran due to a tooling or test bug.

2 Likes

zig build test --summary all gives you similarly verbose output.

When not cached:

Build Summary: 3/3 steps succeeded; 1/1 tests passed
test success
└─ run test 1 passed 547ms MaxRSS:2M
   └─ zig test Debug native success 695ms MaxRSS:179M

When cached:

Build Summary: 3/3 steps succeeded
test cached
└─ run test cached
   └─ zig test Debug native cached 25ms MaxRSS:29M
1 Like

Let me emphasize that I’m definitely looking for even more verbosity, at the level of printing the name of each test as it is executed. I find this extremely helpful for seeing which tests are running, which particular test started failing, in what order the tests ran, etc. I can’t imagine being the only one who finds this useful?

IMO the current test output is not even that quick to parse by eyeballing. For example, I was just debugging a crash in the Windows NTDLL caused by a non-portable free() pattern in some (not even my) Zig code, and most of the output here is just noise to me.

C:\Users\janne\dev\ziglua>zig build -Dversion=luau test
run test: error: while executing test 'test.executing string contents', the following command exited with code 116 (expected exited with code 0):
C:\Users\janne\dev\ziglua\zig-cache\o\38678a4405922854a5bc1e8ab7040acf\test.exe --listen=-
Build Summary: 6/8 steps succeeded; 1 failed; 37/37 tests passed (disable with --summary none)
test transitive failure
└─ run test failure
error: the following build command failed with exit code 1:
C:\Users\janne\dev\ziglua\zig-cache\o\8261478f89c8809547760f55485ade04\build.exe C:\Users\janne\scoop\apps\zig-dev\0.12.0-dev.1861\zig.exe C:\Users\janne\dev\ziglua C:\Users\janne\dev\ziglua\zig-cache C:\Users\janne\AppData\Local\zig --seed 0xaae3b0b1 -Dversion=luau test

Have you tried to build a custom test runner that gives you the output you want?
Be the change you want to see, just my 2 cents.

I will for my own projects.

The context for my post was making a PR for the Ziglua project. Projects that I didn’t author won’t support the verbose test runner unless I integrate it into them first. Hardly the end of the world, but it raises the bar for starting to debug someone else’s code.

1 Like

@karlseguin thanks for the test runner tip and code! I adapted the code to print a summary of each test as it runs.

I shared my version here: zig test runner · GitHub. This version changes the output format and is also ported to work on Windows (the std.os.getenv API is not available there).
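
For anyone who doesn’t want to open the gists, the core idea is roughly this (a stripped-down sketch, not the exact code from either gist): loop over builtin.test_functions and print each name as it runs.

const std = @import("std");
const builtin = @import("builtin");

pub fn main() !void {
    var passed: usize = 0;
    var skipped: usize = 0;
    var failed: usize = 0;
    for (builtin.test_functions) |t| {
        // Print the name before running, so you can see exactly which
        // tests executed, in what order, and which one failed.
        std.debug.print("running: {s}\n", .{t.name});
        if (t.func()) |_| {
            passed += 1;
        } else |err| switch (err) {
            error.SkipZigTest => skipped += 1,
            else => {
                failed += 1;
                std.debug.print("FAIL: {s}: {}\n", .{ t.name, err });
                if (@errorReturnTrace()) |trace| std.debug.dumpStackTrace(trace.*);
            },
        }
    }
    std.debug.print("{d} passed, {d} skipped, {d} failed\n", .{ passed, skipped, failed });
    if (failed > 0) std.process.exit(1);
}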

5 Likes

By default, zig build test caches the result and skips re-running the tests when nothing has changed, unless you set has_side_effects to true on the run step.
Example code:

const std = @import("std");

pub fn build(b: *std.Build) void {
    const target = b.standardTargetOptions(.{});
    const optimize = b.standardOptimizeOption(.{});

    const tests = b.addTest(.{
        .root_source_file = .{ .path = "src/decimal.zig" },
        .target = target,
        .optimize = optimize,
        // Custom runner so each test can be reported as it runs.
        .test_runner = "test_runner.zig",
    });
    const run_tests = b.addRunArtifact(tests);
    // Force the tests to run on every `zig build test`, even when cached.
    run_tests.has_side_effects = true;
    const test_step = b.step("test", "Run decimal library tests");
    test_step.dependOn(&run_tests.step);
}
2 Likes

Are you sure this actually prevents caching, i.e. causes a recompilation each time you call zig build test?

Btw, I played around with a custom test_runner yesterday, inspired by @nurpax’s gist :slight_smile: I’ve added buffered printing, but the output is still messed up sometimes (only tested on Linux); it interferes with the “normal” output from the test compilation (LLVM Emit Object... and so on). It gives me the impression that having such verbose output really isn’t intended :wink:

It does not re-compile but it runs the tests again.

has_side_effects = true was suggested earlier in the thread. I tried it, and it works as I’d expect: nothing is recompiled, but the tests run each time.

Or maybe it’s just not battle-tested enough to cover all these cases in different usages?

I hit the “output messed up” problem when the test runner’s output interleaves with Zig’s LLVM build output. I think the problem is just that Zig should print a newline after it’s done compiling. But I fixed it in my runner (zig test runner · GitHub) by printing a carriage return and a “clear until end of line” ANSI code:

    printer.fmt("\r\x1b[0K", .{}); // beginning of line and clear to end of line

IMO this shouldn’t be necessary but it does fix the problem. :slight_smile:

Yeah, that does work nicely if all tests pass. As soon as a test fails, it’s all over the place again for me. Could that be a concurrency issue, i.e. would calls to stdout / stderr have to be protected by a mutex or something along those lines?

I didn’t look into it deeply, but I think it’s just stdout getting messed up by ANSI codes moving the cursor to the beginning of the line and/or a missing newline. The same problem also repros in situations not involving zig tests:

C:\Users\janne\dev\ziglua>zig build -Dversion=luau run-example-luau-bytecode
steps [8/10] zig build-exe luau-bytecode Debug native... LLVM Emit Object... result is positive:        yes

C:\Users\janne\dev\ziglua>zig build -Dversion=luau run-example-luau-bytecode
result is positive:     yes

Here the “result is positive: yes” part is my executable’s output. At least in this case it looks like Zig’s build output should simply insert a newline after it’s done compiling.

Per the issues with stdout being jumbled, this is known behavior: prevent confusing debugging experience when users write garbage to stdout, interfering with the build runner / test runner protocol · Issue #15091 · ziglang/zig · GitHub

Essentially, I think zig uses stdout for the interface between zig build and the test binary, so writing to stdout in your tests is unsupported. The other option is to write to stderr.
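
For example (a small sketch; std.debug.print always writes to stderr):

const std = @import("std");

test "debug output goes to stderr" {
    // std.debug.print writes to stderr, so it does not collide with the
    // stdout-based protocol between the build runner and the test binary.
    std.debug.print("value under test: {d}\n", .{42});

    // Writing to stdout directly is what can interfere:
    // try std.io.getStdOut().writer().print("oops\n", .{});
}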

My latter example is not actually a test. It’s just “build and execute” steps. It seems to behave the same as the earlier case with the custom test runner.

In my experience, this does not make a difference. I sometimes get messed-up output from print regardless of whether I use stderr or stdout.

Thanks for linking that GitHub issue, actually. This comment mentions the introduction of the parallel test runner, which suggests to me that you might run into concurrency issues if you have print statements in your tests. Also, Andrew Kelley mentions std.log in this comment, which I haven’t tried extensively. In any case, I think the language documentation could benefit from some additional info / how-to around getting console output from tests.
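
For reference, this is the kind of thing I mean with std.log from a test (a sketch I haven’t run against a custom runner; with the default runner, std.testing.log_level controls which levels get printed):

const std = @import("std");

test "logging from a test" {
    // The default test runner filters out anything below .warn;
    // raising std.testing.log_level makes .info and .debug visible.
    std.testing.log_level = .debug;
    std.log.debug("computed {d} items", .{3});
}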

2 Likes