Speeding up debug compilation


I have toyed around with small programs in Zig, but unfortunately I’m used to compiling Go code, so I find the compilation time of “zig build” unbearable: around 7 s after changing a single character in a debug print format string, in a tiny program using std.json and the HTTP client. My laptop has a multi-core i7 and 32 GB of RAM.

Are there some adaptations I can make to build.zig to speed up the debug iteration time? My project is pure Zig, no C or C++ as of now.


The main problem here is that Zig doesn’t have incremental compilation yet. So even if you just change a single character, it will recompile the entire thing.

But there might be ways to reduce the compile time.
There are a few possible culprits:

  1. You do a lot of work at compile time. The solution here is to push heavy calculations to runtime, which even in Debug mode should be faster than the comptime interpreter.
  2. You generate a ton of code through generic function instantiations. Printing/formatting functions are usually the big offenders here.
  3. Even an empty program does a fair amount of work, for example to be able to print stack traces. You can avoid some of that by stripping the debug info, which can be done from build.zig with exe.strip = true;. However, this means errors will be less helpful.
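
Point 3 can be sketched in build.zig. This is a minimal sketch assuming a Zig 0.11-era build API; "app" and the source path are placeholders, and in newer Zig versions strip moved into the addExecutable options, so the exact field names may differ:

```zig
const std = @import("std");

pub fn build(b: *std.Build) void {
    const exe = b.addExecutable(.{
        .name = "app", // placeholder name
        .root_source_file = .{ .path = "src/main.zig" },
        .target = b.standardTargetOptions(.{}),
        .optimize = b.standardOptimizeOption(.{}),
    });

    // Strip debug info: a smaller binary and a bit less work for the
    // compiler/linker, at the cost of less helpful stack traces.
    exe.strip = true;

    b.installArtifact(exe);
}
```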

I have a very similar CPU to yours, and I observe 835 ms for building the zig init-exe example project with zig build after making a change to src/main.zig. I would expect similar times for a tiny program that uses std.json and the HTTP client. Would you be willing to share your code?

Long term, check out the performance roadmap. The difference in performance between where Zig is now and where it will get to as these roadmap items are achieved is enormous. I share your perspective that waiting 7 s for a tiny program is unbearable.

No matter how fast the compiler is, it is no match for a programmer putting comptime code in there that wastes an arbitrary amount of time, so in the medium term what needs to happen is making -ftime-report more useful at pointing out where the compiler is spending relatively large amounts of time.
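
For reference, the -ftime-report flag mentioned above can already be passed when invoking the compiler directly. A sketch, assuming a zig init-exe style layout (the report's format and level of detail vary by compiler version):

```
zig build-exe src/main.zig -ftime-report
```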


Hi Andrew! I have put my code into this repo GitHub - krumberg/test_json_zig: test repo for zig.

Yesterday I was on battery power, but even with my power supply plugged in, the compilation time of “zig build” after any change is still about 5 seconds. Just to be clear: I didn’t mean for this post to come across as bashing the Zig project. I love Zig, the community, and the amazing toolchain. You are doing a fantastic job with such limited resources; I’m just happy if my sample can help resolve some performance bottlenecks in the compiler.


I can confirm that your project takes a long time to compile.

I took a closer look at your binary:
In total it is around 4 MB in size (a hello world is around 500 KB).
A lot of the expensive functions seem to come from std.crypto.*, most notably crypto.tls.Client.init__anon_10784, which takes up 700 KB of the binary.
Another 900 KB comes from a handful of functions in crypto.pcurves.*, most notably crypto.pcurves.p384.p384_scalar_64.mul at almost 100 KB.

So any optimization attempt should probably start there.


FWIW, I am about to start working on upstreaming the ELF linker to Zig (from GitHub - kubkon/zld: Zig's ld drop-in replacement). This should kick off the work of getting the self-hosted x86-64 backend on Linux into a more usable state, which will feature incremental in-place binary patching and should vastly reduce compilation times. Much like Andrew, I share your sentiment that 7 s build times are unacceptable.


I can confirm that adding a call to http.Client.fetch() in my tiny program immediately increases compilation time by about 5 seconds:

Initial compile times (after changing 1 char) for time zig build:

Executed in    1.36 secs    fish           external
   usr time    1.08 secs  669.00 micros    1.08 secs
   sys time    0.32 secs    0.00 micros    0.32 secs

After I add http.Client.fetch() call:

Executed in    6.37 secs    fish           external
   usr time    5.52 secs    0.00 micros    5.52 secs
   sys time    0.86 secs  702.00 micros    0.86 secs

The lines added were:

var result = try client.fetch(allocator, .{ .location = .{ .url = url } });
defer result.deinit();

zig version:


Kudos to @kubkon: since the earlier messages in this thread, he has finished and landed the ELF linker, with only a few follow-up tasks left until this major milestone:

Meanwhile the x86 backend has seen major progress, and anyone interested in compilation speed should keep an eye on this issue:

It will be a great day when this milestone is reached!