My Zig version is 0.16.0-dev.1898+04226193c; I’m running builds of the compiler that I’ve made locally. I’m doing this because I want to get more involved with Zig, but I guess it’s also possible I’ve got a weird setup. This is how I build Zig, FWIW:
update_zig () {
    (
        set -euxo pipefail
        cd ~/repos/zig
        git pull
        mkdir -p build
        cd build
        cmake -DCMAKE_PREFIX_PATH=/opt/homebrew ..
        make
        cd ~/repos/zls
        git pull
        zig build -Doptimize=ReleaseFast
    )
}
Some more context – building the default project takes 25 seconds.
cd "$(mktemp -d)"
➜ tmp.E5JfVwql3A zig init
info: created build.zig
info: created build.zig.zon
info: created src/main.zig
info: created src/root.zig
info: see `zig build --help` for a menu of options
➜ tmp.E5JfVwql3A time zig build
zig build 25.60s user 0.47s system 99% cpu 26.108 total
Yes, Zig executables are one compilation unit. If you use functions from the standard library, naturally the compiler must compile them in.
The compiler does a good job of caching things, however, and usually (on my Mac) by far the longest phase when compiling any project is LLVM Emit Object.
Compiling a new project, by the way, requires compiling the build script and then running the build script to compile the project itself.
Also, some parts of the compiler are shipped as source that only needs to be compiled once across all projects.
This is the signature of what’s going on here: print is comptime-specialized for every call. That prevents the entire class of format-string attacks, among other benefits, but it also means print gets compiled a lot, and each specialization does ‘comptime stuff’, which is not exceptionally fast in the grand scheme of things.
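As a rough illustration (my own sketch, not from the thread): each distinct format string instantiates its own comptime specialization of the formatting machinery, and because the format string must be comptime-known, a malformed one is a compile error rather than a runtime vulnerability.

```zig
const std = @import("std");

pub fn main() void {
    // Two distinct format strings: the compiler generates a separate
    // comptime specialization of the formatting code for each one.
    std.debug.print("decimal: {d}\n", .{42});
    std.debug.print("hex:     {x}\n", .{42});

    // A mismatch between format string and arguments is caught at
    // compile time, e.g. (uncommenting this fails to compile):
    // std.debug.print("{d}\n", .{ 1, 2 });
}
```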
OK sounds good, I mostly wasn’t sure whether this was par for the course or something funky with my setup.
Any tips on where to look to learn more about speeding up compilation? I’m looking at 22-second compile times on a fairly simple project that only depends on the standard library. I get the gist that I might see a big boost by breaking the project into smaller compilation units with a more sophisticated build script. One of my modules uses some std.crypto functions, and I notice that’s a big chunk of compilation time, so I suspect it would pay to break that out into a separate unit and get more cache hits.
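For reference, and strictly as a sketch (file names are hypothetical, and the build API shifts between Zig versions): a build.zig can compile the crypto-heavy code as its own static library, so it becomes a separate compilation unit that rebuilds and caches independently of the main executable.

```zig
// build.zig sketch — hypothetical file names, API as of recent Zig versions
const std = @import("std");

pub fn build(b: *std.Build) void {
    const target = b.standardTargetOptions(.{});
    const optimize = b.standardOptimizeOption(.{});

    // The crypto-heavy code as its own static library: it only
    // rebuilds when src/crypto_stuff.zig (or its imports) change.
    const crypto_lib = b.addLibrary(.{
        .name = "crypto_stuff",
        .linkage = .static,
        .root_module = b.createModule(.{
            .root_source_file = b.path("src/crypto_stuff.zig"),
            .target = target,
            .optimize = optimize,
        }),
    });

    const exe = b.addExecutable(.{
        .name = "app",
        .root_module = b.createModule(.{
            .root_source_file = b.path("src/main.zig"),
            .target = target,
            .optimize = optimize,
        }),
    });
    exe.linkLibrary(crypto_lib);
    b.installArtifact(exe);
}
```

Note the caveat: the boundary between truly separate compilation units has to be `export`/`extern` functions (C ABI); a plain `@import`ed module still ends up compiled into the same unit as the executable.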
Is it 22 seconds every time you make a change? That would surprise me. 22 seconds the first time after updating the compiler or cloning the repo seems more forgivable.
Actually, here are my results with an empty project –
➜ tmp.pbsWBBUWHc zig init
info: created build.zig
info: created build.zig.zon
info: created src/main.zig
info: created src/root.zig
info: see `zig build --help` for a menu of options
➜ tmp.pbsWBBUWHc time zig build
zig build 26.04s user 0.49s system 100% cpu 26.501 total
➜ tmp.pbsWBBUWHc echo "" >> src/main.zig
➜ tmp.pbsWBBUWHc time zig build
zig build 4.86s user 0.15s system 100% cpu 4.980 total
➜ tmp.pbsWBBUWHc echo "" >> src/main.zig
➜ tmp.pbsWBBUWHc time zig build
zig build 4.69s user 0.15s system 100% cpu 4.798 total
The 22s re-build is with my current project where I’m using some of std.crypto, and I think that’s a big contributor.
Depending on what OS you’re on, you may be able to take advantage of --watch -fincremental and save a lot of time. The best combination right now is x86_64-linux; the aarch64 backend and the MachO linker are not quite there yet.
Try this: zig build --webui --time-report. This will give you an idea of what’s taking so long.
FWIW, I see a massive difference in build time between UNIX (macOS/Linux) and Windows, where Windows is slow to the point where one wonders what the compiler is doing all the time (e.g. quite a bit slower than building similar C/C++ projects via MSVC). I haven’t investigated yet, but the usual culprit on Windows is any code that needs to touch the filesystem, and I guess most Zig team members don’t use Windows as their daily driver and thus don’t “feel the pain”.
…OTOH, I’ve seen Windows lose so much popularity among devs in the last decade that I wonder whether treating Windows as a “tier 1” platform makes a lot of sense. E.g. even for game dev I would probably treat the Windows PC like a console development system (cross-compile and remote-debug from a Linux machine).
You can speed up Windows builds significantly by turning off Defender for all the folders involved: your project, the compiler, and especially the cache. And of course, never have them anywhere auto-backed up to the cloud.
There is also the Dev Drive feature (which uses a different file system than NTFS), which I haven’t tried, but which supposedly performs better for dev workloads.
I just laugh when people complain about 20 second build times. Back in the day I worked on a project where on a regular developer machine, a full debug build would take about a week. In practice, you never need to do that, and a full checked build would run too slowly to be usable anyway. You’d do that only for the most difficult of bugs, and you’d pull that off the automated build machines. Typical incremental build times for my component would be less than a minute, and a full build about 20 minutes.
It is by no means designed to be slow. It just hasn’t been a priority to make it faster, because many optimization efforts would be wasted when some compelling language change comes along, and there are lower-hanging opportunities to improve turnaround.
By the way, to solve this specific problem, you could probably make that precompute function run at build time instead and avoid recomputing it at comptime on every build (only in debug mode if it is nondeterministic and different results are desired; I don’t pretend to know anything about crypto).
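One way to move that precomputation to build time (a sketch with hypothetical names, not from the thread): have build.zig compile and run a small generator program, capture its stdout, and expose the captured file as an importable module. The generator then reruns only when its own source changes, instead of the table being recomputed at comptime on every build.

```zig
// In build.zig (sketch): `exe` is the main executable step from earlier
// in the build function, and tools/gen_table.zig is a hypothetical
// program that prints Zig source for the precomputed table to stdout.
const gen = b.addExecutable(.{
    .name = "gen_table",
    .root_module = b.createModule(.{
        .root_source_file = b.path("tools/gen_table.zig"),
        .target = b.graph.host,
        .optimize = .Debug,
    }),
});
const run_gen = b.addRunArtifact(gen);
// Capture stdout as a generated source file and expose it as a module.
exe.root_module.addAnonymousImport("precomputed_table", .{
    .root_source_file = run_gen.captureStdOut(),
});
```

The application code then reads it with `const table = @import("precomputed_table");`.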