Complicated pre-build and post-build steps in build.zig

Continuing my mission to slowly replace a pile of ad-hoc bash and cmake with “pure” Zig, I’m wondering how to best go about longer pre- and post-build steps.

Simple command line tools can be called with Build.addSystemCommand, but for something like a bash script that modifies a lot of files or calls a lot of external tools, are there any good options?

You can run your complex bash scripts with Build.addSystemCommand. Can you explain in more detail what you are trying to do, and what you need beyond addSystemCommand?

What I would do is rewrite the bash scripts in Zig and then, in build.zig, use addRunArtifact to run them. I’d then pass everything that the script needs as command-line arguments.

If my script takes too long to execute, I’d split it into several Zig scripts, create several addRunArtifact commands, and wire them together. That way, I’d lean on build.zig to cache partial results.
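As a rough illustration of that wiring (all file and tool names here are made up for the example, not taken from any real project), a build.zig that chains two small Zig tools so that each stage’s output is cached by the build system might look like:

```zig
const std = @import("std");

pub fn build(b: *std.Build) void {
    const target = b.standardTargetOptions(.{});
    const optimize = b.standardOptimizeOption(.{});

    // Stage 1: a small Zig program replacing part of the old bash script.
    const stage1 = b.addExecutable(.{
        .name = "stage1",
        .root_source_file = b.path("tools/stage1.zig"),
        .target = target,
        .optimize = optimize,
    });
    const run_stage1 = b.addRunArtifact(stage1);
    // The tool writes its result to a path the build system chooses;
    // the returned LazyPath carries the dependency information.
    const stage1_out = run_stage1.addOutputFileArg("stage1.out");

    // Stage 2 consumes stage 1's output as a CLI argument, so it
    // re-runs only when that file (or the tool itself) changes.
    const stage2 = b.addExecutable(.{
        .name = "stage2",
        .root_source_file = b.path("tools/stage2.zig"),
        .target = target,
        .optimize = optimize,
    });
    const run_stage2 = b.addRunArtifact(stage2);
    run_stage2.addFileArg(stage1_out);

    b.getInstallStep().dependOn(&run_stage2.step);
}
```

The key design point is that the stages communicate only through addOutputFileArg/addFileArg, so the build graph, not the scripts, owns the dependency edges.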

This is how, for example, multi version builds of TigerBeetle work, where we need to use llvm-objcopy to “season” the binary appropriately:


At my work I have an embedded linux application that I am the maintainer of.

Currently the codebase is a mix of C and Python, and the build system a mix of bash and cmake, with some other tools for testing and verification. Zig seems like a perfect candidate for me to replace all of it with a single thing, eventually, so I’m trying to get future Zig development supported by replacing the build system first.

Currently it looks something like this:

  1. Untar a filesystem
  2. Download and compile dependencies
  3. Build the program I maintain
  4. Install the program in the filesystem
  5. Modify a bunch of system files to configure it
  6. Retar the filesystem
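For the steps that are plain external commands (1 and 6), a hedged sketch of how they might be wired together in build.zig with addSystemCommand, with placeholder file names standing in for the real ones:

```zig
const std = @import("std");

pub fn build(b: *std.Build) void {
    // 1. Untar the base filesystem (paths are placeholders).
    const untar = b.addSystemCommand(&.{
        "tar", "xf", "rootfs.tar", "-C", "build/rootfs",
    });

    // Steps 2-5 (dependencies, program build/install, system-file
    // edits) would be wired in here, each depending on `untar`.

    // 6. Retar the filesystem once the earlier steps have finished.
    const retar = b.addSystemCommand(&.{
        "tar", "cf", "rootfs-out.tar", "-C", "build/rootfs", ".",
    });
    retar.step.dependOn(&untar.step);

    const package = b.step("package", "Unpack, build, and repack the filesystem image");
    package.dependOn(&retar.step);
}
```

Note that addSystemCommand steps don’t declare file outputs by themselves, so without extra care the build system can’t cache them as precisely as the addRunArtifact approach above.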

I think I’ve got a handle on everything except 5, which is currently handled by several bash scripts. That’s what I’m looking at zigifying now.


I think for #5 in your list I’d go with what @matklad suggested and rewrite that portion in Zig. I’m not aware of anything in the build declarations that would make that step any easier than just running the bash script (I would love a correction on this, though, if there is a way 🙂).

Another approach:

In this case, you can use nektos/act to run the GitHub Actions workflow.

I’ve used mlugg/setup-zig to build Zig sources in this workflow.

I’m going to try this, it seems like a good way forward.

If my script takes too long to execute, I’d split it into several Zig scripts, create several addRunArtifact commands, and wire them together. That way, I’d lean on build.zig to cache partial results.

That is really clever, in a good way. How do you verify that you’re not caching things that have been invalidated?

You mostly get it for free if you pass dependencies as CLI arguments. For example, my script needs to use llvm-objcopy, which is two steps:

  • download objcopy
  • use objcopy

The “use objcopy” script takes the path to objcopy as a CLI argument here:

And that path is a LazyPath from the Zig build system, which means that it tracks dependencies: Zig takes care to re-run the download step if its dependencies change, and then my “use” step if the downloaded llvm-objcopy changes.

The dependency chain bottoms out at the explicitly specified hash for the file downloaded from the internet:

The important thing is that I haven’t written a single line of code that tracks whether anything is fresh; I’ve just been careful to structure build.zig so that it reuses the built-in dependency tracking.
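Sketched in build.zig terms (tool names here are placeholders, not the actual TigerBeetle code), the download/use pairing could look like:

```zig
const std = @import("std");

pub fn build(b: *std.Build) void {
    // Hypothetical tool that fetches llvm-objcopy and verifies its hash.
    const download_tool = b.addExecutable(.{
        .name = "download-objcopy",
        .root_source_file = b.path("tools/download_objcopy.zig"),
        .target = b.graph.host,
    });
    const download = b.addRunArtifact(download_tool);
    // The download step declares where it writes objcopy; the
    // returned LazyPath carries the dependency edge.
    const objcopy_path = download.addOutputFileArg("llvm-objcopy");

    // Hypothetical "use" tool that consumes objcopy.
    const use_tool = b.addExecutable(.{
        .name = "use-objcopy",
        .root_source_file = b.path("tools/use_objcopy.zig"),
        .target = b.graph.host,
    });
    const use = b.addRunArtifact(use_tool);
    // Passing the LazyPath as a CLI argument means the build system
    // re-runs this step whenever the downloaded binary changes.
    use.addFileArg(objcopy_path);

    b.getInstallStep().dependOn(&use.step);
}
```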


Admittedly it’s a bit of a hack, but what I ended up doing was creating a couple of wrappers for std.process.Child.run, that either run a bash script, or run a list of shell commands like this:

const std = @import("std");
const run = @import("shell-cmd.zig").run;

// Embed the shell script at compile time, one command per line.
const script = @embedFile("path/to/script.sh");

pub fn main() !void {
    var arena_root = std.heap.ArenaAllocator.init(std.heap.page_allocator);
    defer arena_root.deinit();
    const arena: std.mem.Allocator = arena_root.allocator();

    // Run each line of the script as its own shell command
    // (tokenizeScalar skips empty lines).
    var commands = std.mem.tokenizeScalar(u8, script, '\n');
    while (commands.next()) |command| {
        try run(arena, command);
    }
}

That way, I can access command-line utilities in a way that’s opaque to the build system, which means I can start chipping away at making pure(r) Zig implementations of the different build steps eventually, without needing to replace all of it now.
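The run helper imported from shell-cmd.zig isn’t shown above; a minimal sketch of what such a wrapper might look like, assuming it shells out through `bash -c` and treats a non-zero exit code as an error:

```zig
const std = @import("std");

/// Run a single shell command line via `bash -c`.
/// Hypothetical sketch of a `shell-cmd.zig` wrapper around
/// std.process.Child.run; error handling is deliberately minimal.
pub fn run(allocator: std.mem.Allocator, command: []const u8) !void {
    const result = try std.process.Child.run(.{
        .allocator = allocator,
        .argv = &.{ "bash", "-c", command },
    });
    defer allocator.free(result.stdout);
    defer allocator.free(result.stderr);

    // Fail the build step if the command did not exit cleanly.
    switch (result.term) {
        .Exited => |code| if (code != 0) return error.CommandFailed,
        else => return error.CommandFailed,
    }
}
```

Since an arena allocator is used at the call site, the two `free` calls are technically optional there, but keeping them makes the wrapper usable with any allocator.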

Zig successfully builds my project end-to-end now! 🥳

(If I never have to write another line of cmake in my life, I’ll die slightly happier)
