Anyone made OCI (docker) Images with Zig yet?

I’ve got this massive dependency on docker for an image that is literally:

FROM scratch
COPY zig-out/release/gatorcat-0.4.7-x86_64-linux-musl gatorcat
ENTRYPOINT ["/gatorcat"]

I’ve wrapped this in a zig script, like TigerBeetle does, but I still don’t like requiring docker to be installed on my system, or in CI, if I can get away without it: my docker release script

My understanding is OCI images are just tar files with some json… anyone tried making one without docker build yet or is interested in collaborating?
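That understanding is basically right. For a sense of scale, here's a rough sketch (Python for illustration, not the proposed Zig API; the payload bytes are placeholders) of the three pieces a minimal image needs: a layer tarball, a config JSON, and a manifest JSON, each referenced by sha256 digest:

```python
import hashlib
import io
import json
import tarfile

def sha256(data: bytes) -> str:
    return "sha256:" + hashlib.sha256(data).hexdigest()

# A layer is just a tar archive of the files it adds.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    payload = b"\x7fELF..."  # placeholder for the real binary
    info = tarfile.TarInfo(name="gatorcat")
    info.size = len(payload)
    info.mode = 0o755
    tar.addfile(info, io.BytesIO(payload))
layer = buf.getvalue()

# The config records the entrypoint and the layer's (uncompressed) diff ID.
config = json.dumps({
    "architecture": "amd64",
    "os": "linux",
    "config": {"Entrypoint": ["/gatorcat"]},
    "rootfs": {"type": "layers", "diff_ids": [sha256(layer)]},
}).encode()

# The manifest ties config and layers together, by digest and size.
manifest = json.dumps({
    "schemaVersion": 2,
    "mediaType": "application/vnd.oci.image.manifest.v1+json",
    "config": {
        "mediaType": "application/vnd.oci.image.config.v1+json",
        "digest": sha256(config),
        "size": len(config),
    },
    "layers": [{
        "mediaType": "application/vnd.oci.image.layer.v1.tar",
        "digest": sha256(layer),
        "size": len(layer),
    }],
}).encode()

print(sha256(manifest))
```

If you gzip the layer blob, the media type becomes `application/vnd.oci.image.layer.v1.tar+gzip` and the manifest's digest/size refer to the compressed bytes.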

Some things I would specifically need:

  1. multi-platform images

I found this, but it appears to just be spec information: GitHub - navidys/oci-spec-zig: OCI Runtime, Image and Distribution Spec in Zig

buildah can create OCI images from Dockerfile like: buildah build -f Dockerfile . without running a daemon.

3 Likes

An OCI builder sounds like a fun project, but quite involved…

I did the same for my Tobias Simetsreiter / zig-mirror · GitLab, but I don’t need multi-platform.

But for a simple multi-platform solution I’d rather:

  • Write the exe and Containerfile into a directory using a WriteFiles step
  • Create a run command in build.zig
  • Map the Zig target to buildah’s --platform arg rather than adding suffixes to the exe name.
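The target-to-platform mapping the last bullet describes could be as small as a lookup table; a hypothetical sketch (Python for illustration, the helper name and table are mine):

```python
# Map Zig target triples to buildah/OCI platform strings.
# Arch spellings follow the OCI image-index "platform" field (GOARCH names).
ZIG_ARCH_TO_OCI = {
    "x86_64": "amd64",
    "aarch64": "arm64",
    "riscv64": "riscv64",
}

def platform_for(zig_triple: str) -> str:
    """e.g. "x86_64-linux-musl" -> "linux/amd64" for buildah's --platform."""
    arch, os_name, _abi = zig_triple.split("-", 2)
    return f"{os_name}/{ZIG_ARCH_TO_OCI[arch]}"

print(platform_for("x86_64-linux-musl"))   # linux/amd64
print(platform_for("aarch64-linux-musl"))  # linux/arm64
```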
1 Like

If we take a look at the API for Bazel’s rules_img, a decent start would be supporting the following Dockerfile-like directives:

  1. FROM (download image; might be able to use the existing zig b.dependency?)
  2. COPY (copy a Zig artifact into an image layer at a specific path)
  3. ENTRYPOINT (likely just some JSON generation)
  4. CMD (likely just some JSON generation)
  5. multi-platform (likely the output of multiple image builds with some JSON?)
# Create a layer from files...
image_layer(
    name = "app_layer",
    srcs = {
        "/app/bin/server": "//cmd/server",
        "/app/config": "//configs:prod",
    },
    compress = "zstd",  # Use zstd compression (optional, uses global default otherwise)
)
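For item 5, the "some JSON" tying multiple builds together is an OCI image index: one manifest entry per platform. A sketch with placeholder digests and sizes (Python for illustration; real values come from hashing each per-platform manifest):

```python
import json

# An OCI image index: one manifest entry per platform build.
# Digests and sizes below are placeholders, not real values.
index = {
    "schemaVersion": 2,
    "mediaType": "application/vnd.oci.image.index.v1+json",
    "manifests": [
        {
            "mediaType": "application/vnd.oci.image.manifest.v1+json",
            "digest": "sha256:" + "0" * 64,  # placeholder
            "size": 1234,                    # placeholder
            "platform": {"architecture": "amd64", "os": "linux"},
        },
        {
            "mediaType": "application/vnd.oci.image.manifest.v1+json",
            "digest": "sha256:" + "1" * 64,  # placeholder
            "size": 1234,                    # placeholder
            "platform": {"architecture": "arm64", "os": "linux"},
        },
    ],
}
print(json.dumps(index, indent=2))
```

A registry (or `docker pull`) picks the entry whose `platform` matches the client, which is why multi-platform really is "multiple image builds plus some JSON".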

I think stuff like RUN could be left out for a very long time (I’m not even sure what that would look like if you wanted reproducible builds…)

I don’t know how stuck you are with needing Docker at all, or even Linux? I know it’s become an entrenched part of most systems and is nearly impossible to avoid now, especially if you’re working in a corporate environment.

podman might be able to simplify your environment… maybe?

Not the most useful comment, but if you can deploy to BSD/illumos or whatever, these Docker dependencies are at least no longer needed.

1 Like

A docker image, and thus the Linux kernel, is a prerequisite to my application running: I am targeting OCI runtimes, which run on top of the Linux kernel. I could of course just publish my single, beautifully statically linked binary, but sometimes a user (including myself) just wants to download a docker image, and may use docker images as part of a larger configuration management and deployment system.

3 Likes

I started the project:

Inspiration for what the API might look like (very similar to existing build system API):

pub fn build(b: *std.Build) void {
    const executables_layer = oci.build.image.createLayer(b, .{ .srcs = &.{
        .{ "/bin/my_executable", my_executable },
        .{ "/bin/my_executable2", my_executable2 },
    } });

    const configs_layer = oci.build.image.createLayer(b, .{ .srcs = &.{
        .{ "/etc/config.json", my_config_file },
        .{ "/etc/config2.json", my_config_file2 },
    } });

    const manifest = oci.build.image.createManifest(b, .{ .layers = &.{
        executables_layer,
        configs_layer,
    } });

    const install = oci.build.image.addInstallArtifact(...); // makes tar.gz (or tar?) file that can be loaded with docker load

    const upload = oci.build.image.addUploadRunArtifact(...); // upload image to registry?
}

Obviously it’s not implemented yet, but I got the preliminary spec information in there for the JSON files. This actually doesn’t feel out of the realm of possibility. All we have to do is some tar, compression, JSON files, and hashing in the right order.
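One "right order" detail worth calling out: the config’s `rootfs.diff_ids` are hashes of the *uncompressed* layer tar, while the manifest’s `layers[].digest` is a hash of the *compressed* blob. A small Python illustration (the layer bytes are a stand-in):

```python
import gzip
import hashlib

layer_tar = b"fake uncompressed layer tar bytes"  # stand-in for a real layer
layer_gz = gzip.compress(layer_tar, mtime=0)      # fixed mtime keeps the blob reproducible

diff_id = "sha256:" + hashlib.sha256(layer_tar).hexdigest()  # goes in config rootfs.diff_ids
digest = "sha256:" + hashlib.sha256(layer_gz).hexdigest()    # goes in manifest layers[].digest

assert diff_id != digest  # they differ whenever the layer is compressed
```

Getting these two swapped is a classic way to produce an image that registries accept but runtimes reject.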

1 Like

I’d say createLayer should also be able to accept a directory, and to me .{ source, destination } would make more sense than .{ destination, source }.

The build configuration needs to somehow map to arguments of a custom build command.

For that we can compile an exe with the `b.graph.host` target, and run it with an output argument.

1 Like

I’m not sure the zig build system has the concept of directories?

It sure does: you can assemble (with b.addWriteFiles and wf.getDirectory()) or commit (and use b.path) a whole filesystem tree and provide it using a LazyPath.
I don’t think there is a way to get the automatic (OS-specific) subpaths of zig-out, though.
My use case would be, for example, to provide all the SSL certificates in /etc/ssl so the HTTP client can handle HTTPS, or to provide a static webroot for a server to host instead of bundling it into the exe.

here (zig std command) and here are examples of how to stream a directory into a tarball :wink:
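For comparison, streaming a directory into an in-memory tarball is a few lines in Python’s stdlib (illustration only; the linked examples do the equivalent with Zig’s standard library):

```python
import io
import pathlib
import tarfile
import tempfile

# Build a small directory tree, then stream it into a tarball held in memory.
root = pathlib.Path(tempfile.mkdtemp())
(root / "etc" / "ssl").mkdir(parents=True)
(root / "etc" / "ssl" / "cert.pem").write_text("-----BEGIN CERTIFICATE-----\n")

buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    # arcname="." makes archive paths relative to the directory root
    tar.add(root, arcname=".")

with tarfile.open(fileobj=io.BytesIO(buf.getvalue())) as tar:
    names = sorted(tar.getnames())
print(names)
```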

1 Like