Comptime memory references and side effects

I’m attempting to set up a large operation graph at compile time to later be executed at runtime. The goal is to statically embed this graph into the executable. However, I’ve encountered challenges with Zig’s comptime restrictions, particularly around mutable memory and pointer constness, which complicate creating mutable runtime state.

There are other ways of solving the problem, and trying to do this at comptime may be considered abusive, but I am curious whether it is possible… So, is there a Zig-idiomatic way to get mutable runtime state from a comptime configuration, or any advice on managing mutable memory that is initialized at comptime for runtime use?

I am led to believe the answer is no based on my limited understanding as well as the following posts:

Here is some code that sketches out the rough concept; it’s the closest I’ve gotten (the only thing I think I achieved is working around an alpha-stage compiler). This will yield a bus error.

const std = @import("std");

const Value = struct {
    data: i32,
};

pub fn Operation(comptime T: type) type {
    return struct {
        const Self = @This();
        runFn: *const fn (a: T, b: T) T,
        inputs: [2]*const T,
        output: *T,

        pub fn run(self: Self) void {
            self.output.* = self.runFn(self.inputs[0].*, self.inputs[1].*);
        }
    };
}

pub fn Add(comptime T: type) type {
    return struct {
        const Self = @This();
        // cannot pass a reference to a mutable output container type at comptime
        // but, why cant output be an address to some memory baked in the executable that is mutable at runtime?
        pub fn init(a: *const T, b: *const T) Operation(T) {
            return Operation(T){
                .runFn = &run,
                .inputs = .{ a, b },
                .output = @constCast(&std.mem.zeroes(T)), // try to make an empty container as a workaround
            };
        }

        pub fn run(a: T, b: T) T {
            return T{ .data = a.data + b.data };
        }
    };
}

pub fn OperationGraph(comptime T: type, comptime n: u32) type {
    return struct {
        const Self = @This();
        operations: [n]Operation(T),

        pub fn execute(self: Self) void {
            for (self.operations) |op| {
                op.run();
            }
        }
    };
}


pub fn main() void {
    // build the op graph at compile time, making space for inputs and outputs
    const input1 = Value{ .data = 1 };
    const input2 = Value{ .data = 2 };
    const GraphT = OperationGraph(Value, 2);
    const addOp = comptime Add(Value);
    const op1 = comptime addOp.init(&input1, &input2);
    const op2 = comptime addOp.init(&input1, op1.output); // using reference to some op output
    const g = comptime &GraphT{
        .operations = .{op1, op2},
    };
    // execute at runtime, mutating existing memory
    g.execute();
    std.debug.print("{}", .{op2.output.*}); // dereference to print the value, not the pointer
}

Comptime computation graph… fun stuff 🙂

Part of your issue here is that your g.execute runs specifically at runtime. Have you tried moving the actual execution of your graph entirely to comptime? If you want memory that you pass through the graph pipeline at runtime, you’ll probably need to provide that memory at runtime.

I guess my question here is: what exactly are you intending to accomplish? Are you trying to end up with a single result that gets calculated at runtime, or are you trying to do your calculation at comptime? It looks like that mismatch is actually what’s causing this issue.
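One way to follow the suggestion above — keep the graph description constant and provide the mutable memory at runtime — is to have operations reference slots by index rather than by pointer. A minimal sketch (the `Op` struct and the four-slot layout here are hypothetical, just to illustrate the shape of the idea):

```zig
const std = @import("std");

const Value = struct { data: i32 };

// A comptime-built description: ops reference slots by index,
// not by pointer, so the graph itself never needs to be mutable.
const Op = struct {
    a: usize,   // index of first input slot
    b: usize,   // index of second input slot
    out: usize, // index of output slot
};

// Hypothetical layout: slots 0-1 are inputs, slots 2-3 are outputs.
const graph = [_]Op{
    .{ .a = 0, .b = 1, .out = 2 }, // slot2 = slot0 + slot1
    .{ .a = 0, .b = 2, .out = 3 }, // slot3 = slot0 + slot2
};

fn execute(slots: []Value) void {
    for (graph) |op| {
        slots[op.out].data = slots[op.a].data + slots[op.b].data;
    }
}

pub fn main() void {
    // The mutable memory is provided at runtime; the graph stays const.
    var slots = [_]Value{
        .{ .data = 1 }, .{ .data = 2 }, .{ .data = 0 }, .{ .data = 0 },
    };
    execute(&slots);
    std.debug.print("{d}\n", .{slots[3].data}); // 1 + (1 + 2) = 4
}
```

The graph can live in read-only data because it is pure description; the indices only become addresses when paired with a runtime buffer.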


It might not be possible unless you copy the comptime-created state at runtime to make a mutable copy of it. I’m guessing that any state created at comptime goes into the rodata section (so it gets mapped read-only).

A really hacky solution would be to remap the section as writable and private, then find the data in the now-writable copy. Super hacky, but it could work.
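Short of remapping pages, the copy approach can be sketched like this. A minimal sketch, assuming a recent Zig; `State` and `buildState` are made-up names. Note that a container-level `var` initialized by a comptime computation also gives you memory baked into the executable that is writable at runtime, which is close to what the original question asks for:

```zig
const std = @import("std");

const State = struct { weights: [3]i32 };

// Evaluated at comptime when the result is demanded at comptime.
fn buildState() State {
    var s = State{ .weights = .{ 0, 0, 0 } };
    for (&s.weights, 0..) |*w, i| {
        w.* = @intCast(i * 10);
    }
    return s;
}

// A container-level `var` initialized by a comptime computation lands in
// the executable's writable data section: its address is comptime-known,
// yet the memory is mutable at runtime.
var baked_state: State = buildState();

pub fn main() void {
    // Alternative: copy a comptime result into a local runtime `var`.
    var local = comptime buildState();
    local.weights[0] += 1;       // legal: `local` is a runtime copy
    baked_state.weights[0] += 1; // legal: `baked_state` is a global var
    std.debug.print("{any} {any}\n", .{ local.weights, baked_state.weights });
}
```

The important distinction is between a comptime-known *value* (which the compiler may place in read-only memory once you take its address) and a `var` whose *initializer* is comptime-known, which is ordinary mutable storage.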


I thought it would be neat to generate a static graph at comptime and then at runtime run arbitrary data through it multiple times. The actual idea is to construct a neural net at comptime, just as a fun project, but I think it may prove a bit silly.

If you don’t have to modify the struct at runtime, it is probably doable; I’ve done similar messed-up shit in C++ templates at compile time. If you need to modify the weights, though, you need to copy them from the const data the compiler inserts into a runtime var. Training the network at comptime would be painfully slow. Better to write a program that outputs Zig code that then compiles, if you really want to do it that way.

Oh, just inference; agreed it would be a terrible idea to try and train this way. In practice, I suppose it begs the same question, though. I’m thinking there is some creative way of making a hashmap that at runtime can be used to map to runtime heap-allocated memory, but I haven’t given it too much thought; I’ve pretty much concluded my idea is trash, haha.

I wouldn’t say it’s a trash idea, but there’s nothing magical about comptime vs runtime in this case. Your computer will do the same work to compute outputs for something like tensor products. It’s just going to be a lot less flexible, really.

Making space inside the object for mutable data complicates things, as the entire object has to become mutable. You could instead create a constant object that describes the operation and have this object operate on data that is external to it. This way, you can have any number of mutable “dataset” objects that get passed through a constant pipeline. For simple examples, this is just what the compiler is doing with extra steps. But if you’re using more complicated functions, you can use your pipeline to optimize things that the compiler can’t. The C++ community uses something similar, called expression templates, to cut down on intermediate results in linear algebra.
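The “constant pipeline, external dataset” pattern described above might look like this in Zig. A rough sketch; the stage functions (`addOne`, `double`) are placeholders for whatever operations the pipeline composes:

```zig
const std = @import("std");

// Stage functions: pure transforms with no state of their own.
fn addOne(x: i32) i32 {
    return x + 1;
}
fn double(x: i32) i32 {
    return x * 2;
}

// A const pipeline built at comptime; nothing in it ever mutates.
const pipeline = [_]*const fn (i32) i32{ addOne, double };

// The mutable "dataset" is external and passed through at runtime.
fn run(data: []i32) void {
    for (data) |*x| {
        // inline for unrolls the stages, since the pipeline is comptime-known.
        inline for (pipeline) |stage| {
            x.* = stage(x.*);
        }
    }
}

pub fn main() void {
    var data = [_]i32{ 1, 2, 3 };
    run(&data);
    std.debug.print("{any}\n", .{data}); // 4, 6, 8
}
```

Because the pipeline is comptime-known, the `inline for` lets the compiler see through the function pointers and fuse the stages, which is the same effect expression templates achieve in C++.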