Parameter passing

I am happy with status-quo Zig; for parameters, I like that they are immutable and can’t be made mutable.

For types, I think it makes sense in Zig that mutable is the default and const is added as a modifier, because const-ness isn’t enforced via a complex type system; it is more like a handrail that you have to install and use correctly.

I think immutable by default makes more sense for purely functional languages, which then have modifiers/higher-order type constructors that produce special types for modeling effects or monadic things.

I think parameters being immutable is a useful special case: it lets you easily skip over parts of long functions, because when specific parameters are just passed directly on to other functions, you can be sure they didn’t change. That guarantee doesn’t exist in C; there you instead need to check the type of the parameter.
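For example, a minimal Zig sketch of what that guarantee looks like in practice (checksum and process are made-up names for illustration):

const std = @import("std");

fn checksum(data: []const u8) u32 {
    var sum: u32 = 0;
    for (data) |b| sum +%= b;
    return sum;
}

fn process(data: []const u8) void {
    // Reassigning `data` here would be a compile error, so however long
    // this function gets, a reader can skip ahead knowing `data` is the
    // same slice it was on entry.
    std.debug.print("first pass: {d}\n", .{checksum(data)});
    // ... imagine many more lines here ...
    std.debug.print("second pass: {d}\n", .{checksum(data)});
}

pub fn main() void {
    process("hello");
}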

In Zig it would be awkward if you had to check the type: because of comptime, the type may be a generic variable, and thus you may not easily be able to see what the exact type is. Maybe it would be possible to instead have constraints on the type, but that would make the language quite different.

Some people have expressed that they want constraints on types, and I also find the idea interesting, but I am not sure about it. Currently it seems like type constraints are rejected as an addition to the language; instead you can implement constraints by writing type assertions that run at comptime.
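For example, a minimal sketch of such a comptime assertion (assertHasArea and Square are invented names, not an established pattern from any library):

const std = @import("std");

// Made-up constraint check: fail compilation unless `T` declares
// an `area` method.
fn assertHasArea(comptime T: type) void {
    if (!@hasDecl(T, "area")) {
        @compileError(@typeName(T) ++ " must declare an `area` method");
    }
}

const Square = struct {
    side: f64,
    pub fn area(self: Square) f64 {
        return self.side * self.side;
    }
};

pub fn main() void {
    comptime assertHasArea(Square); // a type without `area` would not compile
    const s = Square{ .side = 3.0 };
    std.debug.print("{d}\n", .{s.area()});
}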

1 Like

200% agree!!!

1 Like

Interesting, I never knew that old style existed.

In Oz, which is a fairly obscure programming language, you can declare a variable by stating its name and then eventually bind a value to it. If two different threads try to bind a value to it, the first one binds the value and the second one runs a “unification” algorithm to unify the value it wants to bind with the value that is already bound. If the values are incompatible (not unifiable), that is a unification error, which is basically a runtime error in the thread that attempted the unification; if the unification works, the values effectively get merged.

This allows you to write programs that deal with partial values quite easily, as well as infinite lists, lazy data structures, and so on.
You can write different parts in different styles, with eager parts or lazy parts, and it also has some logic and constraint programming features (I haven’t tinkered with those a lot yet).

You can write a lazy function, and if it returns a lazy data structure that depends on code that eventually results in a runtime error, the error gets wrapped in a “failed value”, which packages the error as a value so that it is triggered wherever that lazy data structure is consumed.

Its variables are dataflow single-assignment variables: you can declare a variable that will become determined at some future point in time, but once it becomes determined it never changes. This allows for interesting programming patterns where these variables coordinate cooperative threads, and you can write programs that act deterministically while also being concurrent, without explicit synchronization operations (beyond the default semantics of these variables).
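As a very rough analogy in Zig (a made-up sketch, not how Oz implements it; DataflowVar is an invented name), a single-assignment variable that suspends readers until it is bound could look like:

const std = @import("std");

// Single-assignment "dataflow variable": readers block until some
// thread binds the value exactly once.
fn DataflowVar(comptime T: type) type {
    return struct {
        value: T = undefined,
        ready: std.Thread.ResetEvent = .{},

        // Bind the value; must be called at most once.
        pub fn bind(self: *@This(), v: T) void {
            self.value = v;
            self.ready.set();
        }

        // Suspend the calling thread until the variable is determined.
        pub fn wait(self: *@This()) T {
            self.ready.wait();
            return self.value;
        }
    };
}

pub fn main() !void {
    var x = DataflowVar(u32){};
    const t = try std.Thread.spawn(.{}, DataflowVar(u32).bind, .{ &x, 42 });
    std.debug.print("{d}\n", .{x.wait()}); // suspends until the bind happens
    t.join();
}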

Overall I find that language quite interesting. It also has a huge textbook that explains all the core features of the language and how they interact to create a language that allows many different paradigms to co-exist and be combined in different ways.

I find it a bit sad that almost nobody seems to know the language; I found it helped me understand a lot of programming concepts better, and it is a lot of fun to play around with some of its ideas and concepts. The language also seems quite dead; it appears to be an academic language that never found somebody to continue its maintenance or evolve its ideas.

I sometimes tinker on an interpreter that is based on the language (implementing a small part of it), but I currently don’t know when it will reach a point of quality/usability where I will consider making it available.


To bring it back to the topic: in that language you have procedures and functions, where functions are just expression syntax on top of procedures that have a single out parameter.

And because of the crazy/interesting dataflow variables, every parameter can be undetermined or bound to a specific value. If the program hits a variable that is needed (its value must be determined so that evaluation can continue), that thread is suspended until the variable becomes determined.

Then, once the variable is determined, it is const from that moment onward; but if it is an aggregate data structure, it could still contain holes that are as yet undetermined.

This means you can write programs that just go to sleep/deadlock quite easily, but on the other hand it is harder to write programs that fail because of evaluation order. (And you could write an interactive editor for the language where such a deadlock would act more like a breakpoint, letting you edit the program in some way so that it can continue.)

So in a way, in that language everything can be asynchronous or synchronous, and it all interacts via these variables. (I guess you could say everything is asynchronous, but a good implementation could make a lot of important parts synchronous for speed.) The tricky thing, I guess, is writing an efficient implementation that does the thread scheduling etc. behind the scenes without causing huge slowdowns compared to languages that use eager evaluation; the language calls its own evaluation strategy dataflow.

Additionally, Oz doesn’t really have static types; it is one of those uni-typed dynamic languages. However, later parts of the book also discuss constraint programming, and if you tilt your head and squint your eyes, constraints on variables in an interactive editor could potentially look awfully similar to static types, if that constraint information were also used to optimize the program behind the scenes. The constraints could just be more general than types, more like dependent types, and probably also very hard to implement well.

So I guess, to summarize: since we are already discussing all kinds of things, why not also discuss evaluation methods, i.e. eager, lazy, and dataflow.

1 Like

Just a note here: Nim has procedures and functions, the latter being basically procedures without side effects. Parameters are immutable unless you need mutability and declare them with var before the type:

proc divmod(a, b: int; res, remainder: var int) =
  res = a div b        # integer division
  remainder = a mod b  # integer modulo operation

var
  x, y: int
divmod(8, 5, x, y) # modifies x and y
echo x
echo y
2 Likes

This is an error in API design, according to most guidelines and style guides: always pass by value or by const reference if the receiving function won’t modify the source. That convention is more effective than documentation.
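In Zig terms, a sketch of that convention (Sprite, draw, and moveRight are invented for illustration), where the signatures document the behavior by themselves:

const std = @import("std");

const Sprite = struct { x: i32, y: i32 };

// Won't modify the source: take a const pointer (or a plain value
// for small types).
fn draw(s: *const Sprite) void {
    std.debug.print("({d}, {d})\n", .{ s.x, s.y });
}

// Will modify the source: the mutable pointer says so up front.
fn moveRight(s: *Sprite) void {
    s.x += 1;
}

pub fn main() void {
    var sprite = Sprite{ .x = 0, .y = 0 };
    moveRight(&sprite);
    draw(&sprite);
}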

The side point of whether or not the receiving function can mutate its arguments internally, without affecting anything outside the function (i.e., a pass-by-value argument), is not an API design question, because there are no external effects to worry about.

Yes, this is why I think const by default is good. It makes the easy thing the correct thing.
But as pointed out by Sze, the const pointer doesn’t protect anything referenced by the thing pointed to, so maybe it’s more relevant for a language like Rust where you have more checks.

Comptime programming is a big part of Zig. Having the distinction between function and procedure allows for simpler rules concerning comptime vs runtime. Take the following for example:

const std = @import("std");

fn foo(arg: f64) f64 {
    return std.math.sin(arg);
}

pub fn main() void {
    const a: f64 = 0.0;
    const b = std.math.sin(a);
    const c = foo(a);
    @compileLog(b);
    @compileLog(c);
}
Compile Log Output:
@as(f64, 0)
@as(f64, [runtime value])

Why is c a runtime value while b isn’t? I have no idea. So does “runtime” here mean c will be generated at runtime? Well, not really.

In a world where functions behave like their counterparts in mathematics, the rule can be simple and intuitive: a function returns a comptime-known value when all arguments passed are comptime-known.

I believe restrictions on what functions can do will help resolve issues with comptime pointers as well.

If I change c to this it returns 0 too:

const c = comptime foo(a);

So I think it has to do with how deeply the compiler tries to resolve things at compile time when you aren’t explicitly telling it to evaluate at comptime.

It may also be that [runtime value] in the compile log just means the compiler hasn’t tried to resolve the value to a comptime value at that point in the process; I could imagine that a later part of the compile step actually triggers it to be evaluated at comptime, so it could just be an inconsistency between what @compileLog shows you and what the compiler actually ends up doing.

Maybe this is a case for improving the clarity of what the compiler outputs.

It seems to me like Zig is already quite close to that; it just doesn’t restrict you to purely comptime functions. Instead it tracks how values flow in and out of comptime, and based on that it decides whether something is allowed or not.

It has more freedom, which can make it more difficult to understand, but I also think it is a big part of what makes comptime very expressive: you can put the comptime aspects very close to the runtime parts they affect, and you can mix comptime parameters with runtime parameters where that makes using the construct more convenient.
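A small sketch of that mixing (powN is a made-up example): the comptime parameter shapes the generated code while the runtime parameter flows through it:

const std = @import("std");

// `n` is comptime-known and can drive unrolling; `base` stays a runtime value.
fn powN(comptime n: u32, base: f64) f64 {
    var result: f64 = 1.0;
    // The loop bound is comptime-known, so the compiler can unroll it.
    inline for (0..n) |_| result *= base;
    return result;
}

pub fn main() void {
    var base: f64 = 2.0;
    _ = &base; // keep `base` a runtime value for the example
    std.debug.print("{d}\n", .{powN(10, base)});
}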

I think if you restrict it too much, into two separate worlds of computation, you would lose the parts that make comptime better than other forms of metaprogramming.

For example, I think you would end up with something that is more like Racket macros, where you can create towers of macros that all have their separate little worlds and get transformed from one to another. You may end up liking and preferring that, but I just found it exhausting after a while.

1 Like

This isn’t actually necessary; a function pointer type whose parameter is *T accepts a function whose parameter is *const T:

const std = @import("std");
const expect = std.testing.expect;

const Mutant = struct {
    fun: *const fn (*u32) bool,
};

fn noMut(p: *const u32) bool {
    return p.* == 5;
}

fn withMut(p: *u32) bool {
    p.* += 1;
    return p.* == 5;
}

test "fn pointer compat" {
    const with_no_mut = Mutant{
        .fun = &noMut,
    };
    var five: u32 = 5;
    try expect(with_no_mut.fun(&five));
}

So it would be fine if Zig added a “mutable pointer is never mutated” error to functions which specify a mutable pointer and don’t take advantage of that mutability. It would be consistent with how const and var work, and I would support that change.

1 Like

100% agree, it’s really ugly, but… let’s place the var keyword on the rhs of :, right before the type, and let’s use another keyword for “introducing” a data description. Let it be let; then everything becomes “homogeneous” and super-consistent:

data (variables and constants)

let name: const T;      // immutable data
let name: var T;        // mutable data

let ptr: const * const T; // immutable pointer to immutable data
let ptr: const * var T;   // immutable pointer to mutable data
let ptr: var * const T;   // mutable pointer to immutable data
let ptr: var * var T;     // mutable pointer to mutable data

Just look, it reads exactly as it’s written (in English, of course):

const         *         var       T; 
immutable pointer-to  mutable data-(of-this-type)

function arguments

just a copy-paste of the : rhs above:

fn func(arg: const T) R {}
fn func(arg: var T) R {}

fn func(arg: const * const T) R {}
fn func(arg: const * var T) R {}
fn func(arg: var * const T) R {}
fn func(arg: var * var T) R {}

types (structs) and their fields:

let Struct: const type = struct { // : var type?!? what would that mean?
    // struct-global stuff
    let a: const u32 = 3;
    let instances: var u32 = 0;

    f1: const T, // assign once?
    f2: var T,
    f3: const * const T,
    f4: const * var T,
    f5: var * const T,
    f6: var * var T,
};

Language designers, don’t just walk past, grab the idea! :innocent:

Why not. My favorite (semi-humorous) definition of “philosophy” is: “Philosophy is the science of the proper usage of words”. With respect to PL syntax, the usage of Pascal-style var/const keywords is a bit improper, in the sense that they are used for two different purposes:

  • introducing data description
  • indicating whether that data may be changed or not

Rust with its let and let mut is a little bit closer to what I’m trying to say, but… :slight_smile:
Let’s place the mutability keyword on the rhs of : anyway.

Too many const/var? OK, let’s make everything immutable by default; then the const keyword is not needed at all, and just let and var/mut will do (where really needed):

let name: T;           // immutable data
let name: var T;       // mutable data

let ptr: * T;          // immutable pointer to immutable data
let ptr: * var T;      // immutable pointer to mutable data
let ptr: var * T;      // mutable pointer to immutable data
let ptr: var * var T;  // mutable pointer to mutable data
1 Like

I do not care THAT much about what the syntax looks like, as long as it is clearly defined.
I saw something was going on about Parameter Reference Optimization.
I was so happy that, when my (maybe big) struct has to be passed immutably as an argument, I could just write:
fn foo(arg: MyStruct) void; // case 1
instead of
fn foo(arg: *const MyStruct) void; // case 2
assuming that it would always be passed by reference in case 1.
And in the current, still-changing state of the compiler, I do not know yet…

Case 2 is the one which guarantees a pass by constant reference. Case 1 is currently “let the compiler decide”, but it isn’t at all clear that it’ll stay that way. Either way that works out, *const MyStruct guarantees a pass by reference, and clearly signals to anyone reading the code that this is the intended behavior, so it’s the one to use when that’s what you’d like to have happen.

1 Like

Yes, I was thinking that as well. Still thinking about what to do with, for example, a Rect structure which is 4 × i32.
But I would like to use the same ‘calling convention’ systematically inside my code.
Once *const Struct, always *const Struct… (if immutable).

No clue how the compiler thinks about that one, or this one:

Returning struct instances from a bunch of cached structs.

fn select_my_inner_cached_struct(arg: whichone) *const InnerCachedThingy {}

should maybe do the same trick: always use references…

1 Like