I’ve been thinking for a while now about how Zig users from different domains keep asking for operator overloading; I started noticing it shortly after I picked up the language around 0.13. The request always gets shot down, leaving those users to conclude that their particular domain problems will only be taken seriously in C++ or Rust. There may be more domains, but the most common ones I see are game, math and GPU devs. In all these cases, the common underlying demand/itch is the ability to express matrix operations. That’s not to say those problems aren’t solvable; there are plenty of Zig repos for gamedev and math libs that try to make things as elegant as possible with comptime and type inference, but that’s still a far cry from what operator overloading would provide. They look at how all the basic operators work on @Vector and want that to work on matrices too.
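To make that contrast concrete, here’s a minimal sketch in current Zig (Mat2 is a made-up example type, not from any particular library): @Vector values get + for free, while an equivalent user-defined matrix type has to spell everything out as method calls.

```zig
const std = @import("std");

// A minimal user-defined 2x2 matrix type (hypothetical example).
const Mat2 = struct {
    e: [4]f32,

    fn add(lhs: Mat2, rhs: Mat2) Mat2 {
        var out: Mat2 = undefined;
        for (lhs.e, rhs.e, &out.e) |l, r, *o| o.* = l + r;
        return out;
    }
};

pub fn main() void {
    // @Vector gets the builtin operators for free...
    const a: @Vector(4, f32) = .{ 1, 2, 3, 4 };
    const b: @Vector(4, f32) = .{ 5, 6, 7, 8 };
    const v = a + b;

    // ...but the user-defined type has to fall back to method calls.
    const m1 = Mat2{ .e = .{ 1, 2, 3, 4 } };
    const m2 = Mat2{ .e = .{ 5, 6, 7, 8 } };
    const m3 = m1.add(m2); // `m1 + m2` is a compile error

    std.debug.print("{any} {any}\n", .{ v, m3.e });
}
```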
When faced with the reality that there’s little probability of operator overloading ever happening, they generally ask for something like @Matrix. But that gets shot down too because, just off the top of my head: which operator do we use for cross product if * is taken for dot product? And then the math and GPU people come in and ask for arbitrary-dimensional matrices (tensors?). So that’s not going to work out either. Thinking about it a bit more, there are quite a few issues. People will want to wrap/subclass those Points/Vectors/Matrices inside their own struct types, which will not inherit the operators. People will also want those types to have methods like invert(), norm() or det(), and while it’s probably feasible for the community to provide a good implementation of this in the stdlib, I wouldn’t want to burden the core team with maintaining it. And then there are all those other math fields that would be well served by their own math-specific types or operator overloading, at which point it’s obvious that @Matrix would have just been another short-sighted band-aid. I’ll give @Vector a pass because Zig is a very CPU-centric language, with things like @branchHint and @prefetch, and vectorization is probably here to stay for another two decades.
So more time passed, I worked a bit more with Hy (and Elisp, because Emacs is the only usable editor for lisp-like languages), and something clicked in my head recently. Let’s take the addition operator as an example; here are the ways we can express it:
a + b // grade school and common programming languages
sum(a, b) // basic function
a.add(b) // everything is an object / java style
reduce(0, sum, [a, b]) // more flexible function / functional style
add a b // asm style, result stored in b
(+ a b) // polish notation / lisp style
a b + // reverse polish notation / hp & swissmicros calc style
Now before anyone freaks out, I’m not advocating to lispify Zig; I just want to show how much visual variance there is in expressing the addition of two numbers, depending on what you’re most used to. That brings me to the next thought: what people want is to express their operations “grade school” or “academic equation” style, which really just boils down to infix style. So what all these operator overloaders want is to express their functions with infix operators. Hmmm…
The first quick thought is that all infix functions conveniently take exactly 2 parameters: the left and right expressions, or lhs and rhs. Everything after this is shaky ideas that need critique, which is why I’m posting this in the first place.
So my first reflex was to ask for an @infix builtin function. That way you could do:
fn add(lhs: T, rhs: T) T {}
c = a @infix(add) b;
or
const Matrix = struct {
    const Self = @This();
    // stuff...
    fn dot(lhs: Self, rhs: Self) Self {}
};
m3 = m1 @infix(Matrix.dot) m2;
But then I realized that these might be hard to implement as functions, because @infix is really more like a comptime “macro” that desugars the previous examples to:
fn add(lhs: T, rhs: T) T {}
c = add(a, b);
which, while outside the capabilities of comptime, is probably also not feasible to implement as a builtin function.
So the next idea was infix sigils (colon used as an example; anything is fine), like so:
m3 = m1 :Matrix.dot: m2;
I feel that something like this still retains the simplicity/readability goals of Zig. Some doubts I have: I’m not sure how well this reads over longer equations, and I’m also getting C++ scope-resolution and Rust turbofish vibes because of the ::.
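To get a feel for the longer-equation concern, here’s how a chained expression might compare (the :f: sigil syntax is of course hypothetical, and Mat.mul is a made-up function):

```zig
// Today: nesting inverts the reading order of the equation.
//   const m4 = Mat.mul(Mat.mul(m1, m2), m3);

// With the sigil idea (hypothetical syntax, not valid Zig today):
//   const m4 = m1 :Mat.mul: m2 :Mat.mul: m3;
// which, read left-associatively, means (m1 :Mat.mul: m2) :Mat.mul: m3.
```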
Something I did not want was “tagging” functions with decorators, such as:
@infix
fn add(lhs: T, rhs: T) T {}
because then you end up with this sort of unreadability and compiler/parsing nightmare:
m3 = m1 Matrix.dot m2;
There would be restrictions of course, the first being that infixable functions must take exactly 2 parameters, I think. I’m not sure whether their types have to be comptime-known. As for operator precedence: whichever is easiest for the Zig AST parser, probably the weakest binding; people can use parens if they want to chain with builtin operators. I’m not sure if explicit associativity is needed, or if we can just make everything left-associative. Something I’m concerned about is whether this violates some self-imposed language restriction, such as keeping the grammar context-free, in which case my entire idea is a dead end; I’m not a programming language designer.
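A quick sketch of what “weakest precedence, parenthesize to mix” would mean in practice (again, hypothetical syntax; Vec.dot is a made-up function):

```zig
// If :f: binds weakest, builtin operators group first:
//   a :Vec.dot: b * 2.0      // parses as a :Vec.dot: (b * 2.0)
// so mixing with builtin operators usually wants explicit parens:
//   (a :Vec.dot: b) * 2.0
```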
So yeah, that’s the meat of it. This would allow math and other equation-heavy domains to implement their own sub-field’s operators and write code in a way that looks more like their equations. It would open Zig up to maximum domain diversity without sacrificing readability or adding compiler complexity. I’m sure someone is going to ask for modifications that would allow pipe-style chaining, but I’m kind of against that; I don’t think it fits Zig’s procedural identity.