One thing that trips me up is the order of errors and return values, and the order of handling them. If I know/understand the rationale maybe I can remember it better.
In Go, the norm is to put the error last; if that were the case here, the above would be much more obvious (for me anyway). As it stands, the function definition order is ERROR then NormalResult, while the handling order is
NormalResult then ERROR (see the sketch below).
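Roughly what I mean, as a sketch (parseId and Err are made-up names, not anything real):

const Err = error{Invalid};

// sketch with made-up names: the definition puts the error set BEFORE the success type
fn parseId(s: []const u8) Err!u64 {
    if (s.len == 0) return Err.Invalid;
    return 42;
}

pub fn main() void {
    // while the handling puts the success capture BEFORE the error capture
    if (parseId("abc")) |id| {
        _ = id;
    } else |err| {
        _ = err;
    }
}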
when you are reading a function definition you are looking for information, and Zig decided that errors are more valuable information than the success type.
Rust did something similar with its Result type, which also bakes the possible error into the return type.
when calling a function, you care more about the successful result than the error.
I think it also just looks and feels nicer this way.
Regardless of reasoning, it's still pretty arbitrary
Arbitrary indeed, and not logical or consistent either, as you can have a return without an error… but if you have to add an error you don't append it to the end, you put it in front!?
Again, it is what it is, but that one trips me up a bit, because my head is trying to unwrap it in the order of its handling, flowing from the normal return to the one with the error.
Using Rust as an example of good practice is also arbitrary, since I am here (and not with Rust) for a bunch of reasons, the long-winded error handling being one of them.
I used Rust as an example of another language that took a similar approach, making the error part of the return type.
Ofc you can have a function that doesn't return an error, why wouldn't you be able to do that? Or are you referring to inferred errors?
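For example, a quick sketch (made-up functions) of both forms side by side:

// sketch with made-up names
// no error in the return type: this can never fail
fn double(x: u64) u64 {
    return x * 2;
}

// with an error: an explicit error set joined to the payload type with !
fn checkedDiv(a: u64, b: u64) error{DivisionByZero}!u64 {
    if (b == 0) return error.DivisionByZero;
    return a / b;
}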
Regarding consistency: if you reverse the calling order to be Error, Success, then you get Rust.
Reversing the return type definition would probably look nice, but that's just aesthetics; this part is very arbitrary.
I don't understand how this makes it hard to unwrap the order of error handling / control flow.
It also is logical; part of the logic is the programmer's experience, and imo I like the way it is. I'd assume Andrew does as well.
Bottom line - if it is just aesthetics, as you say, then there is no need for me to try to understand it. Not going to argue - I will get used to it even though the choice of order makes no sense to me.
The documentation is very clear on it now that I read it with critical eyes.
Notice the return type is !u64. This means that the function either returns an unsigned 64 bit integer, or an error. We left off the error set to the left of the !, so the error set is inferred.
Within the function definition, you can see some return statements that return an error, and at the bottom a return statement that returns a u64. Both types coerce to anyerror!u64.
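For context, here is a trimmed-down sketch of the kind of function that passage describes (not the docs' exact code, just the shape of it):

// sketch based on the docs' description; details simplified
// the return type !u64 means: an inferred error set, or a u64 on success
fn parseU64(buf: []const u8, radix: u8) !u64 {
    var x: u64 = 0;
    for (buf) |c| {
        const digit = charToDigit(c);
        if (digit >= radix) return error.InvalidCharacter; // an error return
        x = x * radix + digit; // can overflow at runtime; kept simple here
    }
    return x; // a u64 return; both coerce to the inferred error union
}

// helper for the sketch; 255 marks "not a digit" so the radix check rejects it
fn charToDigit(c: u8) u8 {
    return switch (c) {
        '0'...'9' => c - '0',
        'a'...'z' => c - 'a' + 10,
        'A'...'Z' => c - 'A' + 10,
        else => 255,
    };
}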
You know what… I just had coffee and the likely answer came to me.
If I were to design a two-value option consisting of a value and an error… it would be much less work for me to handle/check the fixed-length portion (the error int) at a fixed location, as the result may be variable from a memory-organisation point of view.
(not that that should be the determining factor in language design… but maybe so)
<error set>!<type> is an expression that creates an error union type. It is a single type, and a value of it is just a single union value. Just like a regular union can be instantiated as any of its members:
const EitherAOrB = union {
a: u8,
b: i8,
};
// A single type, but two different valid ways to instantiate it
var a: EitherAOrB = .{ .a = 255 };
var b: EitherAOrB = .{ .b = -1 };
so can an error union. The statement var a: !u8 = undefined is incomplete, since you're missing half of the error union expression.
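Spelled out in full it would look something like this (Err is just a set I made up for the sketch):

const Err = error{Oops};

// a complete error union type: error set on the left, payload type on the right
var holds_value: Err!u8 = 5;        // this instance currently holds the u8 payload
var holds_error: Err!u8 = Err.Oops; // this instance currently holds the error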
In function definitions, !<type> is allowed as a convenience, because the compiler can construct the error set for you: it knows which errors the functions called within the defined function body can return, so it can construct the set based on that.
In situations where it can't do this, like dealing with function pointers or variables, you still have to define the error set explicitly.
The "correct" way to instantiate the type in your example there would probably be Err!u8 instead of anyerror!u8, since you know what the valid set of errors is.
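One practical upside of a concrete set, sketched with made-up names: you can switch on the error exhaustively, which anyerror can't give you because its membership is open-ended.

// sketch with made-up names
const Err = error{ TooBig, TooSmall };

fn clampToByte(x: i32) Err!u8 {
    if (x > 255) return Err.TooBig;
    if (x < 0) return Err.TooSmall;
    return @as(u8, @intCast(x));
}

fn describe(x: i32) u8 {
    return clampToByte(x) catch |err| switch (err) {
        // exhaustive: the compiler knows these are the only two possible errors
        error.TooBig => 255,
        error.TooSmall => 0,
    };
}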
Thanks for the clarification. Being silly here, but just for my understanding: the error union is a tagged union, right? (Otherwise how would it know the difference between an error and a good value?)
Also, when you say "construct the error set", you are referring to the global error set, which is just a running counter, meaning it boils down to a simple int of some type. Which in turn means, from what I can assess from your statement, that it is
a tagged union: with a "fixed/always present" int for the error and the success result type?
it goes a step further and figures out the possible error values: if you never propagate an error that could be returned by a call, the compiler knows your function can't return that error, and it isn't included in the inferred error set.
no, it creates an error set, specific to the function, that contains only the errors that the function does return.
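For example, a rough sketch (inner/outer are made-up names) of how the inferred set gets narrowed:

// sketch with made-up names
const Inner = error{ NotFound, Busy };

fn inner(kind: u8) Inner!u32 {
    return switch (kind) {
        0 => Inner.NotFound,
        1 => Inner.Busy,
        else => 7,
    };
}

// Busy is handled locally and never propagated, so the inferred
// error set of outer() contains only NotFound.
fn outer(kind: u8) !u32 {
    const v = inner(kind) catch |err| switch (err) {
        error.Busy => 0,
        error.NotFound => return error.NotFound,
    };
    return v;
}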
yes, error sets are special enums, with values taken from a special global enum that contains all possible errors.
because the values are defined by the global enum, errors can be coerced from one specific error set to another, provided they contain the same error (same name).
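A quick sketch of that coercion (set names are made up): the shared names are literally the same value, so a member of the smaller set coerces to the bigger one.

const std = @import("std");

// sketch with made-up set names
const FileError = error{ NotFound, AccessDenied };
const AnyFailure = error{ NotFound, AccessDenied, Busy };

fn demo() void {
    const e: FileError = error.NotFound;

    // coerces: every member of FileError also exists in AnyFailure,
    // and the shared names map to the same values in the global error enum
    const f: AnyFailure = e;

    // both refer to the same underlying error value
    std.debug.assert(f == error.NotFound);
}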
also to fuel the fire a bit more >:3 this is valid zig too