Rationale behind order of error as a return

One thing that trips me up is the order of errors and return values, and the order in which they are handled. If I know/understand the rationale, maybe I can remember it better.

To illustrate:

fn get_age(start: u32) AnError!u32 { ... } 

const a = get_age(10) catch 999;

In Go, the norm is to put the error last; if that were the case here, the above would be much more obvious (for me, anyway). As it stands, the function definition's order is ERROR|NormalResult, while the handling order is NormalResult then ERROR.
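To make the two orders concrete, here is a minimal, self-contained sketch (the body of `get_age` and the `AnError` set are made up for illustration):

```zig
const std = @import("std");

const AnError = error{TooOld};

// Definition: error set first, success type second.
fn get_age(start: u32) AnError!u32 {
    if (start > 150) return error.TooOld;
    return start;
}

pub fn main() void {
    // Handling: the success expression comes first, the error handling last.
    const a = get_age(10) catch 999;
    const b = get_age(200) catch |err| blk: {
        std.debug.print("failed: {}\n", .{err});
        break :blk 999;
    };
    std.debug.print("{} {}\n", .{ a, b });
}
```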

When you are reading a function definition, you are looking for information; Zig decided that errors are more valuable information than the success type.
Rust's `Result<T, E>` makes a similar move in spirit: the `Result` wrapper announces fallibility up front, before you reach the success type.

When calling a function, you care more about the successful result than the error.

I think it also just looks and feels nicer this way.
Regardless of the reasoning, it's still pretty arbitrary.

Arbitrary indeed, and not logical or consistent either: you can have a return without an error… but if you have to add an error, you don't append it to the end, you put it in front!?

Again, it is what it is, but that one trips me up a bit, because my head tries to unwrap it in the order of its handling, flowing from the normal return to the error.

Using Rust as an example of good practice is also arbitrary; after all, I am here, and not with Rust, for a bunch of reasons (the long-winded error handling being one of them).

I used Rust as an example of another language that surfaces the possibility of an error before the success type in a signature.

Of course you can have a function that doesn't return an error; why wouldn't you be able to do that? Or are you referring to inferred errors?

Regarding consistency: if you reverse the calling order to Error, Success, then you get Rust.
Reversing the return type definition would probably look nice, but that's just aesthetics; this part is very arbitrary.

I don't understand how this makes it hard to unwrap the order of error handling / control flow.

It is also logical: part of the logic is the programmer's experience, and imo I like the way it is. I'd assume Andrew does as well.

Bottom line: if it is just aesthetics, as you say, then there is no need for me to try and understand it. I'm not going to argue; I will get used to it, even though the choice of order makes no sense to me.


I don’t think so. All types in Zig are prefix: *u8, []u8, ?u8, !u8.


Interesting… Do you then see !u8 as a type, or ! merely as a separator? And if so, is error not also a type?

!u8 is a type. This is valid Zig:

fn safe_div(x: u8, y: u8) !u8 {
    if (y == 0) return error.DivisionByZero;
    return @divFloor(x, y);
}

I'll have to look at it. I thought u8 was the type and ! merely a separator (or a signal that an error may occur).


const std = @import("std");

pub fn main() !void {
    var a: !u8 = undefined;

    std.debug.print("{}", .{a});
}

src/main.zig:6:12: error: expected type expression, found '!'
    var a: !u8 = undefined;

const std = @import("std");

const Err = error {
    err1,
    err2,
};

pub fn main() !void {
    var a: Err!u8 = undefined;

    a = 10;
    std.debug.print("{!}", .{a});
}

This compiles.

const std = @import("std");

const Err = error {
    err1,
    err2,
};

pub fn main() !void {
    var a: anyerror!u8 = undefined;

    a = Err.err1;
    std.debug.print("{!}", .{a});
}

The documentation is very clear on it now that I read it with critical eyes.

 Notice the return type is !u64. This means that the function either returns an unsigned 64 bit integer, or an error. We left off the error set to the left of the !, so the error set is inferred.

Within the function definition, you can see some return statements that return an error, and at the bottom a return statement that returns a u64. Both types coerce to anyerror!u64. 

You know what… I just had coffee and the likely answer came to me.

If I were to design a two-value type consisting of a value and an error… it would be much less work for me to handle/check the fixed-length portion (the error int) at a fixed location, since the result part may be variable in size from a memory-organisation point of view.

(not that that should be the determining factor in language design… but maybe so)

To clarify:

<error set>!<type> is an expression that creates an error union type. The result is a single union value. Just like a regular union can be instantiated as any of its members:

const EitherAOrB = union {
  a: u8,
  b: i8,
};

// A single type, but two different valid ways to instantiate it
var a: EitherAOrB = .{ .a = 255 };
var b: EitherAOrB = .{ .b = -1 };

so can an error union. The declaration var a: !u8 = undefined is incomplete, since you're missing half of the error union expression.
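In the same way, a single error union value can hold either member. A minimal sketch, reusing the two-member `Err` set from the earlier example:

```zig
const std = @import("std");

const Err = error{ err1, err2 };

pub fn main() void {
    // One error union type, two valid kinds of value:
    var a: Err!u8 = 42; // holds the success member
    a = error.err1;     // now holds the error member
    std.debug.print("{!}\n", .{a});
}
```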

In function definitions, !<type> is allowed as a convenience, because the compiler can construct the error set for you: it knows which errors the functions called in the body of the defined function can return, so it can construct the set based on that.

In situations where it can’t do this, like dealing with function pointers or variables, you still have to define the error set explicitly.
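For instance (a sketch; the names `MathError` and `safeDiv` are made up), a variable holding an error union must spell the set out explicitly:

```zig
const std = @import("std");

const MathError = error{DivisionByZero};

fn safeDiv(x: u8, y: u8) MathError!u8 {
    if (y == 0) return error.DivisionByZero;
    return x / y;
}

pub fn main() void {
    // `var r: !u8 = ...` would not compile; the error set must be explicit here.
    var r: MathError!u8 = safeDiv(10, 0);
    r = safeDiv(10, 2);
    std.debug.print("{!}\n", .{r});
}
```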

The ‘correct’ way to instantiate the type in your example there would probably be Err!u8 instead of anyerror!u8, since you know what the valid set of errors is.


Thanks for the clarification. Being silly, but just for my understanding: the error union is a tagged union, right? (Otherwise, how would it know the difference between an error and a good value?)

Also, when you say "construct the error set", do you mean the global error set, which is just a running counter? That would mean it boils down to a simple int of some type, which in turn, from what I can assess from your statement, would make it:

A tagged union, with a "fixed/always present" int for the error, plus the success result type?

It goes a step further and figures out the possible error values: if you never propagate an error that could be returned by a call, the compiler knows your function can't return that error, so it isn't included in the inferred error set.

Yes, it is.

No, it creates an error set, specific to the function, that contains only the errors the function actually returns.

Yes, error sets are special enums, with values taken from a special global enum that contains all possible errors.
Because the values are defined by the global enum, errors can be coerced from one specific error set to another, provided they contain the same error (same name).
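A small sketch of that coercion (the set names are made up): an error value moves from a smaller set to a superset because the name, and therefore the global value, is the same:

```zig
const std = @import("std");

const SubError = error{NotFound};
const SuperError = error{ NotFound, AccessDenied };

pub fn main() void {
    const e: SubError = error.NotFound;
    // Coerces: every member of SubError exists (by name) in SuperError.
    const f: SuperError = e;
    std.debug.print("{} {}\n", .{ e, f });
}
```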

Also, to fuel the fire a bit more >:3, this is valid Zig too:

pub fn main() !void {
    // stuff
    return error.InferedErrorHeHe;
}

Just being pedantic, but:

I don’t think you can describe ! as a type prefix, at least I wouldn’t.

! in the context of types is a binary operator, with an optional left-hand side/prefix

If you want to be pedantic, ErrorSet! is a type prefix.