I tried writing some code that used Linux-specific syscalls and was surprised that the implementation of these syscalls returns raw usize integers as return codes rather than Zig error sets, as most other functions in std do. I will try to present my objections, and I hope someone will refute them.
I looked into it and found this issue, which reorganized std.os, and it states:
In Zig-flavored POSIX [what is now std.os], errno is not exposed; instead actual zig error unions and error sets are used. When not linking libc on Linux there will never be an errno variable, because the syscall return code contains the error code in it. However for OS-specific APIs, where that OS requires the use of libc, errno may be exposed, for example as std.os.darwin.errno().
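The last point is worth spelling out: on Linux the raw return value itself encodes the error. A value that, reinterpreted as signed, falls in (-4096, 0) is a negated errno code, and std.os.linux.getErrno decodes exactly that. A minimal sketch of the convention (my own illustration, not the std source):

const linux = @import("std").os.linux;

// Roughly what std.os.linux.getErrno does: a raw syscall return value
// that is "negative" when reinterpreted as signed, and above -4096,
// is a negated errno code; anything else is a success value.
fn errnoOf(rc: usize) linux.E {
    const signed: isize = @bitCast(rc);
    if (signed > -4096 and signed < 0) {
        return @enumFromInt(-signed);
    }
    return .SUCCESS;
}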
I personally do not understand the choice of making std.os.linux syscalls return a number instead of an error set, and I would like someone to explain to me why it makes sense. It degrades syscall error handling to C-style error handling, and so for system-specific syscalls Zig shares the faults of the C error handling model (at least as far as silently ignored return values go):
// C raw syscalls: every error is silently ignored
#include <fcntl.h>
#include <unistd.h>

int main(void) {
    int fd = open("does_not_exist/foo.txt", O_CREAT, 0644); // fails, returns -1
    write(fd, "hi", 2);  // writes to fd -1, error ignored
    close(fd);           // closes fd -1, error ignored
    return 0;
}
// Zig raw syscalls: just as easy to ignore every error
const linux = @import("std").os.linux;

pub fn main() !void {
    const fd: isize = @bitCast(linux.open("does_not_exist/foo.txt", linux.O.CREAT, 0o644));
    _ = linux.write(@truncate(fd), "hi", 2); // error ignored
    _ = linux.close(@truncate(fd)); // error ignored
}
(code example adapted from the Road to Zig 1.0 talk)
This style of course half-implicitly ignores any errors returned from these calls, without a single try keyword. It drives people who write code that heavily uses OS-specific interfaces to maintain a wrapper file full of error-set wrappers, like this one, containing code that honestly should be in std. A sketch of such a wrapper follows below.
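For illustration, a minimal wrapper of this kind might look like the following. The OpenError set and the openWrapped name are my own; only a few errno values are mapped here, with the rest collapsed into error.Unexpected:

const std = @import("std");
const linux = std.os.linux;

const OpenError = error{ FileNotFound, AccessDenied, Unexpected };

// Hypothetical error-set wrapper over the raw open syscall.
fn openWrapped(path: [*:0]const u8, flags: u32, mode: linux.mode_t) OpenError!i32 {
    const rc = linux.open(path, flags, mode);
    return switch (linux.getErrno(rc)) {
        .SUCCESS => @intCast(rc),
        .NOENT => error.FileNotFound,
        .ACCES => error.AccessDenied,
        else => error.Unexpected,
    };
}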
Another problem, which one might notice in the code above, is that the current interface creates a lot of redundant casting; it would make more sense for std.os.linux.open to return an i32 with an error set instead of a usize. See the call-site sketch below.
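With the hypothetical openWrapped sketched above, the casts disappear entirely at the call site:

pub fn main() !void {
    // no @bitCast/@truncate dance: fd is already an i32, or an error
    const fd = try openWrapped("does_not_exist/foo.txt", linux.O.CREAT, 0o644);
    _ = linux.write(fd, "hi", 2);
    _ = linux.close(fd);
}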
A supposedly correct program, one that at least panics on errors from these functions rather than handling them gracefully, looks like this:
const std = @import("std");
const linux = std.os.linux;

inline fn panicOnErr(value: usize) void {
    const errno = linux.getErrno(value);
    if (errno != .SUCCESS) {
        std.debug.panic("There was _some_ error, but to know which enum values to check we must consult manpages! {}", .{errno});
    }
}

pub fn main() !void {
    var ret = linux.open("does_not_exist/foo.txt", linux.O.CREAT, 0o644);
    panicOnErr(ret);
    // decode the file descriptor out of the raw usize return value
    const fd: i32 = @truncate(@as(isize, @bitCast(ret)));
    ret = linux.write(fd, "hi", 2);
    panicOnErr(ret);
    ret = linux.close(fd);
    panicOnErr(ret);
}
All of which, for me, feels too unziggy. I would like to understand the reasons for this design, which (for me at least) makes it harder to write code that uses syscalls.