Ok, one more time to be sure…
When something goes wrong and we retrieve the error, is it really impossible to get ParseIntError.InvalidCharacter instead of the not-very-descriptive InvalidCharacter?
fn parse_csv() !void {
    // now in the middle of this mess we have an int parsing error.
}
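To make the question concrete, here is a small sketch (the test is mine; std.fmt.parseInt and its ParseIntError are from std):

const std = @import("std");

test "caught error carries only its name" {
    const result = std.fmt.parseInt(i32, "12x", 10);
    if (result) |_| unreachable else |err| {
        // All we get back is error.InvalidCharacter; nothing records
        // that it came from std.fmt.ParseIntError.
        try std.testing.expectEqualStrings("InvalidCharacter", @errorName(err));
        // The value even compares equal to the "scoped" spelling.
        try std.testing.expect(err == std.fmt.ParseIntError.InvalidCharacter);
    }
}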
All errors belong to the global error set, and an error set is only a subset of those errors. Subsets are not disjoint: different sets can contain the same error (think OutOfMemory).
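Quick illustration (the set names below are made up):

const std = @import("std");

test "the same error name is the same error" {
    // Two unrelated error sets that happen to declare the same name.
    const ReadError = error{OutOfMemory};
    const AllocError = error{OutOfMemory};
    // Both name the same entry in the global error set, so the values
    // compare equal, and the anonymous spelling matches them too.
    try std.testing.expect(ReadError.OutOfMemory == AllocError.OutOfMemory);
    try std.testing.expect(ReadError.OutOfMemory == error.OutOfMemory);
}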
That being said, I’d rather we keep the ability to use ErrorSet.name rather than force the use of error.name. I think the latter promotes the use of Anonymous Error Sets and implicit Error Unions.
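To show what I mean by the two spellings (names below are made up), both produce the same value; the difference is readability at the return site:

const std = @import("std");

const ParseError = error{ InvalidCharacter, Overflow };

fn viaSet() ParseError!void {
    // Scoped spelling: the reader sees which set the error belongs to.
    return ParseError.InvalidCharacter;
}

fn viaLiteral() ParseError!void {
    // Anonymous spelling: same error value, no named set mentioned.
    return error.InvalidCharacter;
}

test "both spellings produce the same error value" {
    try std.testing.expectError(error.InvalidCharacter, viaSet());
    try std.testing.expectError(error.InvalidCharacter, viaLiteral());
}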
I’m here to promote my diagnostic pattern again…
In this pattern, you push an InvalidCharacter first, then return a ParseIntError. This manually maintained error stack records the source of each error, so an error returned independently at a higher level can, functionally, carry the lower level's error set with it.
const std = @import("std");

fn parse_csv(last_diagnostic: *Diagnostic) !void {
    // now in the middle of this mess we have an int parsing error.
    try last_diagnostic.enterStack(error.InvalidCharacter);
    return error.ParseIntError;
}

fn baz(last_diagnostic: *Diagnostic) !void {
    parse_csv(last_diagnostic) catch |err| {
        try last_diagnostic.enterStack(err);
        return error.ParseCsvFailed;
    };
}

test "new diagnostics" {
    const root_allocator = std.testing.allocator;
    var arena = std.heap.ArenaAllocator.init(root_allocator);
    defer arena.deinit();
    var diagnostics: Diagnostics = .{ .arena = arena };
    baz(&diagnostics.last_diagnostic) catch |err| {
        diagnostics.log_all(err);
        diagnostics.clear();
    };
}
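For reference, here is one minimal way to fill in the two types the snippet assumes; a fixed-size stack stands in for the arena-backed storage to keep the sketch short:

pub const Diagnostic = struct {
    // Fixed-capacity stack of recorded causes, innermost first.
    stack: [16]anyerror = undefined,
    len: usize = 0,

    pub fn enterStack(self: *Diagnostic, err: anyerror) !void {
        if (self.len == self.stack.len) return error.DiagnosticOverflow;
        self.stack[self.len] = err;
        self.len += 1;
    }
};

pub const Diagnostics = struct {
    arena: std.heap.ArenaAllocator,
    last_diagnostic: Diagnostic = .{},

    pub fn log_all(self: *Diagnostics, err: anyerror) void {
        std.log.err("failed with error.{s}", .{@errorName(err)});
        // Walk the recorded causes from outermost back to the source.
        var i = self.last_diagnostic.len;
        while (i > 0) : (i -= 1) {
            std.log.err("  caused by error.{s}", .{@errorName(self.last_diagnostic.stack[i - 1])});
        }
    }

    pub fn clear(self: *Diagnostics) void {
        self.last_diagnostic.len = 0;
    }
};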
But still… Would we be able to check the original type? I don’t think so.
The main problem, for my part, is the loss of “scope”. We get the name but not the type.
Btw: std is full of InvalidCharacter errors coming from different error sets.
If you’re writing generic functions or a function-pointer interface, it’s important to explicitly specify the error set the returned error belongs to in the function signature, because relying on the inferred error set there can be unstable.
If the function is only ever called directly, rather than through a polymorphic interface, the error set can stay transparent.
Currently, the main use of error sets is to unify the return error types of an interface. Otherwise, don’t try to check the relationship between errors and error sets; generally speaking, they’re “not related.”
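A sketch of what that looks like (all names below are made up, nothing from std):

const std = @import("std");

// The function pointer type must name one concrete error set, so every
// implementation has to fit it. That's the "unifying" role of the set.
const ReadError = error{ EndOfStream, BrokenPipe };

const Reader = struct {
    readFn: *const fn () ReadError!usize,
};

fn alwaysEof() ReadError!usize {
    return error.EndOfStream;
}

// With an inferred error set (plain !usize) the set depends on the body,
// which is the instability you don't want behind a function pointer.
fn directCall() !usize {
    return error.EndOfStream;
}

test "explicit error set unifies the interface" {
    const r: Reader = .{ .readFn = &alwaysEof };
    try std.testing.expectError(error.EndOfStream, r.readFn());
    // Called directly, the inferred set is perfectly fine.
    try std.testing.expectError(error.EndOfStream, directCall());
}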
Why do you care about error name “scoping”? Either way, the error return trace will tell you exactly where the error was generated. If you don’t have a clear understanding of what an error return trace is, it’s definitely worth learning more about it.
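A tiny example of what the trace buys you (in a Debug or ReleaseSafe build):

fn inner() !void {
    return error.InvalidCharacter;
}

fn outer() !void {
    // Every `try` that propagates the error adds a frame to the
    // error return trace.
    try inner();
}

pub fn main() !void {
    // If the error escapes main, the runtime prints the error name
    // together with an error return trace pointing back through
    // outer() and inner().
    try outer();
}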