(xpost of a comment I made on the Software You Can Love discord, expanded somewhat)
Thesis: Coroutines combined with defer could be used as a way to unify normal resource cleanup with cleanup in error cases.
Combining a resource acquisition with an immediate defer is a way to clean up resources when leaving that scope.
```zig
fn foobar(allocator: Allocator, foo_len: usize, bar_len: usize) !void {
    const foo = try allocator.alloc(u8, foo_len);
    defer allocator.free(foo);
    const bar = try allocator.alloc(i16, bar_len);
    defer allocator.free(bar);
    // do things with foo and bar here.
}
```
This is particularly effective for two reasons:
- The acquisition and release of resources are specified together, reducing potential for programming errors.
- `defer` runs from all subsequent exit points of the function. The `defer` for freeing `foo` doesn’t need to care that there’s a `try` when acquiring `bar`, which helps in avoiding bugs due to error cases. Your error case and the happy path use the same logic for `foo`’s cleanup.
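As a concrete check of the second point, here is a sketch of a test (the test name and `FailingAllocator` setup are mine, and the exact `FailingAllocator.init` signature varies between Zig versions) that forces the second allocation to fail and relies on the testing allocator's leak detection to confirm `foo` was still freed:

```zig
const std = @import("std");

fn foobar(allocator: std.mem.Allocator, foo_len: usize, bar_len: usize) !void {
    const foo = try allocator.alloc(u8, foo_len);
    defer allocator.free(foo);
    const bar = try allocator.alloc(i16, bar_len);
    defer allocator.free(bar);
    // do things with foo and bar here.
}

test "foo is freed even when allocating bar fails" {
    // Fail the second allocation (index 1), so the second `try` returns
    // error.OutOfMemory after foo has already been allocated.
    var failing = std.testing.FailingAllocator.init(std.testing.allocator, .{ .fail_index = 1 });
    try std.testing.expectError(error.OutOfMemory, foobar(failing.allocator(), 8, 8));
    // std.testing.allocator reports a leak at the end of the test if the
    // `defer allocator.free(foo)` did not run on the error path.
}
```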
However, with e.g. an `init` function, your resource might outlast the current scope, and you lose the above benefits. In that case, you’ll often need an `errdefer` to handle the error cases, and then repeat the same cleanup in a separate function. For example:
```zig
const FooBar = struct {
    foo: []u8,
    bar: []i16,

    fn init(allocator: Allocator, foo_len: usize, bar_len: usize) !FooBar {
        var fb: FooBar = undefined;
        fb.foo = try allocator.alloc(u8, foo_len);
        errdefer allocator.free(fb.foo);
        fb.bar = try allocator.alloc(i16, bar_len);
        // errdefer could go here, but it would only be dead code.
        return fb;
    }

    fn deinit(fb: *FooBar, allocator: Allocator) void {
        allocator.free(fb.bar);
        allocator.free(fb.foo);
        fb.* = undefined;
    }
};
```
Indeed, despite the split here, at some level above, the usage will likely look something like:
```zig
{
    var foobar = try FooBar.init(allocator, foo_len, bar_len);
    defer foobar.deinit(allocator);
    // do things with foobar here.
}
```
Now, if only we could shove the `// do things with foobar here.` code into the bottom of the `init` function, we would be in the simpler realm of the first example: one function, with no need for two methods of resource cleanup.
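For contrast, the pre-coroutine way to get that shape is inversion of control: pass the "do things" code into the function that owns the defers. A sketch (the `withFooBar` name and signature are mine):

```zig
const std = @import("std");

fn withFooBar(
    allocator: std.mem.Allocator,
    foo_len: usize,
    bar_len: usize,
    body: *const fn (foo: []u8, bar: []i16) anyerror!void,
) !void {
    const foo = try allocator.alloc(u8, foo_len);
    defer allocator.free(foo);
    const bar = try allocator.alloc(i16, bar_len);
    defer allocator.free(bar);
    // The caller's code runs here, inside the scope that owns the defers,
    // so the error path and the happy path share all cleanup logic.
    try body(foo, bar);
}
```

This keeps acquisition and release together, but since Zig has no closures the callback can’t capture caller locals without also threading a context pointer through, and caller control flow (`break`, early `return`) no longer works naturally inside `body`.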
Enter coroutines.
```zig
const FooBar = struct {
    foo: []u8,
    bar: []i16,

    fn existenceScope(fb: *FooBar, yield: fn () void, allocator: Allocator, foo_len: usize, bar_len: usize) !void {
        defer fb.* = undefined;
        fb.foo = try allocator.alloc(u8, foo_len);
        defer allocator.free(fb.foo);
        fb.bar = try allocator.alloc(i16, bar_len);
        defer allocator.free(fb.bar);
        yield();
    }
};
```
With usage like:
```zig
{
    var fb_init: InitDevice(FooBar, FooBar.existenceScope) = undefined;
    const fb = try fb_init.init(.{ allocator, foo_len, bar_len });
    defer fb_init.deinit();
    // do things with fb here.
}
```
In this example, `InitDevice.init` calls `existenceScope` as a coroutine, returning either an initialized `*FooBar` or an error. Execution of `existenceScope` pauses at `yield`. Once `InitDevice.deinit` is called, execution of `existenceScope` resumes, and all of the `defer`s trigger.
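To make the ordering explicit, here is the same usage annotated step by step (still pseudocode: `InitDevice` is sketched at the end of this post, on top of the proposed async builtins):

```zig
{
    var fb_init: InitDevice(FooBar, FooBar.existenceScope) = undefined;

    // 1. init starts existenceScope as a coroutine. It allocates foo and bar,
    //    then suspends inside yield(), leaving all of its defers pending.
    const fb = try fb_init.init(.{ allocator, foo_len, bar_len });

    // 3. At scope exit, deinit resumes existenceScope past yield(). The
    //    function returns, running its defers in reverse order:
    //    free(fb.bar), then free(fb.foo), and finally fb.* = undefined.
    defer fb_init.deinit();

    // 2. The happy-path code runs while existenceScope is suspended.
    // do things with fb here.
}
```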
Misc Notes:
- Memory management, though used as my example, isn’t where this technique would be the most relevant. Favoring fewer but larger allocations, e.g. of arrays over individual objects, makes memory management less significant; at the scale of a few allocations, this idea is an overcomplication. However, I’m also writing some OpenGL resource acquisition that I expect to hit at least dozens, if not hundreds, of objects.
- In practice, the `InitDevice` would probably be hidden from the API usage in some way, with allocation or as a member of `FooBar` instead of containing `FooBar` (though that might cause a dependency cycle; it’s not entirely clear to me without a lot more digging into this).
- I don’t think any of this actually requires language support that’s above and beyond what’s planned for io, provided that coroutines aren’t exclusive to io.
- Overall I think this is a technique which would be useful in some specific use cases, but shouldn’t be used for everything. Eg, ArrayList[Unmanaged] definitely doesn’t need this.
- I’d be unsurprised if this isn’t an original thought. On the other hand I don’t know of any languages that have manual resource management, stackless coroutines, and also have defer.
An implementation of `InitDevice`, leaning into pseudocode territory, based on the current OP of #23446:
```zig
const std = @import("std");

pub fn InitDevice(T: type, existenceScope: anytype) type {
    const FnReturn = @typeInfo(@TypeOf(existenceScope)).@"fn".return_type.?;
    const Error = @typeInfo(FnReturn).error_union.error_set;
    // Ok, you can't subslice a tuple type like this, but you get the idea.
    const AdditionalArgs = std.meta.ArgsTuple(@TypeOf(existenceScope))[2..];
    return struct {
        value: T,
        buffer: [@asyncFrameSize(coroutine)]u8,
        async_frame: *std.builtin.AsyncFrame,

        pub fn init(id: *@This(), additional_args: AdditionalArgs) Error!*T {
            id.async_frame = @asyncInit(&id.buffer, coroutine);
            const args: std.meta.ArgsTuple(@TypeOf(existenceScope)) = .{ &id.value, yield } ++ additional_args;
            const err: *Error!void = @ptrCast(@asyncResume(id.async_frame, &args) orelse unreachable);
            try err.*;
            return &id.value;
        }

        pub fn deinit(id: *@This()) void {
            const err: *Error!void = @ptrCast(@asyncResume(id.async_frame, null) orelse unreachable);
            err.* catch unreachable; // returning an error after the yield is a programming error.
        }

        fn coroutine(args_opaque: *anyopaque) void {
            const args: *const std.meta.ArgsTuple(@TypeOf(existenceScope)) = @ptrCast(@alignCast(args_opaque));
            const err: Error!void = @call(.auto, existenceScope, args.*);
            _ = @asyncSuspend(&err);
            unreachable;
        }

        fn yield() void {
            const no_err: Error!void = {};
            _ = @asyncSuspend(&no_err);
        }
    };
}
```