My library currently performs zero allocations, and its API looks like this:
pub fn init(
    port: *Port,
    settings: Settings,
    subdevices: []SubDevice,
    process_image: []u8,
    frames: []telegram.EtherCATFrame,
) !MainDevice {
    // Require at least enough frames to cover the process image.
    if (frames.len < frameCount(@intCast(process_image.len))) return error.NotEnoughFrames;
    assert(frameCount(@intCast(process_image.len)) <= frames.len);
    return MainDevice{
        .port = port,
        .settings = settings,
        .subdevices = subdevices,
        .process_image = process_image,
        .frames = frames,
    };
}
This is the most important API of the library, and it will be called by all (currently non-existent) users. It provides the main "object" of the library, and it is where all of the variable-length mutable memory the library will use is supplied (the slices).
This means a typical use case looks like this:
const std = @import("std");
const assert = std.debug.assert;
const gcat = @import("gatorcat");

// large const network configuration struct
const eni = @import("network_config.zig").eni;

pub fn main() !void {
    var raw_socket = try gcat.nic.RawSocket.init("enx00e04c68191a");
    defer raw_socket.deinit();
    var port = gcat.Port.init(raw_socket.networkAdapter(), .{});
    try port.ping(10000);

    // Since the ENI is known at comptime for this example,
    // we can construct exact stack usage here.
    var subdevices: [eni.subdevices.len]gcat.SubDevice = undefined;
    var process_image = std.mem.zeroes([eni.processImageSize()]u8);
    const used_subdevices = try gcat.initSubdevicesFromENI(eni, &subdevices, &process_image);
    assert(used_subdevices.len == subdevices.len);

    var frames: [gcat.MainDevice.frameCount(@intCast(process_image.len))]gcat.telegram.EtherCATFrame =
        @splat(gcat.telegram.EtherCATFrame.empty);

    var main_device = try gcat.MainDevice.init(
        &port,
        .{ .recv_timeout_us = 4000, .eeprom_timeout_us = 10_000 },
        used_subdevices,
        &process_image,
        &frames,
    );

    try main_device.busINIT(5_000_000);
    try main_device.busPREOP(10_000_000);
    try main_device.busSAFEOP(10_000_000);
    try main_device.busOP(10_000_000);
}
Notice that all the slices are initialized from a comptime-known eni ("EtherCAT network information"), which is a large const struct containing a variable number of configurations. This means the slices are highly related and need to match up with each other. The true "source of truth" here is the eni.
An alternative approach would be to accept the eni and an allocator in my API:
pub fn init(
    port: *Port,
    settings: Settings,
    eni: ENI,
    allocator: std.mem.Allocator,
) !MainDevice {
    // ... do all the initialization and allocation of the
    // slices in here so my users don't have to care about
    // or see them
}
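For concreteness, here is one hedged sketch of what that body might contain. It reuses the hypothetical helper names from the slice-based example (`eni.subdevices.len`, `eni.processImageSize()`, `initSubdevicesFromENI`, `frameCount`) and assumes MainDevice would grow an `allocator` field so a matching `deinit` can free everything; none of this is the library's actual implementation.

```zig
pub fn init(
    port: *Port,
    settings: Settings,
    eni: ENI,
    allocator: std.mem.Allocator,
) !MainDevice {
    // Allocate the slices the caller previously had to provide by hand.
    const subdevices = try allocator.alloc(SubDevice, eni.subdevices.len);
    errdefer allocator.free(subdevices);

    const process_image = try allocator.alloc(u8, eni.processImageSize());
    errdefer allocator.free(process_image);
    @memset(process_image, 0);

    _ = try initSubdevicesFromENI(eni, subdevices, process_image);

    const frames = try allocator.alloc(
        telegram.EtherCATFrame,
        frameCount(@intCast(process_image.len)),
    );
    errdefer allocator.free(frames);
    @memset(frames, telegram.EtherCATFrame.empty);

    return MainDevice{
        .port = port,
        .settings = settings,
        .subdevices = subdevices,
        .process_image = process_image,
        .frames = frames,
        // Hypothetical new field: stored so deinit() can free the slices.
        .allocator = allocator,
    };
}
```

The `errdefer`s keep the function leak-free on partial failure, which is exactly the kind of bookkeeping this option hides from users.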
What would you do? The slices or the allocator?
I see the following factors:
- The slices option has the advantage that users can more easily see and reason about how much memory they are using, which is important for embedded folks. Most applications will use only the stack and avoid heap allocation for maximum real-time performance.
- With the allocator option, users who really care about memory usage (embedded) will have to use a fixed buffer allocator or similar and "guess" how much memory their implementation will require. I could perhaps provide an API that calculates the memory usage from the eni to assist with this?
- The allocator option could dramatically reduce my public API surface and reduce the potential for bugs from users initializing these slices incorrectly.
- The library has added benefits for a comptime-known eni (the example shows minimal stack memory usage), but I do not want to require a comptime-known eni in general, even though most users will have one, because I want to retain the ability to dynamically scan networks and construct an eni at runtime for edge-case advanced users.