Nice work! Zig is a lot of fun, and it gives you so much control and opens your mind to how computers function in general.
So that I can test token expiration; otherwise, library users would have to provide the current Unix time themselves, and I didn’t like that approach.
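To make the idea concrete, here’s a minimal sketch of that kind of clock injection. All names here (Clock, isExpired, etc.) are mine for illustration, not your library’s actual API:

```zig
const std = @import("std");

// Hypothetical sketch: inject a clock so token expiration can be
// tested without depending on the real system time.
const Clock = struct {
    nowFn: *const fn () i64,

    fn now(self: Clock) i64 {
        return self.nowFn();
    }
};

fn systemNow() i64 {
    return std.time.timestamp(); // seconds since the Unix epoch
}

fn isExpired(clock: Clock, exp: i64) bool {
    return clock.now() >= exp;
}

const FakeClock = struct {
    fn now() i64 {
        return 2_000;
    }
};

test "expiration can be tested with a fake clock" {
    const clock: Clock = .{ .nowFn = &FakeClock.now };
    try std.testing.expect(isExpired(clock, 1_000));
    try std.testing.expect(!isExpired(clock, 3_000));
}
```

In production you’d pass a Clock backed by systemNow; in tests you pin time to whatever value you need.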
I noticed exp is passed as an i64, so for parity I’d expect iat to be an i64 as well. Zig relies on documentation to communicate the exact intent behind those values, so if you state clearly that iat is a Unix timestamp, I think people will get it.
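Something along these lines, where the struct and field layout are hypothetical but show doc comments carrying the unit:

```zig
const std = @import("std");

// Hypothetical claims struct (illustrative, not your actual API)
// showing iat and exp with the same type and documented units.
const Claims = struct {
    /// Issued-at time as a Unix timestamp in seconds (UTC).
    iat: i64,
    /// Expiration time as a Unix timestamp in seconds (UTC).
    exp: i64,
};

test "iat and exp share the same type" {
    const c: Claims = .{ .iat = 1_700_000_000, .exp = 1_700_003_600 };
    try std.testing.expect(@TypeOf(c.iat) == @TypeOf(c.exp));
    try std.testing.expect(c.exp > c.iat);
}
```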
Does it make sense to allocate one big buffer and use parts of it?
You probably can find a way to do this in one big buffer, but let’s try something a little simpler before jumping that far (if you decide you need to).
Your _sign() and verify() functions contain a lot of allocPrint() and gpa.alloc() calls as they compose the web token’s Base64 elements and stitch everything together. If you’re looking for a good starting place for fewer allocations, limit yourself to a single Io.Writer.Allocating for the output. Aside from that, the only other allocation should be a reusable scratch buffer for composing the data, which you’ll ideally overwrite several times:
var output: Io.Writer.Allocating = try .initCapacity(gpa, some_reasonable_estimation);
defer output.deinit(); // <-- don't worry, just return an owned slice at the end
var json_scratch_stream: Io.Writer.Allocating = try .initCapacity(gpa, some_reasonable_estimation);
defer json_scratch_stream.deinit();
// compose header JSON with the scratch stream
// ...
// write the Base64 of the header JSON directly to the output stream WITHOUT allocating
// (taken from Io.Writer's implementation of the b64 specifier)
var chunker = std.mem.window(u8, json_scratch_stream.written(), 3, 3);
var temp: [5]u8 = undefined;
while (chunker.next()) |chunk| {
    // I'm assuming your Base64Encoder is initialized at this point
    try output.writer.writeAll(encoder.encode(&temp, chunk));
}
try output.writer.writeByte('.'); // '.' separator after the header segment
// reset the scratch stream for the next portion
json_scratch_stream.clearRetainingCapacity();
// ... the rest
// obviously omitting the payload and HMAC part, but just providing a general idea here
return try output.toOwnedSlice();
I personally love initCapacity() when you can estimate your data’s size from the parameters passed in. There’s little penalty to over-allocating, and if your estimate is right, each stream’s underlying byte array is allocated and freed exactly once, which is a happy place to be.
This single output stream + scratch space pattern is really nice with dynamic data.