I have a binary data format that I’m reading from and writing to. Strings are stored by writing the length as a u16 in little endian, followed by the bytes making up the string, so "abc" goes on the wire as the five bytes 03 00 61 62 63. I have this, and it’s passing my tests.
const std = @import("std");
const mem = std.mem;
pub const StringError = error{
    ReadError,
    WriteError,
};
pub fn load(reader: anytype, allocator: mem.Allocator) ![]const u8 {
    var bytes: [2]u8 = undefined;
    var res = try reader.readAll(&bytes);
    if (res != 2) return StringError.ReadError;
    const len = mem.readIntLittle(u16, &bytes);
    var list = try std.ArrayList(u8).initCapacity(allocator, @as(usize, len));
    // Free the buffer if one of the reads below fails.
    errdefer list.deinit();
    list.expandToCapacity();
    list.shrinkAndFree(@as(usize, len));
    res = try reader.readAll(list.items);
    if (res != @as(usize, len)) return StringError.ReadError;
    return list.toOwnedSlice();
}
pub fn store(string: []const u8, writer: anytype) !void {
    var len: [2]u8 = undefined;
    mem.writeIntLittle(u16, &len, @as(u16, @intCast(string.len)));
    try writer.writeAll(&len);
    try writer.writeAll(string);
}
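For context, the round-trip test I’m running is roughly this (the buffer size and the path are just placeholders):
test "string round trip" {
    var buf: [64]u8 = undefined;
    var fbs = std.io.fixedBufferStream(&buf);
    try store("/usr/local/bin", fbs.writer());
    // Rewind so load reads back what store just wrote.
    fbs.reset();
    const s = try load(fbs.reader(), std.testing.allocator);
    defer std.testing.allocator.free(s);
    try std.testing.expectEqualStrings("/usr/local/bin", s);
}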
As I said, it’s working. I just want to know whether there’s a better way to do the read than initializing the array list, expanding it to capacity, and then shrinking it to the exact size. That part seems clunky to me.
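The only alternative I’ve come up with is skipping ArrayList entirely and allocating the slice directly, something like this untested sketch (loadDirect is just a placeholder name):
pub fn loadDirect(reader: anytype, allocator: mem.Allocator) ![]const u8 {
    var bytes: [2]u8 = undefined;
    if (try reader.readAll(&bytes) != 2) return StringError.ReadError;
    const len = mem.readIntLittle(u16, &bytes);
    // Allocate exactly len bytes up front instead of resizing an ArrayList.
    const buf = try allocator.alloc(u8, len);
    errdefer allocator.free(buf);
    if (try reader.readAll(buf) != len) return StringError.ReadError;
    return buf;
}
Is that all there is to it, or is there a reason to prefer the ArrayList route?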
Note that I’m planning to add a check to make sure the length fits in a u16, although since all of the strings in question are going to be Unix path names, I shouldn’t ever encounter one that doesn’t.
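The check I have in mind is just a guard at the top of store, along these lines:
    if (string.len > std.math.maxInt(u16)) return StringError.WriteError;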