Disassembling Terabytes of Random Data with Zig and Capstone to Prove a Point

I recently used Zig to do a little bit of informal research for a side project. Zig made it really easy to write fast code, which was important, since the research involved a sort of Monte Carlo simulation for data collection.

The first third of the linked write-up is devoted to the cool stuff that Zig enabled for the project. The most interesting part (in my opinion) was using comptime metaprogramming to tally Zig errors at runtime. For that, I had to figure out (and felt I should document) how to reify errors from their names using @field(anyerror, errorName).
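
To give a flavor of the technique, here's a minimal sketch. The error set is made up for illustration; this is the shape of the idea rather than the actual code from the write-up:

```zig
const std = @import("std");

// Made-up error set standing in for whatever the simulation can return.
const SimError = error{ BadOpcode, Truncated };

// One counter per error, with the names (and the reified error values
// below) derived from the error set at comptime.
const names = std.meta.fieldNames(SimError);
var counts = [_]u64{0} ** names.len;

fn tally(err: anyerror) void {
    // Unrolled at comptime: reify each error from its name and compare.
    inline for (names, 0..) |name, i| {
        if (err == @field(anyerror, name)) counts[i] += 1;
    }
}
```

In the hot loop, this pairs with something like doOneIteration() catch |e| tally(e) (hypothetical function name, of course).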

Also of potential interest to this community (but not explicitly called out in the write-up) is how I did argument parsing using comptime struct reflection; see src/argparse.zig in the code repo (Discourse won’t let me include more than two links). The gist is that I make an arguments struct (roughly) like:

```zig
const std = @import("std");

var args: struct {
    field_name: usize = 0,
    field_two: bool = false,
    field_three: ?u8 = null,

    const Self = @This();

    pub fn init(allocator: std.mem.Allocator) !Self {
        // doArgparse (src/argparse.zig) fills the fields via reflection.
        return try doArgparse(Self, allocator);
    }
} = undefined;

pub fn main() !void {
    // ... init allocator
    args = try .init(gpa);
}
```

This is not exactly production-grade, but it was super handy for iterating quickly on the project.
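
For the curious, here's a rough sketch of what a doArgparse-style function can look like. The real implementation lives in src/argparse.zig; the --name=value flag syntax and the handled field types here are just assumptions for illustration:

```zig
const std = @import("std");

// Hypothetical sketch of a doArgparse-style parser. Assumes --name=value
// flags and struct fields that are ints, bools, or optionals wrapping an
// integer type, all with default values.
pub fn doArgparse(comptime T: type, allocator: std.mem.Allocator) !T {
    var result: T = .{}; // start from the struct's default field values
    const argv = try std.process.argsAlloc(allocator);
    defer std.process.argsFree(allocator, argv);

    for (argv[1..]) |arg| {
        // Unrolled at comptime: one comparison per struct field.
        inline for (std.meta.fields(T)) |field| {
            const flag = "--" ++ field.name ++ "=";
            if (std.mem.startsWith(u8, arg, flag)) {
                const value = arg[flag.len..];
                @field(result, field.name) = switch (@typeInfo(field.type)) {
                    .int => try std.fmt.parseInt(field.type, value, 10),
                    .bool => std.mem.eql(u8, value, "true"),
                    // Assume optionals wrap an integer type in this sketch.
                    .optional => |o| try std.fmt.parseInt(o.child, value, 10),
                    else => @compileError("unsupported field: " ++ field.name),
                };
            }
        }
    }
    return result;
}
```

The nice part is that adding a new option is just adding a field to the struct; the inline for over std.meta.fields picks it up automatically.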

I’m sharing here in the hopes that Zig users new and old find the work interesting. Happy to answer any questions!


I’ve also shared this on Hacker News and Reddit. No interesting discussion there yet, but I’m linking them here in case any relevant comments show up later.
