Hello, long time no post. I’ve been busy with a game development journey, one that started part-time over the past two years but has since turned into a full-time opportunity.
I’m sharing this video as a ‘thank you’, some recipes and some feedback for the zig language from the perspective of a game developer. Y’all are basically enabling my dream scenario and I’m extremely grateful.
Broadly, the video is separated into several sections in the YouTube description, so feel free to skip around, and play it at 2X.
As I outlined at the end, my job will be to build this game out, and I won’t have tonnes of time to socialize or share the underlying systems, but hopefully by the time I’m done, I will have a bit more free time to share more! Thanks.
Excellent job on the review. I don’t agree with everything, but it was well expressed, and it wasn’t just an “I used Zig for a day” surface-level complaints video.
re Upgrading: I don’t find this argument very convincing, mostly because there isn’t a lot of flip-flopping of features between versions. Simply put, any work spent upgrading to 0.15 now won’t be wasted when upgrading to 0.16; indeed, the most efficient route to 0.16 is likely via 0.15. I’ve also found the code itself relatively easy to upgrade; the conceptual changes are the harder part (e.g., wrapping my head around the new reader/writer). As such, breaking the upgrade up also breaks up the mental load of learning.
Yeah, surprisingly I feel like I don’t need operator overloading… and honestly, the readability tradeoff brings method calls more in line with how much readability the * / - + symbols actually carry.
Thanks! Yeah, upgrading actual Zig versions isn’t that bad, but because I depend on zig-gamedev’s WebGPU support, it wasn’t straightforward: in my case I did a bunch of upgrade work and got a black screen in return.
I have the 0.15 upgrade sitting on a branch to finish once I have the motivation. I did test build times for release=safe builds (which is what I want, with iteration latency as low as possible), and they weren’t significantly better, so I just abandoned the branch for now. Maybe 0.16 will make me want to upgrade; I don’t know.
Also interesting to see the decl literal math bit. It’s making me consider switching over to struct-based types for algebras. Previously I strongly preferred arrays (e.g., [2]f64), but that syntax did look nice. Too bad we can’t declare a type that allows both indexing and methods. (We kind of get it for integers with enums, but arrays are one of the types left out. We could just wrap an array in a struct, but then every element access is more work. Idk.)
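For anyone who hasn’t played with decl literals yet, here’s a minimal sketch of what that struct-based style looks like (Vec2 and its decls are just illustrative, not from the video):

```zig
const std = @import("std");

const Vec2 = struct {
    x: f64,
    y: f64,

    pub const zero: Vec2 = .{ .x = 0, .y = 0 };

    pub fn add(a: Vec2, b: Vec2) Vec2 {
        return .{ .x = a.x + b.x, .y = a.y + b.y };
    }
};

test "decl literals with a struct-based vector" {
    // `.zero` resolves to Vec2.zero via the expected result type.
    var p: Vec2 = .zero;
    p = p.add(.{ .x = 1, .y = 2 });
    try std.testing.expectEqual(@as(f64, 2), p.y);
}
```

The tradeoff is exactly the one mentioned: you get `.zero` and methods, but lose `p[0]`-style indexing.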
I noticed a lot of functions declared as inline. The general advice as I understand it is to only use inline if you need it for its comptime properties or because you’ve tested actual performance improvements. Is there a specific reason you’re using it so much?
Re: Build modes:
I’m looking forward to more fine-grained control over optimization modes. I don’t think it’s fully working yet? At some point we should be able to set different modes for different parts, e.g. release-mode engine code alongside debug gameplay code, so rebuilds are fast when editing gameplay code but it still runs well. We do already have @setRuntimeSafety.
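For reference, @setRuntimeSafety gives scope-level control over safety checks today; a minimal sketch:

```zig
fn sumHot(values: []const u32) u32 {
    // Skip runtime safety checks (bounds, overflow) in just this scope,
    // even in Debug/ReleaseSafe builds.
    @setRuntimeSafety(false);
    var total: u32 = 0;
    for (values) |v| total +%= v;
    return total;
}
```

It only controls safety checks, though, not the optimizer, so it doesn’t fully substitute for per-module optimization modes.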
Definitely would be nice! I also want to be able to put methods on tuples. In the meantime, each of my vector math types has a fromVector and toVector method so I can use them like @Vector(4, f32) types.
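That fromVector/toVector workaround might look roughly like this (a sketch of the pattern, not the commenter’s actual code):

```zig
const Vec4 = struct {
    x: f32,
    y: f32,
    z: f32,
    w: f32,

    pub fn fromVector(v: @Vector(4, f32)) Vec4 {
        return .{ .x = v[0], .y = v[1], .z = v[2], .w = v[3] };
    }

    pub fn toVector(self: Vec4) @Vector(4, f32) {
        return .{ self.x, self.y, self.z, self.w };
    }

    // Methods can round-trip through @Vector to get SIMD arithmetic.
    pub fn add(a: Vec4, b: Vec4) Vec4 {
        return fromVector(a.toVector() + b.toVector());
    }
};
```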
Definitely, I need to profile more; I was adding inline a bit reflexively.
I at least am able to set different optimization modes between my ‘frontend’ exe and the ‘gameplay’ dynamic library, which comes in handy a bit. But really I only need full optimization for the physics code and mesh generation; everything else could probably stay debug. It definitely would be nice to declare that per file or per function.
If you put the code in different files, you can make them different modules, and set std.Build.Module.optimize. What I don’t recall off the top of my head is how much that’s actually respected atm.
I really appreciate the practical viewpoint you provided in the video. For example, when talking about build modes, you didn’t mention the fine details because they didn’t matter to you, what mattered was how the features served your use case (compile time, beta testing). This was surprising to me because you demonstrated advanced knowledge of the zig build system (your own build steps, code gen steps, forking zls??!).
Personally, I think I have nerd-sniped myself too much with programming language theory (a common affliction of users of uncommon programming languages), and your video is a reminder/inspiration that there is a whole other class of users whose motivation is just to get stuff done. They dive deeper when it fits their needs, and not just as an intellectual exercise. And as you mentioned, this is another reason why the zig build system being written in zig is a huge win as a user gains more experience.
For my personal projects, I initially chose zig just because it had packed structs and arbitrary-bit-width integers that could directly represent a binary protocol I had to implement; that was really the only reason… I think I should return to that level of simplicity for a while, forget about the noise, and just build some cool shit!
At least in 0.15.1 that’s respected. I use it heavily to compile known-working C libraries like Zydis in a release mode and then use them in my debug build. Something like this:
// Separate module to always compile it with a release mode.
const zydis = b.createModule(.{
    .root_source_file = b.path("src/vendor/zydis.zig"),
    .optimize = .ReleaseFast,
    .target = target,
});
zydis.addCMacro("ZYAN_NO_LIBC", "1");
zydis.addIncludePath(b.path("src/vendor/"));
zydis.addCSourceFile(.{ .file = b.path("src/vendor/Zydis.c") });

const mod = b.createModule(.{
    .root_source_file = b.path("src/main.zig"),
    .optimize = optimize,
    .target = target,
    .imports = &.{.{ .name = "zydis", .module = zydis }},
});
One can also do something like this for the optimization instead to respect the release mode chosen:
.optimize = if (optimize == .Debug) .ReleaseSafe else optimize,
I was wondering, do you ever shrink&free the arenas, or do you just let them get as big as they get and keep an eye on memory usage?
In my (much less far-along) game I was kinda spooked by using ArenaAllocator, since I was worried about it ending up full of too-small nodes as a system’s memory footprint grew, and using way more memory than needed.
(The implementation does try to resize a block before allocating a new one, and I never checked how often that actually works!)
Funnily enough I “cured” my fears of using too much memory by pre-allocating the whole game and having fixed limits all over the place.
So now it /always/ uses “too much” memory, but at least I know exactly how much too much is.
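That pre-allocate-everything approach can be sketched like this (sizes and names are hypothetical, just to show the shape):

```zig
const std = @import("std");

const max_entities = 8192;

// Fixed-capacity storage: peak memory is known at compile time.
var positions: [max_entities][2]f32 = undefined;
var entity_count: u32 = 0;

// Or hand out one fixed slab at startup; an allocation failure then
// means a limit was hit, never that usage slowly crept past budget.
var slab: [64 * 1024 * 1024]u8 = undefined;
var fba = std.heap.FixedBufferAllocator.init(&slab);
```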
Yeah! As I was outlining, each ‘feature’ or ‘system’ of the game is a node in a graph, and each gets a set of arena allocators to use. Generally I just use these freely as scratch space, and as the place returned data points into.
More recently, out of interest, I started using sub-arenas as scratch allocators: I strategically create and consume memory in these scratch arenas and throw them out at the end of a code block. This only benefits memory usage if there’s no main-arena allocation during that span, since I don’t think the arena checks for gaps; it just adds more space at the end. I do think the scratch allocators help a bit: if I keep reusing the scratch space in an inner loop, it’s running over the same chunk of memory, so for mesh generation especially it’s a boost, I think.
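For anyone curious, the scratch sub-arena pattern described above looks roughly like this (Chunk, Vertex, and buildChunkGeometry are hypothetical stand-ins):

```zig
const std = @import("std");

const Chunk = struct { size: usize };
const Vertex = struct { pos: [3]f32 };

// Hypothetical stand-in for real mesh generation work.
fn buildChunkGeometry(scratch: std.mem.Allocator, chunk: Chunk) ![]Vertex {
    return try scratch.alloc(Vertex, chunk.size);
}

fn generateMeshes(main_arena: std.mem.Allocator, chunks: []const Chunk) !void {
    // A child arena that allocates *from* the main arena.
    var scratch_state = std.heap.ArenaAllocator.init(main_arena);
    defer scratch_state.deinit();

    for (chunks) |chunk| {
        // Reuse the same warm memory each iteration instead of growing.
        _ = scratch_state.reset(.retain_capacity);
        const temp = try buildChunkGeometry(scratch_state.allocator(), chunk);
        _ = temp; // copy/upload results out before the next reset
    }
}
```

Note the caveat from the comment above: since the child arena’s blocks come from the parent arena, “freeing” them is mostly a no-op for the parent, so the win is reuse within the loop, not reclaiming parent memory.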
But that’s it. I’ve been pretty care-free with memory past this, and I’ll generally forgo global stores of intermediate information, like HashSet lookups of keys to values or terrain stamp / forest cache data, and instead just generate those things in the arena right before I need them in the function. Data is passed directly as inputs and outputs to functions, and the types used for them are simple structs/unions/slices, so they’re easy to serialize and debug at any point.
Thanks!! It’s a lot in retrospect, but it just accumulated with time. It’s amazing that all this technical depth is accessible from the outset, starting with a naive ctrl+click on a std symbol in VS Code. It’s the opposite of the experience I was used to with Unity and C#, which is as wide as an ocean and as deep as a puddle.
Building cool shit is extremely rewarding, would recommend!
Sorry, re-read it, and no, I don’t shrink the space. My target Steam Deck has 16 GB of memory and doesn’t multitask, so really I’m prioritizing clean architecture and simplicity of adding new features, not keeping the memory footprint down. I think it will end up being pretty memory-light, but I’ve got 16 GB; I might use a decent chunk of that if it doesn’t affect performance and battery life.