Zig Code Smells

Code smells - every language has them, and sometimes they’re unavoidable.

Sometimes, a code smell can indicate a choice that causes something to be more awkward than it needs to be. However, much like the stench of mold, a code smell can be an indicator that something is rotten in the foundation of a design.

What do you think is a Zig code smell?

What do you think the code smell indicates and do you have any suggestions?

I’ll kick it off with error handling:

  • Using try statements but never actually handling errors.

This code smell applies to applications more so than library code - users of said library may want to handle those errors themselves - but there are legitimate times where a library can run afoul of this too.

Zig provides errors as values, but if you never do anything with those values and just let them bleed through main, you might as well have used a @panic.

Signaling deallocations, forming error messages, closing resources, and attempting recovery are all good uses of errors. This is a smell rather than an outright mistake because there are valid cases for using try and letting things percolate through main (such as hello_world.zig), but in larger applications it can give the false impression that errors are being handled.

Try to think about whether something is actually recoverable or not - it can greatly simplify your code.
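To make that concrete, here is a minimal sketch; `loadConfig` and its error set are invented for illustration. The point is deciding, per error, whether recovery actually makes sense instead of blanket-`try`ing everything up to main:

```zig
const std = @import("std");

// Hypothetical helper: the names and error set are made up for this example.
fn loadConfig(path: []const u8) error{ FileNotFound, AccessDenied }![]const u8 {
    _ = path;
    return error.FileNotFound;
}

pub fn main() void {
    // `try loadConfig(...)` would force main to return `!void` and die on
    // any failure. Handling locally makes the recovery decision explicit:
    const config = loadConfig("app.conf") catch |err| switch (err) {
        // Recoverable: fall back to a default.
        error.FileNotFound => "default-config",
        // Not recoverable here: report and stop.
        error.AccessDenied => {
            std.debug.print("cannot read config: {s}\n", .{@errorName(err)});
            return;
        },
    };
    std.debug.print("using: {s}\n", .{config});
}
```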


anytype instead of static dispatch interface/trait types is the biggest one.
e.g. a function expects a reader and the argument is (reader: anytype)
Tools cannot help you, and you have to work out which methods the reader must have by reading the source.
anytype is useful for many cases, but this use case stinks.
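For concreteness, a sketch of the pattern being described (`countUntilZero` is a made-up name, using the pre-0.15 `std.io` API): nothing in the signature documents what `reader` must provide.

```zig
const std = @import("std");

// Nothing in this signature says what `reader` must support; you only learn
// it needs `readByte` by reading the body (or from a compile error at the
// call site).
fn countUntilZero(reader: anytype) usize {
    var n: usize = 0;
    while (true) {
        // Treat end-of-stream (or any read error) as the end of input.
        const b = reader.readByte() catch return n;
        if (b == 0) return n;
        n += 1;
    }
}

test "any concrete reader with readByte works" {
    var stream = std.io.fixedBufferStream("abc\x00def");
    try std.testing.expectEqual(@as(usize, 3), countUntilZero(stream.reader()));
}
```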

The problem is that interfaces like Reader and Allocator are for runtime polymorphism. They have the cost of the indirection, and if they need state, they necessarily have reference semantics, since the interface can’t store the state inside of it. In the vast majority of cases, the call site knows exactly the concrete type that it is going to use with the functions, and that never changes throughout the runtime. So you’re paying these costs for nothing.
I actually think the Allocator idiom is something Zig needs to rethink. One of the things they say in the C++ community is that languages that don’t have virtual polymorphism end up with ad hoc implementations of virtual polymorphism. And when you learn Zig, one of the first things you encounter is… the Allocator, an ad hoc implementation of virtual polymorphism. It literally has a member called vtable.
I think allocators should be passed by anytype, and the allocators should implement the alloc and free methods, so that you can call them directly instead of being forced to go through the indirection. This would lead to much better generated code.
Before anyone mentions it, devirtualization is a myth. It never happens. I looked at the generated machine code, and I could never get it to trigger, neither in C++ nor Zig.
Taking by anytype is strictly superior to taking by interface in terms of generated code, because it avoids the cost of indirection and, if the user actually wants runtime polymorphism, they can simply call the function with a type-erased object, which is exactly what all those interfaces are. This is the famous “pay only for what you use”. You can also use type erasure if your goal is actually to avoid code bloat, which can happen with generic code. By consistently calling the same function with the same type-erased object, you only instantiate one function.
I understand the choice Zig made. One of the things people complain about in C++ is that generic code can lead to unreadable code and code bloat. By using interfaces in its standard library, I believe Zig was trying to say “hey, we have the generics, but it’s not going to become the mess that you see in C++”. Sort of like hiding its generics so that people don’t see them so blatantly and avoid comparisons with C++. But, the reality is, anytype is the superior choice for performance.
anytype loses in terms of readability and tooling, that much is certain. But a lot of that could be improved with concepts. They are not necessary for generated code; their only purpose is programmer ergonomics. I thought that the rejected proposal for concepts was beautiful, readable, and matched the whole idea of Zig quite well (can’t find the proposal now).
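A sketch of the anytype-allocator suggestion (hypothetical style - this is not how std containers take allocators today, and note that std’s concrete allocators currently expose their methods only through the `std.mem.Allocator` interface, so the direct-call benefit would mainly apply to custom allocator types):

```zig
const std = @import("std");

// The allocator parameter is anytype: whatever is passed just needs `alloc`
// and `free` methods. A concrete allocator type implementing them directly
// would be called with no vtable indirection; passing a std.mem.Allocator
// still works and gives you runtime polymorphism when you want it.
fn duplicate(allocator: anytype, bytes: []const u8) ![]u8 {
    const copy = try allocator.alloc(u8, bytes.len);
    @memcpy(copy, bytes);
    return copy;
}
```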


Allocator has a vtable and uses dynamic/runtime polymorphism. Reader/Formatter/Writer can use static/comptime polymorphism. anytype works correctly by expecting a read method from the reader, because read is used in the body of the function. What is missing is documentation and tooling.
Rust traits (without the dyn keyword) have the same concept of static/comptime polymorphism. Rust wants you to implement each trait/interface for your types. Go does not force you to specify the implemented interfaces; if you have the correct method(s), you implement the interface. Go interfaces are dynamic only. Zig has static/comptime dispatch for anytype, and a type fits fine if the correct method(s) are implemented.
This pattern is so common that the missing documentation and tooling support is a big deal.


One thing that has troubled me is how often you see code with consecutive lines of try allocator.alloc(...). On one hand, that creates a memory leak when some allocations succeed while a later one fails. On the other, we’re talking about a situation where the system is running out of memory and the prospect of the program continuing is minimal anyway. Realistically, most programmers would just leave the leak in there even if you pointed it out to them.
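The usual fix is an errdefer between the allocations; a minimal sketch (names made up):

```zig
const std = @import("std");

fn makePair(allocator: std.mem.Allocator) !struct { a: []u8, b: []u8 } {
    const a = try allocator.alloc(u8, 16);
    // Without this errdefer, a failure in the second allocation leaks `a`.
    errdefer allocator.free(a);
    const b = try allocator.alloc(u8, 16);
    return .{ .a = a, .b = b };
}
```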


Using undefined as a default value for non-array struct fields. It’s rarely necessary and would usually be better without a default value.

Storing allocators in structs that you have more than a couple instances of.

Conflating unreachable with a panic.


You mean, allocating but not properly using errdefer to manage deallocation in case of error?

I’d like to hear more about your thoughts on what this indicates.

I’m seeing two ways this could be a code smell (and maybe you have more). One way is that it indicates a data-oriented approach could be better than the approach being taken - pooling resources together into common arrays rather than having individual structs do the management.

Another way I interpret this is a lack of awareness about unmanaged variants of data structures (such as ArrayListUnmanaged) which cuts down on unnecessary copies of allocators where only one allocator is actually being used among several data structures (especially if they all belong to one parent object).
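A sketch of the unmanaged pattern described above (`Mesh` is a made-up type; std API as of roughly Zig 0.13):

```zig
const std = @import("std");

// The parent owns the allocator once; the unmanaged lists don't each carry
// their own copy, and receive the allocator only at calls that allocate.
const Mesh = struct {
    allocator: std.mem.Allocator,
    vertices: std.ArrayListUnmanaged(f32) = .{},
    indices: std.ArrayListUnmanaged(u32) = .{},

    fn deinit(self: *Mesh) void {
        self.vertices.deinit(self.allocator);
        self.indices.deinit(self.allocator);
    }
};
```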

Maybe you have more or different reasons you see this as a code smell?


I have to agree about the try keyword; I think this is one major inconsistency in Zig right now. I’m not sure where Andrew stands on this, but without putting words in his mouth, it seems to go against his philosophy for Zig. Zig is supposed to be explicit and robust, but in my opinion, not forcing users to handle errors and instead returning them early to the caller is just not a good idea. Error set coercion shouldn’t be a thing; an error set should belong to one place only, because otherwise you just end up with error type inference, which doesn’t lead to elegant or readable code. Errors should be handled locally with one level of return, and be explicit and exhaustive too. Or maybe you can offer some level of convenience by defining an explicit superset of different error sets. The point being that try is a bad idea, in my opinion.

I respectfully disagree about try in general, but I understand where you’re coming from. The try statement (imo) is very powerful when combined with errdefer and defer more generally and I much prefer it to throw in other languages.

My issue is when the try bubbles up to main, where it ultimately just terminates the program. It can create bulky syntax and has the potential to become viral if it never gets handled along the way (hence why I think it can be a code smell). This is simply because writing try and adding a ! to the return type is less typing than writing catch and having to make a choice. Like @chung-leong said:

Most code projects I’ve seen tend to assume that there is always more memory to use. The amount of complexity that a program would have to incur if malloc no longer returned valid memory is often way out of scope for what that program is trying to achieve.

There are cases where this makes sense - say for instance I’m using a stack allocator and it returns OOM. In this case, that error is totally recoverable because you can fallback to using a syscall. On the other hand, if hello_world.zig fails to obtain resources, I want the program to cease and desist - not attempt to survive.

That said, try isn’t to everyone’s liking, and I appreciate your contrary position.


I totally agree with you about the “throw” – it’s terrible. To give you some backstory, when I discovered Zig, it was through talks from Andrew. I came for the language nerd that I am, but I stayed for how well-designed and thought-through the language is. For once, it wasn’t a talk about a ton of quirky, nice syntax tricks to do X in 1 line instead of 5. Quite the opposite, actually. What I really like about Zig is that it doesn’t give you what you want; it gives you only what you need to do the job. Zig is all about acknowledging that we are the imperfection in the system – we are the lazy ones, not the computer. The whole design of Zig revolves around making the easy choice the correct one. “Try” sure is the easy choice; it just doesn’t seem to be the correct one.

I’m no genius, and I can’t share my crystal ball with anyone. But if “try” doesn’t get fixed, and if error set coercion doesn’t get fixed, then we will just end up with “anyerror” everywhere, and by the end, error handling will not really be enforced. Sure, people willing to go the extra mile will always be able to produce solid code despite that, and linters can enforce that too. On top of that, “try” sure shows that a function can fail, but if it never gets handled explicitly, then you don’t really know what can fail in the path of execution. In larger codebases, this can be a problem, especially with 3rd-party code. Because even if they do a nice job of building nice error sets with descriptive and insightful names, what good does it do if you are just going to try and ignore it?

I mean, don’t get me wrong; it’s not the end of the world. If it doesn’t get fixed, the language will still be worth it for everything else. I just think that there are a lot of good ideas out there, and that “try” shouldn’t be as permissive.


This is a big part of why I think it’s important to talk about code smells.

If a language is Turing complete (or even capable of simple loops or recursion), there’s always the propensity for writing bad code. While it’s probably not a good idea to write code that goes into an infinite loop, I have a hard time taking a systems-level language seriously where that wasn’t possible (I’m open to debate).

I can’t imagine that there will be a serious language where there can’t be some way to abuse it - that’s really where the community input is valuable.


If a language is Turing complete (or even capable of simple loops or recursion), there’s always the propensity for writing bad code.

Of course, no matter what you do, there will always be ways to abuse your language, and I think that’s a good thing. try is not inherently bad; I just think it goes against the language’s philosophy of “maintaining robust, optimal and reusable software”. try doesn’t feel robust, nor does it feel optimal. I realize that handling all errors is sometimes difficult, but maybe the language can rethink its way of doing so. Errors as values are awesome, but the point of those errors is to carry meaning; if people end up passing them up the stack to coerce into anyerror, then we lose a lot of information. Maybe you could use try when there is only one possible error in the error set, like OOM for example, but as soon as there are multiple errors, you are forced to catch |e| switch (e) {}. I don’t know, but I’m sure we can find a way of improving that part.


I would say using anyerror outside of function pointers is code smell, but what about inferred error sets? Note that they allow you to handle some errors using catch, but return others.
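Inferred error sets do make that partial handling pleasant; a small sketch (all names invented):

```zig
const std = @import("std");

const ParseError = error{ Empty, NotANumber };

fn parseU32(s: []const u8) ParseError!u32 {
    if (s.len == 0) return error.Empty;
    return std.fmt.parseInt(u32, s, 10) catch error.NotANumber;
}

// The inferred `!u32` return type lets the compiler work out what remains:
// Empty is handled locally, NotANumber still propagates to the caller.
fn parseOrZero(s: []const u8) !u32 {
    return parseU32(s) catch |err| switch (err) {
        error.Empty => 0,
        else => return err,
    };
}
```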


I’m not sure I understand what exactly is meant by “code smell”, like anyerror when you don’t intentionally want to type-erase the error set is a clear mistake, not just a smell.

Anyway for me a code smell would be that ZLS doesn’t work on your project: chances are you’re overusing comptime :^)


Yes, emphasis on the word intentionally. I found anyerror quite useful for a function that just took an error as argument, logged it, and then returned it right back for the caller to handle or propagate up via try. It worked really nicely. A kind of Aspect-Oriented Programming function.
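Something along these lines (a sketch; the names are invented):

```zig
const std = @import("std");

// Log-and-return helper: accepts any error, logs it, and hands it back
// unchanged so the caller can still propagate it with `try`. Here the
// type erasure of `anyerror` is the point, not an accident.
fn logged(err: anyerror) anyerror {
    std.log.err("operation failed: {s}", .{@errorName(err)});
    return err;
}

fn mayFail() !void {
    return error.Boom;
}

fn caller() !void {
    mayFail() catch |err| return logged(err);
}
```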


Anyway for me a code smell would be that ZLS doesn’t work on your project: chances are you’re overusing comptime :^)

This is a good definition of a code smell btw! :slightly_smiling_face: But also kind of a bummer that even tiny amounts of comptime seem to break ZLS, affecting how code gets written.

There’s nothing wrong with the try construct or the way people are using it. The problem lies with the design of allocators in Zig. It seems that the language hasn’t fully embraced the concept. ArenaAllocator is treated as an optimization trick as opposed to the regular way of doing things. Meanwhile, the allocator christened “GeneralPurposeAllocator” is pretty much malloc wrapped in an object. Many programmers end up structuring their code in a C++-ish manner.

What I would like to see is an allocator that combines the attributes of ArenaAllocator and StackFallbackAllocator, named in a way that suggests it should be used in situations where allocated objects don’t need to persist beyond the current transaction, which is most situations. I’m thinking ScopeAllocator. Most of the time your code would get memory from the stack, which is discarded automatically when the function exits. The allocator would track objects from the fallback allocator as well, freeing them when it gets deinited.

Under this arrangement, the use of try is no longer an issue as the allocator would free the partially created object when the error pops through the scope. Maybe the allocator should also provide a rollback mechanism so that upon encountering an error the caller can restore the allocator to a saved state and continue.


I’ve had the viewpoint that linters are there to enforce a higher coding standard, such as complaining when you aren’t handling errors. But at what level the linter should complain might be complicated. Maybe bolting on some attribute to explicitly say “allow this error to bubble up to the caller” would satisfy the linter when errdefer is omitted.

// From ziglings
fn detectProblems(n: u32) MyNumberError!u32 {
    if (n < 10) return MyNumberError.TooSmall;
    if (n > 20) return MyNumberError.TooBig;
    return n;
}

fn importantFunction() !void {
    // Explicitly catch and rethrow. Clearly bubbling the error to the
    // caller is intentional.
    //const a = detectProblems(4) catch |err| return err;
    //return a;

    // Is it intentional to not catch and handle the errors? Or just a lazy
    // "I'll get to it later", maybe forgotten? After all, it works on my
    // machine, so why handle an error I have never seen? No explicit if on
    // the result, no catch, no errdefer.
    //const a = try detectProblems(4);
    //return a;

    // Proposal (`rethrow` is a hypothetical keyword): trust me, I know what
    // I'm doing by not switching on the various errors this function can
    // return. Consider all error cases in the result of this call to
    // detectProblems() as already handled in this scope.
    const a = rethrow detectProblems(4);
    return a;
}
If the linter cannot successfully enforce best practices or at least steer the dev in the right direction, then that is a code smell to me. I’d expect a linter to be capable of detecting leaks described in Clearer documentation for `errdefer` in blocks · Issue #7298 · ziglang/zig · GitHub.

Disclaimer: I’m very new and in no position to be giving this or any other feedback yet… still a very young ziguana.

And personal bias: I’ve worked on many projects where it’s not appropriate for a library or utility methods to handle errors. The design intention is for the caller to catch and handle them appropriately. This enables the caller to make decisions when something unexpected occurs, like a server going offline, where there is no “correct” way for a library to know whether you want to fail or to use a fallback.

Allocators in Zig are composable. You can create an arena with a StackFallbackAllocator as its backing allocator. No need to cause a combinatorial explosion in the stdlib.
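For instance (a sketch of that composition; `std.testing.allocator` stands in for whatever backing allocator you’d actually use):

```zig
const std = @import("std");

// An arena whose backing allocator prefers a 1 KiB stack buffer and falls
// back to a general-purpose allocator when the buffer is exhausted. The
// arena frees everything at once, stack-backed or not.
fn scratchDemo(backing: std.mem.Allocator) !void {
    var sfa = std.heap.stackFallback(1024, backing);
    var arena = std.heap.ArenaAllocator.init(sfa.get());
    defer arena.deinit(); // releases all allocations in one go
    const a = arena.allocator();

    const small = try a.alloc(u8, 64); // served from the stack buffer
    const big = try a.alloc(u8, 4096); // spills to the backing allocator
    std.debug.assert(small.len == 64 and big.len == 4096);
}
```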