Error: local variable is never mutated

This feature will be really annoying when you’re trying to isolate a bug by selectively commenting out sections of code. IMHO, it’s more pragmatic to show notices of this sort only when compiling for release. That’s usually when programmers have time to fix such issues.


Again, there is an escape hatch, _ = &x;, that makes the compiler happy and can be used while debugging.
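To make that escape hatch concrete, here is a minimal sketch (the variable name and value are illustrative):

```zig
fn demo() void {
    var x: u32 = 42; // never mutated below, so the compiler would reject `var`
    _ = &x; // escape hatch: taking the address means the compiler can no
            // longer prove that x is never mutated, so the error goes away
}
```

The related discard, _ = x;, is the analogous escape hatch for the unused-variable error.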


When I’m compiling for debug, I’m already telling the compiler that the code isn’t finished. Why do I have to continually shush it?


I can see how it makes sense to have the compiler strict and leave it up to tooling to make experiments and refactors more pleasant. However, I would also prefer this to be a warning that could be ignored in debug builds but a compiler error in release (same for unused var).


So here’s a thought - why can’t these things be automatically promoted to const by the compiler?

There’s precedent for this behavior already in the compiler - it promotes by-value parameters to pointers under specific optimization conditions (which has observable side effects).

Is anyone aware of a reason that this can’t be done implicitly when we already have implicit promotions via optimization levels? I don’t see this as being less clear than changing a value to a reference.

(also, welcome to the forum @fooblaz)

Though I do agree you should not be able to ignore warnings in production; I could see this and also unused var belonging to a very specific class of Debug Warning. The current behavior could remain unchanged. For debug builds only, a flag like --IgnoreDebugWarnings would still emit the warning, but the compile could succeed. Anyway, I expect this sort of thing was already discussed at length around the subject of autofix etc.


I also wholeheartedly agree that compiler warnings have their place in tooling, and I am more than a little put off by the apparent unwillingness to accept this, working around the problem instead by moving it into IDE tooling. I’d also posit that there may be other types of warnings (think: linting) that may be impossible to fix via automated rewrites in the IDE, yet people find value in these diagnostics while not letting them break the build.


It’s too easy to fall into the “more is better” mindset. Sometimes less is better. Consider the scenario where the compiler issues no warnings about minor coding issues (those that do not affect the correctness of execution) when building for debug, but fails hard when building for release. Now programmers would have no option but to test in release mode prior to committing their work, lest a minor issue like an unused variable cause an embarrassing failure at CI.


I see this objection repeatedly, but, as far as I know, nobody is advocating for making these checks “default off”.

What people are advocating for is a flag that only works in concert with a debug build and lets people turn these off. That is strictly opt-in from the programmers themselves. If you don’t turn off your debug-only flags and then get smacked in release, that’s your own fault.

One other problem is that you quite often have to clear away all of these dumbass “errors” in order to get at the real errors. There have been a couple of times where, had I been able to get to the real error first, the dumbass “error” would have gone away on its own.


Have you considered that it’s not necessarily about your code, but about other people’s code?

Consider: you want to compile and use a program, but it only builds in -O Debug and with --sloppy. As somebody who is only interested in using this application, are you going to go through the code, fixing unused variables / constants and never-mutated variables? Or are you going to say “screw it” and just build the thing as is?

I don’t think I would be willing to apply fixes just to have to do it again on the next update.


I would just not use external code that is written like that.

I see this example as similar to saying “if the compiler forces you to have tests in your code, except for debug / sloppy mode, what would you do with third-party code that does not have tests, and only compiles in debug / sloppy mode?”

In this (made up) example, I would say that I would not accept using such code in my projects, but I would also say that it is not the compiler’s business to enforce that code should always include tests.


Best argument so far here.

What if that really is the best course for the company at that particular moment? What if a company needs to ship a product by a hard date or they’ll go bankrupt, but the devs can’t get the product out the door on time because they wasted too much time changing vars to consts in code that didn’t even make it into the final product? It’s one thing to incentivize good practices; it’s another to patronize your users, when you have no idea what their situation is.


I made an account just to comment on this issue.

Why not just give people the freedom to do what they want? Why force them to write proper code even when they want sloppier code, for whatever reason?

Like Gonzo pointed out before, there are cases where I know something’s bad, yet I want to be able to write code like that for whatever reason.

I don’t want to fix up an outdated repo with sloppily-made variables that I downloaded from somewhere when I just want to see how the compiled program works.

Again, let people write the code they want to if they understand the risk. The flag option that’s off by default is the best solution here and a perfectly viable compromise.


For context, here’s the original issue opened by Andrew:

A lot of the issues and arguments in this thread were discussed in the issue comments.


@f-cozzocrea Thanks for that link to the issue.

This is so much worse than I expected.

“var” vs “const” isn’t just a trivial formatting thing in Zig due to the aliasing issues from “Parameter Reference Optimization” and “Result Location Semantics”.

Here’s a relevant post towards this point for anyone that isn’t following the conversation on Github. From @andrewrk:

The problem is that when doing semantic analysis, whether or not var or const is used matters to determine whether expressions are comptime or not, which can in turn influence what the types of things are, which could then affect whether a var or const is the correct mutability. So having the user specify mutability makes things significantly simpler, and is a low-effort decision for the programmer, which increases readability of code. And then Zig can look at the results in the end and see if everything makes sense. The alternative is so complicated that I think it could potentially even result in an endless loop, where actually both var and const are incorrect. Imagine trying to understand that compiler error.

Here’s where things get complicated for comptime code. From @mlugg:

This requires semantic analysis to determine whether x can be const. Fine so far, but now, consider this:

fn f() void {
    var x = foo();
    if (comptime bar()) {
        x += 1; // the mutation happens only when bar() is comptime-true
    }
}
In the case above, if bar returns true, x may be modified. If bar returns false, x is never modified and the variable would need to be promoted to const.
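A self-contained sketch of the same situation (foo and bar here are stand-ins for the functions in the quoted example):

```zig
fn foo() u32 {
    return 1;
}

fn bar() bool {
    return false; // flip to true and the mutation below becomes reachable
}

fn f() void {
    var x = foo();
    if (comptime bar()) {
        x += 1; // only reachable when bar() is comptime-true
    }
    // With bar() returning false, x is never mutated, so the current
    // compiler reports "local variable is never mutated" and asks for const.
    _ = x;
}
```

The point is that whether var or const is correct can only be decided after evaluating bar() at comptime, which is why determining it automatically entangles mutability with semantic analysis.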

It seems, from @andrewrk’s post, that an incentive for this is to reduce the complexity of a lurking problem with how semantic analysis is performed. One benefit of this is the “const correctness” that people here are referring to, but that is not the only (or perhaps even the primary) driving motivation for this change.


Programming languages are called languages for a reason. They’re a means for programmers to communicate their desires to the computer. They’re also a means for programmers to communicate among themselves. Languages operate on the basis of agreed-upon meanings. Giving people the freedom to reconfigure a language to suit their personal preferences just leads to needless misunderstanding.

“Debug” and “Release” aren’t just optimization settings. They’re also terms conveying a programmer’s level of confidence in their code. “Debug” means, without ambiguity I would say, that the code is not ready yet, it’s still being worked on. “Release” means high confidence that the code is going to do what it’s supposed to do. At the minimum, a programmer should have tested their code at that setting. The compiler mandating that code compiled at that level be free of cosmetic defects is also not unreasonable. Such cosmetic defects could be indications of larger issues. If you had chosen to use var instead of const, then at some point you must have thought that it needed to be mutable. Why no changes were actually made, I have no idea. From the perspective of other programmers, I do want the compiler to give me the assurance that you have revisited that code.

The analogy I would use here is news articles. Our expectation is that any news article “ready for publication” is free of typos. Not because typos would somehow impair a reader’s ability to understand the article. We want articles to be typo-free because that’s evidence they’ve been proofread.


It seems this discussion always gets to this point…

  • I love that zig tells me about unused parameters / variables.
  • I love that zig will warn me about var being used where const would suffice.
  • [This was my own example, but I do believe is similar] I would not mind at all if zig told me about code lacking tests.
  • If I was writing an article, I would love to have clear indications about any typos / proof errors.

That does not mean I want to be warned about these things at all times. Even when writing a paper, many times you only edit one section, and ask your editors to just concentrate on the modified section. I would demand a final proofreading for the whole article, but I don’t want to be always focusing on the whole thing.

The informal suggested name for this type of work, sloppy, tries to convey this. Yes, I know it is messy; no, I don’t intend to publish yet; yes, I will add those tests / remove those arguments / change those vars into consts. But why do I have to do all those RIGHT NOW, when I still don’t know the exact, final shape of what I am working on?


– Edited –

I’m removing my post because it is only relevant to const and var being automatically decided by the compiler, which was off topic from the original discussion. A flag could theoretically be used here, but it’s unlikely to be accepted.


From the context, I have gathered that the semantic analysis problems are only an argument for making the user specify const/var themselves, instead of automatically determining it. But I don’t think that is at question here.