I’m looking for more resources on how comptime works and how compile times can stay so low even though it provides features similar to macros.
I’ve only heard a bit about how ZIR analysis works from mlugg’s posts, but I’m not sure why comptime doesn’t seem to bloat compile times the way heavy use of macros often does.
It does increase compile time significantly; comptime execution is currently slower than Python.
It’s just that most use cases are very simple, so you don’t notice it.
I feel that a simple heuristic is: if your comptime usage doesn’t require calling @setEvalBranchQuota, then the increase in compilation time for that use case is not very noticeable.
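To make the heuristic concrete, here is a minimal sketch (the quota value is an illustrative choice, not a recommendation): a comptime loop that runs past the default evaluation branch quota of 1000 fails to compile unless you raise the quota, which is exactly the point where comptime work starts being non-trivial.

```zig
const std = @import("std");

// Sums 0..n-1 at compile time. With n = 2000 the loop exceeds the
// default branch quota (1000), so we must raise it explicitly.
fn sumUpTo(comptime n: u32) u32 {
    @setEvalBranchQuota(2 * n + 100); // illustrative value; default is 1000
    var sum: u32 = 0;
    var i: u32 = 0;
    while (i < n) : (i += 1) sum += i;
    return sum;
}

comptime {
    // Would be a compile error without the quota bump above.
    std.debug.assert(sumUpTo(2000) == 1999 * 2000 / 2);
}
```

If you never need that builtin, the interpreter is doing so little work that its slowness doesn’t matter.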
Very relevant comment re: speed:
And a relevant thread re: implementation:
My mental model is that it isn’t that comptime is slow, but that you can use it to generate lots of code, and compiling that code is what’s slow (and makes your binary huge). That’s the same deal as with Rust proc macros — proc macros aren’t the fastest way to execute logic, but, in the grand scheme of things, this doesn’t matter, because the actual problem is not macros per se, but people using macros to generate loads of work for the compiler from relatively innocuous-looking source code.
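A small sketch of that mental model: the comptime evaluation below is trivial, but each distinct type produces a separately analyzed and compiled copy of the function body, much like a macro expanding the same code N times.

```zig
const std = @import("std");

// Generic function: cheap to *evaluate* at comptime, but each
// distinct T the program uses becomes its own instantiation that
// the compiler must analyze and generate code for.
fn max(comptime T: type, a: T, b: T) T {
    return if (a > b) a else b;
}

pub fn main() void {
    // Three instantiations of `max`; a comptime loop over hundreds
    // of types would multiply the compiler's work the same way.
    std.debug.print("{d}\n", .{max(u32, 1, 2)});
    std.debug.print("{d}\n", .{max(i64, -5, 3)});
    std.debug.print("{d}\n", .{max(f64, 0.5, 0.25)});
}
```

The source stays short and innocuous-looking; the generated work grows with the number of instantiations.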
I think a slight difference between comptime and macros is that macros can keep recursing to ever more meta levels, for example when you start using macros to write macros which generate macros that write macros, and so on. With Zig you can still write comptime code which generates code that uses other comptime code, but because the syntax of the language remains fixed, doing this eventually produces visible syntactic friction, while macros allow hiding that increase in complexity behind syntax extensions that look like part of the language.
So with macros you can recursively add more and more metaprogramming levels to a language. With Zig it is much more visible when somebody adds additional levels of metaprogramming, because that means embedding DSL code in comptime strings or other comptime values, which are clearly distinct from normal Zig code.
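A concrete, real example of a DSL in a comptime string is std.fmt: the format string is parsed at compile time, and a mismatched placeholder is a compile error, yet the DSL is visibly a string, distinct from the surrounding Zig syntax.

```zig
const std = @import("std");

pub fn main() void {
    // "{d}" and "{s}" form a tiny DSL parsed at comptime; swapping
    // the specifiers or the argument count would fail to compile.
    std.debug.print("{d} items in {s}\n", .{ @as(u32, 3), "cart" });
}
```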
When it is easy to hide many levels of metaprogramming, such explosions in code size also become more likely. When people are forced to see the complexity, they are more likely to abandon embedded metaprogramming for things that require too much of it, and write a code generator instead, where they again have to deal with the complexity explicitly instead of having it almost invisibly auto-generated by the macro system.
While I appreciate macro systems and some of the things they allow, I think they don’t make it particularly easy to deal with complexity, and they make it relatively easy to accumulate more complexity than you would expect. I also think macros are just one way of doing metaprogramming, and some of the other ways may have their own benefits (so it is good when languages explore alternatives to macros).