I extensively (ab)use comptime. Right now my debug-build compile time is reaching 41s on a very powerful Mac (forgive me that).
Are there any tools or techniques people use to profile/optimize compile times? e.g. I’d love to get a profile of the comptime interpreter so I can know which parts to optimize manually.
There is a side-answer to this, which is to use the native x86 backend with incremental compilation on Linux. But that's treating a symptom with a more significant change. I'm interested, more generally, in approaches to optimizing comptime use.
Thanks for the links. Yeah, diving in and working directly on it would be great. Not possible right now, but hopefully something we can do (and/or help fund in some way) in the near future.
There is no direct way to measure it yet, but there are some indirect approaches you can use.
Compile time correlates with binary size → the more code the compiler has to emit, the slower it is. (Of course this doesn't account for time spent interpreting comptime code, but if a large portion of your compile time is spent in LLVM's emit-object step, then code size is a good proxy for compile time.)
On Linux you can display the size of all functions in a binary using nm.
Here is an example command that filters the function symbols and sorts them by size (a sketch assuming GNU nm on Linux and a hypothetical binary path; adjust the path for your project):
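```sh
# List function (text) symbols sorted by size, largest last.
# --print-size adds the symbol size column, --radix=d prints it in decimal.
# ' t ' (case-insensitive) keeps only text/code symbols, i.e. functions.
nm --print-size --size-sort --radix=d ./zig-out/bin/my-app \
  | grep -i ' t ' \
  | tail -n 40
```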
In the output you can then look for your comptime-duplicated functions. E.g. I can find around 80 copies of reallocAdvanced near the top, at around 3 kB of binary size each. So if I wanted to optimize compile time, this would be something to look into.
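As a rough follow-up (again a sketch assuming GNU nm on Linux and the same hypothetical binary path), you can count the instantiations of one suspect function and total their size:

```sh
# Count copies of a suspect function and sum their sizes.
# GNU nm with --print-size prints: value, size, type, name (size is column 2).
nm --print-size --radix=d ./zig-out/bin/my-app \
  | grep -i ' t ' \
  | grep reallocAdvanced \
  | awk '{ n += 1; bytes += $2 } END { printf "%d copies, %d bytes\n", n, bytes }'
```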
I agree, but it’s so powerful… I fear I’ve made a cheeseburger; huge amounts of salt but it somehow works so well!
Unfortunately, at this stage it's not viable to wholesale rework the codebase to remove a lot of comptime usage. That will be possible in the future, but until then we must make do.