How do optimizations work?

I just read through the drop-llvm issue for a bit and came across this comment. It made me wonder: how do optimizations even work, especially regarding the main differences between compile modes, like safety checks, runtime performance optimizations, and binary size? I would assume that they happen on both the frontend and the backend, but I decided that asking could only do me and others good, and maybe even get me some additional insight.


From my limited understanding, most of a compiler’s optimizations come from the backend, which is also why many languages use LLVM as their backend. In a compiler, the frontend is the “easy” part (not truly easy, but comparatively simpler than the backend). The frontend’s responsibility is to tokenize the source code, then typically pass the tokens to a parser to build an Abstract Syntax Tree (AST), which is then analyzed by the semantic analyzer. There can be an arbitrary number of steps and transformations between these stages, involving different intermediate representations.
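To make the frontend stages concrete, here is a minimal sketch of a tokenizer and recursive-descent parser for arithmetic expressions (my own toy example, not any real compiler; a real frontend would also handle precedence, errors, and semantic analysis):

```python
import re

# Tokenize: split source text into ("num", value) and ("op", symbol) tokens.
TOKEN_RE = re.compile(r"\s*(?:(\d+)|(.))")

def tokenize(src):
    tokens = []
    for number, op in TOKEN_RE.findall(src):
        if number:
            tokens.append(("num", int(number)))
        elif op.strip():
            tokens.append(("op", op))
    return tokens

def parse(tokens):
    """Recursive descent: expr := term (("+" | "-") term)*, term := num."""
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else (None, None)

    def term():
        nonlocal pos
        kind, value = tokens[pos]
        assert kind == "num", "expected a number"
        pos += 1
        return ("num", value)

    def expr():
        nonlocal pos
        node = term()
        # Left-associative: fold each "+"/"-" into a new AST node.
        while peek() in (("op", "+"), ("op", "-")):
            _, op = tokens[pos]
            pos += 1
            node = (op, node, term())
        return node

    return expr()

print(parse(tokenize("1 + 2 - 3")))
# ('-', ('+', ('num', 1), ('num', 2)), ('num', 3))
```

The nested-tuple output is the AST: the later stages of the compiler walk this tree rather than the raw text.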

In the backend, things get more complex. Here, you have your intermediate representation, which is generally a verbose, still fairly high-level representation of your source code. Now, you need to transform this into code that the computer can understand. This involves a series of lowering steps that progressively translate the IR down to machine code for your specific target. But before that, compilers usually run “optimization passes”. This is where the backend looks at the intermediate representation and tries to make it more efficient while preserving your intent. A good compiler should provide numerous optimizations such as inlining, strength reduction, loop unrolling, loop-invariant code motion, constant folding, sparse conditional constant propagation, dead-store elimination, tail call optimization, and many more.
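As a taste of what a pass looks like, here is a toy constant-folding pass over the same kind of expression AST (a hand-rolled sketch, nowhere near how LLVM structures its passes): it evaluates any subtree whose operands are all constants, and leaves everything else alone.

```python
import operator

# AST nodes are ("num", value), ("var", name), or (op, lhs, rhs).
OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def fold(node):
    """Constant folding: replace all-constant subtrees with their value."""
    if node[0] in ("num", "var"):
        return node
    op, lhs, rhs = node
    lhs, rhs = fold(lhs), fold(rhs)
    if lhs[0] == "num" and rhs[0] == "num":
        return ("num", OPS[op](lhs[1], rhs[1]))
    return (op, lhs, rhs)

# x * (2 + 3) folds to x * 5; the variable blocks further folding.
before = ("*", ("var", "x"), ("+", ("num", 2), ("num", 3)))
print(fold(before))  # ('*', ('var', 'x'), ('num', 5))
```

Real passes work on a lower-level IR (basic blocks, SSA form) rather than a source-level AST, but the shape is the same: a transformation that must leave the program’s observable behavior unchanged.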

Implementing these optimizations yourself is a daunting task because the compiler must prove to itself that the changes it makes don’t alter the semantics of your code. If it can’t prove this, it won’t perform the optimization. Thus, writing your own backend means implementing all of these passes, ensuring their correctness, and generating good machine code for every target you want to support.
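A classic example of why that proof obligation matters: floating-point addition is not associative, so a compiler generally can’t reassociate `(a + b) + c` into `a + (b + c)` even though the rewrite looks harmless (this is exactly what flags like `-ffast-math` explicitly opt you out of):

```python
# Floating-point addition is not associative: adding 1.0 to 1e16 is lost
# to rounding, so the grouping of the operations changes the result.
a, b, c = 1e16, -1e16, 1.0
print((a + b) + c)  # 1.0  (a + b cancels exactly, then + 1.0)
print(a + (b + c))  # 0.0  (the 1.0 is rounded away inside b + c)
```

Since the two groupings give different answers, the “optimized” form would change observable behavior, so a correct compiler must refuse it.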

This complexity is why most languages start with, and often remain on, LLVM as a backend. LLVM offers a comprehensive set of optimization passes for a wide variety of targets. So, as long as you can create a frontend that produces LLVM IR, you can essentially plug your frontend into LLVM and get optimized machine code for all the targets that LLVM supports, for free.
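That “plug-in point” is just textual (or in-memory) LLVM IR. Here is a sketch of the kind of IR a frontend might emit for a function like `int add1(int x) { return x + 1; }`; saved to a `.ll` file, LLVM’s own tools (`opt`, `llc`, or `clang`) can optimize it and turn it into machine code with no backend work on your part:

```python
# Hand-written LLVM IR a toy frontend might emit (held in a Python string
# purely for illustration; normally you'd generate this via LLVM's APIs).
IR = """
define i32 @add1(i32 %x) {
entry:
  %result = add i32 %x, 1
  ret i32 %result
}
"""
print(IR)
```

Every frontend that targets LLVM, from Rust to Swift to Julia, ultimately boils down to producing IR in this form.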

As for explaining every type of optimization, that I can’t really do, but I can recommend this video:

Understanding Compiler Optimization - Chandler Carruth
