I was watching an interview about Mojo, and its developer, Chris Lattner, said that it runs on MLIR, the successor to LLVM. Since Zig uses an LLVM backend, is Zig going to switch eventually, or is there no need for that?
As I understand it, Zig will gradually move away from LLVM, not to any other backend, but to doing everything itself:
That sounds like a crazy amount of work.
…yup, I'm afraid I don't know the full story; it may be worth looking into the linked thread.
From what I’ve read, the idea is to treat LLVM as just another backend option, unlike the current situation where it’s a required dependency. Once the transition is complete, a custom, fast x86 backend will be used for Debug builds, while LLVM will remain an option for producing optimized release builds, though not the only one; maybe MLIR could be offered as another option too.
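For a rough picture, here's a minimal build.zig sketch of what per-mode backend selection could look like. I'm assuming the `use_llvm` knob on the compile step keeps roughly its current shape; treat the exact field name and behavior as my guess, not settled API:

```zig
const std = @import("std");

// Minimal sketch: use the self-hosted backend for Debug builds and LLVM
// for optimized builds. `use_llvm` is assumed to stay an optional bool on
// the compile step; this is an illustration, not a settled interface.
pub fn build(b: *std.Build) void {
    const target = b.standardTargetOptions(.{});
    const optimize = b.standardOptimizeOption(.{});

    const exe = b.addExecutable(.{
        .name = "app",
        .root_source_file = b.path("src/main.zig"),
        .target = target,
        .optimize = optimize,
    });

    // Debug: fast self-hosted codegen; anything else: let LLVM optimize.
    exe.use_llvm = (optimize != .Debug);

    b.installArtifact(exe);
}
```

If I remember the flag names right, the same choice is already exposed on the command line via -fllvm / -fno-llvm.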
I thought MLIR compiled down to LLVM IR? If so, Zig already has its own pre-LLVM IR layers (ZIR and AIR).
Would be awesome if Zig could eventually do everything on its own, but I'm guessing it's hard to compete with LLVM, since it's backed by so many big corporations and has an insane amount of optimizations.
Fair enough, but you could also say that other compilers should have cross-compilation support as good as Zig’s, or should support incremental linking. Zig is not at the bottom of the list in terms of compiler tech; actually, it’s at the top of the list in a lot of categories.
I think it’s possible we can develop something better in the Zig community.
MLIR offers no advantages unless one is targeting heterogeneous architectures: GPU, TPU, and so on.
Zig is a CPU-focused language; there’s no advantage to adding it to the project unless that changes.
“MLIR offers no advantages unless one is targeting heterogeneous architectures: GPU, TPU, and so on.”
AFAIK I don’t think that’s totally true: it’s trying to stay ahead of (and enable) architecture improvements that exist (and are under development) everywhere. For example, many recent CPUs have matrix-multiplication extensions, and MLIR makes it much easier for the compiler to discover and target them, which ends up producing much faster code. I may be wrong, though.
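To make the CPU-side point concrete while staying in Zig (not MLIR), here's a toy sketch of the kind of code you'd rather let the compiler map onto whatever SIMD or matrix units the target has; the 4x4 size and names are just for illustration, and this says nothing about what MLIR itself would emit:

```zig
const std = @import("std");

// Toy 4x4 f32 matmul written with @Vector rows so the backend is free to
// lower the multiply-accumulate onto the target's SIMD (or matrix) units.
// Purely an illustration of "let the compiler figure it out".
const Vec4 = @Vector(4, f32);

fn matmul4(a: [4]Vec4, b: [4]Vec4) [4]Vec4 {
    var c: [4]Vec4 = .{ @splat(0.0), @splat(0.0), @splat(0.0), @splat(0.0) };
    for (0..4) |i| {
        for (0..4) |k| {
            // Broadcast a[i][k] across a vector and accumulate row k of b.
            c[i] += @as(Vec4, @splat(a[i][k])) * b[k];
        }
    }
    return c;
}

test "multiplying by the identity returns the input" {
    const ident: [4]Vec4 = .{
        .{ 1, 0, 0, 0 },
        .{ 0, 1, 0, 0 },
        .{ 0, 0, 1, 0 },
        .{ 0, 0, 0, 1 },
    };
    const m: [4]Vec4 = .{
        .{ 1, 2, 3, 4 },
        .{ 5, 6, 7, 8 },
        .{ 9, 10, 11, 12 },
        .{ 13, 14, 15, 16 },
    };
    const out = matmul4(m, ident);
    try std.testing.expectEqual(@as(f32, 7.0), out[1][2]);
}
```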
That sounds like Mojo hype to me. A lot of that going around lately.
Given the standing of LLVM in software, and the relationship MLIR and LLVM have, if MLIR innovates techniques which result in genuinely faster CPU code (and not just VC-backed hype trading on Chris Lattner’s reputation), those techniques will end up in LLVM. MLIR isn’t pixie dust; it’s a sub-project of LLVM.
I am not sure if that will be the case for everything.
From https://mlir.llvm.org:
Other lessons have been incorporated and integrated into the design in subtle ways. For example, LLVM has non-obvious design mistakes that prevent a multithreaded compiler from working on multiple functions in an LLVM module at the same time. MLIR solves these problems by having limited SSA scope to reduce the use-def chains and by replacing cross-function references with explicit symbol references.
That quote, and this video https://youtu.be/ovYbgbrQ-v8?t=3500, sound to me like MLIR may be able to do things that can’t easily be integrated into LLVM, and also like it isn’t merely a sub-project.
It’s easy to promise a lot when you haven’t done squat yet.
We’ll see what happens. Until then, my position is that LLVM will take what it needs to from MLIR, and the latter will be mostly useful on chips which aren’t CPUs.
On the other hand, if someone wants to spearhead a whole MLIR backend for Zig, don’t let me stop you!
There is also work to make use of MLIR in clang (see llvm/clangir and llvm/polygeist). That said, flang, the LLVM-based Fortran compiler, already uses MLIR; gfortran, the GCC Fortran compiler, still outperforms it in most benchmarks, IIRC.