Thanks @dimdin - I have a much cleaner way to do this these days, and we're doing that for Zigrad. Ultimately, it's just C-style linkage and isn't much different from anything else, tbh. The more familiar I got with the build system, the more it all started to look the same.
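For anyone curious what "just C-style linkage" looks like in practice, here's a minimal `build.zig` sketch of linking a C library (e.g. the CUDA runtime) like any other C dependency. All names here are illustrative assumptions, not Zigrad's actual build script:

```zig
const std = @import("std");

pub fn build(b: *std.Build) void {
    const target = b.standardTargetOptions(.{});
    const optimize = b.standardOptimizeOption(.{});

    const exe = b.addExecutable(.{
        .name = "demo", // hypothetical artifact name
        .root_source_file = b.path("src/main.zig"),
        .target = target,
        .optimize = optimize,
    });

    // C-style linkage: headers on the include path, then link the
    // shared library and libc, same as any other C dependency.
    exe.addIncludePath(b.path("include"));
    exe.linkSystemLibrary("cudart"); // e.g. the CUDA runtime
    exe.linkLibC();

    b.installArtifact(exe);
}
```

Once it's wired up like this, `@cImport`/extern declarations on the Zig side resolve against the library the same way a C program's would.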
In other news, I'm closing out Metaphor because Zigrad is going to be much more feature-complete! We're working on our first rewrite with full device support that can be disabled at comptime (Metaphor was GPU-only). We're really excited about the prospects, and we'll have a bunch of examples (yes, including LLMs) that run on CUDA. This is coming together quickly, so I don't anticipate a long wait!
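To give a flavor of what "disabled at comptime" means: a build option can select the device set, so CPU-only builds never even compile the CUDA code paths. This is a hypothetical sketch of the technique, not Zigrad's actual API:

```zig
const std = @import("std");

// Stand-in for @import("build_options"); in a real build.zig you'd expose:
//   const enable_cuda = b.option(bool, "cuda", "Enable CUDA") orelse false;
//   options.addOption(bool, "enable_cuda", enable_cuda);
const build_options = struct {
    pub const enable_cuda = false;
};

// The CUDA variant only exists in the type when the flag is set, so
// GPU-only branches are compiled out entirely on CPU builds.
const Device = if (build_options.enable_cuda)
    union(enum) { host, cuda: u32 } // payload: device index
else
    union(enum) { host };

pub fn main() void {
    const dev: Device = .host;
    std.debug.print("device: {s}\n", .{@tagName(dev)});
}
```

Because the switch over `Device` is over a comptime-known type, there's no runtime cost for the disabled backend at all.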
Also, we're actively looking for contributors. If you want to help us bring a full-scale, feature-complete, torch-style ML system to Zig, please get in contact!
If you haven't already, check out Zigrad: Deep learning faster than PyTorch
And to see the status of our new release and all the new changes, check out the unstable branch (and the other feature branches too): GitHub - Marco-Christiani/zigrad at unstable
Device Support PR: WIP: CUDA and redesign by Marco-Christiani · Pull Request #36 · Marco-Christiani/zigrad · GitHub