--fork is going to make it much nicer to track zig master.
Love the zig-pkg change and the idea of torrenting dependencies is potentially brilliant.
I have always thought using the cloud for things like build caches was wasteful, especially from what I have seen in enterprise software. P2P is such a seemingly obvious solution.
Then again, that would be taking cash flow away from the almighty cloud providers
wow, --fork is amazing!!
My only feedback is that I keep reading fork as a verb, which then makes me wonder “why is fork taking an argument and what would that mean in this context?”, and then I have to remember that in this instance “fork” is probably a noun.
Maybe it’s because my autistic brain needs more clarity, but perhaps something like override-dep would be less ambiguous.
I mean, it is both a verb and a noun.
But also verbs act on things, “what are you forking?”.
I guess the confusion comes from passing an existing fork, which does indeed make it a noun.
autism gang :3
The zig-pkg change is huge. I, and probably many others, work at a company where you need to be able to build every single project offline, and to be able to do so in 20 years if needed.
I love the zig-pkg change. This is also huge for AI agents. I have seen my agents go through .zig-cache every now and then to figure out libraries (it was a bit overwhelming, since they had to go through many files). zig-pkg makes this easier, I think.
I love the idea of a distributed network of torrented dependencies, but I think my computer would light up like a Christmas tree to the IT department in a lot of work environments if I was suddenly seeding potentially gigabytes of data every day at work.
Fortunately it’ll be opt-in via a flag
It seems it would be trivial to also provide rate limits and caps. I would also hope for a hybrid approach: in the case where there are not enough (or no) seeders, it could fall back to a traditional cloud download.
Either way, I really like the idea of using P2P for fetching packages.
As someone playing around with LFS and struggling to build software because of dependency hell, making vendoring of third-party libraries in the source tree first class is a huge win.
As an example: playing around with extlinux, bzImage, busybox, and Zig booted in QEMU, I was able to build all my pure Zig projects without a single problem in what would be considered a hyper-minimal environment. Knowing that in the future I could build Zig projects in these mini distros without an internet connection is a huge win.
I’m not sure I understand fork. The way I’m currently working is that I have my build.zig.zon pointing to local forks, editing there as needed. This seems easier to me than passing --fork every time I build (I understand I can put it in a justfile or something).
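For anyone unfamiliar with that workflow: a dependency in build.zig.zon can point at a local checkout via a path instead of a URL. A rough sketch below — the project name, dependency name, and path are just examples, and the exact zon schema varies a bit between Zig versions:

```zig
// build.zig.zon — sketch of pointing a dependency at a local fork.
// Names and paths are illustrative, not taken from the post.
.{
    .name = "myproject",
    .version = "0.1.0",
    .dependencies = .{
        .sokol = .{
            // A relative .path instead of .url/.hash makes Zig
            // read the dependency from disk, no fetching involved.
            .path = "../sokol-zig",
        },
    },
    .paths = .{""},
}
```

The downside, as noted in the replies, is that this edit lives in build.zig.zon itself and is easy to commit by accident.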
that’s what i’ve done till now when doing the old update shuffle on projects with dependencies. this quickly becomes a headache in the event of transitive dependencies needing updates and is a bit annoying when using jj for version control. nothing i can’t handle, of course, but i’m excited to have a less manual option
I’m not sure I understand fork.
I’m not sure I understand it completely either, but it solves one common problem for me: working with dependencies that have local changes which are not yet committed to git (e.g. the same problem solved by npm link).
…e.g. with my pacman.zig project here:
I have a dependency to sokol-zig (GitHub - floooh/sokol-zig: Zig bindings for the sokol headers (https://github.com/floooh/sokol)) in build.zig.zon.
Now let’s say I want to test pacman.zig with a locally modified version of sokol-zig. In the past I rewired the dependency URL in pacman.zig’s build.zig.zon to a local directory, e.g.:
zig fetch --save=sokol ../sokol-zig
…and then being careful not to accidentally commit the modified build.zig.zon.
Now I can do this --fork thing instead and it picks up the local sokol-zig without modifying the build.zig.zon:
zig build --fork=../sokol-zig
info: fork ../sokol-zig matched 1 sokol packages
…not sure if I’m missing another use case here, and also not sure why it is a ‘transient’ command line arg instead of a persistent change… e.g. I probably would have preferred something like this:
zig link sokol ../sokol-zig
…this would ‘link’ the sokol dependency to a different URL and store that link in a local file like .zig-cache/links.zon until it is unlinked again via zig unlink sokol.
During build the same info log would remind the user that there are overridden dependencies:
zig build
info: dependency 'sokol' linked to '../sokol-zig'.
…this might make it easier when more than one dependency is involved, but maybe I’m missing some feature/usage of --fork which enables more complex scenarios like these without having to add one --fork per overridden dependency.
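To illustrate the hypothetical link idea above, the proposed .zig-cache/links.zon might look something like this. To be clear, neither zig link nor this file exists in Zig today; this is purely a sketch of the proposal:

```zig
// Hypothetical .zig-cache/links.zon — one entry per overridden dependency.
// Everything here is illustrative; no such file exists in Zig.
.{
    .links = .{
        .sokol = .{
            // Where the 'sokol' dependency is temporarily linked to.
            .path = "../sokol-zig",
        },
    },
}
```

Removing an entry (via the proposed zig unlink) would restore the normal dependency resolution from build.zig.zon.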
One key difference is that --fork will override that dependency also for your dependencies.
In the awebo case (from the original devlog), both awebo itself and dvui depend on zigwin32, and with the flag both instances were replaced.
will zig init create the zig-pkg dir going forward, or will that have to be a manual opt-in even for new projects?
it will probably be created on demand, much like zig-out and .zig-cache are already.
That would be whenever a package is fetched, or perhaps when --fork is used depending on how it works.
--fork would prevent zig-pkg directory from being created, because it takes effect before fetching.
For example, say your project has only 1 dependency, bar. You don’t have bar fetched yet. But you do have a source checkout in /home/you/bar which happens to be the correct version, or perhaps newer but still compatible. You don’t have Internet connectivity right now, so you use zig build --fork=/home/you/bar. In this case, zig-pkg directory is not created because all dependencies are already fetched or overridden.
100% agree. As a person who is very much into ownership and preservation, this is something I’m highly thankful that Zig added.