For projects where I’m building wasm and testing in a browser, I need to serve the files from a real httpd so that cross-origin resource sharing (CORS) works. Just opening the files directly in the browser via file:// doesn’t.
Does anyone know of a way to run a zig httpd from build.zig to serve the produced artefacts?
My current solution is a dockerised httpd, since I don’t want to assume that other developers have particular tools installed. A Zig solution handled transparently by build.zig would be much better.
In the future you should be able to use the --watch flag (to zig build) for this. I believe one component is still missing: a way for a process spawned by the build runner to hook into the watch system and receive events.
However, that mechanism isn’t really needed for HTTP servers, because there’s a simpler way: make that package serve the files from disk rather than from memory (or make an alternative package that does), in which case --watch already works as-is.
Still, the build system event mechanism is a general-purpose solution for arbitrary build graphs, so it could be made to work for HTTP servers that serve from memory as well, at least as a proof of concept. You could imagine a more complicated scenario where the server needs to perform some application-specific logic to maintain data invariants that go beyond serving files from disk.
Edit: ah looks like I already filed an issue for this:
Basically the server fork()s to the background so --watch can start, and the next server instance terminates the old one by sending it a request. JavaScript injected into an HTML page queries the server for its start time and reloads the page when a newer server instance is detected.
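The injected reload script could look something like the following sketch. The `/start-time` endpoint name and the polling interval are assumptions, not details from the thread:

```javascript
// Injected into served HTML pages. Polls the dev server for its start
// timestamp; once a poll returns a different value than the first one,
// a newer server instance (and therefore a fresh build) is up, so the
// page reloads itself.
function watchForRestart(intervalMs = 500) {
  let knownStart = null;
  return setInterval(async () => {
    try {
      const res = await fetch("/start-time"); // endpoint name is an assumption
      const start = await res.text();
      if (knownStart === null) {
        knownStart = start; // first successful poll: remember the baseline
      } else if (start !== knownStart) {
        location.reload(); // a newer server instance answered
      }
    } catch {
      // Server briefly down while the new instance takes over; keep polling.
    }
  }, intervalMs);
}

// In the browser this would be started from the injected <script> tag:
// watchForRestart();
```

Polling on the page side keeps the server stateless: it only has to report when it started, and every open tab decides for itself when to reload.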
To prevent the forked server from outliving zig build, it receives zig build’s process id in the environment and checks whether that process is still running.