Hi, I’ve started a jq-like utility project, but for ZON. Seeing how quickly ZON is being developed and adopted in the compiler, it might already prove useful.
So, if you’re using Zig master, try running this in the zq repo:
That way, packages that depend on zq will be able to define build steps that update .version and .minimum_zig_version in their build.zig.zon (it’ll be even easier once we’re able to import build.zig.zon in build.zig).
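For a sense of what that could look like, here’s a minimal sketch of such a build step; the zq flags and query syntax in it are hypothetical, purely to illustrate the shape of the integration:

```zig
// build.zig -- a sketch only; the `zq` arguments below are made up,
// not zq's actual CLI.
const std = @import("std");

pub fn build(b: *std.Build) void {
    // Run zq as a system command to rewrite a field in build.zig.zon.
    const bump = b.addSystemCommand(&.{
        "zq",
        "--set", ".version=0.2.0", // hypothetical flag/query syntax
        "build.zig.zon",
    });

    // Expose it as `zig build bump-version`.
    const bump_step = b.step("bump-version", "Update .version in build.zig.zon");
    bump_step.dependOn(&bump.step);
}
```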
Also, I’ve been wondering: why have a public API function take a parameter of type [:0]const u8 instead of []const u8, as std.zig.Ast.parse does for its source parameter?
Naturally, this API choice forces dependencies like zq to use the same restrictive type, and to require it of their own users in turn.
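To make that concrete: as of recent master, the signature is roughly `pub fn parse(gpa: Allocator, source: [:0]const u8, mode: Mode) Allocator.Error!Ast`, and since a plain slice doesn’t coerce to a sentinel-terminated one, a caller holding only a []const u8 has to copy. A minimal sketch (this parseZon wrapper is hypothetical, not zq’s API):

```zig
const std = @import("std");

// Hypothetical wrapper: a caller that only has a []const u8 must
// allocate a NUL-terminated copy to satisfy Ast.parse's signature.
fn parseZon(gpa: std.mem.Allocator, source: []const u8) !std.zig.Ast {
    const sourcez = try gpa.dupeZ(u8, source);
    // Not freed here: the returned Ast keeps referencing the source
    // it was given, so the copy must outlive the Ast (caller frees both).
    return std.zig.Ast.parse(gpa, sourcez, .zon);
}
```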
I understand why you’d always specify the [:0]const u8 type for comptime function parameters that are only ever meant to take string literals. But otherwise, with the exception of C ABI compatibility, which in the case of std.zig.Ast.parse is largely irrelevant, I can’t really think of any other good reason.
I’m just a little on the fence about this; let me try to explain. The reason is that it would lead users to fall back on JSON processing whenever they find a ZON processing feature missing, which would somewhat inhibit not only zq’s development but ZON adoption in general.
In that respect, adding a --translate-zon feature to zq for translating ZON to JSON would be somewhat like adding a translate-zig feature to zig for translating Zig to C, if you know what I mean.
But I’ll think about adding this JSON output functionality, along with a note asking users to open a “feature request” issue for any functionality they find missing in zq.
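If it does land, the serialization half is already in the standard library; here’s a minimal sketch of the idea, assuming the ZON has first been decoded into a Zig value (the Manifest struct is hypothetical):

```zig
const std = @import("std");

// Hypothetical decoded form of (a subset of) a build.zig.zon manifest.
const Manifest = struct {
    name: []const u8,
    version: []const u8,
};

pub fn main() !void {
    const m: Manifest = .{ .name = "zq", .version = "0.1.0" };
    // std.json can serialize a value like this directly.
    try std.json.stringify(m, .{}, std.io.getStdOut().writer());
    // prints: {"name":"zq","version":"0.1.0"}
}
```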
Oh, I get it now. Slices of Zig source code are required to be zero-terminated, because that terminating zero is one of the bytes that the tokenizer’s state machine switches on to either emit an .eof token or invalidate state:
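Here’s the gist of that trick as a self-contained sketch, rather than the actual tokenizer code (this countWords scanner is made up, purely illustrative):

```zig
const std = @import("std");

// Illustrative only -- not std.zig.Tokenizer. With a [:0]const u8 input,
// the 0 sentinel is just another byte the state machine dispatches on,
// so the hot loop needs no separate bounds check.
fn countWords(source: [:0]const u8) usize {
    var count: usize = 0;
    var in_word = false;
    var i: usize = 0;
    while (true) : (i += 1) {
        // Indexing at source.len is legal for a sentinel slice; it yields 0.
        switch (source[i]) {
            0 => {
                if (i == source.len) return count; // the real end of input
                in_word = false; // stray NUL mid-source: the "invalid" case
            },
            ' ', '\t', '\n' => in_word = false,
            else => {
                if (!in_word) count += 1;
                in_word = true;
            },
        }
    }
}

test "sentinel-terminated scanning" {
    try std.testing.expectEqual(@as(usize, 3), countWords("one two three"));
}
```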
So, it’s by design, which actually does make sense since it simplifies tokenization a bit.
Whoops, yeah, I looked right past the tokenizer while trying to find the parser’s signature. It’s probably better for performance to represent the end of source as a byte state rather than have a separate index check before each advance. That’s an empirical question, but it would be my guess.
Parse can get away without the sentinel, and that should let slices of the source code / token stream be parsed in isolation, although casual perusal suggests this isn’t currently being taken advantage of. That calms my sense of order, which would otherwise demand that Parse also store its source sentinel-terminated. If the sentinel is an accidental invariant rather than part of the contract, the code isn’t obligated to uphold it, whether or not the loosened contract is being used to advantage right at the moment.
I understand your hesitation. However, as an avid nushell user, I’ve learned the value of being able to easily convert data between arbitrary formats (see here). Adding support for JSON output opens the door to a whole lot of other integrations (because JSON is so widespread).
That being said, perhaps this is a great place to add some “friction”, as the Zig team likes to call it. Maybe the JSON converter could be a separate utility (binary) or something?
What kinds of adoption are you hoping to see for ZON?