Source code rewrite transformations

i’m trying to optimize my end-user’s experience within vscode – which for me means that hover, hints, and intellisense are as accurate and informative as possible… yes, i know that the current zls will someday be replaced with the “incremental compiler”; but that could be a long time from now, and i’d like to at least get some near-term improvement…

to keep it simple, my criterion for “success” is this: the end-user enters a “.” after some identifier and hopes to see a list of features they can select…

with a relatively simple source file, what we have today is certainly “good enough”… where i’ve noticed things start to break down is with the introduction of (some) generic functions that synthesize types in a rather complex fashion…

here’s some “simple” pseudo-code which the zls handles properly:

pub const Itab = struct {
    foo: *const @TypeOf(foo) = &foo,
    bar: *const @TypeOf(bar) = &bar,
};

pub fn foo() void {
    // (body elided)
}

pub fn bar(x: u32) u32 {
    return x; // placeholder (body elided)
}

details aside, you can certainly see the pattern here… given an instance iobj: Itab, typing “iobj.” in vscode gives you choices bar and foo as you would expect…

it turns out that i’m able to “reify” the Itab struct with a generic function that locates all the public functions of this source file… certainly less tedious than maintaining Itab manually!!!
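
(something along these lines – a sketch only, not my exact code, written against the zig 0.13-era std.builtin.Type layout; the tag and field names have shifted in newer versions, and it assumes the public functions have concrete, non-generic signatures…)

const std = @import("std");

// build a struct whose fields are `*const fn ...` pointers, one per public
// function of namespace `Ns`, each defaulted to that function's address --
// i.e. the same shape as the hand-written Itab above
fn ReifyItab(comptime Ns: type) type {
    const decls = @typeInfo(Ns).Struct.decls;
    comptime var fields: [decls.len]std.builtin.Type.StructField = undefined;
    comptime var n: usize = 0;
    inline for (decls) |d| {
        const F = @TypeOf(@field(Ns, d.name));
        if (@typeInfo(F) == .Fn) {
            const Ptr = *const F;
            const fn_ptr: Ptr = &@field(Ns, d.name);
            fields[n] = .{
                .name = d.name,
                .type = Ptr,
                .default_value = @ptrCast(&fn_ptr),
                .is_comptime = false,
                .alignment = @alignOf(Ptr),
            };
            n += 1;
        }
    }
    const out = fields[0..n].*; // copy out of the comptime var
    return @Type(.{ .Struct = .{
        .layout = .auto,
        .fields = &out,
        .decls = &.{},
        .is_tuple = false,
    } });
}

// usage: const Itab = ReifyItab(@This());
// (keeping Itab itself non-pub keeps it out of the decl scan)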

but now, zls can’t really grok the reified Itab; and the user gets no suggestions in the editor when typing “iobj.” as the type of iobj is “unknown”… but obviously the underlying compiler has no issue here, and outputs the same object code as the original “hand-generated” version…

[[ BTW, would a future zls that is based on the incremental compiler “fix” this deficiency??? ]]

standing back, i find that i’m applying a number of design patterns in my codebase which ultimately introduce some redundancy in the sources… in SOME cases, i can implement a generic function to handle the pattern (though with a downgraded zls experience)…

but then, there are other cases where the alternative to manually realizing some pattern would be some sort of source code transformation…

going back to the original example, suppose i had simply declared Itab as my interface specification:

const Itab = struct {
    foo: *const fn() void,
    bar: *const fn(x: u32) u32,
};

armed with this declaration, i would like to generate a set of stub implementation functions – which basically swallow their args and return a “zero” value where necessary…
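
for example, given the Itab spec above, the generated stubs might look something like this (just an illustration of the target output)…

// generated stubs: swallow the args, return a "zero" value where needed
pub fn foo() void {}

pub fn bar(x: u32) u32 {
    _ = x;
    return 0;
}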

[[ when i worked with java in eclipse, the “experience” of implementing an interface was great; adding skeletal code to the file open in the editor was just a right-click away… ]]

using Ast.parse and Ast.render with the editor’s current contents, i can easily see my way through this… and these transformations can happen outside of zls itself, which simply needs to stay in sync with the editor’s contents…
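
(a minimal round-trip sketch with the zig 0.13-era std.zig api – any rewriting would happen between the parse and the render…)

const std = @import("std");

// parse the editor buffer, inspect/extend the tree, render it back as
// formatted zig source; the caller owns the returned slice
pub fn roundTrip(gpa: std.mem.Allocator, source: [:0]const u8) ![]u8 {
    var tree = try std.zig.Ast.parse(gpa, source, .zig);
    defer tree.deinit(gpa);
    if (tree.errors.len != 0) return error.ParseFailed;

    // ...walk tree.rootDecls() here, find the Itab declaration,
    // synthesize stub functions, splice them into the output...

    return tree.render(gpa);
}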

as suggested earlier, these sorts of transformations should not only simplify the application of certain design patterns, but should also yield (in general) “more lines” of code that are otherwise “simpler” for zls to grok…

these ideas are not unrelated to this post, as well as to discussions on “source-code templating”… given that i would like my user to “see” the zig code that is also “seen” by zls, how should i approach an “interactive” implementation within a smart editor???

Hopefully, and it’s hard to imagine anything else doing the trick. Comptime is a full-strength programming language, so in the general case, ‘seeing through’ the result of comptime execution requires access to the evaluator, or a perfect reimplementation of it.

ZLS has made strides on handling various special cases, and Loris Cro has a great blog post about getting build-on-save diagnostics, which also helps. But only the compiler has complete insight into what happens at comptime, so the options are duplicate the compiler (and keep the duplicate up to date with changes to Zig) or integrate the compiler into the language server in some fashion.

for sure, integrating the compiler with zls should help with comptime…

but what about some situations where a comptime solution isn’t possible – such as my example requiring an empty stub function implementing some function type…

and even when i can write a comptime function to return some specialized type, an alternative implementation via a source-code rewrite might actually “help” the current zls by simplifying the code…

i’m looking for some general guidance on where/when/how source-to-source transformation makes sense in zig…

Ideally? Never. I don’t expect codegen or source rewrites to ever be directly supported by the language; “no macros” is not a negotiable position as I understand it.

In practice, until comptime gets allocators, there are reasons to do it. RuneSet has a serialize-as-source function because making it generable at comptime was a headache I wasn’t willing to take on: that significant effort would be wasted once comptime does support allocators, and actually emitting a RuneSet as a Zig string is pretty easy.

You might have a use case where the approach you’re most familiar with is better than what you’d accomplish by mastering comptime; it’s hard to tell, but my guess would be that you don’t.

But codegen is the ultima ratio scribum, if you will. There will always be a few things which can’t be cleanly taken care of in another way.

Zig goes to great lengths to minimize those cases. It isn’t Java, and a lot of things which make sense in Java won’t in Zig.

i’m not suggesting anything that can’t be expressed in the language as is… but right now, as an example, it’s not possible to “reify” a struct type that contains function definitions as well as static variables – only fields…

but armed with no more than an AST, i can easily do what i need… and since the AST preserves (but does not necessarily understand) comments, i can always “annotate” using comments with special syntax… yes, it’s a “language within a language” – but then so are the format strings passed to print
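
(for instance, a made-up marker comment – the rewriter would scan for it in the parsed source; it means nothing to the compiler itself…)

// zig-rewrite: implement(Itab)   <-- hypothetical annotation syntax
const iobj_impl = struct {
    // ...the rewrite tool splices the generated stubs in here...
};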

another use-case is to create “linters”, which enforce things like naming conventions – something beyond what the compiler should even attempt…
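
(e.g., a toy pass over the token stream of a parsed file, using the zig 0.13-era api – deliberately naive, since type-returning functions are conventionally TitleCase…)

const std = @import("std");

// complain about any named fn that doesn't start with a lowercase letter,
// using nothing but the token tags of an already-parsed Ast
fn lintFnNames(tree: std.zig.Ast) void {
    const tags = tree.tokens.items(.tag);
    var i: u32 = 0;
    while (i + 1 < tags.len) : (i += 1) {
        if (tags[i] == .keyword_fn and tags[i + 1] == .identifier) {
            const name = tree.tokenSlice(i + 1);
            if (!std.ascii.isLower(name[0]))
                std.debug.print("naming: fn `{s}` is not lowerCamelCase\n", .{name});
        }
    }
}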

and then there’s the generation of idiomatic “boilerplate” code motivated by some domain-specific application framework that happens to use zig as its host programming language…

Maybe? You can use usingnamespace to add declarations to a struct:

fn StructBuild(T: type, U: type) type {
    return struct {
        a: T,
        b: usize,

        const which_namespace = comptime if (T == SomeType)
            ContainerA(@This(), T)
        else
            ContainerB(@This(), U);

        pub usingnamespace which_namespace;
    };
}

That kind of thing? ContainerA and ContainerB would also be functions which take types as arguments and return struct types used as namespaces.
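
For instance (a sketch; the function names and bodies are just placeholders):

fn ContainerA(comptime Self: type, comptime T: type) type {
    return struct {
        // `self` is the outer struct built by StructBuild, passed in as @This()
        pub fn describe(self: Self) T {
            return self.a; // field `a` was declared with type T
        }
    };
}

fn ContainerB(comptime Self: type, comptime U: type) type {
    return struct {
        pub fn fallback(_: Self) U {
            return 0; // assumes U is a numeric type
        }
    };
}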

The return values don’t need to have fields:

const std = @import("std");
const expectEqual = std.testing.expectEqual;
const expect = std.testing.expect;

const NoField = struct {
    pub fn something() usize {
        return 5;
    }

    pub fn somethingElse() bool {
        return true;
    }
};

test "No field needed" {
    try expectEqual(5, NoField.something());
    try expect(NoField.somethingElse());
}

The functions you define this way can use arguments to whatever function is returning the type, and you can even pass in the type you’re defining with @This().

If I follow what you mean by reification here, this might get the job done.
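
Putting the pieces together, with a stand-in for SomeType (hypothetical, building on the ContainerA sketch above):

const SomeType = u8; // stand-in for whatever SomeType is in your code

test "decls mixed in via usingnamespace" {
    const Built = StructBuild(SomeType, u32);
    const b: Built = .{ .a = 42, .b = 1 };
    // `describe` was brought in from ContainerA by the usingnamespace
    try expectEqual(@as(SomeType, 42), b.describe());
}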