Making Overloaded Function Sets Using Comptime

I agree with you both, but I don’t know how to resolve it.
The return type could be a single value, a tuple, void, or noreturn.

After playing around with it, I think noreturn is more appropriate. With the custom type we get false positives; according to the language documentation everything is compatible with noreturn, so we can consistently trigger the error message from call, and the code also gets a bit simpler.

With noreturn we always get the error message as the first error, and this error reads much clearer than the “not used” error, which can be wrong and misleading. Further, the NoMatchingOverload approach causes our compile error to be displayed second, and its “referenced by” section gets elided; we have to manually supply -freference-trace to see it. With noreturn we always see the “referenced by” section for our “No Overload found …” message.

I probably came up with the NoMatchingOverload type because somewhere in my mind I am still used to hacky C++ template stuff, but noreturn seems cleaner, more consistent, and more ziggy. That said, I don’t know a lot about noreturn or all the places where it is used, so I am not entirely sure whether I understand it completely or whether there are semantic nuances I might not be aware of.
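To illustrate the mechanism with a minimal sketch (hypothetical names, not the actual code): `@compileError` itself has type noreturn, which coerces to every other type, so the failure branch type-checks in any selection context and only fires when a bad call is actually analyzed.

```zig
// Minimal sketch (hypothetical names): selecting an overload at comptime.
// The @compileError expression has type noreturn, which coerces to the
// declared return type `fn (T) T`, so the error only fires when analysis
// reaches a call with an unsupported argument type.
fn pick(comptime T: type) fn (T) T {
    if (T == u32) return addOneU32;
    @compileError("No overload found for " ++ @typeName(T));
}

fn addOneU32(x: u32) u32 {
    return x + 1;
}
```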

@AndrewCodeDev you can just use it, maybe just add a link to this topic?

Ultimately, this is a collaborative effort at this point. If I’m outvoted, then I’m happy to move forward with what you two see as the best fit here. I can always just use my own version privately, but since this is a communal effort, I think it’s best to go with what the community decides is appropriate.

@Sze, @Tosti, if you guys both see value in this approach, I am happy to update the current version of this 🙂

I went a bit nuts and created a refactored version, then added a new feature: an optional “TypeConverter” function that can be used to implement implicit type conversion (e.g. converting a comptime_int to a u32 if it fits, or allowing an overload that expects a slice to be called with a string literal or array literal).

Because the code got longer due to the extra shenanigans, I made it into a gist instead. It also has a comment at the top describing some of my reasoning for why I changed it to use noreturn in a particular way and got rid of the indexing.
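Roughly, a converter is just a comptime predicate over a parameter type and an argument field; a minimal sketch of what implicitComptimeInt could look like (the signature is assumed from the `c(param.type.?, field)` call site below, and the explicit type list is a simplification for illustration):

```zig
const std = @import("std");

// Sketch of a converter predicate: given the declared parameter type and
// the std.builtin.Type.StructField of the argument tuple, report whether
// Zig's normal coercion rules would bridge the gap. A real version would
// also check that the comptime_int value actually fits the target type.
fn implicitComptimeInt(comptime Param: type, comptime arg_field: std.builtin.Type.StructField) bool {
    if (arg_field.type != comptime_int) return false;
    return Param == u8 or Param == u16 or Param == u32 or Param == u64 or
        Param == i8 or Param == i16 or Param == i32 or Param == i64;
}
```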

The noreturn stuff seems kinda hacky, but I also like it.
I also removed the inline from the for loops everywhere, because we aren’t really generating code; we just make a bunch of decisions and checks at comptime which result in one particular function being chosen.

Interesting! May we discuss it here?

  1. You pass a converter as one of the tuple elements. What about making a separate parameter specifically for the converters?
pub fn OverloadSet(comptime def: anytype, comptime converters: anytype) type;

converters: anytype because in this way it’s possible to pass a tuple of converters, thus removing implicitTypeConverters (at least the user of OverloadSet won’t need it). At the call site it will look like

const set = OverloadSet(.{
    nothing,
    string,
    string2,
    add,
    addMul,
}, .{ implicitArrayToSlice, implicitComptimeInt });

If you don’t want implicit conversions, you pass .{} (making DefaultTypeConverter redundant). Also, it should simplify the implementation, because there will be no if (skip(T)) checks, getTypeConverter, etc.

I don’t know whether it should be the first parameter of OverloadSet or the second.

  2. I think I’ve found a bug.
const match = params.len == args_fields.len and
    MATCH: for (params, args_fields) |param, field| {
        for (converters) |c| {
            if (c(param.type.?, field)) break :MATCH true;
        }
        if (param.type.? != field.type) break false;
    } else true;

For this code, it’s enough for a single argument to be convertible for the whole call to be considered a match, as in the following example

fn intAndString(_: i32, _: []const u8) void {}
const TestSet = OverloadSet(.{
    intAndString,
    implicitTypeConverters(.{ implicitComptimeInt })
});
TestSet.call(.{ 42, 43 });

The error

src\main.zig:211:80: error: expected type '[]const u8', found 'comptime_int'
            return @call(.always_inline, findMatchingFunction(def, args_type), args);
                                                                               ^~~~
src\main.zig:351:28: note: parameter type declared here
fn intAndString(_: i32, _: []const u8) i32 {
                           ^~~~~~~~~~

But you want the “No overload” error. The check should look like this:

const match = params.len == args_fields.len and
    for (params, args_fields) |param, field| {
        const convertible = for (converters) |c| {
            if (c(param.type.?, field)) break true;
        } else false;
        if (!convertible and param.type.? != field.type) break false;
    } else true;

@Tosti covered most of my thoughts here - by keeping the converters as a second argument, we can keep that hard constraint of no anytype params in the overload list.

A thought I had when reading your implicit-array-converter: we’re matching on s.is_const == p.is_const, but @Sze, have you checked whether this allows for upcasting to const? Like the []u8 → []const u8 kind of idea? I was going to compile this tonight and see what’s happening.

Basically, if we have an arg and a param, we’d have to allow for the following:

  • arg is const and param is const (matched)
  • arg is non-const and param is non-const (matched)
  • arg is non-const and param is const (upcast to const)

But of course prohibit…

  • arg is const and param is non-const

…which would already be a compile error.
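For what it’s worth, Zig already permits that non-const → const upcast at the call site; a quick check:

```zig
const std = @import("std");

fn takesConst(s: []const u8) usize {
    return s.len;
}

test "non-const slice coerces to a const parameter" {
    var buf = [_]u8{ 1, 2, 3 };
    const mutable: []u8 = buf[0..];
    // []u8 coerces to []const u8 implicitly; the reverse direction
    // would be a compile error.
    try std.testing.expectEqual(@as(usize, 3), takesConst(mutable));
}
```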

Also, converters seem like an interesting idea because you could have a converter that takes a str -> int and then finds an overload for that. So if you have a converter that takes in your argument type, it could then delegate to a function call. Once I’m off work I can read this more closely, so maybe @Sze can confirm or deny whether this is already happening?

On second thought though, I think that could make this a more bloated implementation.

I actually think OverloadSet is a good pre-converter itself; say we had:

foo: (str) -> int
bar: (MyStruct1) -> int
baz: (MyStruct2) -> int

You could have…

const int_converter = OverloadSet(.{ foo, bar, baz }, .{});

So in a sense, an OverloadSet itself could easily play the role of a pre-converter. Just an interesting use case, not really a complaint or suggestion.

Thank you for finding that bug, I didn’t think about that part deeply enough.

Initially I thought I would have many ideas for different kinds of other optional things, and I wasn’t quite sure whether I wanted to add an extra parameter for that, but as I continued I didn’t really have ideas beyond the implicit converters.

I thought there would be other kinds of TypeConverters, and then you would combine those converters into one and pass it in. I also thought that maybe something could make use of the ordering of the elements in def, with other special things placed in there to mark or group, etc., but without ideas for other useful things to do, all of that isn’t needed. With only implicitTypeConverter being used, your suggestion of passing converter functions directly as a tuple makes sense and allows simplification.

I sometimes play around with ideas of using comptime data structures to describe things in a DSL-like manner; that is probably why I wrote it that way. It is me playing around and being silly, trying to stretch the bounds of the language and see if I can find some useful or interesting pattern.

So I think your simplification is definitely one point in the design space that makes this feature set more practical, but I am also still curious whether we could find other useful things to do with the idea.

Vaguely I thought these things could make sense to explore:

  • What if we allowed multiple functions to have the same signature?
    If the functions all returned optionals, that could mean: try the functions in order
    until one of them returns non-null or it was the last one.
    This would probably mean that call gets more complex to handle these cases. But it would also mean we need to use indexes again.

  • What would a matrix/vector/math library author like from something like this?
    Maybe some kind of way to not only call one function, but instead select a series of calls and select the right sizes and variants to minimize some kind of error value?
    With that you would probably end up with something that looks more like some kind of computation graph.
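A quick sketch of the first idea (hypothetical names, with the signature fixed to ?u32 for brevity; a real version would derive it from the function set): dispatch over the same-signature candidates in declaration order until one returns non-null.

```zig
const std = @import("std");

// Hypothetical sketch: try same-signature candidates in order until one
// of them returns a non-null result.
fn firstNonNull(comptime fns: anytype, arg: []const u8) ?u32 {
    inline for (fns) |f| {
        if (f(arg)) |v| return v;
    }
    return null;
}

fn parseHex(s: []const u8) ?u32 {
    return std.fmt.parseInt(u32, s, 16) catch null;
}

fn parseDec(s: []const u8) ?u32 {
    return std.fmt.parseInt(u32, s, 10) catch null;
}

test "ordered dispatch returns the first non-null result" {
    // "zz" fails both parsers; "ff" parses as hex (tried first) -> 255.
    try std.testing.expectEqual(@as(?u32, null), firstNonNull(.{ parseHex, parseDec }, "zz"));
    try std.testing.expectEqual(@as(?u32, 255), firstNonNull(.{ parseHex, parseDec }, "ff"));
}
```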

Sounds like OrderedOverloadSet if you ask me 🙂

The const conversion is definitely something that can be improved here; my reasoning at the time was that I would just do exact matching for now, and that this would need to be added / thought about more later.

Currently the converters only check whether something should work automatically by what Zig already does, so they basically just allow the matching to bridge the gap between exact types and what Zig converts for us. The good thing about that is that I can write my converter code and have Zig yell at me when it is wrong (if I remember to write enough examples).

Adding a feature that allows conversion beyond that, with custom conversions, would also make sense, but first I would want to collect some compelling use cases before doing the work of changing the converters to also be able to explicitly change arguments before they are passed to an overload.

Here’s an elegant way to check if they are const-compatible.

const std = @import("std");

pub fn constCompatible(arg_const: bool, param_const: bool) bool {
    return (arg_const or param_const) == param_const;
}

pub fn main() !void {
    std.debug.print("Is compatible: {}\n", .{constCompatible(false, true)}); // true
    std.debug.print("Is compatible: {}\n", .{constCompatible(false, false)}); // true
    std.debug.print("Is compatible: {}\n", .{constCompatible(true, true)}); // true
    std.debug.print("Is compatible: {}\n", .{constCompatible(true, false)}); // false
}

The reason this works is that we need to mimic arg <= param: if we consider non-const to be 0 and const to be 1, the argument must be less than or equal to the parameter (we can always cast the lesser up, but we can’t cast down).

Since max(x, y) for bool is x or y, we’re just checking that param_const is always equal to the max of the two.
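Another way to see it: const-compatibility is just logical implication, “if the argument is const, the parameter must be const”; the two formulations agree on all four cases:

```zig
const std = @import("std");

// Equivalent formulation: arg_const implies param_const.
pub fn constCompatibleImpl(arg_const: bool, param_const: bool) bool {
    return !arg_const or param_const;
}

test "matches the max-based formulation on all four cases" {
    const cases = [_][2]bool{
        .{ false, false }, .{ false, true },
        .{ true, false },  .{ true, true },
    };
    for (cases) |c| {
        const expected = (c[0] or c[1]) == c[1];
        try std.testing.expectEqual(expected, constCompatibleImpl(c[0], c[1]));
    }
}
```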

I think it should be a completely separate struct. It can be implemented separately but used in combination with OverloadSet. It looks like a basic building block that may be useful outside of the overloading context.

1 Like