That is a common misconception. People write a small function in Compiler Explorer, assign a function pointer to a variable, call it immediately, and never use it again. Then they look at the generated code, see that the compiler optimized the call away, and conclude that devirtualization is common.
That's barely devirtualization; the pointer is just a constant the compiler folds away for convenience.
Anything beyond this trivial example, and devirtualization will fail.
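To make it concrete, this is roughly the kind of toy that gets benchmarked (a minimal Zig sketch, not anyone's real code): the pointer is assigned and called in the same scope and never escapes, so the optimizer can trivially prove the target and inline it. Real code, where the pointer comes from somewhere the compiler cannot see, does not look like this.

```zig
const std = @import("std");

fn double(x: i32) i32 {
    return x * 2;
}

pub fn main() void {
    // The pointer is assigned and called right here and never reassigned,
    // so the optimizer can prove the target, drop the indirect call, and inline it.
    const f: *const fn (i32) i32 = &double;
    std.debug.print("{d}\n", .{f(21)});
}
```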
The solution to that is generic programming. If code bloat is the concern, instantiate the generic function with a type-erased interface that carries the vtable. Now you have a single function for all your types, exactly as you do today. You can also mix and match: if the function needs to be really fast for one particular type, instantiate it for that specific type; for the other types, where you would rather avoid the bloat, instantiate it with the type erasure. See the sketch below.
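Here is a rough Zig sketch of what I mean. The names (`AnyCounter`, `Range`) are made up for illustration; the point is that one generic function serves both uses: instantiated per concrete type where you want speed, and instantiated once with a hand-rolled vtable interface where you want a single shared copy of the machine code.

```zig
const std = @import("std");

// A hand-rolled type-erased interface: an opaque pointer plus a vtable, so a
// single instantiation of `sum` can serve every concrete type behind it.
const AnyCounter = struct {
    ctx: *anyopaque,
    vtable: *const VTable,

    const VTable = struct {
        next: *const fn (ctx: *anyopaque) ?i64,
    };

    fn next(self: AnyCounter) ?i64 {
        return self.vtable.next(self.ctx);
    }
};

// One generic function. Instantiate it with a concrete type where you want
// speed, or with AnyCounter where you want one shared copy of the code.
fn sum(counter: anytype) i64 {
    var total: i64 = 0;
    while (counter.next()) |v| total += v;
    return total;
}

// An example concrete type, plus an adapter into the erased interface.
const Range = struct {
    i: i64,
    end: i64,

    fn next(self: *Range) ?i64 {
        if (self.i >= self.end) return null;
        defer self.i += 1;
        return self.i;
    }

    fn any(self: *Range) AnyCounter {
        return .{ .ctx = self, .vtable = &.{ .next = erasedNext } };
    }

    fn erasedNext(ctx: *anyopaque) ?i64 {
        const self: *Range = @ptrCast(@alignCast(ctx));
        return self.next();
    }
};

pub fn main() void {
    var fast = Range{ .i = 0, .end = 5 };
    var shared = Range{ .i = 0, .end = 5 };
    std.debug.print("{d}\n", .{sum(&fast)}); // specialized: sum(*Range)
    std.debug.print("{d}\n", .{sum(shared.any())}); // erased: sum(AnyCounter)
}
```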
C++ named this mechanism concepts. There were several proposals for it in Zig, all rejected. Some were quite elegant, but none addressed one of the main problems in generic programming, which is tooling: any proposal needs to take into account how an LSP would generate completions from what is written in the function signature. One proposal that gained traction was to allow arbitrary bool-returning functions in an anytype declaration, with the compiler rejecting any type for which the function returned false. It has its merits, but it is exactly the kind of thing an LSP would never be able to generate completions from.
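For illustration only (this is not the proposed syntax, just what today's Zig lets you do): you can already run such a bool-returning predicate as a comptime check inside the function body. The constraint is enforced, but precisely because it lives in the body rather than in the signature, tooling has nothing to work with.

```zig
const std = @import("std");

// A bool-returning predicate of the kind the rejected proposal would have let
// you attach to an anytype parameter (illustrative names, not proposed syntax).
fn isIndexLike(comptime T: type) bool {
    return T == u32 or T == u64 or T == usize;
}

// The closest equivalent in today's Zig: check the predicate inside the body.
// The constraint is enforced at compile time, but nothing in the signature
// `fn advance(index: anytype)` exposes it, so an LSP has no signature-level
// information from which to build completions.
fn advance(index: anytype) @TypeOf(index) {
    comptime {
        if (!isIndexLike(@TypeOf(index))) {
            @compileError("advance expects an index-like unsigned integer");
        }
    }
    return index + 1;
}

pub fn main() void {
    std.debug.print("{d}\n", .{advance(@as(usize, 41))});
    // advance(1.5) would fail to compile with the message above.
}
```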