Zig-to-Zig Dynamic Libraries/Modules

Zig to C / C to Zig

Zig is often touted for its excellent compatibility with the C ABI, and it’s true. There are many examples of writing code in Zig and using it in C/C++, or vice versa:

lib.zig

export fn sum(a: c_int, b: c_int) c_int {
    return a + b;
}

lib.h

extern int sum(int, int);
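
And the other direction works just as well. A minimal sketch of calling a C-ABI sum from Zig (assuming the C implementation is linked into the build; main.zig is a hypothetical file name):

main.zig

const std = @import("std");

// Declaration only; the symbol is expected to be provided at link time.
extern fn sum(a: c_int, b: c_int) c_int;

pub fn main() void {
    std.debug.print("{d}\n", .{sum(1, 2)});
}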

I won’t go deeply into this topic since there are numerous examples and discussions around it. However, there’s another question that, in my opinion, has not been fully addressed yet.

What about Zig-to-Zig?

So, how do we handle libraries written in Zig that are intended to be used in other Zig code?

You might confidently say: use modules!

Yes… if we’re using static linking for the entire library/module. Zig strongly promotes this idea of including library sources into your application’s code, and I have to agree—it’s particularly convenient for standard libraries, where the compiler can directly include only the code being used. This also allows for additional optimizations and eliminates the need for a runtime in Zig!

BUT!

There comes a time when dynamic linking for Zig is needed.

There can be various reasons for this, and I mean Zig to Zig, not dynamic linking from C/C++ to Zig. For example, imagine you wrote a massive game engine called Real Engine 5 in Zig, and Zig is its main scripting language. Now, someone develops a game using your engine. Compiling the entire engine from source is feasible, but if we include the entire engine in the game’s code, it could become problematic. You wouldn’t want to recompile the entire Real Engine 5 just to add a new entity to your game, right? The same goes for drivers dynamically linked to the OS kernel or plugins loaded dynamically into your application.

Zig itself generally has no special issues with the dynamic linking process. Here, I would like to talk about API design and about writing Zig code that uses dynamically linked Zig code.

When writing in C/C++, we have to separate files into .h and .c/.cpp, representing headers and implementation. This concept is well-known and understood, though it introduces certain inconveniences. Zig seems to follow a different path, freeing us from this requirement. Convenient!

However, for that very reason, when I write C/C++ code I don’t need to write wrappers for my library later, because providing my header files is enough.

In Zig, this has to be done manually. In general, this is not an issue for libraries if we know in advance that we’re writing a dynamic library and our library serves a specific purpose. But for a game engine, OS kernel, or something similar, that might not be the case. Furthermore, many systems and mechanisms in a game engine are usually transparent to the games developed on it. For instance, if you have a Scene structure in your engine’s code, using it in the game’s code feels natural, and you don’t want to write a separate SceneApi wrapper for the game’s code.

Zig has many features and possible optimizations (comptime, errors, optionals, etc.) that I don’t want to lose when exporting my code as a dynamic module, because I plan to use it again from other Zig code.

The Problem, with an Example:

assets.zig

pub const Asset = struct {
    name_hash: u32,
    ...

    // Comptime function
    pub fn init(comptime name: []const u8) Asset {
        return .{
            .name_hash = comptime hash(name),
            ...
        };
    }
};

var assets: []Asset = &.{};

// Function that combines comptime and dynamic calculations.
pub fn findAsset(comptime name: []const u8) ?*Asset {
    const name_hash = comptime hash(name);

    for (assets) |*asset| {
        if (asset.name_hash == name_hash) return asset;
    }

    return null;
}
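
The hash helper is left undefined above; a minimal comptime-callable stand-in (FNV-1a from the standard library) might look like this:

const std = @import("std");

// Hypothetical stand-in for the hash helper assumed by the example above.
fn hash(comptime name: []const u8) u32 {
    return std.hash.Fnv1a_32.hash(name);
}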

This is a simplified example, but the main point is that some APIs like assets.findAsset(...) can be successfully used both inside the engine code and in the game that uses the engine. However, the function contains compile-time calculations, which can lead to issues. This can be solved by separating the dynamic code:

// Function that combines comptime and dynamic calculations.
pub fn findAsset(comptime name: []const u8) ?*Asset {
    const name_hash = comptime hash(name);

    return findAssetByHash(name_hash);
}

export fn findAssetByHash(name_hash: u32) ?*Asset {
    for (assets) |*asset| {
        if (asset.name_hash == name_hash) return asset;
    }

    return null;
}
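
Building the engine side as a shared library could then look something like this (an illustrative command with a hypothetical engine.zig root file; exact flags depend on your build setup):

// zig build-lib -dynamic engine.zig -O ReleaseFast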

Great, it seems like we can now compile our engine as a .dll/.so and use it in our game! …Seems, but not quite. What about the game’s code? We can’t just do this:

// Wrong!
const assets = @import("assets.zig");

fn game() !void {
    const map = assets.findAsset("zigland.map") orelse return error.MapNotFound;
}

If we do this, everything will be statically compiled, and our game won’t be linked with the engine dynamically. The issue isn’t just that I didn’t include a call to load the .dll/.so; it’s that assets.zig contains both the API and the implementation, resulting in our own findAsset(...) function and an independent asset slice var assets: []Asset = &.{}; in our game.
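
Instead, the game side would need a declaration-only shim that pulls the symbol from the engine. A sketch, assuming the game links against the engine’s shared library and that the hash helper is available to the game at comptime as well (assets_api.zig is a hypothetical file name):

assets_api.zig

// Re-declared for the game side; must match the engine's layout.
pub const Asset = struct {
    name_hash: u32,
    ...
};

// Implemented inside the engine's shared library.
extern fn findAssetByHash(name_hash: u32) ?*Asset;

pub inline fn findAsset(comptime name: []const u8) ?*Asset {
    return findAssetByHash(comptime hash(name));
}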

It seems like we have to manually separate the API from the implementation. However, there’s a problem. I love Zig, and I enjoy that it allows writing methods (functions inside structures):

pub const Temp = struct {
    num: usize = 0,

    pub fn add(self: *Temp, other: usize) void {
        self.num += other;
    }
};

But this does not align well with separating implementation from the API. Imagine Temp.add(...) as an internal implementation used within the engine. To export this function with the same nesting inside the Temp structure for the game, I would have to write a wrapper:

extern fn addImpl(self: *Temp, other: usize);

pub const Temp = struct {
    num: usize = 0,

    pub inline fn add(self: *Temp, other: usize) void {
        addImpl(self, other);
    }
};

And I would need to do this for every structure. Moreover, I would have to define the structure twice—first for the engine, then for the game. Now imagine the scale of Real Engine 5 with hundreds of structures, thousands of functions, methods, and more.

Simply “wonderful”! In reality, it’s HORRIBLE! If I need to change the structure in the engine, I will have to rewrite the API wrapper. It turns out I would have to abandon inline functions, and due to the unstable ABI (even when writing a Zig library for Zig), I would need to avoid using error/optionals in API functions or write additional WRAPPERS. Oh, and I would have to write WRAPPERS anyway, manually adding extern/export for all functions because Zig has no macros!

In this case, it seems easier to write everything in C/C++ or another language. :(

What Language Improvements Are Possible?

Finally, I would like to discuss potential language improvements and tools that would allow us to write dynamic code without the pain. But Zig is too strict for any kind of dynamic approach, and honestly, I’m afraid nothing will change in this direction. For instance, adding tools in the build system to simplify or even skip writing wrappers would be a good step. Ideally, introducing new constructs in the language that allow separating implementation from external API while keeping everything in a single .zig file would be great.


What do you think about this?

What solutions do you use?

What ideas do you have for introducing new features in the language to solve this problem?

6 Likes

In my opinion zig-to-zig shared libraries are not a good idea. The fact that Zig seamlessly integrates into the C landscape is one of its really big features, at least for me.

What would be the advantages of a shared library that is not using the C/OS ABI and instead would be specific for Zig?

The only feature of shared libraries that did not really become obsolete is the opportunity to update core functionality that is actually reused in order to fix security issues without having to rely on various packages all updating their dependencies.

Sharing memory might still be interesting for core functionalities like libc/ssl/… but I don’t see why this functionality should be zig specific.

Where do you see the benefit of using shared libraries the way you described at all? The cost seems to be that you would lose all the comptime interfaces and optimizations that Zig offers.

Thinking about this a bit more, using the source level as the interface between application code and libraries looks like a gigantic advantage. The comptime optimizations that can make use of specific hardware seem so great that I wonder whether binary code distribution won’t become obsolete.

Shared libraries really look unattractive given all the opportunities of comptime.

1 Like

Hmm. Look. There are two names:

  • so - “shared object”, emphasis on “shared”, meaning that one and the same library can be used by various programs unrelated to each other (libc as the most prominent example)
  • dll - “dynamically loaded library”, emphasis on “dynamically loaded”; it’s not necessary that a library is used by many programs.

I guess @bagggage is talking about the second case.

An example of a case where DLLs are very useful:
I have a program which uploads data to a DB; it’s a single binary.
The data being uploaded comes in various formats.
I have a service template (systemd), mysys-upld@.service.
In this file I have

[Service]
ExecStart=@/opt/mysys/sbin/upld-psgr mysys-upld-%i

There are many instances of this service, one for each data format, e.g. mysys-upld@<target>-xxxx.service.

  • <target> - where to upload
  • xxxx - data format.

When an instance of the uploader starts, it looks at its name (provided by systemd) and loads a decoder for the given format; the decoder is a DLL, and there is one decoder per data format. Decoders have a strictly defined interface, and their role is to transform data into some “universal” internal representation, which is then used to construct an insert request. Without DLLs I would have to have many almost identical uploader binaries, but why?

1 Like

Do you remember DLL Hell?

My last open source projects were Go and PHP based.

I appreciated how convenient it is to build/compose projects using only the sources.
One requirement - fast compilation.

And it’s the reason to divorce from LLVM

Compilation speed is increased by orders of magnitude.

I would like to have a choice:

  • “dlls”
  • sources

It seems you misunderstood me. I’m not discussing whether dynamic libraries are necessary or not. That’s something each developer decides for themselves, and it’s typically determined during the design phase of your software.

What I am asking is how I should describe structures/functions in Zig if I plan to dynamically link this code to other Zig code later on.

Some things simply can’t be done without this. I am developing an OS kernel, and as you might imagine, device drivers need to be able to load dynamically. Otherwise, imagine that loading a graphics card driver requires recompiling your entire kernel. I’d have to ship all the kernel sources, drivers, and the Zig compiler along with the system, and with every driver update, every user would have to recompile the kernel themselves. Sounds awful, right?

So, this is simply unavoidable.

Now, imagine that a driver needs to interact with the kernel. For this, it would be convenient to create a corresponding API. In fact, the kernel already has this API, and it is successfully used within itself. But since the driver will not be statically linked with the kernel, I need to export certain functions from the kernel and then import them in the driver’s code.

To achieve this, I will likely need to use export/extern, but the problem is that the kernel’s structure is quite complex. The driver needs access not only to the functions but also to various types of structures. However, the code where these structures are defined already implements methods:

pub const Temp = struct {
    data: usize = 0,

    pub fn impl() void {
        ...
    }
};

I can’t just use @import in the driver’s code because I don’t need to compile the function; I need to import it from the kernel.

This is where the issues arise. It seems I would need to define each structure twice: once for the API and once for internal use.

// Internal
pub const Temp = struct {
    data: usize = 0,

    pub fn work() void {
        ...
    }

    comptime {
        @export(work, .{ .name = "Temp.work" });
    }
};

// API
pub const Temp = struct {
    data: usize = 0,

    pub inline fn work() void {
        @extern(*const fn() void, .{ .name = "Temp.work" })();
    }
};

And I want to avoid this because it would mean writing the entire kernel twice.

4 Likes

It would be great to have something like an api keyword with @importApi that combines both extern and export:

lib.zig

var num: usize = 0;

api fn calc() usize {
    return num + 10;
}

app.zig

const lib = @importApi("lib");

pub fn main() !void {
    const value = lib.calc();
    ...
}

In the library’s code during compilation, api would behave as export,
but when using @importApi, the imported function would be interpreted as extern,
and its implementation would be ignored.

Moreover, @importApi could ignore all other fn and pub fn functions,
as well as any non-constant global symbols like var and pub var.

This approach would allow for something like the following:

lib.zig

pub const Temp = struct {
    var num: usize = 0;

    api fn calc() usize {
        return num + 10;
    }
};

app.zig

const lib = @importApi("lib");

pub fn main() !void {
    const value = lib.Temp.calc();
    ...
}

In this way, the symbol lib.Temp.num would not be included in the app.zig code when using @importApi,
and the function lib.Temp.calc() would be treated as extern.

From the library’s perspective, everything would remain as before. The only difference would be that the function lib.Temp.calc would be interpreted as export and would be exported during compilation.

To avoid naming conflicts, you could specify it like this: api(<myExportFunctionName>)

lib.zig

pub const Temp = struct {
    var num: usize = 0;

    api("mylib.Temp.calc")
    fn calc() usize {
        return num + 10;
    }
};

This is just my proposal with some simple and unrefined examples, but having something like this would be nice to avoid writing wrappers for exporting/importing functions.


EDIT: Expanding on this concept

As I mentioned, a potential @importApi would interpret api functions as extern.
Conversely, api with a standard @import would appear as export.

This would maintain C ABI compatibility and not introduce any additional issues. However, I would also consider some edge cases:

inline Functions

I would prefer that any pub inline function is also visible through @importApi. This would allow them to be used similarly to static functions in C/C++ declared in header files. However, this might cause confusion, as even those inline functions intended for internal use would become visible as API functions. To solve this issue, we could introduce api inline fn functions. These would be visible to code using @importApi without changes, with their full implementation as standard pub inline fn.

This way, we could still retain support for errors, optional types, and comptime for inline functions.

But what if such a function calls another internal function that is not marked as api?

In that case, a compilation error would occur because other fn/pub fn functions would not be visible through @importApi.

Functions Returning type/comptime_int

Such functions could be interpreted as analogous to template<...>/constexpr functions in C++. They could also be exposed via @importApi, following all the previous rules. This would preserve template types when importing from a library.

Nested Imports

Consider the following example:

lib.zig

pub const subsys = @import("subsys.zig");
pub const lib2 = @importApi("lib2");
...

The lib.zig library uses @import for subsys.zig and @importApi for lib2.
If an application app.zig does @importApi("lib"), how should imports within the lib.zig file be interpreted?
There are several potential solutions:

  • For nested @import:

    • Interpret nested @import as @importApi.

      This would allow the library to expose nested types and other nested api functions
      within its submodules, e.g., lib.subsys.myCalc (where subsys.myCalc => api fn myCalc()).
      I am more inclined toward this option.

    • Ignore the nested import.

      If ignored, we would have to manually connect other nested types for the library
      or resort to writing wrappers again. However, the goal of @importApi is to eliminate the need for wrappers,
      so this seems like a poor choice.

  • For nested @importApi:

    • Ignore it.

      This seems like a good option because @importApi is intended to be used
      in the final application for importing external code. If @importApi is used
      within a library that might also be used by other code, it means that @importApi should only be visible
      within the code that directly uses it.

Implementation

I’m not very familiar with the internals of the Zig compiler, but I assume that simply excluding functions or global variables from compilation would not work. It would require rewriting some AST generation passes (as I understand it).

Therefore, we could implement something similar to zig translate-c, for example: zig generate-api.
This would eliminate the need for @importApi, as it would be enough to traverse the root file lib.zig
and remove everything unnecessary, leaving only struct declarations and api functions,
which could be translated as extern.

The result of zig generate-api might look something like this:

input.zig

pub const subsys = @import("subsys.zig");

var global_num: usize = 0;
 
const InternalStruct = struct {
    ...
};
 
pub const MyLibStruct = struct {
    some_field: []u8,
    data: *anyopaque,

    pub fn internalFunc() !void {
        ...
    }

    api fn apiFunc(self: *MyLibStruct) usize {
        internalFunc() catch {
            return 0;
        };

        global_num += 1;

        return global_num;
    }
};

output.zig

// `subsys.zig` must also be translated -> `subsys_api.zig`
pub const subsys = @import("subsys_api.zig");

// `InternalStruct` removed

pub const MyLibStruct = struct {
    // Fields remain unchanged
    some_field: []u8,
    data: *anyopaque,

    // Example of translated API function
    pub inline fn apiFunc(self: *MyLibStruct) usize {
        return @extern(*const fn (*MyLibStruct) usize, .{ .name = "apiFunc" })(self);
    }
};

2 Likes

I never developed Linux drivers, and my last Windows one was finished back in 1997.

I found “I made a pseudo Linux device driver with Zig” on YouTube.
I have no clue what he is doing, but maybe it helps.

I think that device driver development requires special attention.

First of all, we need to know how to use shared (dynamically loaded?) libs in Zig.

Relevant issue: Proposal: Zig ABI for language specific features

1 Like

This is confusing me. If you are writing your own kernel, and this is about device drivers, then you are not using .so or .dll but rather your own equivalent of .ko (kernel modules), since this is not the Linux or Windows kernel.

Why use “ExecStart=exe %i.dll” and not just “ExecStart=%i.exe”?

What do you mean when you say “DLLs are very useful”? As a means to organize your code? As a means to save memory, bandwidth (for binary uploads), performance?

There are so many ways how you could implement that, all of which may have security, performance and convenience characteristics that you should consider.

If you used a monolithic architecture, you would have almost all the advantages:

  • No need to manage file system locations and permissions (for the dlls)
  • The interface between API and library is statically checked by the compiler (while a DLL can do anything, and you will first see bugs in the logs).
  • Less attack surface (replaced DLLs, bugs in the dynamic linker; it has happened)
  • Faster startup (the exe is only loaded once and then shared by the kernel), fewer syscalls, no dynamic linking phase
  • Minimal memory usage (executables share code segments just like DLLs)
  • More opportunities for the compiler to optimize code, potentially more code sharing
  • No potential for versioning issues (old DLLs after update)

The only disadvantage I see is that you have to redistribute the fat executable every time you need to update or add a format. And this would only be relevant if you distribute your app and the dlls separately. Are you planning to do that?

If you’re using systemd, you’re using Linux. DLLs are a Windows thing if I didn’t miss something new. DLLs and shared libraries do the same thing, different name for different OS, right?

1 Like

The number of raw data formats in my case is about 20.
Why compile one and the same sources (and linking decoders statically) 20 times to produce 20 executables instead of having just one and then run it 20 times loading plugins dynamically?

Aha, and using plugins (which are DLLs, technically) is the best option for me.

It’s a complex system and data uploaders are only a part of it.
It’s deployed via deb packet.

When I say “DLL” I mean “dynamically loaded library” (regardless of the particular OS), not the file extension: .dll is used on Windows, .so on Linux. BTW, the POSIX.1-2001 functions for dealing with dynamic libraries are dlopen and dlclose; note the dl.

1 Like

(Sorry if I came across as patronizing, this wasn’t my intention).

Why compile one and the same sources (and linking decoders statically) 20 times to produce 20 executables instead of having just one and then run it 20 times loading plugins dynamically?

I would not do such a thing. I would start by compiling all decoders into a single binary by default, and if there were a good reason I might, just like you, decide to support a plugin mechanism. And what a “good reason” is depends entirely on the requirements. In the absence of requirements, good is a balance between secure, fast, small, easy to produce, and easy to maintain; at least those are my general criteria.

In my experience, plugins are the best solution if you or third parties actually contribute independent implementations. There are of course many other reasons why dynamic code might do better; I just never saw one in my work.

It’s a complex system and data uploaders are only a part of it.
It’s deployed via deb packet.

I don’t want to insist, but: One .deb for each plugin and one for the app, or is it one package for everything?

Debian packages take care of cleaning up files if I remember right, so when you update the plugin interface and install new versions, the old ones will be taken care of. But did you test that? You would have to if you wanted to be certain. No such extra task in a single binary.

And a single fat binary would not have any disadvantages that I can see. That depends on how many services are running, how many different encoders. Maybe there are cases where you waste some memory or startup times would be slower because there is more code to load. My bet is that the static (fat) binary performs better. It is inherently more secure because the attack surface is smaller. It is easier to write and maintain because you don’t need the plugin infrastructure and you have no restrictions resulting from that.

That doesn’t mean that you should or cannot use your plugin mechanism, I just don’t see that it has advantages. And I don’t have to see either. Just presenting you with my point of view. The best outcome of such a discussion for me is you destroying my argument, because then I would learn something.

When I say “DLL” […]

I talked about DLL vs. SO, because someone (I thought it was you) brought up the point that they are different. In my understanding, it’s the same tool that can be used for different purposes and has different names. I just wanted to clarify that or learn something new if I was wrong.

Awful idea.
I “started” ~10 years ago and having one DLL (plugin) per data format serves me well.

That project is “in-company” project, it’s not open source and I am the sole developer.
And the only reason for having DLLs is that there are many data formats.

All in one.

The system is running 24x7 for years on several hosts. When I make some additions/bug-fixes, I just do git pull; ./release; dpkg -r <pckt>; dpkg -i <pckt>.

Of course they are about the same; I’ve never stated the opposite.
But “so” emphasizes shared (i.e. to be used by many programs), whereas “dll” emphasizes “dynamically loaded”. :)

AFAIK nothing prevents using Zig DLLs, but you won’t be able to expect any ABI stability whatsoever. As long as you compile everything with the same compiler, it will still work.

It’s possible to also do extern C functions, pass pointer to zig struct and cast it on the other end for example. Anyhow no ABI stability here. For something like game / game engines, this might be fair enough still however.

I personally don’t like dynamic libraries in general however, they have caused more problems than solved.

1 Like

api definition

// my-api.zig
pub const MyApi = extern struct {
    method1: *const fn() callconv(.C) void,
    method2: *const fn() callconv(.C) void,
};

dll 1

const Api = @import("my-api.zig").MyApi;
const log = @import("std").debug.print;

fn method1() callconv(.C) void {
    log("la-m1\n", .{});
}

fn method2() callconv(.C) void {
    log("la-m2\n", .{});
}

export const api: Api = .{
    .method1 = &method1,
    .method2 = &method2,
};

// zig build-lib -dynamic my-lib-a.zig -O ReleaseSmall

dll 2

const Api = @import("my-api.zig").MyApi;
const print = @import("std").debug.print;

fn method1() callconv(.C) void {
    print("lb-m1\n", .{});
}

fn method2() callconv(.C) void {
    print("lb-m2\n", .{});
}

export const api: Api = .{
    .method1 = &method1,
    .method2 = &method2,
};

// zig build-lib -dynamic my-lib-b.zig -O ReleaseSmall

application

const std = @import("std");
const log = std.debug.print;

const dll = @cImport({
    @cInclude("dlfcn.h");
});

const Api = @import("my-api.zig").MyApi;

pub fn main() void {
    const lib1 = dll.dlopen("./libmy-lib-a.so", dll.RTLD_NOW);
    defer _ = dll.dlclose(lib1);
    var api: *Api = @ptrCast(@alignCast(dll.dlsym(lib1, "api").?));
    api.method1();
    api.method2();

    const lib2 = dll.dlopen("./libmy-lib-b.so", dll.RTLD_NOW);
    defer _ = dll.dlclose(lib2);
    api = @ptrCast(@alignCast(dll.dlsym(lib2, "api").?));
    api.method1();
    api.method2();
}

// zig build-exe my-app.zig -O ReleaseSmall -lc

run it

$ ./my-app 
la-m1
la-m2
lb-m1
lb-m2

I dare ask - where is the problem?..

… but see this issue.

I’ve thought about this before, and one idea I have had before is having a build tool which scans your module for public declarations and then automatically creates C ABI wrappers for them (which is used to build the shared library) along with C header files, and then generates a Zig module that wraps the C ABI back into Zig types to mirror the API of the original module.

2 Likes

Again: if we are talking about general-purpose libraries (libc, libm, libcrypto, etc.), then kinda yes, and it’s the distro maintainers’ responsibility to keep everything compatible (the key word here is shared). But if we are talking about a bunch of plugins for some particular application, I cannot see any problems (the key word here is dynamically loaded, so the file is not necessarily “shared” by every program in the system, unlike libc, for example).

2 Likes

The problem lies in scalability. Yes, this solution definitely has its place. It’s similar to a vtable where we predefine a table of functions and then define a structure to specify exactly which functions are included and in what order. Moreover, I mentioned this in my very first post.

If we think about it this way, then why don’t we all just use one giant function table? Why do C libraries like musl or glibc export so many functions?

The answer is: because it’s very inconvenient. The number of functions can be huge, especially in the case of C++. If we are writing a couple of functions for something small, this doesn’t matter, but when we’re developing an entire framework, game engine, large program, or OS kernel, we would spend a lot of time tracking all functions. The end user of our framework would want a clean and user-friendly API to easily use this functionality. It doesn’t really matter what kind of software it is; these are just examples that I came up with, and I understand that in some cases, libraries might not be necessary.

Take another look at this example:

I would like to have the implementation and API in one place (or at least not have to write the entire code twice) to save time and avoid repeating this process for every structure.

In C/C++ (again, this is just an example without any practical significance):

#ifdef MAKEDLL
#  define EXPORT __declspec(dllexport)
#else
#  define EXPORT __declspec(dllimport)
#endif

class EXPORT xyz {
public:
    void myMethod();
    static inline int myStaticInline() {
        return 256 - 0x20;
    }
    template<typename T>
    inline T* templateAlloc() {
        return new T();
    }
};

Notice that when we include our header in the program’s source files, we don’t lose the templateAlloc method, even though it is not exported during compilation. The same applies to the inline function, which is available both in the library code and to the end user. However, the void myMethod() method will be exported from the library and imported into the application. All of this happens automatically, without the manual work that would be needed in Zig.

2 Likes

I think in a lot of cases the kind of hot code reloading that is planned for Zig would be an alternative to these complicated APIs. If your compiler is started once and keeps running and updating the running code, then there isn’t really a good reason to split your game engine into a bunch of dynamic libraries; it is much better to just have the game engine as a statically compiled part of your game that gets compiled, cached, and reloaded as you make changes to any part of it.

Basically future game engines written in Zig will communicate with the running compiler and get updates about things they need to reload/update.
It still isn’t quite clear how exactly that will work, except that there will be some kind of protocol your running application can use to communicate with the compiler.

Personally I think that might enable a much better workflow than having very many API boundaries.

So this comment is mostly to say, I think with hot code reloading I mostly would stop caring about loading dynamic libraries.

It still would be nice to have a low-effort way to create APIs, but it would become less of an issue, because the remaining use cases for dynamic libraries would be fewer and more coarse-grained, and would thus involve fewer functions. (So it wouldn’t be so bad to just create them manually, although things that make it less work could help.)

Overall, creating fine-grained APIs seems tedious to me, and I think it is best to avoid that. I think dynamic libraries can be used to emulate/fake hot code reloading, but if you do that, I would break things apart into a few big pieces that get replaced. For fine-grained stuff, I think we either need hot code reloading or a similar, more language-specific tool.

My question to @bagggage: if you could just edit your running application that uses an engine/framework, and it would automatically update the parts that changed, would this still be a big concern to you, and in what way?

1 Like