Brainstorming: an Io implementation for multiple nodes

It’s easy enough to envisage an Io implementation that uses coroutines to provide async actions in a single thread.

Likewise, it’s well within the scope of the design to spread that work over multiple cores via a thread pool.

The next leap beyond that would be to scale horizontally to multiple nodes in a cluster.

Let’s say I had a cluster of 4 separate machines, all running the exact same binary … and had a consistent scheme for doing RPCs across the network to hand off tasks.

It would already be possible to do this in userland code, separate from the Io implementation … but I’m wondering if it would be possible to build an Io implementation that did this scaling internally, with some caveats (e.g. you couldn’t expect to pass actual file descriptors between nodes).

Bottom line is that you could build a library that worked fine in single-threaded mode, spread its work across a thread pool to engage multiple cores … or spread horizontally across a cluster as well, if one was provided.
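To make that concrete, here’s a minimal sketch of the kind of interface swap I mean. This is not the real std.Io API; `Io`, `InlineIo`, and the `spawn` signature are all made up for illustration. The point is that `doWork` can’t tell whether `spawn` runs the task inline, on a pool thread, or (in principle) on another node.

```zig
const std = @import("std");

// A minimal, hypothetical Io-style vtable (NOT the real std.Io API):
// user code is written against the interface, and the chosen backing
// implementation decides where the work actually runs.
const Io = struct {
    ptr: *anyopaque,
    vtable: *const VTable,

    const VTable = struct {
        spawn: *const fn (ptr: *anyopaque, func: *const fn (*anyopaque) void, ctx: *anyopaque) void,
    };

    fn spawn(io: Io, func: *const fn (*anyopaque) void, ctx: *anyopaque) void {
        io.vtable.spawn(io.ptr, func, ctx);
    }
};

// The simplest possible implementation: run the task inline on this
// thread. A thread pool or a cluster dispatcher would satisfy the same
// vtable, and the calling code would not change at all.
const InlineIo = struct {
    const vtable: Io.VTable = .{ .spawn = spawnImpl };

    fn io() Io {
        return .{ .ptr = undefined, .vtable = &vtable };
    }

    fn spawnImpl(_: *anyopaque, func: *const fn (*anyopaque) void, ctx: *anyopaque) void {
        func(ctx);
    }
};

// User code only ever sees `Io`; it cannot tell where `work` executes.
fn doWork(io: Io) void {
    io.spawn(work, undefined);
}

fn work(_: *anyopaque) void {
    std.debug.print("work ran somewhere\n", .{});
}

pub fn main() void {
    doWork(InlineIo.io());
}
```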

The question is: what sort of design decisions in the Io interface would make this possible in some situations, and not in others?

I think at the function-call level it seems reasonable, as long as you are not passing references to file descriptors around. E.g. a call to `async(myFunc())` could easily be farmed off to a less busy node in the cluster.
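To sketch what “farmed off” could mean at the wire level (everything here is hypothetical): since every node runs the exact same binary, a function can be identified by its index in a table that is identical everywhere, and only plain serialized bytes travel over the network.

```zig
const std = @import("std");

// Hypothetical wire-level task: every node runs the exact same binary,
// so a function can be named by its index in a table that is identical
// on every node. Only plain bytes cross the network.
const Task = struct {
    func_id: u32,     // index into `func_table`, stable across the cluster
    args: []const u8, // serialized arguments: no pointers, no fds
};

// Baked into every binary of the same build, in the same order.
const func_table = [_]*const fn ([]const u8) void{
    step1,
    step2,
};

fn step1(args: []const u8) void {
    std.debug.print("step1({s})\n", .{args});
}

fn step2(args: []const u8) void {
    std.debug.print("step2({s})\n", .{args});
}

// What the receiving node does with a task it pulled off the wire.
fn runTask(task: Task) void {
    func_table[task.func_id](task.args);
}

pub fn main() void {
    // Simulate locally what a remote node would do with a received task.
    runTask(.{ .func_id = 0, .args = "hello" });
}
```

The receiving node looks the id up in its own copy of the table, so no code ever moves across the wire, only the task description.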

A similar call to `async(myFunc(&myGlobalContext))` would be a mess, unless the system as a whole had some sane way of sharing that global context across nodes. Tricky problem.
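One place Zig might actually help with that, hypothetically: comptime reflection could reject non-shippable contexts at compile time, forcing the context to be a plain by-value snapshot. A sketch, assuming a recent Zig’s lowercase `@typeInfo` tag names; `Remotable` and `containsPointers` are made up:

```zig
const std = @import("std");

// Hypothetical: refuse, at compile time, any task context that contains
// pointers, since a pointer is meaningless on another node.
fn Remotable(comptime Ctx: type) type {
    if (containsPointers(Ctx))
        @compileError("context must be pointer-free to be shipped to another node");
    return struct {
        ctx: Ctx, // a by-value snapshot, not a reference to shared state
    };
}

fn containsPointers(comptime T: type) bool {
    return switch (@typeInfo(T)) {
        .pointer => true, // includes slices
        .@"struct" => |s| inline for (s.fields) |f| {
            if (containsPointers(f.type)) break true;
        } else false,
        else => false,
    };
}

const GoodCtx = struct { id: u64, retries: u8 };
// const BadCtx = struct { buf: []u8 }; // Remotable(BadCtx) would not compile

pub fn main() void {
    const task = Remotable(GoodCtx){ .ctx = .{ .id = 1, .retries = 3 } };
    std.debug.print("shippable snapshot: id={d}\n", .{task.ctx.id});
}
```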

This is a problem that Erlang solves up to a point, at the cost of some pain, immutable data, and functional programming.

How far do you reckon this could be done in Zig?


Not actual file descriptors, but you could abstract that away. In theory, a file descriptor in your Io implementation might not have to map directly to an OS-level descriptor.
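As a sketch of that idea (the `Handle` type here is made up, not anything from std): the implementation could hand out a tagged handle, so only the implementation knows whether the resource is local or lives on another node.

```zig
const std = @import("std");

// Hypothetical handle type: the Io implementation hands these out
// instead of raw OS descriptors, so the resource behind a handle can
// live on any node. Only the implementation inspects the tag.
const Handle = union(enum) {
    local: i32, // a real fd (POSIX fd_t), valid on this node only
    remote: struct {
        node: u16, // which cluster node owns the resource
        fd: i32,   // a descriptor that is only meaningful on that node
    },

    fn describe(h: Handle) void {
        switch (h) {
            .local => |fd| std.debug.print("local fd {d}\n", .{fd}),
            .remote => |r| std.debug.print("fd {d} owned by node {d}\n", .{ r.fd, r.node }),
        }
    }
};

pub fn main() void {
    const a: Handle = .{ .local = 3 };
    const b: Handle = .{ .remote = .{ .node = 2, .fd = 7 } };
    a.describe();
    b.describe();
}
```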

Transparency via the network is a very dangerous practice.
Clear boundaries between local and remote environments are still the better solution.
At least for me.
