I found this interesting:
Makes me wonder what kind of programs would benefit from that technique, and in what situations it would make sense to use it.
The extra complexity this would add to a standard project seems mind-boggling to my simpleton brain, but I can vaguely imagine some use cases for it in compiler development, especially when optimizing for embedded targets where space is critical.
The first thought I had: could we do everything with just one byte and lots of time? Probably not…
Interesting it is.
Of course, we often already use the main idea of the video: re-using space.
The maths are often beyond me.
The pigeons said no
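The "re-use space" idea above shows up in everyday code as in-place algorithms, which overwrite their input instead of allocating fresh memory. A minimal Python sketch (the function name is just illustrative):

```python
def reverse_in_place(xs):
    """Reverse a list using O(1) extra space by reusing the list's own storage."""
    i, j = 0, len(xs) - 1
    while i < j:
        xs[i], xs[j] = xs[j], xs[i]  # swap the ends, moving inward
        i += 1
        j -= 1
    return xs

print(reverse_in_place([1, 2, 3, 4]))  # → [4, 3, 2, 1]
```

The tradeoff: no extra buffer is needed, but the original order is destroyed, which is exactly the kind of compromise the video's technique generalizes.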
A more pragmatic example might be a 1-bit CPU: it could do everything a 64-bit CPU can do, with far fewer transistors (i.e. "space"), but taking much more time to execute the same task.
Of course now it might turn out that stacking a ton of such simple 1-bit CPUs might be faster and more efficient than a single complex 64-bit CPU for some tasks (and in a way, GPUs have gone down that path).
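The 1-bit-CPU idea can be simulated in software: bit-serial arithmetic runs a single 1-bit full adder for 64 cycles instead of using a 64-bit-wide adder, trading time for hardware space. A toy Python sketch (function name and fixed width are my own choices):

```python
def bit_serial_add(a, b, width=64):
    """Add two unsigned integers with a single 1-bit full adder, one bit per cycle.

    A 64-bit adder circuit would do this in one step; here we pay 64 steps
    but only ever need one adder's worth of "hardware"."""
    result = 0
    carry = 0
    for i in range(width):
        x = (a >> i) & 1
        y = (b >> i) & 1
        s = x ^ y ^ carry                     # sum bit of the full adder
        carry = (x & y) | (carry & (x ^ y))   # carry out of the full adder
        result |= s << i
    return result & ((1 << width) - 1)

print(bit_serial_add(123456789, 987654321))  # → 1111111110
```

Stacking many such tiny units and running them in parallel is, loosely, the GPU direction mentioned above.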
…also I feel like functional programming has already explored the idea of replacing ‘memory’ with ‘computation’ ad infinitum, sometimes it’s useful, many times it’s not.
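The memory-vs-computation tradeoff mentioned above has a classic toy illustration: recomputing results from scratch versus caching them. A Python sketch using the standard library's `lru_cache`:

```python
from functools import lru_cache

def fib_recompute(n):
    """Minimal extra memory (just the call stack), exponential time:
    every subproblem is recomputed from scratch."""
    return n if n < 2 else fib_recompute(n - 1) + fib_recompute(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """Linear memory, linear time: results are stored instead of recomputed."""
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

print(fib_recompute(10), fib_memo(10))  # → 55 55
```

Same answer either way; whether burning time or burning memory is the better deal depends entirely on the workload, which is the "sometimes useful, many times not" point.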
In the end, hardware shapes software, and while software ideas very slowly trickle back into hardware design, I think it will be hard to beat the current "local maximum" (as proven by the vast graveyard of "revolutionary" CPU designs over the last 40 years or so), and the worst thing you can do is write "idealistic" software that doesn't match the hardware it needs to run on.
I did not dive into it, but I am very curious what quantum processors will bring in the future. I think these have lots of space.
Still waiting for the moment the chess game will be solved.
I suspect that any actually working quantum computing solution will just borrow that space from a parallel universe