Not All AIs are Created Equal
Ok so the vast majority of responses are focused on LLMs like ChatGPT, and that’s my fault for presenting an example (the TED talk) of just that branch of AI development. As noted by many here, LLMs can achieve a kind of “super assistant” status in terms of programming, but there are currently limits to how far they can go, mainly due to how they are trained and the data they are trained with.
But then we have another branch of AI development, deep reinforcement learning, mainly spearheaded by Google’s DeepMind and its poster child AlphaZero. This area of AI research focuses on “true learning” versus the “training” model of LLMs. AlphaZero, for example, learned to play different games just by playing against itself. The only data fed in was basically the rules of the games and the notion that winning is good and losing is bad. With just that, it managed to achieve superhuman levels of play in several games.
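To make the self-play idea concrete, here is a minimal sketch of learning purely from self-play on single-pile Nim. Everything here (the game choice, the tabular value lookup, the parameter values) is my own illustration, not AlphaZero’s actual method, which uses deep neural networks and Monte Carlo tree search. The point it shares with AlphaZero: the only inputs are the rules and a win/loss signal.

```python
import random

# Illustrative self-play sketch (NOT AlphaZero's architecture).
# Game: single-pile Nim. Players alternate removing 1-3 stones;
# whoever takes the last stone wins.

ACTIONS = (1, 2, 3)

def train(pile_size=10, episodes=30000, epsilon=0.2, alpha=0.1):
    """Learn action values for both players from self-play alone."""
    value = {}  # value[(stones_left, action)] ~ expected reward for the mover
    for _ in range(episodes):
        stones = pile_size
        history = []  # (state, action) per move, both players share the table
        while stones > 0:
            legal = [a for a in ACTIONS if a <= stones]
            if random.random() < epsilon:          # explore
                a = random.choice(legal)
            else:                                  # exploit current knowledge
                a = max(legal, key=lambda x: value.get((stones, x), 0.0))
            history.append((stones, a))
            stones -= a
        # The player who made the last move wins: +1 for them, -1 for the
        # opponent, alternating sign as we walk back through the game.
        reward = 1.0
        for state, action in reversed(history):
            old = value.get((state, action), 0.0)
            value[(state, action)] = old + alpha * (reward - old)
            reward = -reward
    return value

def best_move(value, stones):
    legal = [a for a in ACTIONS if a <= stones]
    return max(legal, key=lambda x: value.get((stones, x), 0.0))

if __name__ == "__main__":
    v = train()
    # Optimal Nim play leaves the opponent a multiple of 4 stones,
    # so from 10 stones the optimal move is to take 2.
    print(best_move(v, 10))
```

No game records, opening books, or human examples are fed in; the table of values is built entirely from the outcomes of games the learner plays against itself.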
Now there’s AlphaDev, which applies this kind of deep learning to software development. This paper explains how it developed sorting routines faster than anything known at that point, which are now used in real code: they were merged into LLVM’s libc++ standard library. This video summarizes: https://www.youtube.com/watch?v=n2qCry_o2Fs
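For a sense of the kind of code AlphaDev was optimizing: the improvements it found were to short, fixed-length sorting routines (sorting networks) for small inputs. Below is a classic three-element sorting network in Python, purely for illustration; AlphaDev’s actual discoveries were sequences of x86 assembly instructions, not this code.

```python
def sort3(a, b, c):
    # A textbook 3-element sorting network: three fixed compare-exchange
    # steps in a predetermined order, regardless of the input values.
    # (Illustrative only -- AlphaDev's routines are assembly in libc++.)
    if a > b:
        a, b = b, a   # compare-exchange positions (0, 1)
    if a > c:
        a, c = c, a   # compare-exchange positions (0, 2)
    if b > c:
        b, c = c, b   # compare-exchange positions (1, 2)
    return a, b, c

print(sort3(3, 1, 2))  # (1, 2, 3)
```

Because the comparison pattern is fixed, routines like this are a natural target for instruction-level optimization: shaving even one instruction matters when the routine runs billions of times inside a standard library.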
So to clarify: if we’re talking about LLMs, yes, it’s a larger leap to full AI programming. But if it’s AlphaDev-style deep learning, the leap seems smaller, although by no means a trivial task.
Human Anthills
Somewhere in the pile of blog posts, videos, and books I’ve recently read and watched on this subject, someone made an analogy along the following lines:
Let’s imagine that the ants decided one day to create a new species even more advanced and intelligent than themselves (these were very intelligent scientist ants). They created us humans and, having seen how we were developing, asked themselves: “When are they going to start building better anthills?”
So what I get from this is that if we achieve a true AGI that can learn and think, and will probably do so at superhuman speeds and levels, we can’t expect it to do things our way. It will likely develop far more advanced languages and techniques that could (and probably will) be beyond our ability to comprehend. There have already been instances of AIs developing their own languages to communicate amongst themselves: languages unknown to us, but that work just fine for them.