AI and the Future of Programming

Not All AIs are Created Equal

Ok so the vast majority of responses are focused on LLMs like ChatGPT, and that’s my fault for presenting an example (the TED talk) of just that branch of AI development. As many here have noted, LLMs can achieve a kind of “super assistant” status for programming, but there are currently limits to how far they can go, mainly due to how they are trained and the data they are trained on.

But then we have another branch of AI development, deep reinforcement learning, spearheaded mainly by Google’s DeepMind and its poster child AlphaZero. This area of research focuses on “true learning” versus the “training” model of LLMs. AlphaZero, for example, learned to play different games just by playing against itself. The only data fed in was essentially the rules of the games plus the notion that winning a game is good and losing a game is bad. With just that, it achieved superhuman levels of play in many games.
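To make the self-play idea concrete, here is a toy sketch (my own illustration, not DeepMind’s code): an agent that learns a simple game of Nim purely by playing against itself, given nothing but the rules and a win/lose signal.

```python
import random

# Toy self-play learner in the spirit described above: the only inputs are
# the rules of a game and the signal "winning is good, losing is bad".
# The game here is simple Nim: players alternately take 1-3 stones from a
# pile; whoever takes the last stone wins.

ACTIONS = (1, 2, 3)

def learn_by_self_play(start=7, episodes=20000, eps=0.2, seed=0):
    rng = random.Random(seed)
    Q = {}  # Q[(stones_left, action)] -> estimated value for the player to move
    for _ in range(episodes):
        stones, history = start, []
        while stones > 0:
            legal = [a for a in ACTIONS if a <= stones]
            if rng.random() < eps:   # explore occasionally
                a = rng.choice(legal)
            else:                    # otherwise play the best known move
                a = max(legal, key=lambda m: Q.get((stones, m), 0.0))
            history.append((stones, a))
            stones -= a
        # The player who made the last move won; walk back through the game,
        # crediting +1 / -1 to the two players' moves alternately.
        reward = 1.0
        for state_action in reversed(history):
            old = Q.get(state_action, 0.0)
            Q[state_action] = old + 0.1 * (reward - old)
            reward = -reward
    return Q

def best_move(Q, stones):
    legal = [a for a in ACTIONS if a <= stones]
    return max(legal, key=lambda m: Q.get((stones, m), 0.0))
```

With perfect play the losing positions in this game are the multiples of 4, and after enough self-play games the learned policy finds them (e.g. from 7 stones it takes 3, leaving the opponent at 4) without ever being told any strategy.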

Now there’s AlphaDev, which applies this kind of deep learning to software development. This paper explains how it developed a sorting algorithm faster than anything known at the time, one that is now used in real code like LLVM. This video summarizes:
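For flavor, the kind of object AlphaDev searches over is a short, fixed sequence of compare-exchange steps for sorting a handful of elements. Here is the classic hand-written three-element sorting network (my illustration of the shape of the problem, not AlphaDev’s actual discovered routine, which operates at the machine-instruction level):

```python
# A three-element sorting network: a fixed sequence of compare-exchange
# steps that sorts any input. AlphaDev's "game" is to find shorter/faster
# such sequences; this is just the textbook version for comparison.
def sort3(a, b, c):
    if b < a: a, b = b, a  # compare-exchange positions 0,1
    if c < b: b, c = c, b  # compare-exchange positions 1,2
    if b < a: a, b = b, a  # compare-exchange positions 0,1 again
    return a, b, c
```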

So to clarify: if we’re talking about LLMs, yes, it’s a larger leap to full AI programming. But if it’s AlphaDev-style deep learning, the leap seems smaller, although by no means trivial.

Human Anthills

Somewhere in the pile of blog posts, videos, and books I’ve recently read and watched on this subject, someone made an analogy along the following lines:

Let’s imagine that the ants decided one day to create a new species that would be even more advanced and intelligent than they were themselves (these were very intelligent scientist ants). Having created us humans and seen how we were developing, they asked themselves, “When are they going to start building better anthills?”

So what I take from this is that if we achieve a true AGI that can learn and think, and will probably do so at superhuman speeds and levels, we can’t expect it to do anything our way. It will undoubtedly develop much more advanced languages and techniques that could (and probably will) be beyond our ability to comprehend. There have already been instances where AIs developed their own languages to communicate among themselves, languages unknown to us that worked just fine for them.


So to clarify: if we’re talking about LLMs, yes, it’s a larger leap to full AI programming. But if it’s AlphaDev-style deep learning, the leap seems smaller, although by no means trivial.

I read some of the paper, and I don’t see how a similar approach can be applied to general programming tasks.

I don’t think that the types of problems these neural networks solve are general problems, and the process of optimizing an algorithm doesn’t have much to do with general programming.

They were able to turn the process of optimizing an algorithm into a “game” that a neural network can teach itself how to play. This is extremely cool, and we happen to have gotten very good at making machines that learn how to play games. But it’s very easy to define what a “winning” algorithm looks like. Correct and fast = win, anything else = lose. Everyone can agree on that.
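That win condition is easy to sketch. The function below is illustrative only, not AlphaDev’s real setup (which scores correctness plus measured latency); the point is simply that correctness is a hard gate and speed breaks ties:

```python
import time

# Hypothetical scoring for the "algorithm optimization game": a candidate
# that gets any test wrong loses outright; otherwise its score grows as it
# beats the baseline's running time.
def score(candidate, tests, baseline_seconds):
    for args, expected in tests:
        if candidate(*args) != expected:
            return -1.0  # incorrect = lose, no matter how fast
    start = time.perf_counter()
    for args, _ in tests:
        candidate(*args)
    elapsed = time.perf_counter() - start
    return baseline_seconds / max(elapsed, 1e-9)  # > 1.0 means faster than baseline
```

Everyone can agree on this scoring for a sorting routine; the argument below is that nothing comparable exists for a whole program.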

How do you do that for a larger program? I’d guess it’s either impractical or impossible. If we could agree on what a great program looked like, we wouldn’t still be arguing about “Clean Code” or object-oriented vs. functional programming or TDD or any of the other dozens of approaches to writing large programs. And without a well-defined cost function, these deep learning approaches don’t work.

In software engineering, requirements change, hardware fails, humans do unpredictable things, etc. We can improve on our techniques, but I don’t think we can turn it into a solvable game until we find out what “winning” looks like.


AlphaGo was eight years ago. Saying that self-play deep reinforcement learning is going to lead to AGI is a guess, and one which has to reckon with the fact that recent progress in machine learning hasn’t been driven by that approach.

It could happen, sure, a lot of things could happen. It’s just that there’s no evidence to back that up, and no way of setting a timeline as to when it might happen. Could be next year, could be never.

This is not undoubted, but rather, eminently debatable. It’s also getting way out over the skis. It was fun to think about superintelligent AI in the 1960s, and it’s fun now. But we still don’t have it, and there’s no reason to conclude that it’s just over the horizon. Minsky’s lab thought that too, and they were wrong.

Machine learning tools are still tools: they’re used by humans to do things. They don’t develop anything on their own, because they have no ability to think, no sense of self or ambition, no motive force. I’m not making an appeal to vitalism here, nor claiming that a machine couldn’t have those properties as a matter of definition. I’m saying that approximately no progress has been made on software exhibiting those qualities.

If the past is any guide to the future, then we can expect with near certainty that bad actors will come onto the scene and ruin everything. Programming sites like StackOverflow will probably become completely unusable, besieged by bots posting fake questions and answers. Advertisers will try to tilt the AI in favor of the products they’re pushing. Fraudsters will seek to introduce security vulnerabilities in AI generated code. The Open Source movement will probably come to an end, as good projects get drowned out by the sheer number of dubious ones.


I believe that AI can properly document code from a well-written codebase. I have experienced this myself.

I am not asking to have code written for me; rather, I just want to have an approach to a problem and compare it with other models.

I think this is true, but what it misses is the original intention. By intention I mean something along the lines of why an interface is the way it is, or what the implementation assumes. When we don’t document the intention, that information is lost.


Ok @chung-leong, now tell us what you really think. :smile_cat:

Believe me, I’m a realist, not an optimist, and I’m sure that, as with most other human innovations, AI will be used as a weapon. This is yet another dimension of the AI discussion: a global race, like the arms race, is already happening. Which also ties into the ethical questions. If country A decides to limit AI to adhere to its moral and ethical values, that will not stop country B from moving forward (just like nuclear all over again). But hey, that’s a whole other topic thread. :^)

But it’s never all bad. Look at the Internet, for example. Its origin is military (DARPA), and yet we got this incredible medium for worldwide communication, knowledge sharing, commerce, and social interaction. It’s not perfect, but IMO it has been a major advance for humanity.


What should we ask of AI? That is the real question. If it’s to do the work for me, I don’t need it. But if it’s to help, then yes. By help, I mean an immense, interactive living library.

Why is everybody talking only about “writing”?

Can AI “invent” a completely new kind of program, design it, and (only then) write it from scratch? I mean this: imagine there were no such thing as spreadsheets. Or file managers. Or whatever. Is AI able to realize that there is a need for a program of some kind?

Can AI realize that TCP (just for instance) has some drawbacks, then have an idea for a new protocol, design it in full detail, and then implement it in C/Zig?

Also, “writing” is not the most important part of programming.
What about testing and debugging?
There are some kinds of errors that are extremely hard to even reproduce.
I mean event/interrupt-driven systems, where some “unusual” sequence of events can lead to misbehavior. Is AI able to analyze program logs and reconstruct the event sequence that led to a program malfunction?
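The mechanical half of that task, at least, is easy to sketch; the log format and the “allowed transitions” protocol below are made up for illustration. The hard part, deciding which unusual sequence actually triggers the bug, is exactly what stays hard:

```python
# Toy version of the log-forensics task above: reconstruct the event order
# from timestamped log lines (which may be written out of order) and flag
# transitions that a hypothetical protocol does not allow.
ALLOWED = {("init", "open"), ("open", "read"), ("read", "read"),
           ("read", "close"), ("open", "close")}

def suspicious_transitions(log_lines):
    # Each line looks like "<timestamp> <event>", e.g. "0.135 open".
    events = sorted((float(ts), ev)
                    for ts, ev in (line.split() for line in log_lines))
    names = [ev for _, ev in events]
    return [(a, b) for a, b in zip(names, names[1:]) if (a, b) not in ALLOWED]
```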

You were almost arriving at the mental model I have, except for this final part on implementation. What need does this future AI have for C or Zig? These are languages made to help humans program computers. The AI can go straight to machine code. Think of The Matrix, when they need to learn something new, like flying a helicopter: the computer programmed their brains directly through that uplink cable. No need for an intermediate language, just straight to “brain machine language”.

Now back to programming computers. All of this is still science fiction for now, but in contrast to how far-fetched it sounded in the 1960s, the trend line points to this becoming a reality sometime in the future. True, we can’t predict exactly when it will happen, but I’m betting it will be sooner rather than later.

There are multiple companies playing with the idea of using analog circuits directly.

For example, this one:


We’ve heard this many times in the past.
The point is that “anyone” does not want and does not like to program, in any form, with or without AI assistance.
Remember, SQL was created for use by bookkeepers and storekeepers.
But try to find a professional SQL/PL-SQL programmer these days…

In general, for me personally, AI is quite overstated; it’s rather a very well-trained neural network. But a NN per se is not an “intellect”; it’s a cleverly constructed (by humans!) number cruncher, and modern NNs became possible due to the appearance of huge RAM devices.
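The “number cruncher” view is literal: at inference time a feed-forward network is nothing but multiply-accumulate loops plus a squashing function. A minimal sketch (shapes and weights invented for illustration):

```python
import math

# A feed-forward pass is just repeated multiply-accumulate plus a
# nonlinearity; there is no mechanism here one could call "intellect".
def forward(x, layers):
    # layers: list of (weight_matrix, bias_vector) pairs
    for weights, biases in layers:
        x = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(weights, biases)]
    return x
```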


Oh, no. If something goes wrong (the AI produces a buggy program), who will find what exactly is wrong, and where, and fix a program written directly in machine code/asm? The AI itself? Programs are created by people. People make mistakes. A NN is a program. It can contain mistakes. A bad NN (or a badly trained NN) can easily generate a buggy program. Who will be responsible? The AI? :slight_smile: The people who “asked” the AI to make the program? The people who constructed that particular NN? The people who trained it?

Maybe the responsibility question is one of the desired but dark outcomes, as in:

“sorry that the car crashed, the AI made a mistake, nothing can be done about this”

I’m not that concerned about military use of AI, actually. Warfare is governed by rules. The world of business, on the other hand, seems to have no rules nowadays. In fact, breaking accepted norms is rewarded. “Disrupting the market,” they call it.

Yeah, I meant software for controlling technological processes, vehicles, and the like, not software for a desktop computer (games, browsers, and whatnot). I guess no one would entrust developing the software for, say, a nuclear power station to AI right now; it would be extreme (natural, not artificial :slight_smile: ) stupidity. Using “AI” for entertainment, or for a little help with something, is OK, but writing software for an airplane? No way!


The subtle problem with this kind of question: it looks like we’re talking about a kind of program. But we’re not, we’re talking about a nebulous concept, applied to hypothetical future programs.

Can an LLM do all that, like, right now? Absolutely, definitively, no. Will future development of LLMs produce software which can? My prediction is a firm no on that one.

But if we aren’t talking about specific architectures, or programs which exist, then the question is just “can future software do this thing which humans do now?” and the only useful answer is “I don’t see why not.”

The problem is no one knows how to write that software. And “we don’t know how” is a different category from “we know how, but computers aren’t fast enough, and/or we don’t have enough of them to do it”. The state of ignorance persists as long as it lasts.


I don’t think it will. Humans can go straight to machine code, but they don’t. I’m willing to conflate assembler and machine code here, because a program which writes programs isn’t going to need mnemonics; the actual machine-code patterns make as much or as little sense to it as ASCII strings that can be converted one-to-one.

If you have a robot arm which can build bridges, do you give it toothpicks, or planks? Programming languages compress the problem space, allowing for algorithms to be expressed at a higher level, with less input for the equivalent useful output.

This property is useful to any intelligence, machine or otherwise. So we could expect this thing which doesn’t exist to take advantage of that property, and use programming languages.

An “inventive program which writes programs” could invent a new language, almost by definition, since humans who write programs also do so from time to time. The thing is, that is work, and a new language can’t take advantage of all the training data the system already has on solving problems.

I’m unconvinced by all the hard takeoff speculation, it’s entirely out of date. Beating professional Go players was surprising, and people speculated that any capability advance would look like that, but in domains which are less artificial, they never do. Look at self-driving cars: improvement is slow. LLMs are plateauing as we speak, you can see that in the metrics, and they haven’t overcome their fundamental limitations, and I don’t think they ever will.

A lot of these breathless theories about the future sound a lot like “an Artificial Intelligence will solve the Traveling Salesman problem in linear time!” to me. I don’t think it’s at all likely that programs are going to jump from “can’t think or reason” (which is, I remind you, the status quo) to “write any program you want directly to machine code instantly with no bugs” in some mighty leap. That flies in the face of everything we know about computability and complexity theory.

Trend lines can’t tell you when we’ll figure out things we don’t know, and extrapolating a line into the future can fail to predict it; that happens all the time. Extrapolating from the Concorde would predict that passenger air travel in the 2020s averaged Mach 3, and that didn’t happen.


To stay concrete: I tested the AI with ASM. It understands what I want, but the response is not operational, even when I help it. The goal was to see whether the AI could take a concept and apply it in an operational way, not just a functional one.

Afterwards, as I said above, I use it as an interactive library on various subjects, to see solutions other than the one I had imagined…


There is one rule: Make more money, no matter what it takes.
