AI and the Future of Programming

It seems AI agrees with most of you guys lol.

Question: Will AI be able to write computer programs better than humans?

3 Likes

It's definitely a good answer, and I would even argue that by the time AI is able to replace us programmers, I can't imagine what society would look like. If we get replaced, that probably means there won't be any interesting jobs left for us to move into. In my opinion, that would be a very sad society, one I certainly wouldn't want to live in.

I love programming, and I would probably be very bored and depressed if we were entirely replaced by machines. AI is great as an assistant, and it would be wonderful if it actually did what we were promised it would do (i.e., replace laborious, repetitive, and boring jobs) instead of encroaching on our work in creative fields.

3 Likes

I am a bit skeptical about that answer. I find it highly likely that these kinds of answers are heavily moderated by the engineers who built those LLMs. That raises another interesting aspect: what would the raw answer be, versus the answer we get after the AI has been censored in all kinds of ways?

And nobody will tell you whether the answer the AI presents was given to it by its operators or whether it generated it completely on its own.

With these systems there is a big incentive to fudge the answers into something that portrays the project in a positive light, so you have to expect marketing answers.

3 Likes

You can make money honestly by producing something that's useful to your customers, or you can for a brief moment be among the richest people on the planet by hyping up some technology, making impossible promises to gullible investors, and then ending up in jail like Elizabeth Holmes and Sam Bankman-Fried.

AI is a promising technology. It's just too bad its development is guided by the logic of social media instead of engineering. It's all just backward in my opinion. Most of us here in this forum are seasoned programmers. We can express ourselves far more quickly in a programming language than we can in English. For us, writing out a function requires far less effort than writing out a description of what that function does. What we need is the exact opposite of what's being developed. An AI that could automatically generate reasonably good documentation from the code we've written would be a fantastic time saver.

A technology that makes programmers' lives easier will never receive any funding though. They just want to make us unemployed.

6 Likes

I fundamentally don't understand the "kid making a killer app in Zimbabwe" meme.

If AI gets to the point where it is the best programmer on the planet, never sleeps, and codes 10000x the output, then it can damn well fabricate application ideas to the degree that it's basically driving all innovation. Research is all AI. Jobs are all AI. It's all AI.

I find it much more likely that this

  1. does not come to pass, or
  2. it does, and we all get paperclipped because the AI is highly capable but misaligned.

I raise another question here though.

  • If this is to come to pass, what do you even do about it?
    • As far as I can reason, the answer is "you're cooked, buddy."

At which point you/I realize you/I are in fact cooked, and you just start doing what you/I want. I actually find programming quite fun. So I'll keep doing what I'm doing now. Simply turn off Copilot and go read Ziggit to pass the time. Maybe today I'll finally figure out how to read a JSON config into a Zig struct using std.json.
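
In case it helps anyone else procrastinating the same way, here's roughly what I expect that to look like (a minimal sketch; the std.json API shifts between Zig versions, this is the parseFromSlice shape from around 0.11+, and the Config fields are made up):

```zig
const std = @import("std");

// Hypothetical config shape, just for illustration.
const Config = struct {
    name: []const u8,
    port: u16,
};

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    // In a real program this would come from reading the config file instead.
    const text =
        \\{ "name": "ziggit", "port": 8080 }
    ;

    // parseFromSlice returns a Parsed(Config); deinit frees whatever it allocated.
    const parsed = try std.json.parseFromSlice(Config, allocator, text, .{});
    defer parsed.deinit();

    std.debug.print("{s}:{d}\n", .{ parsed.value.name, parsed.value.port });
}
```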

Perhaps tomorrow I'll speed up my Trie.

Maybe next week I'll remove golang from Earth.

I've held off on making any posts here because I wanted to see what the temperature of the discussion would be before making any comments.

I see a lot of conversation about corporations and about what we're afraid of, but very little in terms of what AI can actually do and the research supporting that.

For instance, I'd like to point out the reversal curse. Here is a paper about it from this year, and it is devastating to the idea that AI can generalize (it can't) given our current architectures: https://arxiv.org/pdf/2309.12288

There are also other intrinsically problematic things with the math itself. Asymmetric relations, for instance, are not captured by vector spaces, since distances between elements of the space are symmetric: d(a, b) = d(b, a), so "A is the parent of B" and "B is the parent of A" look identical if the only signal is how far apart the two embeddings are.

We've made some massive improvements on certain things, and I've built several models that work well with external sources (several are hooked up to Wolfram Alpha and they certainly "do math" quite nicely), but they're not intelligent.

Here's something I'm not seeing anyone talk about: I have worked at places where the specs coming from the business are horrid. We had to push back and tell the business "this won't work, and if you do this then you'll have a nightmare". The worst thing to be in any genuinely ambiguous/challenging space is a "yes man" for bad ideas.

So I'll kick this back to you: is programming the act of putting text on a screen, or is it getting your ideas so clear that even a computer (essentially a rock rolling down a hill) can understand them? In this sense… "programming" isn't going anywhere just because we can get computers to output code that can compile.

6 Likes

This is why I stated initially that our role would transform from writing code to crafting prompts. The final goal would be the same: getting the computer to do what we want. And the requirements problem also remains the same: we have to herd the user into expressing what they actually want / need, and then turn that into prompts instead of directly into code.

The thing is, if the AI (an LLM in this case) is advanced enough to handle a variety of prompts well and even enter into an "interview mode" type of exchange where it pries the requirements from the user (as a human programmer would), then you eliminate the gatekeeper (the expert programmer) and have a system that allows users to obtain the software they want / need via direct interaction with the AI.

But once again, this depends on the AI technology being capable of doing this. From the research you've done, do you think reaching this level of interaction is ever possible?

1 Like

Great question - let's move away from "is it possible?" to "under what circumstances do statistics help?" because I find it helps ground the conversation in something more direct. This will be a long post, but I think it's important to get this right (hold my beer).


Let's start with where statistics/probability are not helpful. Specifically, why don't we use gradient-based learning systems to do things like "2 + 2"? It's because it's not ultimately a question of statistics. In fact, it's hard to imagine proving math using machine learning because that's circular - statistics cannot "prove" that "2 + 2 = 4" - it's the other way around. Think of it this way… if the algorithm uses math (floating point adds, etc.) then we're assuming that's valid to start.

We started using systems like gradient-based learning algorithms because some sets of rules are so arduous to state that the programming becomes a nightmare. It's not that you can't program them in a traditional way; it's just awful. Where rules are easy to state, machine learning affords nothing.

Thus, we're really talking about problem spaces where answers are not absolute but there's something that's probably the case (notice that I didn't say true).

Here's a great example - consider the following sentence:

The chicken crossed the road. It was wide.

Question: what was wide? The chicken or the road? Seems weird to call a chicken wide… but why? Frankly, it's because we just haven't heard those words together with any significant frequency. If we started grouping them together, suddenly the answer to that question starts to change. There is no truth in this example - just patterns of usage.

Better yet, here are three statements to get a little more into it:

There is a man named John and he is 5'11
There is a man named Theo and he is 6'2
Question: How tall is he?

This has no direct answer (it's a coin flip at best, and there's nothing that machine learning affords in that circumstance). From the work that I've done, I've found that ambiguity like the statement above is actually best solved by last-k search - what was the last "he" that we mentioned? Try it sometime when you're talking to someone - you'll see that's how we typically resolve these sorts of statements in natural conversation :slight_smile:
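
If it helps, the gist of last-k in code form is nothing more than a backwards scan over what's been mentioned so far (a toy sketch of my own, with the Mention shape made up for illustration):

```zig
const std = @import("std");

// A mention we've seen earlier in the conversation, plus the pronoun that
// would refer back to it.
const Mention = struct {
    name: []const u8,
    pronoun: []const u8,
};

// Last-k in miniature: walk backwards and take the most recent match.
fn resolve(mentions: []const Mention, pronoun: []const u8) ?[]const u8 {
    var i = mentions.len;
    while (i > 0) {
        i -= 1;
        if (std.mem.eql(u8, mentions[i].pronoun, pronoun)) return mentions[i].name;
    }
    return null;
}

pub fn main() void {
    const mentions = [_]Mention{
        .{ .name = "John", .pronoun = "he" },
        .{ .name = "Theo", .pronoun = "he" },
    };
    // "How tall is he?" -> the last "he" we mentioned was Theo.
    const who = resolve(&mentions, "he") orelse "(unresolved)";
    std.debug.print("{s}\n", .{who});
}
```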


With that in mind, let's get to your example of the interviewer. Let's start by laying out some assumptions here.

Your first assumption is that the person actually knows what they want but they just can't articulate it. That's a massive assumption. If the person doesn't actually know what they want and the machine learning algorithm is providing questions, what's really happening there? Knowing what you actually want is hard work, and it's very easy to "influence" that and give the impression that the algorithm is actually finding that out. In those cases, I think you'll find that we're just in the business of generating text.

If they do know what they want but they don't know how to phrase it, then asking questions can be helpful. These are "Socratic" systems, and I've built a few based on the large GODEL model released by Microsoft. They're neat and probably my favorite kind of network - I genuinely enjoy talking to them, and I have a few custom training sets I've recorded from thousands of interactions over the last 4 years (I started around a year after the paper "Attention Is All You Need" was published).

The thing with those systems is that they're trying to help you resolve something. But what? If it's something statistically derivable (such as "you're probably hungry") then it's quite trivial to do exactly what you're talking about (that's why I suggested we move away from the word "possible"). Then again, I can make an algorithm to do that in about 5 seconds:

"Have you eaten in the last 5 hours?"

"No: You're probably hungry."

"Yes: You're probably not hungry."

Thus, for anything that is not actually innovative, this is quite achievable. For things that we do not have pre-existing data for, it's not really the algorithm doing the work… it's the user doing the work, and the algorithm is a very advanced rubber duck. This is a much more nuanced problem than first meets the eye, really.


This has some seriously good and bad implications. Here's a bad one: scientific publishing standards. I'm not the first to point out that scientific papers are written to be intentionally opaque. We now have AI systems that will help you take otherwise simple ideas and rephrase them as opaque nonsense.

Statistically speaking, is this what is "appropriate" for the space? Sure. Is it good that we're automating the obfuscation of language? I'd say no. The issue here is that what is "statistically correct" is actually a bad thing, and automation is going to help us double down on making this problem worse.

In conclusion - it depends on whether it's a path that is statistically derivable, and if so, it depends on what you mean by "helpful". That's where I'll stop.

I'll take my beer back.

5 Likes

I tend to think that it is not a matter of senses but a question of consequences. We (and the other animals) learn because we face consequences for our actions. We hit someone, and then he hates us. We write crappy software, and then we get a lot of issues on the Git forge with nasty comments. We write beautiful music, and then people like us. We cook great food, and then we enjoy the lunch.
This is something that LLMs don't have: they can utter nonsense or dangerous things and never face the consequences. I think this is why they remain so dumb.

2 Likes

That's something I realized through experience: at the beginning I had confidence, then the confidence eroded, and today it's a help for documenting or translating.

@bortzmeyer, I'm not sure what you mean by consequences - have you heard of loss scores? The consequence for being wrong is that your distance from the target is larger, and that affects the gradient via the chain rule. We already do this - this is precisely how they learn, actually. If it has to be a loss derived by adversarial means, well… we do that, too. An example of this is training GANs to produce and detect fraudulent handwriting.
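
To make the "consequence" concrete, here's a toy sketch of what it means in gradient-based training: nothing model-specific, just squared error and its gradient driving a single scalar prediction toward a target.

```zig
const std = @import("std");

// Squared error: L = (pred - target)^2.
fn loss(pred: f64, target: f64) f64 {
    const diff = pred - target;
    return diff * diff;
}

// Its gradient w.r.t. the prediction: dL/dpred = 2 * (pred - target).
fn lossGrad(pred: f64, target: f64) f64 {
    return 2.0 * (pred - target);
}

pub fn main() void {
    var pred: f64 = 0.2;
    const target: f64 = 1.0;
    const lr: f64 = 0.1; // learning rate

    // The more wrong the prediction, the bigger the correction it is forced to take.
    var step: usize = 0;
    while (step < 5) : (step += 1) {
        std.debug.print("step {d}: loss = {d:.4}, pred = {d:.4}\n", .{ step, loss(pred, target), pred });
        pred -= lr * lossGrad(pred, target);
    }
}
```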

What we're talking about here is the problem of "embodiment", and the idea is that the integration of senses (literally, data being read from different kinds of sensors) can cause the emergence of smarter networks. I'm skeptical of this. It may actually be something more fundamental to the types of computers we have. From a fundamental ability perspective, I don't know if this advances us any further.

You are absolutely correct though for certain kinds of problems. There are whole classes of problems you can only solve using multi-modal inputs. If some part of a problem requires reasoning based on something that it sees (like solving for missing information), then it needs to integrate visual information.

I'm just not convinced that more input equates to fundamentally more capable networks. We may be realizing what they were already capable of over a new space (a very interesting thing indeed), but I don't believe that would adjust any computation boundaries.

1 Like

We could "punish" an AI by reducing the CPU/RAM resources it can use, but… it won't feel any pain, starvation, or shame, 'cause it's just not a living being.

If we limit the training data to just all available knowledge of one programming language (Zig, of course), do you think we could get an LLM (small LM?) that could output really good code given really good requirements? Maybe also adding knowledge concerning logic and programming concepts.

I guess the more general question is: If we narrow down the training data, do we get higher quality outputs?

I was thinking about building one in/for Zig actually! I have a humongous update to metaphor coming in the next week or so - I think it'd be fun to open that up as a project.

2 Likes

Oh man, that would be awesome! If you need total AI noob help with anything, count me in! :^)

1 Like

Yeah, I'll send ya a message, and if anyone else wants to help out, message me and I'd be happy to get something started.

1 Like

I'd like to try out for the Zig LLM team.

dm me plz.

1 Like

I'm curious if this would be a good idea.

Would it not be better to instead train it on Zig and Zig-like languages? I'd wager it would perform better given some C code. You'd want to give it less C than Zig, though.

Not entirely sure how that would go.

You'd want the LLM to be making memory-conscious decisions, is all I'm really getting at.

I mean consequences of their actions today, in production environments. When ChatGPT produces broken code, is it "punished" in any way? No. If it gives bad advice, will people stop using it? No. And even if they did, ChatGPT would not care: unlike a living being, it has no goals of its own.