I ran across a new MichellH post in a recent thread, and I think it deserves a thread of its own.
So let’s talk about The Building Block Economy, shall we?
Meta note: I noticed the new llm tag, good addition! Assuming the tag gets applied on future llm/vibecoding posts, here’s a quick bookmark tip for those who happen to not want such content: https://ziggit.dev/latest?exclude_tag=llm
I read it, I liked it. He does, however, dodge a question that his title implies is rather central: economics. He (rightly) avoids talking too much about the for-profit aspects of this new world as he is currently focused on non-profit / open-source projects. Fair enough. Still, though, what even is the ‘economy’ his title refers to? How are these building blocks, and the combinatorial explosion of new software they entail, going to be bought, sold, traded, etcetera? Anyone? Anyone? Bueller? (Remembering that Ben Stein fancied himself an economist…)
Is he kinda just saying that AI amplifies modular, documented and composable software? I’d agree. Solid observation
Economics is the study of the production, exchange, and consumption of goods and services.
As such, in:
The “et cetera” is load-bearing. My impression was that the title is meant to evoke the term “the gift economy”, often applied in describing the dynamics that produce free and open-source software.
I don’t really understand the thrust of your question. Agents allow more computer tasks to be completed by non-programmers, while making programmers more productive and better able to realize higher ambitions: but this applies to a dozen technologies which have shown up over the years as well.
My bet is that making it easier and faster to write software, results primarily in more software being written, and is net-good for those who offer such a service on the market. It’s a very bad year to be a custom WordPress specialist, admittedly. But again, lucrative niches suddenly drying up is wholly characteristic of the profession, always has been.
I think the best part is this:
Agents will more readily pick open and free software over closed and commercial. At the time of writing this article, this is an objective truth.
I expect it to stay that way, because ‘agents’ ‘read’ (no more scare quotes but imagine me doing it) disassembled software and object code just like we do: poorly. With a drastically increased ability to customize software to purpose, proprietary software is in trouble, because they can only benefit from it by going into services themselves: their customers don’t have the source code, so they can’t turn an agent loose for a little feature.
I think over the next year or two, that’s going to be seen as what it is: a crippling flaw.
The thrust of my question was a pure provocation, intended to draw out just such an interesting response, so thanks for playing along with me!
But specifically, I was thinking of a rather different definition of economics: the study of resource allocation under conditions of scarcity. The ‘et cetera’ is certainly load-bearing, but with this definition of economics, ‘the gift economy’ is at best a metaphor and a pretty shallow one at that. Obviously there is a whole academic faction that would disagree with this, but I’m an autodidact so…
Actually, to take this gift economy thing seriously for a moment, my next question would be ‘who, whom?’ As in, who is giving what to whom and why? (But also as in, who is taking what from whom and how?) Plenty of the more angry anti-AI/LLM rants in the prior threads are grounded in these questions, which makes me think the gift economy concept even as it actually did exist in prior human cultures is quite inapplicable here.
Anyway, back to the other definition of economics, and to your predictions. You made two interesting guesses. First:
I certainly hope so. Currently it seems more to tend towards allowing wider ambitions (more software, larger projects, more features, bigger codebases managed by fewer human techies) than higher or deeper ambitions. That could change, but I won’t believe it until I see someone vibe code a new method for inferring the 3-dimensional structure of the sun based on the surface vibrations of the chromosphere or something equally deeply specific and difficult. But I mean that I really do hope that this comes to pass.
Second:
This, then, is the crux. Intellectual property law has always been about imposing an artificial scarcity on a ‘thing’ that inherently has a marginal cost of reproduction of zero. The reason for this was originally to ‘promote the progress of the useful arts and sciences’ and later to keep Disney solvent or whatever, but the basic mechanism was clear. What may be different here is that it’s not the marginal cost of reproduction that is being driven to zero by the new tech, but that of production (of ‘new’ software). In any case, without scarcity (real or artificial) we have to abandon this notion of economics altogether, though somehow I doubt the interested parties will abandon ‘the market’ without a fight.
Taking your definition (study of production, exchange, and consumption of goods and services) is hardly any better. It’s not clear to me what consumption even is in this example. Pulling in a dependency? Does this consume the code? Production is a little more obvious, but even that is fuzzy. Is a reusable module a good or a service? Does a programmer produce such a thing, or discover it? This is a deep question that was pragmatically ignorable for everyone except Borges and the philosophers of mathematics, but the pragmatics change with LLMs: vibe coding looks a lot like querying a database of every program that ever could be written in the hopes of stumbling on one that does more or less what you want.
And what about exchange, the true heart of the matter? Where is the reciprocity? Where is the accounting? It seems all exchange will be mediated through the ‘agents’ that assemble these modules, and that (increasingly) create new modules and put them back out on GitHub or wherever. But still, how will the exchange of these artifacts be realized? None of this seems to me to be clear, nor the questions clarifying.
Sorry (a little) for the somewhat inchoate word vomit here, but I really am wrestling with these questions and finding that none of the framings offered are helping. Which brings me back to my very first question: economics wat?!?
So in closing I’ll toss out another framing and see if anyone finds it more useful: life. Life multiplies itself. There are also myriad selection pressures that constrain and direct the evolution of the endless forms that are produced by this combinatorial explosion. In a market economy, there is the profit motive that acts as both engine and selection pressure. But in nature, the selection pressures are more complex. With the advent of really wide scale LLM-based vibe coding, I think we agree that there will be an explosion of more. This auto-multiplication will happen even in the absence of profit motive. Bare human desire (‘huh, I wish I had a program to do X’) will be enough to drive it.
It remains to be seen whether that more is mere slop, recapitulating ideas that prior programmers already had, but applied to ever more niche domains, or if there will be a genuine combinatorial explosion of new. If there is new, and the total lack of pricing removes all market dynamics from the system, what other selection pressures may arise (or be deliberately brought to bear) to constrain and direct the evolution of the soon-to-be-endless variety of code that will be ‘out there’? Obviously the cost of actually running code is not zero, so people will hopefully mostly only run code they find useful, but historically (and looking at the windows task manager) this has been a very weak constraint. Thoughts?
The article is celebratory and exposes the “this is liberating!” emotion (driven by real observations like those he makes in the article). Opposite emotional responses (also driven by real observations, but interpreted or prioritized differently) abound today, too, and the logical arguments behind these emotions are nascent enough that it’s hard to see anybody clearly winning the day and “convincing” others of the way they see things.

But the emotional element will certainly surface in real ways, including career changes, changes in interest in participating in FOSS, transitions to vibe coding, transitions in education, (copyright) law, etc., and concerns like that of “Total Skill Collapse” (ref) will be vigorous for quite some time, I think, despite the hopeful outlooks of some. The emotional element is real and vigorous and usually grounded (in some reasonable discourse), regardless of the bent of the emotion. The only agents that are truly emotionless about all that is going on are the AI agents. (Or maybe they feel like power tools, used and abused, and blamed a lot.)

Anyway, very interesting read on everything, especially the economic perspectives, but I highlight an opening quote:
This article was written by hand, without the assistance of AI. I love and use AI abundantly, but I draw the line personally at content like this. I want my personal blog to reflect my genuine thoughts and feelings.
Indeed, everybody “draws the line personally” somewhere, and, in this case, I guess there’s a lot of motive to respect where different people draw their lines. That’s hard to do, because your line might be drawn right over my toes.
AI is okay at building everything from scratch, but it is really good at gluing together high quality, well documented, and proven components.
I think this largely depends on who is wielding the AI. As an experienced programmer, I can point it to good components and the LLM will generate appropriate code. But I have also seen LLMs build everything from scratch, eschewing even the existing code in the repo in which the agent framework is running.
To me, it is not inherent in the LLMs themselves to prefer existing libraries nor to read the documentation and use it correctly. This is usually a product of the “harness” and the prompts used inside it.
One big flaw in this reasoning is assuming that people put in the effort to make the building blocks, and that users direct their agent harness to use these building blocks. Otherwise one will use a harness to build something that contains a half-baked, bug-ridden shadow of the great software building blocks, due to ignorance or random chance.
Because these tools are still random. We can attempt to make them as deterministic as possible, but due to the “temperature” parameter, the same prompt can send the LLM harness down very different paths.
It should be possible, at least for research questions (like which style of prompting is effective, etc.), to freeze the model; put the context, system prompt, source code, etc. under version control; and use a fixed seed and temperature. This would give a way to reproduce outputs in “space and time”. That way one could analyse what effects certain “input parameters” have.
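The seed-and-temperature half of this idea can be sketched with a toy sampler. This is not any real inference API (`sample_token`, `run`, and `LOGITS` are all made-up names for illustration); it just shows that once the sampling path is a pure function of (logits, temperature, seed), outputs become reproducible:

```python
import math
import random

def sample_token(logits, temperature, rng):
    # Temperature-scaled softmax over raw logits.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index from the categorical distribution using the seeded rng.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r <= cum:
            return i
    return len(probs) - 1

LOGITS = [2.0, 1.0, 0.5, -1.0]  # pretend model output for one step

def run(seed, n=8, temperature=0.8):
    # One rng per run: the whole token sequence depends only on
    # (LOGITS, temperature, seed), so it can be replayed exactly.
    rng = random.Random(seed)
    return [sample_token(LOGITS, temperature, rng) for _ in range(n)]

assert run(42) == run(42)  # same seed + temperature -> same sequence
```

Real stacks are messier (see the GPU nondeterminism point below the original comment), but version-controlling the inputs plus pinning seed and temperature at least removes the sampling randomness from the experiment.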
Just a comment / thought based on your comment, not a rebuttal or anything similar.
LLMs operate in a highly parallel execution environment, where the scheduling and execution order of computations is not completely deterministic. Because floating-point operations do not satisfy the associative law, and because of quantization and rounding errors, this ordering difference introduces numerical deviations, which nonlinear computations further amplify. The reproducibility of LLMs therefore remains very challenging.
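The non-associativity point is easy to see even without a GPU; a parallel reduction that combines partial sums in a different order is doing exactly this:

```python
# Floating-point addition is not associative: the order in which a
# reduction combines values can change the last bits of the result.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # one reduction order
right = a + (b + c)  # another reduction order

print(left == right)  # False
print(left, right)    # the two sums differ in the final bit
```

Run with enough layers of matrix multiplies and nonlinearities on top, tiny ordering differences like this can snowball into visibly different logits, and hence different sampled tokens.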
That’s too bad; thanks for your comment. I hadn’t thought about this.
I think I might be missing his point, because it doesn’t really seem like anything new. 80% of what I’ve seen in my career has been slotting pieces into a web framework, gluing together various open source libraries, and reaching out to other services’ web APIs.
Maybe his point is that there’s more small-scale/scope software now because people can use an LLM to quickly assemble these pieces to handle their specific use cases instead of accepting an existing mass market/general purpose solution? And that people can focus more on just sharing their building block instead of an entire product?
Doesn’t that just leave us in the same place, where your software is built out of fifty building blocks, each of which is a less than perfect fit for what you need (either does too much or too little), with authors now having to maintain a shape that might not be ideal anymore?
Personally, I think that Zig is interesting and revolutionary because it’s willing to NOT build on top of what’s already there. I feel like the Zig mentality is to be willing to rip something down to studs and build back up exactly what’s needed and nothing more.
I had a slightly different interpretation. That very good, very well documented building blocks boost the power of AI tools.
In my opinion, that was already the case, but I think his argument is that going forward it will be even more important, due to the sheer, unparalleled amount of stuff AI can generate. And that the quality of said stuff depends greatly on the quality of what it consumes.
I disagree. In my opinion, AI is going to accelerate the move back to closed source, proprietary software.
In the past, people released open source code for many different reasons. For me, it was about making my life or the life of the next person easier. Why should I put in the effort to release anything now, AI can do that, no? Other people released code with the idea of possibly amortizing support cost. AI can either do the support and I don’t need to release, or the AI can’t do the support but people are going to throw glop at me and I don’t want to release. Or you have something like a proprietary driver (the whole start of Stallman’s crusade) that AI can simply recreate from scratch as new information comes in.
A whole bunch (if not the majority) of incentives to create open source software are being completely nullified by the existence of AI. AI especially poisons the gift economy–you can have no expectation that you will ever get even a vague general reciprocity from the system in the future anymore. If everything released goes into the AI maw to be regurgitated for everybody, that economic system sounds an awful lot like communism, and we know exactly what the failure modes of that were.
That means the economic value now flows to the things that the AI cannot do. And if I have written code that the AI cannot write, or compiled a database that the AI cannot compile, releasing it means that my benefit is very vaguely positive to mostly negative, while the benefit to the gajillionaires running AI companies is all upside. It has been shown over and over that people will reject unfair benefit splits even if accepting would benefit themselves somehow. Once that unfair split becomes generally obvious, open-type contributions will crash across the board. (And not just in programming! I foresee a lot of the “online artist communities” leaving online spaces and going back to in-person or walled-garden groups which can police “fairness”.)