Rate of progress
I think it is too simplistic to see the rate of progress as a single value. The rate at which different tasks get solved to better degrees will vary a lot from task to task, and making improvements will become more and more difficult. It also depends heavily on how well understood those tasks already are and how many free learning materials exist for them.
LLMs
Repeating the implementation of some well-documented existing algorithm isn’t difficult (being able to quickly find it is still useful). But these technologies are marketed as oracles, which irks me, because that claims more capability than they actually have, instead of as fuzzy, re-permutating search engines, which in many cases seems like a more accurate description of LLM results to me.
I haven’t really seen AI demonstrations where the AI produces genuinely new content well; it always requires pre-existing heaps of data, which are then essentially made searchable and permutable. What I haven’t seen demonstrated is something that actually understands the concepts in that material to a precise level and then uses that understanding to come up with new hypotheses, design experiments to test them, and effectively arrive at new thoughts that way.
Instead it always seems like we take the quality embedded in the learning material, lose at least a bit of that quality and precision, and then get a result.
So at least with LLM-style models, the biggest benefit seems to be searchability of knowledge (while sacrificing a little or a lot of precision).
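To make the “fuzzy search engine” framing concrete, here is a minimal sketch (the toy corpus, function names, and scoring are my own invention, not how any actual LLM works): retrieval by bag-of-words cosine similarity returns the closest *existing* text rather than producing new knowledge.

```python
from collections import Counter
from math import sqrt

# Toy stand-in for a heap of scraped training text (all entries made up).
corpus = [
    "binary search repeatedly halves a sorted array",
    "quicksort partitions the array around a pivot",
    "a hash table maps keys to buckets for fast lookup",
]

def bag_of_words(text):
    """Count word occurrences, ignoring order (hence 'fuzzy')."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def fuzzy_lookup(query):
    """Return the corpus entry most similar to the query."""
    q = bag_of_words(query)
    return max(corpus, key=lambda doc: cosine(q, bag_of_words(doc)))

print(fuzzy_lookup("how does binary search work on a sorted array"))
```

Nothing here understands binary search; it only finds the stored sentence that overlaps the query the most, which is the point being made above.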
Reasoning
The thing is, if it can’t improve the quality of the knowledge by actually reasoning about it (finding contradictions, logic errors, errors in formal reasoning), then the quality of the results won’t magically become better than what was originally in the training data; it is more likely to be worse.
Quality of training data
An additional problem is that with all the people posting ChatGPT results as their own answers in forums and comments, without any machine-readable tag identifying the text as LLM output, we will have more and more LLMs trained on the smoothed-out, blurred results of other LLMs.
I suspect this will make it more and more difficult to use website scraping (which seems to be the preferred method of building the big training sets) to find text where somebody has actually thought through the claims in what was written.
If more of the training input is hallucinated gibberish, indistinguishable from sentences actually formed by somebody with knowledge of the topic, future LLMs will get worse results, unless they fall back on older data sets that contain fewer hallucinations.
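The “smoothed-out/blurred” effect can be illustrated with a crude analogy (this is moving-average smoothing on a toy signal, my own example, not actual LLM training): each generation re-derived from the previous generation’s output loses detail, and the loss compounds.

```python
def smooth(signal):
    """One 'generation': replace each value with the average of its neighborhood."""
    return [
        sum(signal[max(0, i - 1): i + 2]) / len(signal[max(0, i - 1): i + 2])
        for i in range(len(signal))
    ]

def spread(signal):
    """Population variance: a rough proxy for how much detail survives."""
    mean = sum(signal) / len(signal)
    return sum((x - mean) ** 2 for x in signal) / len(signal)

# Detailed "original knowledge": sharp alternating structure.
data = [0.0, 9.0, 1.0, 8.0, 2.0, 7.0, 3.0, 6.0]
for generation in range(5):
    data = smooth(data)  # each pass trains on the previous pass's output

# After a few generations the sharp structure is mostly flattened away.
```

The variance of the signal shrinks every pass; once lost, the original detail cannot be recovered from the blurred copy, which is the worry about LLMs trained on LLM output.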
Filtering for quality
There will probably be techniques in the future to classify content better, or maybe to do actual reasoning that filters out meaningless noise (though I haven’t seen anything that actually demonstrates this).
But I think at some point you will hit a very difficult barrier where it is extremely hard to say what is useful signal versus mere noise, and I think these hard problems will stall out certain types of AI progress significantly.
Computability
I wouldn’t be surprised if overcoming some of those barriers even required things like quantum computers, so that you can tackle problems which are simply impossible to solve in reasonable time on classical computers.
So I think this topic fundamentally also has to address the question of different problems and their computability: how hard each problem is, what is required to solve it, and whether we can only create good approximations, or cannot even do that (intractability).
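The intractability point can be made concrete with a toy brute-force solver for subset-sum (my own example; subset-sum is one of the standard NP-complete problems): the obvious exact method checks every subset, so the work doubles with each added element.

```python
from itertools import combinations

def subset_sum(numbers, target):
    """Return a subset of numbers summing to target, or None.

    Brute force: enumerates all 2**n subsets in the worst case.
    """
    for size in range(len(numbers) + 1):
        for combo in combinations(numbers, size):
            if sum(combo) == target:
                return combo
    return None

# A tiny instance is instant...
print(subset_sum([3, 9, 8, 4, 5, 7], 15))
# ...but 20 items means about a million subsets, 40 items about a trillion:
# "reasonable time" on classical hardware runs out very quickly.
```

Better algorithms and approximations exist for many such problems, but for some of them no known classical method escapes this kind of exponential blow-up, which is what the paragraph above is pointing at.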
Applying appropriate techniques
And even if we can do specific things, the AI would have to apply different techniques depending on what makes sense for the problem at hand.
(I imagine we will get there eventually, but I wonder how much of it will be clever people teaching the AI when to do what, versus people creating some higher-level behavior loop that somehow results in the AI teaching itself eventually.)
Breakthroughs followed by plateaus
Personally I find it more likely that we will repeatedly see breakthroughs that create a surge of progress, each followed a bit later by a plateau where the existing techniques stall out, until somebody finds a new way to improve something.
Marketing and Hype
The way everybody is shouting that they will achieve AGI by such-and-such a date gives me the vibes of vaporware salesmen and scam artists hoping for quick investments before they ultimately do a rug-pull and disappear, instead of delivering anything that actually implements what they promised.
At least to me, it seems highly likely that many of those claims are just marketing ploys to trick people into investing, into hoping they will be part of a gold rush, when the gold was just placed in the river where they were given a tour.
tl;dr
I agree with the more conservative answers. I think AI will be useful, but a lot of this looks like over-promising a technology: either wanting to trick people into thinking it is more than it is, or being enthusiastic about it, extrapolating linearly, and thus hoping for a much bigger improvement than is likely without developing new or many specific techniques.