In recent months, the CEOs of leading AI companies have grown increasingly confident about rapid progress:
* OpenAI's Sam Altman: Shifted from saying in November "the rate of progress continues" to declaring in January "we are now confident we know how to build AGI"
* Anthropic's Dario Amodei: Stated in January "I'm more confident than I've ever been that we're close to powerful capabilities... in the next 2-3 years"
* Google DeepMind's Demis Hassabis: Went from estimating AGI was "as soon as 10 years" away in autumn to "probably three to five years away" by January
What explains the shift? Is it just hype? Or could we really have Artificial General Intelligence (AGI)[1] by 2028?
In this article, I look at what's driven recent progress, assess how much further those drivers can run, and explain why they're likely to hold for at least four more years.
In particular, while in 2024 progress in LLM chatbots seemed to slow, a new approach started to work: teaching the models to reason using reinforcement learning.
In just a year, this let them surpass human PhDs at answering difficult scientific reasoning questions, and achieve expert-level performance on one-hour coding tasks.
We don't know how capable AGI will become, but extrapolating the recent rate of progress suggests that by 2028 we could reach AI models with beyond-human reasoning abilities and expert-level knowledge in every domain, able to autonomously complete multi-week projects, with progress likely continuing from there.
On this set of software engineering & computer-use tasks, in 2020 AI could only complete tasks that would typically take a human expert a couple of seconds. By 2024, that horizon had risen to almost an hour. If the trend continues, by 2028 it will reach several weeks.
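As a rough illustration of the arithmetic behind that extrapolation, here's a minimal sketch in Python. The endpoints are the article's round figures (a couple of seconds in 2020, almost an hour in 2024), and the constant-doubling-time assumption is mine, so treat the output as showing the method rather than a precise forecast:

```python
import math

# Round endpoints from the text above (not exact benchmark values):
# ~2 seconds of human-expert work in 2020, ~1 hour (3,600 s) in 2024.
h_2020, h_2024 = 2.0, 3600.0
years_elapsed = 4

# Assume the task horizon grows exponentially: h(t) = h_0 * 2**((t - t_0) / d).
doublings = math.log2(h_2024 / h_2020)                  # ~10.8 doublings
doubling_time_months = 12 * years_elapsed / doublings   # ~4.4 months each

# Extrapolate the same rate four more years, to 2028.
h_2028 = h_2024 * 2 ** (12 * 4 / doubling_time_months)
print(f"implied doubling time: ~{doubling_time_months:.1f} months")
print(f"projected 2028 horizon: ~{h_2028 / 3600:.0f} hours of expert work")
```

Because the round endpoints imply an aggressive ~4.4-month doubling time, this sketch lands around 1,800 hours; with a slower doubling time of around seven months (closer to published estimates of this trend), the 2028 horizon comes out near 2^(48/7) ≈ 116 hours, i.e. roughly three 40-hour weeks, in line with the "several weeks" above.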
No longer mere chatbots, these 'agent' models might soon satisfy many people's definitions of AGI — roughly, AI systems that match human performance at most knowledge work (see definition in footnote).
This means that, while the compa
The original "Limits to Growth" report was produced in the early 1970s, amid an oil-price crisis and widespread fears of overpopulation and catastrophic environmental decline. (See also books like "The Population Bomb" from 1968.) These fears have mostly faded over time, as population growth has slowed in many countries and the worst environmental problems (like choking smog, acid rain, etc.) have been mitigated.
This new paper takes a 1972 computer model of the world economy (World3) and checks how well it matches current trends. The authors claim the match is pretty good, but they never actually plot the real-world data anywhere; they merely claim that the predicted values are within 20% of the real-world values. I suspect they avoided plotting the real-world data because doing so would make it more obvious that the real world is actually doing significantly better on every measure. Look at the model errors ("∆ value") in their Table 2.
So, compared to every World3-generated scenario (BAU, BAU2, etc.), the real world has:
- higher population, higher fertility, lower mortality (no catastrophic die-offs)
- more food and higher industrial output (yay!)
- higher overall human welfare and a lower ecological footprint (woohoo!)
The only areas where humanity ends up looking bad are pollution and "services per capita": the real world has more pollution and fewer services than the World3 model predicts. But on pollution, the goal-posts have been moved: instead of tracking the kinds of pollution people worried about in the 1970s (since those problems have mostly been fixed), the measure has been redefined to be about carbon dioxide driving climate change. Is climate change (which other economists and scientists predict will cut a mere 10% of GDP by 2100) really going to cause a total population collapse in the next couple of decades, just because some ad-hoc 1970s dynamical model says so? I doubt it. Meanwhile, the "services per capita" metric represents the fraction of global GDP spent on education and health -- perhaps it's bad that we're not spending more on education and health, or perhaps it's good that we're saving money on those things, but either way this doesn't seem like a harbinger of imminent collapse.
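To make the signed-error point concrete, here's a minimal sketch with made-up numbers (the convention ∆ = (predicted - observed) / observed is my assumption; the paper may define it differently). "Within 20%" treats overshooting and undershooting reality as equally good, when for an indicator like industrial output the sign is exactly what the comparison turns on:

```python
# Made-up values for one "good" indicator (say, industrial output); for a "bad"
# indicator like pollution, the interpretation of the sign flips.
observed = 100.0
scenarios = {"BAU": 85.0, "BAU2": 118.0}  # hypothetical model outputs

for name, predicted in scenarios.items():
    delta = (predicted - observed) / observed  # assumed "∆ value" convention
    verdict = "real world above prediction" if observed > predicted else "real world below prediction"
    print(f"{name}: ∆ = {delta:+.0%}, |∆| <= 20%: {abs(delta) <= 0.20}, {verdict}")
```

Both hypothetical scenarios pass the paper's 20% screen, yet they describe opposite worlds.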
Furthermore, the World3 model predicted that quantities like industrial output would rise steadily until one day suffering a sudden, unexpected collapse. This paper is effectively saying: "see, industrial output has risen steadily just as predicted... this confirms the model, so the collapse must be just around the corner!" This strikes me as ridiculous: so far the model has probably underperformed simple trend-extrapolation, which in my view means its predictions of dramatic, unprompted changes in the near future should be treated as close to worthless.
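As a sketch of what "underperformed simple trend-extrapolation" would mean operationally, here's the kind of baseline test I have in mind. All the numbers are made up (both the "observed" series and the stand-in model forecast are hypothetical; World3's actual outputs and real historical data would be needed to run the real comparison):

```python
import numpy as np

# Hypothetical industrial-output index by decade (made up, steadily growing).
years    = np.array([1950, 1960, 1970, 1980, 1990, 2000, 2010, 2020])
observed = np.array([10.0, 14.0, 20.0, 27.0, 38.0, 52.0, 71.0, 97.0])

# Naive baseline: fit an exponential trend to pre-1972 data only, extrapolate.
past = years <= 1970
slope, intercept = np.polyfit(years[past], np.log(observed[past]), 1)
trend_forecast = np.exp(intercept + slope * years[~past])

# Stand-in for a World3-style forecast over the same years (also made up).
model_forecast = np.array([26.0, 35.0, 46.0, 60.0, 80.0])

def mean_rel_err(forecast):
    """Mean absolute relative error against the held-out observations."""
    return np.mean(np.abs(forecast - observed[~past]) / observed[~past])

print(f"trend-extrapolation error: {mean_rel_err(trend_forecast):.1%}")
print(f"model error:               {mean_rel_err(model_forecast):.1%}")
```

If the naive baseline's error is lower, the model adds no predictive value over drawing a straight line on log paper, and its out-of-sample warnings of imminent collapse deserve correspondingly little weight.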