In recent months, the CEOs of leading AI companies have grown increasingly confident about rapid progress:
* OpenAI's Sam Altman: Shifted from saying in November "the rate of progress continues" to declaring in January "we are now confident we know how to build AGI"
* Anthropic's Dario Amodei: Stated in January "I'm more confident than I've ever been that we're close to powerful capabilities... in the next 2-3 years"
* Google DeepMind's Demis Hassabis: Went from saying AGI was "as soon as 10 years" away in autumn to "probably three to five years away" by January
What explains the shift? Is it just hype? Or could we really have Artificial General Intelligence (AGI)[1] by 2028?
In this article, I look at what's driven recent progress, assess how far those drivers can be pushed, and explain why they're likely to continue for at least four more years.
In particular, while progress in LLM chatbots seemed to slow in 2024, a new approach started to work: teaching the models to reason using reinforcement learning.
In just a year, this let them surpass human PhDs at answering difficult scientific reasoning questions, and achieve expert-level performance on one-hour coding tasks.
We don't know how capable AGI will become, but extrapolating the recent rate of progress suggests that by 2028 we could reach AI models with beyond-human reasoning abilities and expert-level knowledge in every domain, able to autonomously complete multi-week projects. Progress would likely continue from there.
On this set of software engineering & computer use tasks, in 2020 AI could only complete tasks that would typically take a human expert a couple of seconds. By 2024, that had risen to almost an hour. If the trend continues, by 2028 it'll reach several weeks.
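To make the extrapolation concrete, here's a back-of-the-envelope sketch. The 2020 and 2024 horizons are illustrative values I've assumed (a few seconds and roughly one hour), not the exact figures behind the chart, and the only modelling assumption is a constant doubling time:

```python
import math

# Assumed, illustrative task horizons (how long the tasks AI can complete
# would take a human expert), not the exact figures behind the chart.
horizon_2020_s = 5       # ~a couple of seconds in 2020
horizon_2024_s = 3600    # ~one hour in 2024

months_elapsed = 48      # 2020 -> 2024
doublings = math.log2(horizon_2024_s / horizon_2020_s)
doubling_time_months = months_elapsed / doublings
print(f"Implied doubling time: ~{doubling_time_months:.1f} months")  # ~5 months

# Extrapolate another 48 months (2024 -> 2028) at the same rate.
horizon_2028_s = horizon_2024_s * 2 ** (months_elapsed / doubling_time_months)
work_weeks = horizon_2028_s / 3600 / 40  # 40-hour working weeks
print(f"2028 horizon: ~{horizon_2028_s / 3600:.0f} hours (~{work_weeks:.0f} work weeks)")
```

On these assumptions the doubling time comes out at roughly five months, so four more years buys nine to ten further doublings, which is how "almost an hour" turns into hundreds of hours.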
No longer mere chatbots, these 'agent' models might soon satisfy many people's definitions of AGI — roughly, AI systems that match human performance at most knowledge work (see definition in footnote).
This means that, while the compa
Not really, or it depends on what kinds of rules the IAIA (an international AI agency along the lines of the IAEA) would set.
For monitoring large training runs and verifying compliance, see "Verifying Rules on Large-Scale Neural Network Training via Compute Monitoring" (Shavit 2023).
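To give a flavour of what "verifying rules" could mean mechanically, here's a toy sketch. It's my own illustration, not the paper's scheme (which is about tamper-resistant logging on the training chips themselves): estimate a run's total training compute with the standard ~6·N·D FLOP heuristic and check it against a hypothetical reporting threshold.

```python
# Toy illustration of the kind of rule compute monitoring could enforce:
# flag any training run whose estimated total FLOP crosses a reporting
# threshold. The ~6*N*D approximation is a standard heuristic, and the
# 1e26 threshold is a made-up value for illustration; neither is taken
# from the Shavit paper itself.

REPORTING_THRESHOLD_FLOP = 1e26  # hypothetical threshold

def estimated_training_flop(params: float, tokens: float) -> float:
    """~6 FLOP per parameter per training token (standard estimate)."""
    return 6 * params * tokens

def requires_report(params: float, tokens: float) -> bool:
    return estimated_training_flop(params, tokens) >= REPORTING_THRESHOLD_FLOP

# A hypothetical 1e12-parameter model on 2e13 tokens -> 1.2e26 FLOP
print(requires_report(1e12, 2e13))  # True
```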
There's also some sketching of how auditing with model evals could work in "Model evaluation for extreme risks" (DeepMind 2023).
For completeness, here's what OpenAI says in its "Governance of superintelligence" post:
It's interesting how OpenAI basically concedes, further down in the very same post, that this is a fruitless effort:
It's not hard to imagine compute eventually becoming cheap and fast enough to train GPT-4-class models on high-end consumer computers. How does one limit homebrewed training runs without restricting hardware that's also used for non-training purposes?
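As a rough sense-check on "eventually": every number below is an assumption (a public ~2e25 FLOP estimate for a GPT-4-class run, ~2e14 FLOP/s for a high-end consumer GPU at good utilization, and FLOP-per-dollar doubling every ~2.5 years), and algorithmic efficiency gains would likely shorten the answer considerably.

```python
import math

# All assumptions, for illustration only:
train_flop = 2e25          # ~GPT-4-class training run (public estimate)
consumer_flops = 2e14      # high-end consumer GPU at good utilization
seconds_per_year = 3.15e7

years_today = train_flop / consumer_flops / seconds_per_year
print(f"On one consumer GPU today: ~{years_today:,.0f} years")  # ~3,200 years

# How long until hardware trends make a one-year hobbyist run possible,
# assuming FLOP-per-dollar keeps doubling every ~2.5 years?
doubling_years = 2.5
target_years = 1.0
doublings_needed = math.log2(years_today / target_years)
print(f"Feasible in ~{doublings_needed * doubling_years:.0f} years")  # ~29 years
```

On those assumptions a single consumer GPU is thousands of years short today, but hardware trends alone close the gap within a few decades, before accounting for algorithmic progress.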
This doesn't point to detailed work in the space, but in "Nearcast-based 'deployment problem' analysis", Karnofsky writes:
And here's that footnote:
I don't have a link to the report itself, but Jason Hausenloy started some work on this a few months ago: https://youtu.be/1QY1L61TKx0