In recent months, the CEOs of leading AI companies have grown increasingly confident about rapid progress:
* OpenAI's Sam Altman: Shifted from saying in November "the rate of progress continues" to declaring in January "we are now confident we know how to build AGI".
* Anthropic's Dario Amodei: Stated in January "I'm more confident than I've ever been that we're close to powerful capabilities... in the next 2-3 years".
* Google DeepMind's Demis Hassabis: Changed from "as soon as 10 years" in autumn to "probably three to five years away" by January.
What explains the shift? Is it just hype? Or could we really have Artificial General Intelligence (AGI)[1] by 2028?
In this article, I look at what's driven recent progress, estimate how much further those drivers can go, and explain why they're likely to continue for at least four more years.
In particular, while progress in LLM chatbots seemed to slow in 2024, a new approach started to work: using reinforcement learning to teach the models to reason.
In just a year, this let them surpass human PhDs at answering difficult scientific reasoning questions, and achieve expert-level performance on one-hour coding tasks.
We don't know how capable AGI will become, but extrapolating the recent rate of progress suggests that by 2028 we could have AI models with beyond-human reasoning abilities and expert-level knowledge in every domain, able to autonomously complete multi-week projects. Progress would likely continue from there.
On this set of software engineering and computer-use tasks, AI in 2020 could only complete tasks that would typically take a human expert a couple of seconds. By 2024, that had risen to almost an hour. If the trend continues, by 2028 it will reach several weeks.
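As a sanity check on that extrapolation, here's a minimal back-of-the-envelope sketch in Python. The 2020 and 2024 anchors come from the trend described above; pinning "a couple of seconds" to 5 seconds and "almost an hour" to 60 minutes are my own assumptions:

```python
import math

# Anchors from the trend above; the exact values are assumptions.
t_2020 = 5.0      # "a couple of seconds", in seconds
t_2024 = 3600.0   # "almost an hour", in seconds
years_elapsed = 4.0

# Implied doubling time, if growth was exponential over 2020-2024.
doublings = math.log2(t_2024 / t_2020)                 # ~9.5 doublings
doubling_time_months = 12 * years_elapsed / doublings  # ~5 months

# Extrapolate the same growth factor four more years, to 2028.
t_2028 = t_2024 * (t_2024 / t_2020)                    # seconds
weeks_2028 = t_2028 / (3600 * 24 * 7)

print(f"Implied doubling time: {doubling_time_months:.1f} months")
print(f"Projected 2028 task length: ~{weeks_2028:.1f} weeks")
```

Under these assumptions, the trend implies a doubling time of roughly five months, and four more years of the same growth lands at about four weeks per task, consistent with the "several weeks" claim.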
No longer mere chatbots, these 'agent' models might soon satisfy many people's definitions of AGI — roughly, AI systems that match human performance at most knowledge work (see definition in footnote).
This means that, while the company leaders are probably overoptimistic, there's enough underlying progress that their forecasts can't be dismissed as mere hype.
A random idea for how this film could end while explicitly promoting existential risk awareness:
Toby Ord reading a table out loud sounds like a bridge too far, but it's not uncommon for movies to end with a link to some relevant real-world resource. If I knew the people behind this movie (I don't) and thought there might be time to change it (no idea), I'd probably advocate for something like this (many ways to improve the wording, I'm sure) before the credits:
This film isn't based on a true story. But it may become one.
Learn about risks to humanity, and how you can help:
theprecipice.com
(More realistically, if I did have an in, I'd ask people like Toby Ord what message they'd want millions of random viewers to see.)
I could imagine them interviewing Toby Ord for a mockumentary, like Death to 2020.
beautiful
How much would it cost to influence the film to make this happen?
I don't know; I doubt it's a problem where throwing money at it is the right answer. In any case, it's unclear to me whether doing this would actually be net positive. I imagine it would be quite controversial, even among EAs who are into longtermism. I just shared the idea because I thought it was interesting, not because I necessarily thought it was good.
Yeah, I agree that money is not the bottleneck. I think the strongest bottleneck is decision quality on whether this is a good idea, and a secondary bottleneck is whether our Hollywood contacts are good enough to make this happen, conditional on us deciding it's actually a good idea.
Do you have a story for why this could be a bad idea?
Having popular presentations of our ideas in an unnuanced form may either a) give the impression that our ideas are bad/silly/unnuanced, or b) make them seem low-status, akin to how a lot of AI safety efforts are/were rounded off as "Terminator" scenarios.
Any predictions on whether the film will seem to be positive from an existential-risk-reducing perspective or not?
Or perhaps more constructively, what possible features of the film could make it seem positive from an xrisk perspective? Which possible features could make it seem negative?
Possible Positives:
Possible Negatives:
It might give people language to describe their experiences. Like "when I watched this movie, it was just like how it was before Covid: people were either really scared or just laughed it off! I see people doing the same thing when it comes to [other risk]"
The premise of the film Seeking a Friend for the End of the World (2012) is that a 70-mile-wide asteroid is three weeks away from striking Earth, and the final mission to stop it has failed. This is taken as inevitable and accepted by the characters in the film. The film ends with the Earth being destroyed, implying human extinction.
I'll be looking forward to seeing if/how they deal with the aftermath of the impact, and specifically with the agricultural collapse that would ensue, which is probably the most severe consequence of an asteroid/comet impact.
I just started watching it. My thoughts so far:
One thing that's not believable in the movie is that the media barely reacts to the two scientists' message when they break it to the New York Herald and the talk show; instead, there are just memes making fun of them. In 2020, social media was buzzing with memes about Trump's assassination of Soleimani starting WWIII; you'd think there'd be at least a similarly sized reaction to a warning that A GIANT COMET IS ABOUT TO STRIKE EARTH AND WE'RE ALL GONNA DIE.
Also, goddammit, president, asking how much it will cost to stop the comet and bikeshedding with the scientists over whether it's 100% or 70% likely to hit. First of all, even if there's only a 10% chance that it will strike Earth, we should be trying to deflect it! Second, preventing an existential catastrophe that is certain to happen is worth at least the entire value of the world economy!
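To put a rough number on that first point: with impact probability p and value at stake V, the expected benefit of a deflection attempt is p·V. The 10% figure comes from the scenario above; using one year of gross world product (roughly $100 trillion) as a conservative stand-in for "the entire value of the world economy" is my own assumption:

$$
\mathbb{E}[\text{benefit}] = p \cdot V \approx 0.1 \times \$100\ \text{trillion} = \$10\ \text{trillion}
$$

That exceeds the cost of any plausible deflection mission by several orders of magnitude; for scale, NASA's real-world DART deflection test cost on the order of a few hundred million dollars.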
My impression was that in early 2020, there were a lot of serious-sounding articles in the news about how worries about Covid were distracting from the much bigger problem of the flu.
I think there could be some EA press written around this. I hope Toby Ord gets at least one interview out of it.