AI risk discussion often seems to assume that the AI we most want to prepare for will emerge in a “normal” world — one that hasn’t really been transformed by earlier AI systems. 

I think betting on this assumption could be a big mistake. If it turns out to be wrong, most of our preparation for advanced AI could end up nearly worthless, or at least far less effective than it could have been. We might find ourselves wishing that we’d laid the groundwork for leveraging enormous political will, prepared for government inadequacy, figured out how to run large-scale automated research projects, and so on. Moreover, if earlier systems do change the background situation, influencing how that plays out could be one of our best levers for dealing with AI challenges overall, as it could leave us in a much better position for the “boss-battle” AI (an opportunity we’ll squander if we focus exclusively on the endgame).

So I really want us to invest in improving our views on which AI changes we should expect to see first. Progress here seems at least as important as improving our AI timelines (which get significantly more attention). And while it’s obviously not an easy question to answer with any degree of confidence — our predictions will probably be off in various ways — defaulting to expecting “present reality + AGI” seems (a lot) worse. 


I recently published an article on this with @Owen Cotton-Barratt and @Oliver Sourbut. It expands on these arguments, and also includes a list of possible early AI impacts, a few illustrative trajectories with examples of how our priorities might change depending on which one we expect, some notes on whether betting on an “AGI-first” path might be reasonable, and so on. (We’ll also have follow-up material up soon.)

I’d be very interested in pushback on anything that seems wrong; the perspective here informs a lot of my thinking about AI/x-risk, and might explain some of my disagreements with other work in the space. And you’d have my eternal gratitude if you shared your thoughts on the question itself.

[Figure: sketch illustrating this perspective — total AI-driven change vs. time. A lot of discussion assumes that we won't really move up the Y axis here until after some critical junctures.]
