Three Scenarios for AI Progress
How will AI develop over the next few centuries? Three scenarios seem particularly likely to me: a fast takeoff to superintelligence within decades, a period of rapidly accelerating growth driven by Comprehensive AI Services (CAIS), and no takeoff at all, with human progress continuing much as it has.
To clarify my beliefs about AI timelines, I found it helpful to flesh out these concrete "scenarios" by answering a set of closely related questions about how transformative AI (TAI) might develop: how much compute it will require, who will build it, and when it will arrive.
The potentially useful insight here is that answering any one of these questions helps you answer the others. If massive compute is necessary, then TAI will be built by a few powerful governments or corporations, not by a diverse ecosystem of small startups. If TAI isn't achieved for another century, that changes which research agendas matter most today. Follow this exercise for a while and you end up with a handful of distinct scenarios, whose relative likelihoods and timelines you can then judge.
Here's my rough sketch of what each of these means. [Dumping a lot of rough notes here, which is why I'm posting this as a shortform.]
This is pretty rough around the edges, but these three scenarios seem like the key possibilities for the next few centuries that I can see at this point. For the hell of it, I'll give some very weak credences: 10% that we achieve superintelligence within decades, 25% that CAIS brings double-digit growth within a century or so, maybe 50% that human progress continues as usual for at least a few centuries, and (at least) 15% that what ends up happening looks nothing like any of these scenarios.
Very interested in hearing any critiques or reactions to these scenarios or the specific arguments within.
I like the intuitive analysis of the no-takeoff scenario, and I notice that I also haven't really imagined it as a concrete possibility before. More generally, I like that you've presented clearly distinct scenarios with explicit, coherent logic. Two thoughts that came to mind:
In the CAIS scenario, I also somehow expect the rapid growth and the delegation of economic and organizational work to AI to carry some weird risks, something like humanity getting pushed out of the economic ecosystem while many self-sustaining autonomous systems remain stuck in a stupid, lifeless revenue-maximizing loop. I couldn't really pinpoint a concrete x-risk scenario here, though.
Recursive self-improvement can also play out over long periods of time rather than producing a fast takeoff, especially if the early gains are much easier than the later ones (which seems more plausible if we think of AI capability development as driven mostly by computational improvements rather than algorithmic ones).
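A toy model makes this concrete (my own illustrative sketch, not something from the post): suppose capability $C$ feeds back into its own rate of improvement with diminishing returns,

$$\frac{dC}{dt} = k\,C^{\alpha}, \qquad 0 < \alpha < 1,$$

which integrates to $C(t) = \big((1-\alpha)\,k\,t + C_0^{\,1-\alpha}\big)^{1/(1-\alpha)}$, i.e. merely polynomial growth no matter how long the feedback loop runs. Exponential growth requires $\alpha = 1$, and a finite-time singularity (the classic fast-takeoff picture) requires $\alpha > 1$, i.e. later gains being easier than earlier ones. So whether recursive self-improvement implies a fast takeoff hinges entirely on the shape of the returns curve, consistent with the point above.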
Ah! Richard Ngo has just written something related to the CAIS scenario :)