(Crossposted to LessWrong)
Abstract
The linked paper is our submission to the Open Philanthropy AI Worldviews Contest. In it, we estimate the likelihood of transformative artificial general intelligence (AGI) by 2043 and find it to be <1%.
Specifically, we argue:
- The bar is high: AGI as defined by the contest (something like AI that can perform nearly all valuable tasks at human cost or less), which we will call transformative AGI, is a much higher bar than merely massive progress in AI, or even the unambiguous attainment of expensive superhuman AGI or cheap but uneven AGI.
- Many steps are needed: The probability of transformative AGI by 2043 can be decomposed as the joint probability of a number of necessary steps, which we group into categories of software, hardware, and sociopolitical factors.
- No step is guaranteed: For each step, we estimate a probability of success by 2043, conditional on prior steps being achieved. Many steps are quite constrained by the short timeline, and our estimates range from 16% to 95%.
- Therefore, the odds are low: Multiplying the cascading conditional probabilities together, we estimate that transformative AGI by 2043 is 0.4% likely. Reaching >10% seems to require probabilities that feel unreasonably high, and even 3% seems unlikely.
Thoughtfully applying the cascading conditional probability approach to this question yields lower probability values than is often supposed. This framework helps enumerate the many future scenarios where humanity makes partial but incomplete progress toward transformative AGI.
Executive summary
For AGI to do most human work for <$25/hr by 2043, many things must happen.
We forecast cascading conditional probabilities for 10 necessary events, and find they multiply to an overall likelihood of 0.4%:
| Event | Forecast by 2043, conditional on prior steps |
| --- | --- |
| We invent algorithms for transformative AGI | 60% |
| We invent a way for AGIs to learn faster than humans | 40% |
| AGI inference costs drop below $25/hr (per human equivalent) | 16% |
| We invent and scale cheap, quality robots | 60% |
| We massively scale production of chips and power | 46% |
| We avoid derailment by human regulation | 70% |
| We avoid derailment by AI-caused delay | 90% |
| We avoid derailment from wars (e.g., China invades Taiwan) | 70% |
| We avoid derailment from pandemics | 90% |
| We avoid derailment from severe depressions | 95% |
| Joint odds | 0.4% |
If you think our estimates are pessimistic, feel free to substitute your own here. You’ll find it difficult to arrive at odds above 10%.
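If it helps, here is a minimal sketch of that multiplication in Python. The event labels are abbreviated, the probabilities are ours from the table above, and the last line shows roughly what each step would need to average for the joint odds to reach 10%:

```python
# Cascading conditional forecasts from the table above.
# Substitute your own estimates and re-run.
forecasts = {
    "algorithms for transformative AGI": 0.60,
    "AGIs learn faster than humans": 0.40,
    "inference costs below $25/hr": 0.16,
    "cheap, quality robots at scale": 0.60,
    "massively scaled chips and power": 0.46,
    "no derailment by regulation": 0.70,
    "no derailment by AI-caused delay": 0.90,
    "no derailment by wars": 0.70,
    "no derailment by pandemics": 0.90,
    "no derailment by severe depressions": 0.95,
}

joint = 1.0
for probability in forecasts.values():
    joint *= probability

print(f"Joint odds: {joint:.1%}")  # ~0.4%
print(f"Average per-step odds needed for 10%: {0.10 ** (1 / len(forecasts)):.0%}")  # ~79%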
Of course, the difficulty is by construction. Any framework that multiplies ten probabilities together is almost fated to produce low odds.
So a good skeptic must ask: Is our framework fair?
There are two possible errors to beware of:
- Did we neglect possible parallel paths to transformative AGI?
- Did we hew toward unconditional probabilities rather than fully conditional probabilities?
We believe we are innocent of both sins.
Regarding failing to model parallel disjunctive paths:
- We have chosen generic steps that don’t make rigid assumptions about the particular algorithms, requirements, or timelines of AGI technology
- One opinionated claim we do make is that transformative AGI by 2043 will almost certainly be run on semiconductor transistors powered by electricity and built in capital-intensive fabs, and we spend many pages justifying this belief
Regarding failing to really grapple with conditional probabilities:
- Our conditional probabilities are, in some cases, quite different from our unconditional probabilities. In particular, we assume that a world on track to transformative AGI will…
  - Construct semiconductor fabs and power plants at a far faster pace than today (our unconditional probability is substantially lower)
  - Have invented very cheap and efficient chips by today’s standards (our unconditional probability is substantially lower)
  - Have higher risks of disruption by regulation
  - Have higher risks of disruption by war
  - Have lower risks of disruption by natural pandemic
  - Have higher risks of disruption by engineered pandemic
Therefore, for the reasons above—namely, that transformative AGI is a very high bar (far higher than “mere” AGI) and many uncertain events must jointly occur—we are persuaded that the likelihood of transformative AGI by 2043 is <1%, a much lower number than we otherwise intuit. We nonetheless anticipate stunning advancements in AI over the next 20 years, and forecast substantially higher likelihoods of transformative AGI beyond 2043.
For details, read the full paper.
About the authors
This essay is jointly authored by Ari Allyn-Feuer and Ted Sanders. Below, we share our areas of expertise and track records of forecasting. Of course, credentials are no guarantee of accuracy. We share them not to appeal to our authority (plenty of experts are wrong), but to suggest that if it sounds like we’ve said something obviously wrong, it may merit a second look (or at least a compassionate understanding that not every argument can be explicitly addressed in an essay trying not to become a book).
Ari Allyn-Feuer
Areas of expertise
I am a decent expert in the complexity of biology and using computers to understand biology.
- I earned a Ph.D. in Bioinformatics at the University of Michigan, where I spent years using ML methods to model the relationships between the genome, epigenome, and cellular and organismal functions. At graduation I had offers to work in the AI departments of three large pharmaceutical and biotechnology companies, plus a biological software company.
- I have spent the last five years as an AI Engineer, later Product Manager, and now Director of AI Product in the AI department of GSK, an industry-leading AI group that uses cutting-edge methods and hardware (including Cerebras units and work with quantum computing), is connected with leading academics in AI and the epigenome, and is particularly engaged in reinforcement learning research.
Track record of forecasting
While I don’t have Ted’s explicit formal credentials as a forecaster, I’ve issued some pretty important public correctives of then-dominant narratives:
- I said in print on January 24, 2020 that due to its observed properties, the then-unnamed novel coronavirus spreading in Wuhan, China, had a significant chance of promptly going pandemic and killing tens of millions of humans. It subsequently did.
- I said in print in June 2020 that it was an odds-on favorite for mRNA and adenovirus COVID-19 vaccines to prove highly effective and be deployed at scale in late 2020. They subsequently did and were.
- I said in print in 2013, when the Hyperloop proposal was released, that the technical approach of air bearings in overland vacuum tubes on scavenged rights of way wouldn’t work. Subsequently, despite having insisted these elements would work and having spent millions of dollars on them, every Hyperloop company abandoned all three, and development of Hyperloops has largely ceased.
- I said in print in 2016 that Level 4 self-driving cars would not be commercialized or near commercialization by 2021 due to the long tail of unusual situations, when several major car companies said they would. They subsequently were not.
- I used my entire net worth and borrowing capacity to buy an abandoned mansion in 2011, and sold it seven years later for five times the price.
Luck played a role in each of these predictions, and I have also made other predictions that didn’t pan out as well, but I hope my record reflects my decent calibration and genuine open-mindedness.
Ted Sanders
Areas of expertise
I am a decent expert in semiconductor technology and AI technology.
- I earned a PhD in Applied Physics from Stanford, where I spent years researching semiconductor physics and the potential of new technologies to beat the 60 mV/dec limit of today's silicon transistor (e.g., magnetic computing, quantum computing, photonic computing, reversible computing, negative capacitance transistors, and other ideas). These years of research inform our perspective on the likelihood of hardware progress over the next 20 years.
- After graduation, I had the opportunity to work at Intel R&D on next-gen computer chips, but instead, worked as a management consultant in the semiconductor industry and advised semiconductor CEOs on R&D prioritization and supply chain strategy. These years of work inform our perspective on the difficulty of rapidly scaling semiconductor production.
- Today, I work on AGI technology as a research engineer at OpenAI, a company aiming to develop transformative AGI. This work informs our perspective on software progress needed for AGI. (Disclaimer: nothing in this essay reflects OpenAI’s beliefs or its non-public information.)
Track record of forecasting
I have a track record of success in forecasting competitions:
- Top prize in SciCast technology forecasting tournament (15 out of ~10,000, ~$2,500 winnings)
- Top Hypermind US NGDP forecaster in 2014 (1 out of ~1,000)
- 1st place Stanford CME250 AI/ML Prediction Competition (1 of 73)
- 2nd place ‘Let’s invent tomorrow’ Private Banking prediction market (2 out of ~100)
- 2nd place DAGGRE Workshop competition (2 out of ~50)
- 3rd place LG Display Futurecasting Tournament (3 out of 100+)
- 4th Place SciCast conditional forecasting contest
- 9th place DAGGRE Geopolitical Forecasting Competition
- 30th place Replication Markets (~$1,000 winnings)
- Winner of ~$4,200 in the 2022 Hybrid Persuasion-Forecasting Tournament on existential risks (told only that my ranking was “quite well”)
Each finish resulted from luck alongside skill, but in aggregate I hope my record reflects my decent calibration and genuine open-mindedness.
Discussion
We look forward to discussing our essay with you in the comments below. The more we learn from you, the more pleased we'll be.
If you disagree with our admittedly imperfect guesses, we kindly ask that you supply your own preferred probabilities (or framework modifications). It's easier to tear down than build up, and we'd love to hear how you think this analysis can be improved.
I would guess that more or less anything done by current ML can be done by ML from 2013 but with much more compute and fiddling. So it's not at all clear to me whether existing algorithms are sufficient for AGI given enough compute, just as it wasn't clear in 2013. I don't have any idea what makes this clear to you.
Given that I feel like compute and algorithms mostly trade off, hopefully it's clear why I'm confused about what the 60% represents. But I'm happy for it to mean something like: it makes sense at all to compare AI performance vs brain performance, and expect them to be able to solve a similar range of tasks within 5-10 orders of magnitude of the same amount of compute.
If 60% is your estimate for "possible with any amount of compute," I don't know why you think that anything is taking a long time. We just don't get to observe how easy problems are if you have plenty of compute, and it seems increasingly clear that weak performance is often explained by limited compute. In fact, even if 60% is your estimate for "doable with similar compute to the brain," I don't see why you are updating from our failure to do tasks with orders of magnitude less compute than a brain (even before considering that you think individual neurons are incredibly potent).
I still don't fully understand the claims being made in this section. I guess you are saying that there's a significant chance that the serial time requirements will be large and that will lead to a large delay? Like maybe you're saying something like: a 20% chance that it will add >20 years of delay, a 30% chance of 10-20 years of delay, a 40% chance of 1-10 years of delay, a 10% chance of <1 year of delay?
In addition to not fully understanding the view, I don't fully understand the discussion in this section or why it's justifying this probability. It seems like if you had human-level learning (as we are conditioning on from sections 1+3) then things would probably work in <2 years unless parallelization is surprisingly inefficient. And even setting aside the comparison to humans, such large serial bottlenecks aren't really consistent with any evidence to date. And setting aside the concrete details, you are already assuming we have truly excellent algorithms, so there are lots of ways people could succeed. So I don't buy the number, but that may just be a disagreement.
You seem to be leaning heavily on the analogy to self-driving cars, but I don't find that persuasive: you've already postulated multiple reasons why you shouldn't expect them to have worked so far. Moreover, the difficulties there also just don't seem very similar to the kind of delay from serial time you are positing here; they seem much more closely related to "man, we don't have algorithms that learn anything like humans."
I think I've somehow misunderstood this section.
It looks to me like you are trying to estimate the difficulty of automating tasks by comparing to the size of brains of animals that perform the task (and in particular human brains). And you are saying that you expect it to take about 1e7 flops for each synapse in a human brain, and then define a probability distribution around there. Am I misunderstanding what's going on here or is that a fair summary?
(I think my comment about GPT-3 = small brain isn't fair, but the reverse direction seems fair: "takes a giant human brain to do human-level vision" --> "takes 7 orders of magnitude larger model to do vision." If that isn't valid, then why is "takes a giant human brain to do job X" --> "takes 7 orders of magnitude larger model to automate job X" valid? Is it because you are considering the worst-case profession?)
I don't think I understand where your estimates come from, unless we are just disagreeing about the word "precise." You cite the computational cost of learning a fairly precise model of a neuron's behavior as an estimate for the complexity per neuron. You also talk about some low-level dynamics without trying to explain why they may be computationally relevant. And then you give pretty confident estimates for the useful computation done in a brain. Could you fill in the missing steps in that estimate a bit more, both for the mean (of 1e6 per neuron*spike) and for the standard deviation of the log (which seems to be about 1 order of magnitude)?
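For concreteness, here is the kind of arithmetic I imagine is behind such an estimate; the synapse count and per-synapse rate below are my own illustrative placeholders, not numbers taken from your paper:

```python
# Illustrative back-of-envelope only: how a per-synapse FLOP figure scales up
# to a whole-brain estimate. Both inputs are assumed placeholders.
synapses = 1e14                 # assumed human synapse count (~10^14)
flop_per_synapse_per_sec = 1e6  # assumed effective FLOP/s per synapse

brain_flop_per_sec = synapses * flop_per_synapse_per_sec
print(f"Implied brain compute: {brain_flop_per_sec:.0e} FLOP/s")  # 1e+20 with these inputs
```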
I think I misunderstood your claims somehow.
I think you are claiming that the brain does 1e20-1e21 flops of useful computation. I don't know exactly how you are comparing between brains and floating-point operations. A floating-point operation is more like 1e5 bit erasures today and is necessarily at least 16 bit erasures at fp16 (and your estimates don't allow for large precision reductions, e.g., to 1-bit arithmetic). Let's call it 1.6e21 bit erasures per second, I think quite conservatively?
I might be totally wrong about the Landauer limit, but I made this statement by looking at Wikipedia which claims 3e-21 J per bit erasure at room temperature. So if you multiply that by 1.6e21 bit erasures per second, isn't that 5 W, nearly half the power consumption of the brain?
Is there a mistake somewhere in there? Am I somehow thinking about this differently from you?
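Spelled out, the arithmetic I have in mind looks like this (the physical constants are standard; the erasure count is the conservative figure above):

```python
import math

# Landauer-limit sanity check of the estimate above.
k_B = 1.380649e-23          # Boltzmann constant, J/K
T = 300                     # room temperature, K
landauer_j = k_B * T * math.log(2)
print(f"Minimum energy per bit erasure: {landauer_j:.1e} J")  # ~2.9e-21 J

erasures_per_sec = 1.6e21   # 1e20 FLOP/s * 16 bit erasures per fp16 op
power_w = erasures_per_sec * landauer_j
print(f"Implied minimum power: {power_w:.1f} W")  # ~4.6 W, a large share of the brain's power budget
```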
I understand this, but the same objection applies for normal distributions being more than 0. Talking about conditional probabilities doesn't help.
Are you saying that e.g. a war between China and Taiwan makes it impossible to build AGI? Or that serial time requirements make AGI impossible? Or that scaling chips means AGI is impossible? It seems like each of these just makes it harder. These are factors you should be adding up. Some things can go wrong and you can still get AGI by 2043. If you want to argue you can't build AGI if something goes wrong, that's a whole different story. So multiplying probabilities (even conditional probabilities) for none of these things happening doesn't seem right.
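As a toy illustration of the difference (the delay per realized risk and the years of slack below are made-up numbers, chosen only to show the structure):

```python
import random

# Toy comparison: the same five "derailment" forecasts treated as hard gates
# versus as sources of delay. Delay size and slack are illustrative assumptions.
p_no_derail = [0.70, 0.90, 0.70, 0.90, 0.95]

# (a) Gates: every risk must be avoided outright.
gates = 1.0
for p in p_no_derail:
    gates *= p
print(f"All gates passed: {gates:.2f}")  # ~0.38

# (b) Delays: a realized risk adds time; AGI still arrives unless the total
#     delay exceeds the remaining slack before 2043.
random.seed(0)
slack_years, delay_per_risk, trials = 5, 2, 100_000
on_time = sum(
    sum(delay_per_risk for p in p_no_derail if random.random() > p) <= slack_years
    for _ in range(trials)
)
print(f"On time despite setbacks: {on_time / trials:.2f}")  # much higher than 0.38
```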
I don't know what the events in your decomposition refer to well enough to assign them probabilities.
I think this seems right.
In particular, it seems like some of your estimates make more sense to me if I read them as saying "Well there will likely exist some task that AI systems can't do." But I think such claims aren't very relevant for transformative AI, which would in turn lead to AGI.
By the same token, if the AIs were looking at humans they might say "Well there will exist some tasks that humans can't do" and of course they'd be right, but the relevant thing is the single non-cherry-picked variable of overall economic impact. The AIs would be wrong to conclude that humans have slow economic growth because we can't do some tasks that AIs are great at, and the humans would be wrong to conclude that AIs will have slow economic growth because they can't do some tasks we are great at. The exact comparison is only relevant for assessing things like complementarity, which make large impacts happen strictly more quickly than they would otherwise.
(This might be related to me disliking AGI though, and then it's kind of on OpenPhil for asking about it. They could also have asked about timelines to 100000x electricity production and I'd be making broadly the same arguments, so in some sense it must be me who is missing the point.)
That makes sense, and I'm ready to believe you have more calibrated judgments on average than I do. I'm also in the business of predicting a lot of things, but not as many and not with nearly as much tracking and accountability. That seems relevant to the question at hand, but still leaves me feeling very intuitively skeptical about this kind of decomposition.