The proportion of long-termists in effective altruism expressing confident convictions that the timeline for smarter-than-human AI is much shorter than previously predicted has been increasing at an accelerating rate over the last year. This appears to be a shift in perspective among several hundred long-termists. Yet among the dozens whose opinions I've read, concrete numbers are almost never provided.

I don't closely follow the main AI forecasting platforms, so there may well be models or predictions that lay out quantitative estimates for these timelines. Please comment or answer with such a resource if you're aware of one.

Otherwise, judging by the way different people talk about it, I wouldn't be surprised if their timelines were 10-20 years, 5-10 years, or even 2-3 years. I've talked to others who are also concerned and open-minded about one short AI timeline or another, but who haven't done the research themselves yet, or had much opportunity to learn from those who have. We want to understand better, but the basic information crucial to that understanding, such as numbers attached to different models or timelines, isn't being presented. What are the numbers?


Disclaimer: Be careful about definitions when interpreting Metaculus questions. Their resolution criteria for defining AGI do not align with my own (e.g., meeting the named criteria would not mean AI could replace all human tasks). Also, recent developments have drawn in an inflow of additional forecasters, which should be factored in.

I've listed some of my current sources below. I hope this helps!

  • Metaculus forecasts: 
  • Other:
  • Shane Legg (DeepMind Co-founder): 50%: 2030, some chance in next 10-30y
  • Demis Hassabis: 10-20y from now
  • Eliezer & Paul’s IMO challenge bet: “Paul at <8%, Eliezer at >16% for AI made before the IMO is able to get a gold (under time controls etc. of grand challenge) in one of 2022-2025. Separately, we have Paul at <4% of an AI able to solve the "hardest" problem under the same conditions.”
    • Eliezer: 
      “My probability is at least 16% [on the IMO grand challenge falling], though I'd have to think more and Look into Things, and maybe ask for such sad little metrics as are available before I was confident saying how much more.”
    • Paul Christiano: 
      "I'd put 4% on "For the 2022, 2023, 2024, or 2025 IMO an AI built before the IMO is able to solve the single hardest problem"
      I think the IMO challenge would be significant direct evidence that powerful AI would be sooner, or at least would be technologically possible sooner. I think this would be fairly significant evidence, perhaps pushing my 2040 TAI probability up from 25% to 40% or something like that."
  • Ajeya's "When required computation may be affordable" (from ACT):
    • Ajeya created a function to evaluate the predicted annual investments on giant AI projects (upward sloping curve), vs. the likely cost of training a human-level AI (downward sloping curve).
    • Eventually, these curves meet, representing the first trained human-level AI.
    • You can play around with the spreadsheet here.
    • Ajeya’s weights across anchors:
      • 20% neural net, short horizon
      • 30% neural net, medium horizon
      • 15% neural net, long horizon
      • 5% human lifetime as training data
      • 10% evolutionary history as training data
      • 10% genome as parameter number
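The crossover logic in Ajeya's model can be sketched in a few lines: project the maximum affordable spend on a single training run (rising) against the estimated cost of training a human-level model (falling), and find the first year the curves meet. This is only a toy illustration; every parameter below is a made-up placeholder, not one of Ajeya's actual estimates, which live in the linked spreadsheet.

```python
def crossover_year(start_year=2025,
                   budget0=1e8,        # assumed initial max training budget ($)
                   budget_growth=1.3,  # assumed annual growth factor for budgets
                   cost0=1e12,         # assumed initial cost of a human-level training run ($)
                   cost_decline=0.7):  # assumed annual cost multiplier (hardware + algorithmic progress)
    """Return the first year projected budget meets projected cost, or None
    if the curves don't cross within a century. Illustrative only."""
    budget, cost = budget0, cost0
    for year in range(start_year, start_year + 100):
        if budget >= cost:
            return year
        budget *= budget_growth
        cost *= cost_decline
    return None

print(crossover_year())  # with these placeholder numbers, prints 2040
```

The interesting behavior is how sensitive the crossover is to the two growth rates, which is why the report expresses its bottom line as a probability distribution over anchors rather than a single curve.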


Thanks for putting in the effort. This is helpful information. I've got a few clarifying questions, though please don't feel obliged to answer them if you don't have the time or a sense of the answer. You've already helped a lot, and I can search for answers elsewhere if need be.

  1. To summarize:

a. The low/near end of the predicted distribution of the timeline for AI being at or near general intelligence is roughly 5 years out.

b. The median prediction for when super-human AGI will be achieved, i.e., ‘transformative AI,’ or ‘superintelligence,’ is...