[ Question ]

Are there superforecasts for existential risk?

by Alex HT · 1 min read · 7th Jul 2020 · 13 comments


The closest thing I could find was the Metaculus Ragnarök Question Series, but I'm not sure how to interpret it because:

  • The answers seem inconsistent: e.g. a 1% chance of >95% of humans being killed by 2100, but a 2% chance of humans going extinct by 2100. Since extinction entails >95% of humans being killed, the first probability should be at least as large as the second (see the note after this list). Maybe this isn't all that problematic, but I'm not sure
  • The incentives for accuracy seem weird. These questions don't resolve until 2100, and, if there is a catastrophe, nobody will care about their Brier score. Again, this might not be a problem, but I'm not sure
  • The 'community prediction' (the median) was much higher than the 'Metaculus prediction' (some weighted combination of each user's prediction). Is that because more accurate forecasters were less worried about existential risk, or because whatever makes someone a good near-term forecaster also leads them to underestimate existential risk?
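
A quick way to state the inconsistency in the first bullet (my framing, not the wording of the Metaculus questions): extinction entails that >95% of humans are killed, so coherent forecasts should satisfy

$$P(\text{humans extinct by }2100) \;\le\; P(\text{>95\% of humans killed by }2100),$$

whereas the quoted community numbers put the left-hand side (2%) above the right-hand side (1%).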

Related: here's a database of existential risk estimates, and here's a list of AI-risk prediction market question suggestions.

I wonder whether existential risk questions would be better estimated by a smaller group of forecasters than by a prediction market or a platform like Metaculus (for the above reasons and others).

3 Answers

Yes, but it is hard, and the resulting forecasts don't work well. They can, however, be done at least slightly better.

Good Judgement was asked to forecast the risk of a nuclear war in the next year, which helps somewhat with the time-frame question. Unfortunately, the Brier score incentives are still really weak.
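
For context (my addition, not part of the original answer): the Brier score for a single binary question is just the squared error of the stated probability,

$$\text{Brier} = (f - o)^2, \qquad o \in \{0, 1\},$$

where f is the forecast and o the eventual outcome. For a question that only resolves in 2100, that feedback arrives decades late, if at all, so it provides almost no incentive to forecast carefully today.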

Ozzie Gooen and others have talked a lot about how to make forecasting better. Some of his suggestions relate to how to forecast longer-term questions. I can't find a link to a public document, but here's one example (which may have been someone else's suggestion):

You ask people to forecast what probability people will assign in 5 years to the question "will there be a nuclear war by 2100?" (You might also ask whether there will be a nuclear war in the next 5 years, of course.) With this trick, the question(s) resolve in 5 years, and the answer approximates the long-term probability by iterated expectation. Extending this, you can also have them predict what probability people will assign in 5 years to the probability they will assign in another 5 years to the question "will there be a nuclear war by 2100?", and by chaining predictions like this you can transform very long-term questions into a series of shorter-term questions.
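
A sketch of why this works (my formalisation, not something from the original comment): if the probability the community assigns in 5 years is itself well calibrated, then today's best forecast of that future probability equals today's probability of the underlying event, by the law of iterated expectations:

$$\mathbb{E}_t\!\left[p_{t+5}\right] \;=\; \mathbb{E}_t\!\left[\mathbb{E}_{t+5}\!\left[\mathbf{1}_{\text{war by }2100}\right]\right] \;=\; \mathbb{E}_t\!\left[\mathbf{1}_{\text{war by }2100}\right] \;=\; p_t,$$

where $p_{t+k}$ is the probability assigned at time $t+k$ to "nuclear war by 2100". Chaining this gives a sequence of 5-year questions whose resolutions track the long-horizon probability, at the cost of assuming the intermediate forecasts stay roughly calibrated.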

There is other work in this vein, but to simplify, all of it takes the form "can we do something clever to slightly reduce the issues with the fundamentally hard problem of getting short-term answers to long-term questions?" As far as I can see, there aren't any simple answers.

Thanks for writing this. I've had similar questions myself.

I think the incentives issue here is a big one. One way I've wondered about addressing it is to find a bunch of people who forecast really well and whose judgments are not substantially affected by forecasting incentives. Then have them forecast risks. Might that work, and has anyone tried it?