The closest thing I could find was the Metaculus Ragnarök Question Series, but I'm not sure how to interpret it because:
- The answers seem inconsistent (e.g. a 1% chance of >95% of humans being killed by 2100, but a 2% chance of humans going extinct by 2100, even though extinction entails killing >95% of humans; see the sketch after this list). Maybe this isn't all that problematic, but I'm not sure
- The incentives for accuracy seem weird. These questions don't resolve until 2100, and if there is a catastrophe, nobody will care about their Brier score. Again, this might not be a problem, but I'm not sure
- The 'community prediction' (the median) was much higher than the 'Metaculus prediction' (a weighted combination of individual users' predictions). Is that because more accurate forecasters were less worried about existential risk, or because whatever makes someone a good near-term forecaster also leads them to underestimate existential risk?
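
To spell out why the first bullet looks like an inconsistency rather than just a surprising pair of numbers: writing E for "humans go extinct by 2100" and K for ">95% of humans killed by 2100" (my notation, not Metaculus's), extinction entails killing more than 95% of humans, so any coherent set of probabilities has to satisfy

$$
E \subseteq K \;\Longrightarrow\; P(E) \le P(K), \qquad \text{whereas the quoted estimates give } P(E) = 0.02 > 0.01 = P(K).
$$

That said, different (and differently sized) pools of forecasters answer each question, so the aggregate can be incoherent even if every individual forecaster is coherent, which is part of why I'm unsure how much to read into it.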
Related: here's a database of existential risk estimates, and here's a list of AI-risk prediction market question suggestions.
I wonder whether questions about existential risk would be better estimated by a smaller group of forecasters, rather than by a prediction market or something like Metaculus (for the above reasons and others).
I'm not sure. I mentioned in a reply to that comment that I was unimpressed with the ability of existing "good" forecasters to think about low-probability and otherwise out-of-distribution problems. My guess is that they'd change their minds if "exposed" to all the arguments, and specifically end up with views very close to the median FHI view, where "exposed" means reading the existing arguments very carefully and putting a lot of careful thought into them. However, I think this is a very tough judgement call, and it does seem like the type of thing that would be really bad to get wrong!
My beliefs here are also tightly linked to my thinking that the median FHI view is more likely to be correct than Will's view, and it is a well-known bias that people think their own views are more common/correct than they actually are.