In October 2018, I developed a series of questions on Metaculus about extinction events, spanning risks from nuclear war, bio-risk, climate change and geo-engineering, Artificial Intelligence, and nanotechnology failure modes. Since then, these questions have accrued nearly 2,000 predictions.

A catastrophe is defined as a reduction in the human population of at least 10% in any period of 5 years or less. (Near) extinction is defined as an event that reduces the human population by at least 10% within 5 years, and by at least 95% within 25 years.

Here's a summary of the results as they stand today. 

| Global catastrophic risk | Chance of catastrophe by 2100 | Chance of (near) extinction by 2100 |
|---|---|---|
| Nuclear war | 4.18% | 0.29% |
| Biotechnology or bioengineered pathogens | 4.18% | 0.17% |
| Artificial Intelligence failure modes | 3.99% | 1.88% |
| Climate change or geo-engineering | 1.71% | 0.02% |
| Nanotechnology failure modes | 0.57% | n/a |

These predictions are generated by aggregating forecasters' individual predictions based on their track records. Specifically, the predictions are weighted by a function of each forecaster's level of 'skill', where 'skill' is estimated from their relative performance on previously resolved questions (typically many hundreds).
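As a rough illustration (this is not Metaculus's actual algorithm, whose details are not public; the weighting scheme and log-odds pooling below are my own assumptions), a skill-weighted aggregate of probability forecasts might look like:

```python
import math

def skill_weighted_probability(forecasts, skills):
    """Pool probability forecasts, weighting each forecaster by a
    (hypothetical) skill score derived from past performance.
    Pooling is done as a weighted mean in log-odds space, a common
    choice for combining probabilities."""
    assert len(forecasts) == len(skills)
    total = sum(skills)
    pooled_log_odds = sum(
        (w / total) * math.log(p / (1 - p))
        for p, w in zip(forecasts, skills)
    )
    return 1 / (1 + math.exp(-pooled_log_odds))

# Three forecasters; the aggregate is pulled toward the
# higher-skill forecasters' estimates.
print(skill_weighted_probability([0.03, 0.05, 0.10], [3.0, 2.0, 0.5]))
```

With equal skill scores this reduces to the unweighted geometric mean of odds; higher-skill forecasters pull the aggregate toward their own estimates.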

If we assume that these events are independent, the predictions suggest that there's at least a 13.85% chance of catastrophe, and a 2.34% chance of (near) extinction by the end of the century. Admittedly, independence is likely to be an inappropriate assumption, since, for example, some catastrophes could exacerbate other global catastrophic risks. Moreover, the overall risks might be higher than these numbers suggest, given that there are other sources of global catastrophic risk besides the ones in the list.
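Under the independence assumption, the combined figure is 1 − ∏(1 − pᵢ) over the per-risk probabilities. A quick check using the (rounded) values from the table above reproduces the catastrophe figure exactly and the extinction figure to within rounding:

```python
# Per-risk probabilities by 2100, from the table above.
catastrophe = [0.0418, 0.0418, 0.0399, 0.0171, 0.0057]
extinction = [0.0029, 0.0017, 0.0188, 0.0002]  # nanotechnology: n/a

def any_event(probs):
    """P(at least one event occurs), assuming independence:
    1 minus the probability that none of them occur."""
    p_none = 1.0
    for p in probs:
        p_none *= 1.0 - p
    return 1.0 - p_none

print(f"{any_event(catastrophe):.2%}")  # ≈ 13.85%
print(f"{any_event(extinction):.2%}")   # ≈ 2.35% from the rounded inputs
```

The small gap on the extinction figure (2.35% here versus 2.34% in the text) is consistent with the post's number being computed from unrounded community predictions.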

Interestingly, the predictions indicate that although nuclear war and bioengineered pathogens are the risks most likely to result in a major catastrophe, an AI failure mode is by far the biggest source of extinction risk—it is at least six times more likely to cause extinction than the second most likely event to do so (namely, nuclear war).

Links to all the questions on which these predictions are based may be found here.


Comments

Thanks for sharing this summary! I think these questions and forecasts are a useful resource.

For anyone who wants to see more forecasts of existential risks (or similarly extreme outcomes), I made a database of all the ones I'm aware of. (People can also suggest additions to that. And it includes a link to these Metaculus forecasts.) And here's a short talk in which I introduce the database and give an overview of the importance and challenges of estimating existential risk.

You may very well already be aware of this (I didn't look at your linked post closely), but Elicit IDE has a "Search Metaforecast database" tool to search forecasts on several sites that may be helpful to your existential risk forecast database project. Here are the first 120 results for "existential risk."

Forum readers who are not frequently on Metaculus may be interested to know that long-term predictions on Metaculus are subject to a number of biases and internal validity issues, potentially more so than short-term questions there. For example, arguably the most important long-term question on Metaculus:

has comments like:

The optimal strategy on this question should be to assign the lowest possible probability. In this way, if humanity is not extinguished by 2100, as many points as possible will be awarded while if it is extinguished, no one will be interested in the outcome. Note: This is a pseudo humorous comment.

I think a nonzero number of predictors take these comments quite seriously, or are for other reasons fairly flippant about finding accurate answers to these long-term questions. Forum readers should therefore be extra careful before deferring blindly to Metaculus on such questions, and should rely more on other sources instead.

The strongest counterargument to my reasoning above might be something like "Metaculus is unusually public and quantitative as a platform. To the extent that Metaculus has visible errors, we may expect that other epistemic sources have other, potentially larger, invisible errors."(Analogy: the concept of "not even wrong" in science). I take this reasoning quite seriously but do not consider it overwhelming.

The reasoning in the comment you quoted is actually not very persuasive, because it's virtually certain that the user will be dead by 2100, that Metaculus won't exist by then, or that MIPs will have ceased to be valuable to them. Even the slightest concern for accuracy should trump the minuscule expected benefit from pursuing this alleged "optimal strategy". (Though I guess some would derive great pleasure from being able to truly say "I predicted that humanity had a 99% chance of surviving the century 80 years ago and, lo and behold, here we are, alive and kicking!").

Unfortunately, for questions with a shorter time horizon, that kind of argument may have some force. I feel ambivalent about discussing these issues, since I'm not sure how to balance the benefit of alerting others to the potential biases in Metaculus against the cost of exacerbating those biases, either by drawing attention to this strategy among predictors who hadn't considered it, or by creating the impression that other predictors are using it and thereby eroding the social norm to predict honestly. I guess one can try to emphasize that, at least with questions whose answers have social value, adopting the MIP-maximizing strategy when it is in conflict with accuracy should be seen as a form of defection and those who do it should feel bad about it.