
I wanted to write a quick overview of overarching topics in global catastrophic and existential risk where we do not know much yet. Each of these topics deserves a lot of attention in its own right, and this is simply intended as a non-comprehensive overview. I use the term ‘hazard’ to indicate an event that could lead to adverse outcomes, and the term ‘risk’ to indicate the product of a hazard’s probability and its negative consequences. Although I believe not all uncertainties are of equal importance (some might be more important by orders of magnitude), I discuss them in no particular order. Furthermore, the selection of uncertainties reflects what has been at the forefront of my mind, not the 8 most important uncertainties.

1. Timelines

Existential risk is often discussed as ‘y% risk in the next 100 years [or some other timespan], conditional on no other catastrophic events’. However, risk is probably not equally distributed over time. For example, risks from climate change are larger in the future as global temperatures continue to rise. Assuming we can do a reasonable assessment of risk over time, comparing timelines of different hazards is important for cross-risk prioritization. After all, we should discount the risk of one hazard by the probability that another catastrophic event would occur first. For example, I hear many non-EAs say that ‘we shouldn’t worry about these futuristic risks such as AI, because the risk of catastrophe from climate change in the near term is very high’. On the other hand, we should also take into account the timeline of achieving civilizational invulnerability; if one believes superintelligence is nearly certain to arrive before 2100, one should heavily discount the post-2100 existential risk.

However, timelines by themselves only affect the risk of other hazards by a small factor. For example, even if the global catastrophic risk from climate change up to 2050 is 10%, that reduces the x-risk from AI after 2050 by only 10%.
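
To make this discounting concrete, here is a minimal sketch in Python. The numbers are made up purely for illustration and are not estimates of mine or anyone else’s:

```python
# Illustrative numbers only; both probabilities are assumptions for the sake of the example.
p_climate_collapse_by_2050 = 0.10   # assumed chance of a climate-driven global catastrophe before 2050
p_ai_xrisk_after_2050 = 0.20        # assumed AI x-risk after 2050, conditional on getting there intact

# The later risk is discounted by the chance that the earlier catastrophe happens first.
effective_ai_xrisk = (1 - p_climate_collapse_by_2050) * p_ai_xrisk_after_2050
print(effective_ai_xrisk)  # ~0.18: the headline 0.20 figure shrinks by only 10%
```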

2. Probability of recovery

Longtermism is unique in that it makes a big moral distinction between global collapse (i.e. the loss of critical infrastructure and of more than 50% of the world population) and existential catastrophes (e.g. extinction). In turn, a major argument in favour of a focus on emerging technologies is that the probability of recovery after global collapse is high or very high. However, not much research has been done on this (Cf. GCRI’s page for an exception). To me, it seems that people’s primary reason to believe recovery is probable is that humanity will have a lot of time: the Earth will remain habitable for a long time (100 mln. - 1 bln. years; ref) and the risk from natural hazards is low (Cf. Snyder-Beattie, Ord, Bonsall, 2019 for an upper bound on that risk).

However, not much research has been done on humanity’s expected lifespan after collapse, on how much of this period would be suitable for large-scale complex societies (e.g. how often the climate would be suitable for agriculture; cf. Baum et al., 2019), on how different catastrophes would affect the conditions for recovery, or on obstacles that a future humanity would face (e.g. limited resources for industrialization). A good rule of thumb seems to be ‘the later the collapse, the worse the prospects for humanity’ (cf. Luke Kemp), but how much worse is unclear. Furthermore, I believe the probability of recovery is sensitive to the type of collapse and to how the collapse influences the conditions for recovery. This means we should not speak of a single probability of recovery: it depends on one’s other judgments of which collapse scenarios are most likely.

Given the limited research available, I find confidence on this question unjustified.

3. Quality of recovery

Even more uncertain than the probability of recovery is the quality of recovery. My impression is that the standard view is ‘we can’t answer this question, so the epistemically responsible approach is to assume an expected value just as good/bad as our current trajectory, with a large underlying variance in possible outcomes.’

I believe it would be valuable to do research on this topic: some things could potentially be discovered by a diligent researcher. For example, a recovered global society might be less reliant on fossil fuels, reducing the pressures from climate change. On the other hand, a recovering society might re-invent weapons of mass destruction, and the early phase after the discovery of these weapons seems much riskier than the current situation.

4. Degree of fragility of society

Current EA thinking seems to apply a multi-hazard model of existential risk analysis: it looks at different hazards (nuclear war, pandemic, superintelligence, extreme climate change) and asks, for each hazard, ‘what’s the probability that this hazard will occur?’ and ‘given that it occurs, what is the probability of collapse, and of extinction?’ (Cf. p. 1-6 of my write-up for a more technical description of this model).
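
To show the structure of the multi-hazard model just described, here is a schematic sketch. The numbers are placeholders chosen only to make the code run, not anyone’s actual estimates:

```python
# Placeholder numbers purely to show the model's structure; they are not estimates.
hazards = {
    # name: (P(hazard), P(collapse | hazard), P(extinction | hazard))
    "nuclear war":            (0.05, 0.30, 0.010),
    "engineered pandemic":    (0.03, 0.20, 0.020),
    "superintelligence":      (0.10, 0.10, 0.300),
    "extreme climate change": (0.02, 0.25, 0.001),
}

# Sum over hazards: probability of each hazard times the conditional probability of the outcome.
p_collapse   = sum(p_h * p_c for p_h, p_c, _ in hazards.values())
p_extinction = sum(p_h * p_e for p_h, _, p_e in hazards.values())
print(f"P(collapse) = {p_collapse:.3f}, P(extinction) = {p_extinction:.3f}")
```

Note that this framing treats hazards independently and ignores interactions between them, which is part of what the rest of this section questions.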

However, this approach seems to assume a resilient global system in which only extreme events can lead to collapse or extinction. In practice, we don’t know how resilient society is. Complex dynamic systems can appear stable, only to fail radically and suddenly (e.g. the financial system in 2008). If society is actually fragile, a focus on hazards is misguided, and the focus should instead be on improving the resilience of the global system. On the other hand, if society is resilient, minor hazards would be unimportant, and major hazards would - aside from being the main source of collapse/extinction - be more likely to result only in global disruption. This leads to the next uncertainty.

5. Long-term effects of disruption

Within the hazard-focused models, attention is mostly given to the ‘direct’ effects: the likelihood that a hazard directly leads to collapse or existential catastrophe. However, if a nuclear war were to occur that does not lead to global collapse or extinction, it would still be a major event in human history. The ‘status quo trajectory’ would be massively disrupted: post-war power relations would change significantly, humanity would view global catastrophe as much more likely for the next decades, and many other complex consequences would follow (e.g. World War II plausibly contributed to the empowerment of women, which had large social consequences).

If a major hazard is much more likely to lead to global disruption than to collapse/extinction, and if global disruption has significant long-term effects on humanity’s trajectory, then a large fraction of the expected value of work on reducing global catastrophe comes from how that work affects the likelihood and effects of global disruption.

6. Expected value of the future

Work on existential risk is regularly motivated by the claim that the future would be tremendously valuable: extinction would be an ‘astronomical waste’. However, many people would disagree with this optimistic assumption. Arguments for the quality of the future rely on speculative considerations, such as that the expected value calculation is dominated by futures that are optimized for value or disvalue, or that other agents would do worse in expectation (Cf. Brauner & Grosse-Holz, section 2.1).

Furthermore, the option value of postponing extinction is limited (Brauner & Grosse-Holz (section 1.3), me). In addition, there is the consideration of ‘which world gets saved’: if we change the properties of the world to reduce extinction risk, we also affect the properties of a surviving world. In a similar vein, we might conclude that a surviving world has certain properties (e.g. some combination of technological maturity, wisdom, and coordination) given that there has not been an extinction event.

Further work on the value of the future seems valuable - I'd especially like to see an accessible piece geared towards people who believe the future is not clearly positive. It could either provide convincing reasoning that the future is likely to be valuable, or argue that work on GC-/X-risk reduction tends to be valuable regardless. Of course, opposing viewpoints are also very welcome.

7. Ways to achieve civilizational invulnerability

Arguably, the goal of existential risk reduction is to approach civilizational invulnerability so that a good future can be created. How to achieve this is a barely explored question, and there might be multiple ways to do so (Cf. Bostrom (2013, 2018) for discussion of technological maturity and the Vulnerable World Hypothesis). Potential strategies probably involve a combination of technological and non-technological innovation (e.g. cultural, legislative, and economic innovation). Some feasible strategies may lean heavily on technological innovation, while others could rely more heavily on non-technological innovation.

I am not sure whether research on this would uncover valuable information. One potentially promising line of research (suggested by Aaron Gertler) is the trade-off between x-risk reduction and the quality of the future (including how it affects the likelihood of suffering risks).

8. Other models or angles of existential risk & meta uncertainty

It is tempting to - implicitly or explicitly - construct a single model of hazards and probable consequences. However, the dominant model might be missing some important factors or highlight only a part of the problem space. Reality can be carved up in different ways, and it is good practice to view a problem from multiple angles. Different models of existential risk (including qualitative ones) could highlight aspects that are currently in our collective blind spots. For example, one could view all existential risks through the lens of agential risk (i.e. risks stemming from people’s intentional or unintentional behaviour; Cf. Torres, 2016), consider ‘boring apocalypses’ (Cf. Liu, Lauta, and Maas (2018); and Kuhleman (2018)), or use a structural classification of global catastrophic risk (Cf. Avin et al., 2018).

Lastly, meta uncertainty is uncertainty about what we are or should be uncertain about. As a case in point, this list is not comprehensive, and I hope others add their main uncertainties to it.

---

Thanks to Aaron Gertler for providing useful feedback on this post. Many of my views here crystallized during my summer visitorship at CSER sponsored by BERI. Feel free to contact me if you want to know more about what I call ‘comprehensive existential risk assessment’.

Comments

Related to #4, I have a paper under review, with a preprint HERE, discussing an aspect of fragility.

Title: Systemic Fragility as a Vulnerable World

Abstract: The possibility of social and technological collapse has been the focus of science fiction tropes for decades, but more recent focus has been on specific sources of existential and global catastrophic risk. Because these scenarios are simple to understand and envision, they receive more attention than risks due to complex interplay of failures, or risks that cannot be clearly specified. In this paper, we discuss a new hypothesis that complexity of a certain type can itself function as a source of risk. This "Fragile World Hypothesis" is compared to Bostrom's "Vulnerable World Hypothesis", and the assumptions and potential mitigations are contrasted.

The results of a small survey on the reduction of long-term future potential from agricultural catastrophes are in here, and the results of a small poll on the reduction of long-term future potential from catastrophes that disrupt electricity/industry are in here. I agree - lots of uncertainty.

  6. Expected value of the future

I just wanted to mention the possibility of so-called suffering risks or s-risks, which IMO should loom large when we try to meaningfully assess the expected value of the future. (Although, even if the future is negative on some assessment, it may still be better to avert x-risks in order to preserve intelligence and promote compassion for intense suffering, in the expectation that this intelligence will guard against suffering that would re-emerge in its absence (the way it "emerged" in the past).)

Yes, s-risks are definitely an important concept there! I mention them only at 7, but not because I thought they weren't important :)

This post was awarded an EA Forum Prize; see the prize announcement for more details.

My notes on what I liked about the post, from the announcement:

“Open questions” are a key driver of new academic research, and can be a good way for academics to approach a new field. 

For this reason, I like seeing lists like Siebe’s — it’s not quite a set of open questions, but it lays out key uncertainties that could be used to produce such questions. It also provides a strong set of citations, giving the aforementioned academics a sense for where to start if they want to work on one of these areas.

Another one to consider, assuming you see it at the same level of analysis as the 8 above, is the spatial trajectory through which the catastrophe unfolds. E.g. a pandemic will spread from an origin(s) and I'm guessing is statistically likely to impact certain well-connected regions of the world first. Or a lethal command to a robot army will radiate outward from the storage facility for the army. Or nuclear winter will impact certain regions sooner than others. Or ecological collapse due to an unstoppable biological novelty will devour certain kinds of environment more quickly (same possibly for grey goo), etc. There may be systematic regularities to which spaces on Earth are affected and when. Currently completely unknown. But knowledge of these patterns could help target certain kinds of resilience and mitigation measures to where they are likely to have time to succeed before themselves being impacted.

Hey Matt, good points! This all relates to what Avin et al. call the spread mechanism of global catastrophic risk. If you haven't read it already, I'm sure you'll like their paper!

For some of these we actually do have an inkling of knowledge though! Nuclear winter is more likely to affect the northern hemisphere, given that practically every nuclear target is located in the northern hemisphere. And it's my impression that in biosecurity geographical containment is a big issue: an extra case in the same location is much less threatening than an extra case in a new country. As a result, there are checks for hazardous diseases at borders where one might expect a disease to arrive (e.g. currently the borders with the Democratic Republic of the Congo).

One such uncertainty is related to the conditional probability of x-risks and their relative order. Imagine that there is a 90 per cent chance of biological x-risk before 2030, but that if it doesn't happen, there is a 90 per cent chance of an AI-related x-risk event between 2030 and 2050.

In that case, the total probability of extinction is 99 per cent (and of survival, 1 per cent), of which 90 percentage points are biological and only 9 are from AI. In other words, more remote risks are "reduced" in expected size by earlier risks which "overshadow" them.
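
A quick check of that arithmetic, as a minimal sketch using the hypothetical figures above:

```python
p_bio_before_2030 = 0.9              # hypothetical biological x-risk before 2030
p_ai_2030_2050_given_survival = 0.9  # hypothetical AI x-risk in 2030-2050, given survival to 2030

# The later risk only applies to the worlds that survive the earlier one.
p_extinction = p_bio_before_2030 + (1 - p_bio_before_2030) * p_ai_2030_2050_given_survival
print(p_extinction)  # 0.99 total: 0.90 from the biological risk, 0.09 from AI
```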

Another point is that x-risks are by definition one-time events, so the frequentist probability is not applicable to them.

Yeah, so the first point is what I'm referring to by timelines. And we should also discount the risk of a particular hazard by the probability of achieving civilizational invulnerability before it materializes.
