
Risks are bad. You've probably noticed this if you've ever lost your wallet or lived through a pandemic. However, wallet-loss risk and pandemic risk are not equally worrying.

To assess how bad a risk is, two dimensions matter: the probability of the bad thing happening, and how bad it is if it happens. The product of these two dimensions is the expected badness of the risk.
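
As a toy illustration, the comparison comes down to multiplying two numbers. This is only a sketch with invented figures, since nothing in this post is quantitative:

```python
# Toy illustration of expected badness = probability * badness.
# All numbers are invented for the example.

def expected_badness(probability: float, badness: float) -> float:
    """Probability of the bad thing happening, times how bad it is if it happens."""
    return probability * badness

# A likely but mild risk versus an unlikely but severe one.
wallet_loss = expected_badness(probability=0.05, badness=1.0)      # 0.05
pandemic = expected_badness(probability=0.0005, badness=10_000.0)  # 5.0

print(pandemic > wallet_loss)  # True: the rarer risk is more worrying here
```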

Our intuitions are pretty bad at making comparisons of expected badness. To make things more intuitive, I introduce a visual representation of expected badness: catastrophic rectangles.

Here are two catastrophic rectangles with the same area, meaning they represent two risks with the same expected badness.

(Throughout this post, axes are intentionally left without scales, because no quantitative claims are made.)
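If you would rather draw catastrophic rectangles programmatically than with the Canva template linked at the end, here is a minimal matplotlib sketch; the dimensions are arbitrary, chosen only so that the two areas match:

```python
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle

fig, ax = plt.subplots()

# Two rectangles with equal areas: 0.8 * 2 == 0.2 * 8.
ax.add_patch(Rectangle((0.0, 0.0), 0.8, 2.0, alpha=0.6, color="tab:orange"))
ax.add_patch(Rectangle((1.0, 0.0), 0.2, 8.0, alpha=0.6, color="tab:red"))

ax.set_xlim(0.0, 1.4)
ax.set_ylim(0.0, 9.0)
ax.set_xlabel("probability")
ax.set_ylabel("badness")
ax.set_xticks([])  # axes intentionally left without scales
ax.set_yticks([])
plt.show()
```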

Some people here want to reduce the badness of things. Reducing the expected badness of a risk means making its rectangle smaller. We can do this by reducing the probability of the event, or by reducing how bad it would be if it happened. The first approach is called prevention; the second is called mitigation.

[Figure: Prevention]
[Figure: Mitigation]
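
In terms of the toy function above, the two approaches attack different factors of the same product (numbers invented again):

```python
p, b = 0.1, 100.0  # made-up baseline risk

baseline = p * b          # area of the rectangle: 10.0
prevention = (p / 2) * b  # halve the probability (a narrower rectangle): 5.0
mitigation = p * (b / 2)  # halve the badness (a shorter rectangle): 5.0
```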

Some events are catastrophic. This is typically the case when many people die at the same time. Some catastrophic events are even existential: they would permanently destroy human civilisation. Which is really bad. To account for this extra badness, we have to give existential rectangles an existential badness bonus.

We can be pretty sure that the loss of your wallet will not destroy human civilisation. For some other risks, it is less clear. A climate, artificial intelligence, pandemic, or nuclear catastrophe could be existential, or it could not. These risks can be decomposed into existential and non-existential rectangles. Here is an example for pandemics:
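
Here is a sketch of how this decomposition might look in code; the probabilities, badness levels, and the size of the existential badness bonus are all invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class RiskRectangle:
    probability: float
    badness: float
    existential: bool = False

    def area(self, existential_bonus: float = 10.0) -> float:
        """Expected badness. Existential rectangles get a badness bonus;
        how big the bonus should be is an open question (see the end of
        the post)."""
        bonus = existential_bonus if self.existential else 1.0
        return self.probability * self.badness * bonus

# A pandemic risk decomposed into two rectangles (made-up numbers).
pandemic_catastrophic = RiskRectangle(probability=0.05, badness=100.0)
pandemic_existential = RiskRectangle(probability=0.001, badness=1_000.0,
                                     existential=True)

total = pandemic_catastrophic.area() + pandemic_existential.area()
```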

Our goal is still to reduce the total area of the rectangles, and the prevention approach still works. For example, better peace treaties can reduce the probability of a nuclear war.

[Figure: Improving prevention]

As for mitigation, there is something different about existential rectangles: their height cannot be reduced. This is because, by definition, an existential risk is an event that permanently destroys civilisation. We can't make the end of humanity feel much better. What we can do is try to avoid it. To do this, we can either improve our response to catastrophes, or improve our resilience.

Improving our response means preventing catastrophes from getting too big. For example, we can have better pandemic emergency plans. Improving our response reduces the badness of the non-existential rectangle, while also reducing the probability of catastrophes that destroy civilisation.

Actually, this is not quite what happens when we improve our response. By limiting the badness of would-be existential catastrophes, we turn them into non-existential ones, so we have to widen the non-existential rectangle a bit. What's more, the catastrophes we converted to non-existential were the ones with the highest badness, which raises the average badness of the non-existential rectangle. Let's reduce its height slightly less to take this into account.

[Figure: Improving response]
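
The bookkeeping in the last two paragraphs can be sketched with the RiskRectangle class from above. The converted fraction and the badness cut are invented numbers, and the converted catastrophes are assumed to enter the non-existential rectangle at the existential badness level (very bad, but survivable):

```python
def improve_response(non_exi, exi, converted=0.3, badness_cut=0.8):
    """Improving response: catastrophes are stopped before they get too big."""
    # 1. A fraction of would-be existential catastrophes become survivable.
    moved_p = exi.probability * converted
    new_exi = RiskRectangle(exi.probability - moved_p, exi.badness,
                            existential=True)
    # 2. The non-existential rectangle widens by the converted probability,
    #    and its average badness mixes in these worst-case catastrophes...
    new_p = non_exi.probability + moved_p
    avg_b = (non_exi.probability * non_exi.badness
             + moved_p * exi.badness) / new_p
    # 3. ...while better response also makes catastrophes in general less bad,
    #    so the height shrinks, just slightly less than it otherwise would.
    return RiskRectangle(new_p, avg_b * badness_cut), new_exi
```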

We can also choose to focus specifically on the scenario where civilisation is destroyed. For example, some people have built a seed vault in Svalbard, a Norwegian archipelago, to help us grow crops again in case everything has been destroyed by some global catastrophe. Interventions of this sort try to improve our resilience, that is, to give civilisation more chances to recover. A resilience intervention converts some existential risk into non-existential catastrophic risk.

[Figure: Improving resilience]
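
In the same sketch, resilience is a pure conversion: it moves probability from the existential rectangle to the non-existential one without shrinking anything. The recovered fraction is, again, invented, and for simplicity this ignores the average-badness adjustment discussed above:

```python
def improve_resilience(non_exi, exi, recovered=0.2):
    """Improving resilience: in some fraction of would-be existential
    catastrophes, civilisation now manages to recover."""
    moved_p = exi.probability * recovered
    return (RiskRectangle(non_exi.probability + moved_p, non_exi.badness),
            RiskRectangle(exi.probability - moved_p, exi.badness,
                          existential=True))
```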

Risks can also have different origins. For example, the next pandemic could be natural, or it could be engineered. We can use more catastrophic rectangles to capture this distinction.

Because the engineered rectangles look more terrible than the natural ones, it's easy to conclude that we should always focus on engineered pandemics. In reality, we should focus on engineered pandemics only when doing so allows for a greater reduction of the total area of the rectangles. An intervention that only reduces risks from engineered pandemics is not as good as one that has the same impact on engineered-pandemic risks but additionally reduces other risks.

[Figure: Good]
[Figure: Even better]
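
The comparison is just a comparison of total areas. Here is a sketch with invented numbers, reusing RiskRectangle from above:

```python
natural = RiskRectangle(probability=0.04, badness=100.0)
engineered = RiskRectangle(probability=0.01, badness=1_000.0)

def total_area(rectangles):
    return sum(r.area() for r in rectangles)

# "Good": an intervention that halves the engineered-pandemic risk only.
good = total_area([natural,
                   RiskRectangle(engineered.probability / 2, engineered.badness)])

# "Even better": the same effect on engineered pandemics, plus a smaller
# reduction of the natural-pandemic risk.
better = total_area([RiskRectangle(natural.probability * 0.8, natural.badness),
                     RiskRectangle(engineered.probability / 2, engineered.badness)])

assert better < good  # the second intervention removes more total area
```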

The same idea applies when we consider several risks at the same time. The Svalbard seed vault could be useful in any situation where all crops have been destroyed. It is thus better than an intervention that would reduce existential risk by the same factor, but only for one specific catastrophic risk.

[Figure: Svalbard]
[Figure: Not as good as Svalbard]
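
The same total-area comparison works across several existential rectangles (invented numbers again; for simplicity this ignores the non-existential rectangles that the conversion creates):

```python
risks = {
    "pandemic": RiskRectangle(0.002, 1_000.0, existential=True),
    "nuclear": RiskRectangle(0.001, 1_000.0, existential=True),
    "climate": RiskRectangle(0.001, 1_000.0, existential=True),
}

# Svalbard-style: cut every existential probability by the same factor.
broad = sum(RiskRectangle(r.probability * 0.9, r.badness, existential=True).area()
            for r in risks.values())

# The same factor, but applied to one specific risk only.
narrow = (RiskRectangle(risks["pandemic"].probability * 0.9,
                        risks["pandemic"].badness, existential=True).area()
          + risks["nuclear"].area() + risks["climate"].area())

assert broad < narrow  # the broad intervention leaves less total area
```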

All these rectangles are catastrophically simplistic. Many things here are highly debatable, and many rectangular questions remain open. For example:

  • How big do you think the existential badness bonus should be in this probability-badness frame?
  • What would s-risks look like in this frame?
  • What would the rectangles become if every individual risk was represented in this frame with a high level of detail? What would be the shape of the curve?
  • How would you represent uncertainty in a nice way? What would a confidence region look like here?
  • Can this probability-badness frame be useful when used more quantitatively (with numbers on the axes)?
  • ...

I hope this way of visualising risks can help us better communicate and discuss ideas about global catastrophic risks. If you want to make your own catastrophic rectangles, feel free to use this Canva template!



I wrote this post during the 2021 Summer Research Program of the Swiss Existential Risk Initiative (CHERI). Thanks to the CHERI organisers for their support, to my CHERI mentor Florian Habermacher, and to the other summer researchers for their helpful comments and feedback. I am especially grateful to Silvana Hultsch, who made me read the articles that gave me the idea of writing this. Views and mistakes are my own.

References

  • The idea of prevention, response and resilience is from Cotton-Barratt et al. (2020). It is also mentioned in The Precipice (Ord, 2020).
  • The existential badness bonus refers to Parfit's two wars thought experiment, cited by Bostrom (2013), and also mentioned in The Precipice.
  • The idea that there could be a tendency to focus too much on specific catastrophic scenarios refers to what Yudkowsky (2008) says about conjunction bias.
  • The idea of considering the effect of interventions across multiple existential risks refers to the idea of integrated assessment of global risks (Baum and Barrett, 2018).

Baum, Seth, and Anthony Barrett. “Towards an Integrated Assessment of Global Catastrophic Risk.” SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, January 17, 2018. https://papers.ssrn.com/abstract=3046816.

Bostrom, Nick. “Existential Risk Prevention as Global Priority.” Global Policy 4, no. 1 (February 2013): 15–31. https://doi.org/10.1111/1758-5899.12002.

Cotton-Barratt, Owen, Max Daniel, and Anders Sandberg. “Defence in Depth Against Human Extinction: Prevention, Response, Resilience, and Why They All Matter.” Global Policy 11, no. 3 (May 2020): 271–82. https://doi.org/10.1111/1758-5899.12786.

Ord, Toby. The Precipice: Existential Risk and the Future of Humanity. London: Bloomsbury Publishing, 2020.

Yudkowsky, Eliezer. “Cognitive Biases Potentially Affecting Judgment of Global Risks.” In Global Catastrophic Risks, edited by Nick Bostrom and Milan M. Ćirković, 91–119. New York: Oxford University Press, 2008.

Comments



I think these are visually appealing graphics and they can help emphasize the importance of existential risks as well as illustrate the basic concept of [average expected impact = probability × magnitude]. My main concern with them is that it seems like they could be a bit too reductionist: for any given event/risk, you can imagine it as having multiple possible outcomes, each with its own probability of occurring and magnitude. For example, a given pandemic may have a 10% chance of occurring in a given time period, and if it does occur, it may have a 20% chance of killing 1M people, a 10% chance of killing 5M people, a 1% chance of killing 100M people, and a 0.1% chance of killing ~7B people. To represent that, you would basically need to create a curve (probability × impact distribution) for each event instead of a rectangle.

Additionally, there is the issue about interactive effects of catastrophes—e.g., a given pandemic or a given nuclear exchange may not individually lead to extinction or total civilizational collapse, but it might be the case that together they have a very high chance of doing that.

Thank you for your comment!

> To represent that, you would basically need to create a curve (probability × impact distribution) for each event instead of a rectangle.

Yes! This is actually the third open question at the end of the post. I'd be very curious to see such a curve, if anyone wants to make one.

I really like your second point, and wonder how the interactive effects you mention could be represented graphically.

Nice framing. Thank you.

Unrelatedly, I would like to see 3D cube graphics for importance, tractability, and neglectedness estimates, as exist on 80k, where the volume is the priority.
