
This paper was published as a GPI working paper in September 2024.

Abstract

There are some possible events that we could not possibly discover in our past. We could not discover an omnicidal catastrophe, an event so destructive that it permanently wiped out life on Earth. Had such a catastrophe occurred, we wouldn’t be here to find out. This space of unobservable histories has been called the anthropic shadow. Several authors claim that the anthropic shadow leads to an ‘observation selection bias’, analogous to survivorship bias, when we use the historical record to estimate catastrophic risks. I argue against this claim.

Introduction

Estimating the probability of catastrophic events is a difficult business. We don’t have much to go on when the catastrophes would be of a novel kind, arising from hypothetical social or technological developments. On the other hand, for some types of catastrophes we have a long geological record to consult, along with other forms of data and the results of scientific modelling. For example, when it comes to asteroid impacts, we can use geological dating techniques and the observed distribution of crater sizes to inform our estimates of future risks. 

A curious thing about this historical data, however, is that there are some possible data points that we could not possibly observe. We could not find in the historical data an omnicidal catastrophe, an event so destructive that it permanently wiped out life on Earth. Had such a catastrophe happened, we wouldn’t be here to find out. Ćirković, Sandberg, and Bostrom (2010) call this space of unobservable histories the anthropic shadow.

Some striking claims have been made about the significance of the anthropic shadow. For example, Ćirković et al. claim that, because of the anthropic shadow, a straightforward treatment of the historical record will lead to systematically underestimating the chances of potentially omnicidal events. In the extreme case, they write,

we should have no confidence in historically based probability estimates for events that would certainly extinguish humanity… (1497)

Tegmark and Bostrom (2005a) elaborate this line of thought in an earlier ‘brief communication’ published in Nature:

Given that life on Earth has survived for nearly 4 billion years (4 Gyr), it might be assumed that natural catastrophic events are extremely rare. Unfortunately, this argument is flawed because it fails to take into account an observation-selection effect…, whereby observers are precluded from noting anything other than that their own species has survived up to the point when the observation is made. If it takes at least 4.6 Gyr for intelligent observers to arise, then the mere observation that Earth has survived for this duration cannot even give us grounds for rejecting with 99% confidence the hypothesis that the average cosmic neighbourhood is typically sterilized, say, every 1,000 years. The observation-selection effect guarantees that we would find ourselves in a lucky situation, no matter how frequent the sterilization events.
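To see the force of this point, consider a rough back-of-the-envelope sketch (my own illustration, not part of the quoted communication; the time spans and rates are just the ones mentioned in the quotation). A naive likelihood calculation makes 4 Gyr of survival look vanishingly improbable under a high sterilization rate, yet conditional on observers existing at all, survival is observed with probability 1 under every rate:

```python
# A rough back-of-the-envelope sketch (my own, not from the quoted paper).
# It contrasts the naive survival likelihood with the observation-selected one.
import math

def naive_survival_prob(rate_per_year: float, years: float) -> float:
    """P(no sterilization event in `years`) under a constant annual rate."""
    # Work in log space: (1 - rate)^years underflows for very long time spans.
    return math.exp(years * math.log1p(-rate_per_year))

YEARS = 4e9  # roughly 4 Gyr of survival to date

for denom in (1_000, 1_000_000, 1_000_000_000):
    p = naive_survival_prob(1 / denom, YEARS)
    print(f"sterilization every {denom:>13,} yr on average: "
          f"naive P(survive 4 Gyr) = {p:.2e}")

# Naively, a 1-in-1,000-per-year sterilization rate makes 4 Gyr of survival
# astronomically unlikely, so it looks trivially rejectable. But conditional on
# observers existing at all, P(we observe survival) = 1 for every rate, so the
# bare fact of survival cannot by itself discriminate between the rates.
# That is the observation-selection effect described in the quotation.
```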

To avoid any straightforward appeal to the historical absence of omnicidal events, Tegmark and Bostrom develop a more complicated method based on modelling the formation times of habitable planets. More recently, Snyder-Beattie, Ord, and Bonsall (2019) estimate the lifespan of the human species and check how robust this estimate is with respect to different evolutionary hypotheses. They reason:

[I]f human existence required a 10 million year (Myr) period of evolution free from asteroid impacts, any human observers will necessarily find in their evolutionary history a period of 10 Myr that is free of asteroid impacts, regardless of the true impact rate. Inferring a rate based on those 10 Myr could therefore be misleading, and methods must be used to correct for this bias.

In this paper I argue for a deflationary position: the existence of the anthropic shadow is essentially irrelevant to estimating risks. There are several interesting points that come up along the way, but an initial reason for skepticism is easy to state. According to standard forms of Bayesianism or evidentialism, what we ought to think depends on what evidence we actually have. The fact that we could not easily have had different evidence is not important in itself. So, even if we could not easily have had evidence of past omnicidal events, our actual evidence that there were no such events should make us think that the rate of them is low.

The core of this paper, sections 2–4, analyses a stylized example close to the one in Ćirković et al. Section 5 extends my analysis to the models used by Tegmark and Bostrom and by Snyder-Beattie et al., while section 6 concludes. As I mentioned, Ćirković et al. focus on potentially omnicidal events. For concreteness, let us say that a ‘potentially omnicidal event’ is one that has a 10% chance of permanently ending life on Earth. Then the upshot of my analysis is essentially as follows.

(A) The fact that life has survived so long is evidence that the rate of potentially omnicidal events is low. 

(B) Given the fact that life has survived so long, historical frequencies provide evidence for a true rate rather higher than the observed rate. 

(C) These two effects cancel out, so that, overall, the historical record provides evidence for a true rate close to the observed rate.

Thus, contrary to claims about the anthropic shadow, the historical record is (in the stated sense) a reliable guide to the rate of potentially omnicidal events.
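To make (A)–(C) concrete, here is a minimal Monte Carlo sketch in a toy model of my own devising (the 10% kill probability matches the stipulation above; the Poisson arrival model, the Exponential(1) prior, the time horizon, and the units are assumptions introduced only for illustration):

```python
# A minimal Monte Carlo sketch of (A)-(C) in a toy model of my own devising.
# Assumptions: potentially omnicidal events arrive as a Poisson process with
# unknown rate lam, each one permanently ends life with probability 0.1 (as
# stipulated above), and lam has an Exponential(1) prior. Units are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
T = 10.0           # length of the historical record (arbitrary units)
KILL_PROB = 0.1    # chance a potentially omnicidal event actually ends life
N_SIM = 2_000_000

lam = rng.exponential(1.0, N_SIM)        # true rates drawn from the prior
events = rng.poisson(lam * T)            # potentially omnicidal events per history
lethal = rng.binomial(events, KILL_PROB) # how many of those were actually lethal
survived = lethal == 0                   # histories in which life persisted
observed = events - lethal               # events visible in the record
# (in a surviving history every event that occurred was non-lethal, so observed == events)

# (A) Survival alone is evidence that the rate is low.
print("prior mean rate:                 ", lam.mean())
print("mean rate given survival:        ", lam[survived].mean())

# (B) Given survival, the observed frequency understates the true rate:
# among survivors, E[observed] = 0.9 * lam * T rather than lam * T.
print("mean observed/T given survival:  ", (observed[survived] / T).mean())

# (C) The two effects cancel: conditioning on survival AND an observed count n,
# the true rate concentrates near the observed rate n / T.
n = 5
sel = survived & (observed == n)
print(f"mean rate given survival and {n} observed events:",
      lam[sel].mean(), "  (observed rate n/T =", n / T, ")")
```

Analytically, in this toy model the probability of surviving and recording n potentially omnicidal events, given a true rate λ, is proportional to e^(−λT)·λ^n, so the posterior is centred close to the observed rate n/T; that is the cancellation stated in (C).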

On my reading, the authors quoted above are too focused on (B). Based on (B), the suggestion is that using the historical rate as an estimate of the true rate may lead to a bad underestimate. However, I argue that (A) is true and undermines this suggestion. Effectively, focusing on (B) neglects the base-rate provided by (A). I must admit, however, that I find the exact position of Ćirković et al. somewhat difficult to decipher. So, while I will try to indicate the specific points at which I disagree with other participants in this literature, the primary aim of this paper is to lay out the true story as clearly as I can, in a way that will forestall any further confusion.

Read the rest of the paper

Comments



(Just pointing out that previous discussion of this paper on this forum can be found here.)
