
Here's Nick Bostrom briefly introducing the argument.

From what I've read, the doomsday argument, stated by analogy, is as follows:

Imagine there are two urns in front of you, one containing 10 balls and the other containing 1 million balls. You don't know which urn is which. The balls are numbered, and upon blindly picking a ball numbered "7", you reason (correctly) that you've most likely picked from the 10-ball urn. The doomsday argument posits this: when thinking about whether the future will be long (e.g. long enough for 10^32 humans to exist) or relatively short (say, long enough for 200 billion humans), we should think of our own birth rank (you're roughly the 100 billionth human ever born) the way we think about picking ball number 7. In other words, as the 100 billionth human you're more likely to be in a total population of 200 billion humans than in a total population of 10^32 humans, and this should count as evidence for adjusting our prior expectations of how long the future will be.
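To make the urn intuition concrete, here's a minimal sketch of the Bayes update, assuming a 50/50 prior over the two urns (which the thought experiment implies):

```python
from fractions import Fraction

# Posterior that ball #7 came from the 10-ball urn,
# assuming a 50/50 prior over the two urns.
prior = Fraction(1, 2)
lik_small = Fraction(1, 10)        # P(draw #7 | 10-ball urn)
lik_big   = Fraction(1, 1_000_000) # P(draw #7 | 1,000,000-ball urn)

posterior_small = (prior * lik_small) / (prior * lik_small + prior * lik_big)
print(posterior_small)  # Fraction(100000, 100001), i.e. ~99.999% odds on the small urn
```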


I've found few discussions of this on EA fora, so I'm curious to hear what you all think about this argument. Does it warrant thinking differently about the long-term future?


I'm not up on the literature and haven't thought too hard about it, but I'm currently very much inclined to not accept the premise that I should expect myself to be a randomly-chosen person or person-moment in any meaningful sense—as if I started out as a soul hanging out in heaven, then flew down to Earth and landed in a random body, like in that Pixar movie.

I think that "I" am the thought processes going on in a particular brain in a particular body at a particular time—the reference class is not "observers" or "observer-moments" or anything like that, I'm in a reference class of one.

The idea that "I could have been born a different person" strikes me as just as nonsensical as the idea "I could have been a rock". Sure, I'm happy to think "I could have been born a different person" sometimes—it's a nice intuitive poetic prod to be empathetic and altruistic and grateful for my privileges and all that—but I don't treat it as a literally true statement that can ground philosophical reasoning. Again, I'm open to being convinced, but that's where I'm at right now.

Indeed. Seems supported by a quantum suicide argument - no matter how unlikely the observer, there always has to be a feeling of what-it's-like-to-be that observer.

https://en.wikipedia.org/wiki/Quantum_suicide_and_immortality

With low confidence, I think I agree with this framing.

If correct, then I think the point is that seeing ourselves at an 'early point in history' updates us against a big future, but the fact that we exist at all updates us in favour of a big future, and these two updates cancel out.

You wake up in a mysterious box, and hear the booming voice of God:

“I just flipped a coin. If it came up heads, I made ten boxes, labeled 1 through 10 — each of which has a human in it.

If it came up tails, I made ten billion boxes, labeled 1 through 10 billion — also with one human in each box.

To get into heaven, you have to answer this correctly: Which way did the coin land?”

You think briefly, and decide you should bet your eternal soul on tails. The fact that you woke up at all seems like pretty good evidence that you’re in the big world — if the coin landed tails, way more people should be having an experience just like yours.

But then you get up, walk outside, and look at the number on your box.

‘3’. Huh. Now you don’t know what to believe.

If God made 10 billion boxes, surely it’s much more likely that you would have seen a number like 7,346,678,928?
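For what it's worth, here's a minimal sketch of the two updates in odds form. Step one assumes SIA-style reasoning (waking up at all counts as evidence in proportion to how many observers each world contains); the exact cancellation at the end depends on that assumption:

```python
# The two updates from the box story, in odds form (tails : heads).
n_heads, n_tails = 10, 10_000_000_000
prior_odds = 1  # fair coin

# Update 1: "I woke up" — tails worlds contain 1e9 times more observers
# (this is the SIA-flavoured step).
odds_after_waking = prior_odds * (n_tails / n_heads)

# Update 2: "my box is #3" — a specific low number is 1e9 times
# *less* likely if there are 10 billion boxes than if there are 10.
odds_after_number = odds_after_waking * ((1 / n_tails) / (1 / n_heads))

print(odds_after_waking)  # 1e9 : 1 in favour of tails
print(odds_after_number)  # 1.0 — right back to 50/50
```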

In the "voice of God" example, we're guaranteed to minimize error by applying this reasoning; i.e., if God asks this question to every possible human created, and they all answer this way, most of them will be right.
Now, I'm really unsure about the following, but imagine each new human predicts Doomsday through DA reasoning; in that case, I'm not sure it minimizes error the same way. We often assume human population will increase exponentially and then suddenly go extinct; but then it seems like most people will end up mistaken in their predictions. Maybe we're using the wrong priors?
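One way to poke at this: a toy check (entirely my own construction, with a made-up total) of whether the rank-based prediction stays calibrated. If every human predicts "the total number of humans will be at most twice my birth rank" (a Gott-style 50% guess), roughly half of them are right no matter how the population grew, because the condition holds precisely for ranks in the top half:

```python
# Toy check: every human predicts "total humans <= 2 x my birth rank"
# (a Gott-style 50%-confidence guess). How many are right?
N_TOTAL = 1_000_000  # hypothetical total number of humans ever born

correct = sum(1 for rank in range(1, N_TOTAL + 1) if N_TOTAL <= 2 * rank)
print(correct / N_TOTAL)  # 0.500001 — ~half, whatever the growth curve
```

If that's right, the calibration is over birth ranks, not clock time: under exponential growth most ranks sit near the end, so the same prediction translated into years could still leave most people badly wrong about timing, which might be the worry here.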

  1. The distinction between humans and other lifeforms is arbitrary.
  2. ~10^30 lifeforms have ever lived.
  3. Thus, there'll likely be another ~10^30 lifeforms.
  4. But it's equally likely that this will be:
    1. ~10^30 non-human animals, if we do go extinct (life originated ~5bn years ago and life will end in ~5bn years when the sun dies).
    2. ~10^30 (post-) humans, if we don't go extinct (as per Bostrom's calculation).
  5. And so the doomsday argument doesn't tell us anything about x-risk or the far future.

As I see it, the point is to estimate when extinction will occur by estimating the distribution of population across time, right? So we use Rule of Succession-like reasoning... I'm OK with that, so far. N humans have lived, so we can expect N more humans to live, and we can update our estimate each time a new one is born...
But then, why don't we use the time humans have already lived on Earth as input instead? I mean, that's Toby Ord's Precipice argument, right? 200,000 years without extinction leads you to a very different guesstimate.
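For concreteness, here's a sketch of how much the choice of input matters, using Gott's "delta t" version of this reasoning (a close cousin of the Rule of Succession; the 1/39-to-39 factor is the standard 95% Copernican interval, and the two inputs are the rough figures from this thread):

```python
# Gott's "delta t" estimate: if a process has lasted `past` so far, then
# with 95% confidence its remaining duration lies in [past/39, 39*past].
def gott_interval(past, conf=0.95):
    k = (1 + conf) / (1 - conf)  # = 39 for conf = 0.95
    return past / k, past * k

# Input 1: birth ranks (~100 billion humans born so far)
print(gott_interval(100e9))    # ~2.6 billion to ~3.9 trillion more humans

# Input 2: time (~200,000 years of humans, the Precipice-style input)
print(gott_interval(200_000))  # ~5,100 to ~7.8 million more years
```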

One reason I'm not convinced by the Doomsday argument is that it's equally true at all points in history - you could have made the same argument 2,000 years ago to the Greeks, or 10,000 years in the future (well, only if Doomsday isn't imminent by then), and the basic logic would still hold. I find it hard to be convinced by an argument that comes to the same conclusion at any point in history, even though the argument is that we're most likely to exist at the point where it's true.

The problem with the analogy is that the urn is continuously filling with balls bearing higher and higher numbers, so pulling out one ball at any point in the process tells you nothing about the eventual number of balls in the urn. That would require analysis of the urn and the ball-dropping mechanism.

For this reason, I find concrete existential risks much more convincing than the Doomsday argument.
