
Abstract/TLDR

Recent existential-risk thinkers have noted that the analysis of the fine-tuning argument for God’s existence and the analysis of certain forms of existential risk employ similar types of reasoning. This paper argues that insofar as the “many worlds objection” undermines the inference to God’s existence from universal fine-tuning, a similar many worlds objection undermines the inference that the historic risk of global nuclear catastrophe has been low from the fact that no such catastrophe has occurred in our world. A version of the fine-tuning argument applied to nuclear risk, the Nuclear Fine-Tuning Argument, utilizes the set of nuclear close calls to show that 1) conventional explanations fail to adequately explain how we have survived thus far and 2) the existence of many worlds provides an adequate explanation. This is because, if there are many worlds, observers are disproportionately more likely to reflect upon a world that hasn’t had a global nuclear catastrophe than upon one that has. This selection bias results from the catastrophic nature of such an event. This argument extends generally to all global catastrophic risks that both A) have been historic threats and B) would result in a significantly lower global population.

Acknowledgment

I would like to thank my Philosophy Professor, who acted as my summer research advisor, as well as all of my friends whom I forced to read this, for helping me get through this project.

Relevance to Existential Risk

There is significant disagreement, in the discussion of threats to humanity, about which apocalypse is most likely to happen in the near future. It seems that most people are worried either about climate change or about unaligned artificial intelligence, and there is little focus on nuclear war. On a surface level, the fact that humanity has survived for more than half a century since the Trinity Test provides a common-sense reason to assume that the risk of a global nuclear catastrophe is less of a threat than risks (such as unaligned artificial intelligence) which lack an extensive history. The purpose of this paper, however, is to show that our historical record is misleading regarding the risk of nuclear catastrophe. Through the many-worlds lens offered by this paper, the long history of nuclear close calls in fact gives credibility to the threat of global nuclear catastrophe rather than implying that it is unlikely. As a result, I hope this paper will help to draw focus to accidental nuclear war (and nuclear war broadly) as an immediate risk. Additionally, I hope that this paper provides an important step in the creation of frameworks for conceiving of historic risk.

This paper focuses on nuclear war as a global catastrophic risk rather than as an existential risk, because it is genuinely ambiguous whether a nuclear war would lead to complete human extinction; that question lies outside the scope of this paper. If there were a nuclear war within the context of the Sixth Mass Extinction, humanity would be in an unprecedented ecological position, and so we simply cannot know whether such an event would lead to our extinction. Even if it is not the Apocalypse, it would certainly be apocalyptic.


Epistemic Transparency

I am an undergraduate student studying philosophy and religious studies. I had an independent study on existential risk, and have just finished a summer research project which focused on this paper. I have also submitted this paper to the Journal of Risk Research for peer review.

I am extremely confident (in excess of 95% sure) that the explicit thesis of this paper is valid, i.e., that if the existence of many worlds blocks the inference to God from Universal Fine-Tuning, it also blocks the inference to relative nuclear stability from Nuclear Fine-Tuning. I am marginally less certain (about 90% sure) that the claim “the existence of many worlds blocks the inference to relative nuclear stability from Nuclear Fine-Tuning” is true. I believe (about 60% sure) that the implication of this paper is correct, i.e., that there is a set of concrete worlds that includes many worlds with nuclear deterrence, and that most of those worlds don’t avoid a global nuclear catastrophe for this long. I have philosophical commitments (specifically, modal realism) that make me particularly friendly to many-worlds theories.

Introduction

At 6:30 p.m. on September 18th, 1980, a wrench was dropped in rural Arkansas. The wrench fell 80 feet before colliding with a Titan II intercontinental ballistic missile equipped with a nine-megaton nuclear warhead. This warhead, the largest in the US arsenal at the time, was identified by its designer as particularly unstable. At about 3:00 a.m. the following morning, as a lingering effect of the dropped wrench, the missile’s fuel exploded, launching the warhead into the sky. (Christ) The United States had launched a nuclear weapon.

Almost forty years later, Nick Bostrom would publish “The Vulnerable World Hypothesis,” which discusses the possibility of a technology being invented that puts all of humanity at risk. Notably, he places the possible discovery of an apocalyptic technology in the future. (Bostrom 2019) This paper will put pressure on the assumption, common among authors who work on existential risk, that nuclear weapons as they exist in this world are not an apocalyptic technology. I will assume the multiverse hypothesis familiar from contemporary physics and from objections to the fine-tuning argument for God. According to this hypothesis, there exist multiple worlds. I will attempt to show that, if there are alternate versions of Earth, a significant percentage of alternate Earths experienced a global nuclear catastrophe before the current year. Mirroring arguments that attempt to show that the existence of other universes would undercut the inference from universal fine-tuning to God, I will show that the existence of other universes undercuts the inference from nuclear fine-tuning to relative nuclear stability.

Signposting:

Section 1 will give a brief history of teleological arguments for God’s existence, focusing specifically on fine-tuning arguments, and will introduce the concept of anthropic coincidences. Section 2 will discuss the set of nuclear anthropic coincidences and will show how a many-worlds view explains these coincidences. Section 3 will discuss alternative explanations and will show how each fails to explain the coincidences. Finally, Section 4 will discuss other possible objections to the many-worlds interpretation of nuclear fine-tuning. 

Section 1: A Brief History of Teleological Arguments

Before we get to the nuclear fine-tuning argument, I’m going to use this section to set the stage for our later discussion by reviewing many of the moves made in other fine-tuning arguments. Fine-tuning arguments are a subset of the many teleological arguments for God’s existence. In general, teleological arguments for God attempt to find empirical evidence for design and then infer from this evidence the existence of an intelligent designer. This section will begin by discussing biological teleological arguments and how the discovery of Darwinian evolution acted as a defeater for those arguments. Then I will discuss the fine-tuning argument. In discussing these arguments, I will talk about how the existence of other universes, if verified, would defeat the argument from universal fine-tuning. This section will conclude by introducing the nuclear fine-tuning argument as an argument that is analogous to other fine-tuning arguments.

The Biological Teleological Argument:

Historically, perhaps the most familiar form of teleological argument has focused on biology. Such arguments look at how certain biological organs, like the heart, function in ways that are similar to an intricate machine, like a watch. Biological teleological arguments contend that this functionality is unlikely to arise through chance, and that therefore the functionality of living things is evidence of an intelligent designer. Just like a watch, living creatures have many parts that are necessary to their functioning; it is not merely that we possess a single organ that is particularly intricate, but rather that every part of the human body, or of any other biological system for that matter, is staggeringly complex. As a result, without a unified explanation, each additional contingency makes the existence of complex forms of life appear even more implausible. The design argument is compelling in this instance because it is extraordinarily implausible that a species would just happen to have the entire set of traits necessary to its survival if the possession of any one of these traits is determined independently of all other traits.

The discovery of Darwinian evolution is generally understood to have undercut versions of the teleological argument that leverage the particular functionality of biological systems. (Friederich) This is because Darwinian evolution provides an alternative unified explanation for why biological life is so intricate, without appealing to blind luck or expanding our ontology. 

The Fine-Tuning Argument:

But even if the rise of Darwin has led to the demise of teleological arguments based on biology, at the same time there has been a rise of teleological arguments based on physics: this is the fine-tuning argument. (Joseph) In the first half of the twentieth century, the fine-tuning argument focused on the apparent fine-tuning of our home planet. For example, if Earth’s orbit around the sun did not happen to be within the “Goldilocks zone,” then liquid water would not exist on Earth and life could not have formed. Just as the teleological argument grounded in biology appeals not to one, but to many contingencies to make the case for intelligent design, the fine-tuned planet version of the fine-tuning argument appeals to the many contingencies upon which life on Earth depends, not to a single contingency, as support for God’s existence. (Joseph) This is because God, if they existed, would place Earth within the Goldilocks zone because they desired to create a world that includes life. Defenders of the God hypothesis have argued that, because the God hypothesis provides a unified explanation for what otherwise seems like a set of anthropic coincidences (i.e., cases that make our current existence seem unlikely), these coincidences increase the probability of the God hypothesis being true. (Swinburne: 223-233) However, the discovery of planets outside of our solar system undercuts the plausibility of this interpretation. The numerous exoplanets inhospitable to life show that there isn’t some universal law that ensures that planets are life-friendly. (Friederich: 35)

Ever since the discovery of planets beyond our solar system, teleological arguments for God’s existence have tended to focus on universal fine-tuning. Such arguments focus on the immense number of seeming contingencies relating to the universe’s fundamental constants, the conditions in the early universe, and the set of physical laws. For example, if the strength of gravity relative to electromagnetism had been significantly weaker, galaxies, stars, and planets never would have formed; if it had been slightly weaker, stars wouldn’t explode in supernovae, which are the main source of heavy elements in the universe; on the other hand, if gravity had been slightly stronger, stars would form from less matter and, as a result, would have shorter lifespans. (Barnes: 33-34) Again, these anthropic coincidences have been used as evidence for the God hypothesis.

However, if there are multiple universes, these anthropic coincidences wouldn’t support the God hypothesis. Just as in the case of Earth’s apparent fine-tuning, the existence of other universes that are inhospitable to life would show that universes aren’t necessarily habitable.

Historical Teleological Argument:

While fine-tuning arguments have traditionally rooted themselves in physics, this style of argument can be built around any set of anthropic coincidences. The rest of this paper will leverage the set of nuclear close calls during the Cold War as the basis for a version of the fine-tuning argument, henceforth referred to as the Nuclear Fine-Tuning Argument, rooted in history (the history of nuclear close calls) rather than physics. As in the other cases, the most common interpretation of how we have thus far avoided a global nuclear catastrophe is teleological, defending the assumption that the systems that control nuclear arms (nuclear treaties, MAD doctrine, etc.) are intelligently designed, that is, designed with foresight in order to bring about a certain plan, in which nuclear annihilation is avoided. A defender of this argument would say that our avoiding a nuclear war thus far is evidence that favors the view that the nuclear doctrines used by states to dictate how they manage their nuclear weapons are sufficient to avoid cataclysm. Yet the specifics of many nuclear close calls put pressure on many conventional interpretations.

As in the other arguments, the particulars initially seem to be non-natural. General Lee Butler, former head of the U.S. Strategic Command, which controls nuclear weapons and strategy, wrote that we escaped the Cold War without a nuclear holocaust “by some combination of skill, luck, and divine intervention, and [he] suspect[ed] the latter in greatest proportion.” (Butler) While this theistic interpretation of nuclear fine-tuning is closely analogous to the others we have considered, it differs in that the relevant anthropic coincidences surround humanity’s current existence rather than its entire historical existence. Accordingly, theistic interpretations of this set of anthropic coincidences would have to posit a God invested in the particulars of human affairs. On the other hand, the many-worlds interpretation gives a unified explanation of both universal fine-tuning and nuclear fine-tuning. This is because, if there are enough universes, every possible outcome should happen somewhere.

Brief Clarification:

The existence of many worlds is controversial; however, for this paper, I would like to take their existence as a premise. Insofar as the existence of many worlds would defeat the fine-tuning argument for God’s existence, what effect might it have on our interpretations of other anthropic coincidences? The next section of this paper will provide an interpretation of nuclear close calls that assumes the existence of many worlds.


 

Section 2: The Nuclear Fine-Tuning Argument 

Let us return to Arkansas, just after 3 a.m. on September 19th, 1980. The nine-megaton warhead, the largest in the US arsenal at the time, the one noted to be particularly unstable by its creator, fell back to earth, crashing into the ground a few hundred feet from the launch complex’s entry gate. Fortunately, it did not detonate. (Christ) Just how fortunate is hard to say. For this paper, let’s suppose that there was a 50% chance of the Damascus missile explosion leading to a nuclear detonation, and that an unplanned nuclear detonation on American soil would have led to a global nuclear catastrophe. A single anthropic coincidence isn’t, on its own, noteworthy. Sometimes things do work out for the best. Sometimes the nuclear coin lands on heads and so we all survive. Yet the Damascus Titan missile explosion is not the only example of our good fortune.

This section will begin by outlining several more cases where nuclear catastrophe was avoided by some amount of chance or luck. It will then discuss how the existence of many universes would mean that our positionality as observers would help to explain why a nuclear war between the United States and the Soviet Union didn’t occur. It will proceed by first discussing a general case and then applying that general case to the specific case of a global nuclear catastrophe. 

Examples:

On January 23rd, 1961, a B-52 bomber carrying two nuclear warheads crashed over North Carolina. The first bomb, which descended by parachute, was found in a tree. The other bomb, which lacked a parachute, plunged into farmland at 700 miles per hour and disintegrated without the detonation of its conventional explosives. Both bombs were partially armed when they left the aircraft. (Jones)

On October 27th, 1962, the American military trapped the nuclear-armed Soviet submarine B-59 off the coast of Cuba and made the potentially world-ending decision to fire upon the sub. The vast majority of the crew on the Soviet sub wanted to return fire, making use of the nuclear torpedo. Captain Vasili Arkhipov was the lone dissenting voice on the Soviet side: the sole figure who prevented the firing of the nuclear weapons. (Mozgovi)

On January 25th, 1995, the trajectory of a U.S.-Norwegian rocket carrying scientific equipment resembled the path of a nuclear missile. As a result, Russian President Boris Yeltsin had to decide whether or not to fire a retaliatory nuclear strike on the United States, ultimately deciding not to fire. (Hoffman)

Outline of the Argument Space:

Because it is impossible to know the real probability of any of these cases leading to a nuclear exchange between the United States and the Soviet Union, I am going to treat each of them as a straight coin flip. The chance of four consecutive coin flips coming up heads is 6.25%. Some readers might dispute the assumption that the risk in each of these cases is 50%, and indeed, while the risk in these cases might have been greater, they might also have been less risky than I’m making them out to be. However, this risk assessment also ignores an uncountable number of other cases. Nuclear risk is of course not limited to these few moments. While there has never been an accidental nuclear detonation on American soil, there have been thousands of nuclear accidents. (Schlosser 327) Additionally, while these moments show how the doctrine of nuclear deterrence leads to moments where humanity risks accidental armageddon, they do not capture the risk of intentional nuclear war. For the sake of argument, let us say that there was a 95% chance of global nuclear catastrophe occurring before 2020.
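To make the arithmetic explicit, here is a minimal sketch of how independent close calls compound under the coin-flip assumption above (the code, and its fifth hypothetical case, are only illustrative):

```python
# Cumulative nuclear risk when each close call is treated as an independent coin flip.
def survival_probability(n_close_calls: int, p_catastrophe: float = 0.5) -> float:
    """Probability that none of n independent close calls leads to catastrophe."""
    return (1 - p_catastrophe) ** n_close_calls

print(survival_probability(4))       # 0.0625  -> the 6.25% chance of surviving all four cases
print(1 - survival_probability(4))   # 0.9375  -> cumulative risk from these four cases alone
print(1 - survival_probability(5))   # 0.96875 -> one more 50/50 case pushes cumulative risk past 95%
```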

Regardless of how likely one thinks nuclear war between the United States and the Soviet Union was during the second half of the twentieth century, luck must play some explanatory role, unless one believes there was zero risk. If there are many worlds, however, what luck means in this context fundamentally changes. We did, in this world, get lucky, but we were unlikely to be among the persons in the broader set of worlds who didn’t get lucky, because most people die if there’s a global nuclear catastrophe. This is to say that 1) from the standpoint of this world, global nuclear catastrophe was averted in large part due to random chance, 2) in many other worlds, different possibilities manifested which led to nuclear catastrophe, but 3) a disproportionate number of people live in worlds in which a global nuclear war did not occur.

Imagine that there is only one world. If this world faces nuclear close call after nuclear close call, at some point its repeated good fortune cries out for explanation: it is unlikely for every coin toss to go our way. Perhaps it was divine intervention, perhaps something else, but it becomes increasingly unlikely that it is just luck time after time after time.

Now let us imagine that there are many worlds, an arbitrarily large number of which experience nuclear close calls. Even if, in half of the worlds, say, the first close call leads to cataclysm, and in half of the remaining worlds, the second close call leads to cataclysm, that still leaves many worlds without global nuclear catastrophe. A world avoiding nuclear catastrophe merely on the basis of luck is much more tenable if there are many worlds than if there is only one world, in the same way that someone flipping 10 heads in a row merely on the basis of luck is far less surprising if you have 10 million people flipping coins than if you have only one person doing so. Additionally, the existence of multiple worlds can do even more explanatory work once we consider what type of world one is likely to find oneself in.
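The coin-flipping analogy can be checked directly; the sketch below simply computes how likely it is that at least one flipper gets a 10-heads streak (the specific numbers are my own illustration):

```python
# Chance that at least one person flips 10 heads in a row, for different numbers of flippers.
p_streak = 0.5 ** 10  # one person's chance of 10 straight heads (about 0.1%)

for flippers in (1, 10_000_000):
    p_at_least_one = 1 - (1 - p_streak) ** flippers
    print(f"{flippers:>10} flipper(s): {p_at_least_one:.6f}")
# 1 flipper   -> ~0.000977 (a lone streak would be very surprising)
# 10 million  -> ~1.000000 (someone, somewhere, almost certainly has the streak)
```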

People are not evenly distributed throughout time and space. For example, we do not find it surprising that we inhabit Earth rather than Mars. At the same time, if there had been a global nuclear catastrophe, Earth’s population would be lower. As a result, the chance of a random person within a set of worlds being in a world where a particular event took place (Y) is not identical to the chance of that event taking place in a world within the given set (X). If an event is correlated with a change in population, then persons disproportionately live in worlds where population-increasing events took place, simply because more people live in those worlds; inversely, persons disproportionately rarely live in worlds where population-decreasing events took place.

Volcanic Island Thought Experiment:

To greatly simplify, let us imagine that all persons live on one of two volcanic islands. In the immediate past, island A erupted; island B has not had a recent eruption. First, let us imagine that everyone on island A died and, as a result, A has a population of 0 while B has a population of 20. In this case, while the probability that an island has had an eruption in the immediate past is ½, 0% of persons live on such an island. Next, let us imagine that the eruption is a little less deadly, and say that A has a population of 10 while B has a population of 20. Here, while the probability of an island having had an eruption is still ½, the probability of being a person living on an island that had an eruption in the immediate past is ⅓. As a result, one should not find it surprising to be living on an island that hasn’t had a volcanic eruption. The formula below converts between these two types of probability, with Y being the probability of an event being in one’s history, X being the probability of the event occurring in a given world, A being the mean population of worlds in which the event occurs, and B being the mean population of worlds in which the event doesn’t occur:

Y = AX / (AX + B(1 − X))
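Applying the formula to the second version of the island case (A = 10, B = 20, X = ½) recovers the ⅓ figure above:

Y = (10 × ½) / (10 × ½ + 20 × ½) = 5/15 = ⅓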

What is Nuclear War:

To apply this logic to nuclear close calls, we first need a best guess about what a nuclear war between the United States and the Soviet Union might look like. It is clear that such a conflict would be disastrous and would greatly reduce the global population. First, there would be tens of millions killed by the bombs themselves: “more than 90 million people dead and injured within the first few hours of the conflict.” (Glaser) Then there would be hundreds of millions of excess deaths caused by the evisceration of infrastructure in the participating countries; the collapse of the US power grid would result in the death of 90% of the population “through starvation, disease, and societal collapse.” (Pry) Finally, there would be billions of excess deaths resulting from the environmental and macroeconomic effects of the war; the entire world would experience a nuclear winter, in which the smoke created by the weapons would cause unprecedented surface darkening, a significant drop in surface temperatures, and large disturbances in the global climate that would alter local weather patterns. (Turco) While such a war might well lead to human extinction, for this paper I am going to treat such a conflict as decreasing the population by 95%. For the sake of simplicity, I am going to treat worlds that have had a global nuclear catastrophe as having 5% of the population of this world for any given year.

Playing with my Toy Model:

If a global nuclear catastrophe would decrease the population by 95%, and 50% of worlds have had a global nuclear catastrophe, then about 95% of persons living in the 2020s would be living in worlds like ours, where a global nuclear catastrophe has not occurred. Indeed, most people living in the 2020s live in worlds that have not had a global nuclear catastrophe, right up until the point where global nuclear catastrophe has occurred in more than 95% of worlds. In other words, it is our position as observers which explains why we don’t see a global nuclear catastrophe in our history, despite it being extremely likely for such a war to take place.
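For readers who want to check these figures, here is a minimal sketch of the toy model; it simply applies the conversion formula from earlier in this section with the 95%/5% population assumption (the code itself is only illustrative):

```python
# Toy model: what fraction of 2020s observers live in worlds *without* a global nuclear catastrophe?
def fraction_in_intact_worlds(x_catastrophe: float, survivor_share: float = 0.05) -> float:
    """x_catastrophe: share of worlds that have had a catastrophe.
    survivor_share: population of a catastrophe world relative to an intact world (5% here)."""
    a = survivor_share  # mean relative population of catastrophe worlds
    b = 1.0             # mean relative population of intact worlds
    y = (a * x_catastrophe) / (a * x_catastrophe + b * (1 - x_catastrophe))
    return 1 - y

for x in (0.50, 0.90, 0.95, 0.99):
    print(f"catastrophe in {x:.0%} of worlds -> {fraction_in_intact_worlds(x):.1%} of observers in intact worlds")
# 50% -> 95.2%, 90% -> 69.0%, 95% -> 51.3%, 99% -> 16.8%
```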

Section 3: Alternative Interpretations

It is possible that the Nuclear Fine-Tuning Argument doesn’t actually apply to nuclear risk. Just as the verified existence of an intelligent designer would make the existence of many worlds irrelevant to the explanation of universal fine-tuning, if another explanation accounts for why none of the nuclear close calls resulted in a cataclysm, then we wouldn’t have reason to believe that other worlds suffered a global nuclear catastrophe. Generally, there are two leading types of explanations used to account for the lack of global nuclear catastrophe: deterrence theory and high safety standards. Regardless of its form, a conventional explanation would need to show that evidence, such as that provided in this paper, which seems to indicate that the probability of global nuclear catastrophe has historically been extremely high, is in fact misleading. Just as in the case of universal fine-tuning, God might also act as a unified explanation. First I will discuss how the many-worlds explanation undercuts standard interpretations; then I will discuss the advantages and disadvantages of such explanations. Ultimately, it seems that standard interpretations are not fully explanatory.

The Issues with the Historical Record:

In standard cases, the historical record can give us a sense of what effect a particular policy, if implemented, is likely to have. For example, if one wanted to know the effects of a particular tax policy, one could look at various countries that have implemented such a policy and examine the outcomes. This remains true regardless of the number of worlds, and so there is no argument from the multiverse hypothesis as a premise to the conclusion that we cannot predict what the effect of the tax policy would be in our own country. However, this style of investigation cannot tell us the effectiveness of nuclear-deterrence policies such as M.A.D. (Mutually Assured Destruction). This is because, even if nuclear deterrence leads to global catastrophe before the 2020s in many worlds, persons studying nuclear deterrence are likely to be within the 5% of worlds where global nuclear catastrophe doesn’t take place. As a result, the empirical data will always show that the policy is effective, regardless of its actual effectiveness.

This does not mean, of course, that the adoption of nuclear deterrence measures and particular safety standards around the use of nuclear weapons couldn’t explain the absence of a global nuclear catastrophe; it simply means that the effects of such policies cannot be known merely through their historic outcomes. Rather, it is necessary to recognize that selection bias may be at work in cases that seem to involve anthropic coincidence and, in such cases, to extend our analyses to include the sequences of events that might lead to certain effects. This suggestion is in line with Martin E. Hellman’s observation, in his paper “Risk Analysis of Nuclear Deterrence,” that “estimating the failure rate of nuclear deterrence has similarities with estimating the failure rate of a nuclear reactor design that has not yet failed. In addition to estimating the failure rate, such a study also identifies the most likely event sequences that result in catastrophic failure.” As a result, we must examine whether these explanations explain why none of the close calls led to a global nuclear catastrophe.

Nuclear-Weapons Safety:

The Nuclear Fine-Tuning Argument doesn’t entail that all attempts to avoid a nuclear war are ineffective. Efforts to maintain nuclear-weapons safety explain, at least in part, why there haven’t been more nuclear accidents. In the case of the 1961 Goldsboro B-52 crash, the safety mechanisms on the bombs themselves did prevent their detonation. However, despite the absence of a detonation, this case does little to boost my confidence in this explanation. A declassified report on the first bomb notes that it “did not possess adequate safety for the airborne alert role on the B-52.” (Jones) Additionally, in the case of both bombs, the safety mechanisms did not work as intended. For the first bomb, which descended by parachute, a single switch, out of four safety mechanisms, prevented its detonation. The other bomb, which lacked a parachute, was partially armed when it left the aircraft; its arm/safe switch was found in the arm position. While efforts to make these weapons safer were well worth making, they do not account for the totality of anthropic coincidence.

Nuclear Deterrence: 

Nuclear deterrence may be part of the explanation for how we have thus far avoided a nuclear exchange. Insofar as nuclear wars are unwinnable, nuclear states acting in their own best interest should never fire a first strike. Any explanation of why there hasn’t been a global nuclear catastrophe must take this fact into account. For example, deterrence does a great job of explaining why neither the US nor the USSR decided to attempt a first strike at any point during the Cold War. However, this explanation only partially explains the phenomenon. While deterrence is a highly relevant factor in many close calls (for example, Captain Vasili Arkhipov’s decision not to fire a nuclear torpedo was informed by his understanding of what such an action would have meant), it is not a complete explanation. The vested interest of states and persons in not firing nuclear weapons doesn’t fully explain this case. First of all, the conditions on the sub weren’t ideal for rational decision-making. According to V.P. Orlov, the commander of the special assignment group on the sub,

“[T]he accumulators on B-59 were discharged in a state of water, only emergency lights [were] functioning. The temperature in the compartments was 45-50 C up to 60 in the engine compartment. It was unbearably stuffy. The level of CO2 in the air reached a critical, particularly deadly mark.” Additionally, the crew of the sub wasn’t sure in what context they were forced to make their decisions, with one officer remarking, “maybe the war has already started up there.”

While the possibility of nuclear armageddon remains a good reason for not wanting to fire, there were many compelling reasons for them to fire, with that same officer saying at the time, “We’re going to blast them now! We will die but we will sink them all! We will not disgrace our navy!” (Mozgovi)

A defender of the deterrence hypothesis could say that even in these conditions deterrence is still a sufficient explanation: despite all of these conditions being reasons to use nuclear arms, the deterrent threat of retaliation still outweighs them all. Yet this fails to consider that everyone else on board besides Captain Vasili Arkhipov wanted to fire. In order for deterrence to fully explain cases like this, it would need to be impossible for anyone to want to fire a nuclear weapon. If it was possible for the sub to return fire, then there is some degree of anthropic coincidence. Furthermore, given that his opposition to firing the weapon was unique among the crew, in order to maintain that the missile was unlikely to be fired in this case, one would need to maintain that Arkhipov’s disposition was typical of those on Soviet nuclear subs in 1962, while the crew of the B-59 was particularly willing to fire a nuclear torpedo.

Assuming that deterrence necessitates never firing a nuclear missile under any circumstance does explain some other cases. Boris Yeltsin’s decision not to fire in 1995 is explained through this framework. However, such an assumption undermines the principle of deterrence itself. Consider Second Lieutenant Allan D. Childers, the commander of a Titan II missile combat crew at Little Rock Air Force Base. Eric Schlosser notes that “Childers had faith in the logic of nuclear deterrence: his willingness to launch the missile ensured that it would never be launched.” (Schlosser: 11) We cannot have it both ways. If it is always irrational to fire a nuclear missile, due to deterrence, then the logic of deterrence fails and first strikes come back onto the table. On the other hand, if a retaliatory strike is rational, then deterrence doesn’t explain cases in which a person believed they would be firing a retaliatory strike.

God:

As in the case of universal fine-tuning, the God hypothesis could be used to fill the explanatory gap. In the case of fine-tuning, the argument goes: 1) it is unlikely for our universe to be life-friendly; 2) God, if they existed, would ensure that our universe was life-friendly; therefore, 3) our universe being life-friendly is evidence for God’s existence. However, such an explanation lacks any predictive ability, and it creates more questions than it answers. If God doesn’t want there to be a nuclear catastrophe, why do nuclear weapons even exist, and how was the US able to drop two nuclear bombs on Japan during the Second World War? Perhaps God only cares for America, but, if this is the case, why is American infrastructure terrible? Perhaps God works in mysterious ways; yet, if her ways are truly unintelligible, we must look for explanations elsewhere.

All of these alternative explanations ultimately fail to fully explain the set of anthropic coincidences. Of course, the nuclear fine-tuning argument might still fail to explain the historical record. In the next section, I will raise possible objections to my previous argument and respond to each in turn.

 

Section 4: Objections and Responses

I am sure that much of what I have said in this paper will be extremely controversial. This section will serve to make clear my position and respond to some possible objections to it. First I will discuss the relationship that my argument has to something known as the “self-sampling assumption.” Then I will discuss concerns regarding the reference class used in the nuclear fine-tuning argument. Finally, I will explain how some of the seemingly counterintuitive results of my argument are, in fact, perfectly intuitive. 

Engaging with the Self Sampling Assumption:

According to the self-sampling assumption, each observer-moment should reason as if it were randomly selected from the class of all observer-moments in its reference class. (Bostrom 2001: 3) The self-sampling assumption is controversial, partly because it entails the Doomsday Argument, a probabilistic argument that seeks to show that the end of the world is near due to the improbability of being born now if there will be far more people in the future. (Carter) Fortunately, the nuclear fine-tuning argument does not require me to take a stance upon the self-sampling assumption. Rather, the argument as laid out in this paper relies only on the much weaker assumption that, if we discover our positionality to be probable for an observer of a given type, the fact that we are observers of said type makes such a positionality probable. This is distinct from the self-sampling assumption insofar as this assumption is not predictive, but merely descriptive. It does not extend into the future, and therefore does not allow us access to information about the future based merely on population data. As a result, this assumption does not entail the Doomsday Argument.

Turning to another point, some objections may focus on the reference class used in the nuclear fine-tuning argument. In my argument, I maintain a reference class of humans living in the 2020s in worlds that were identical up until the precise moment of the Trinity Test. This is distinct from the universal fine-tuning argument, which takes the observer class to be life in general. One could maintain that one who exists in a particular world could not exist in another world, and therefore that persons in other worlds must be part of different reference classes. However, I find such a position uncompelling. If, while sleeping, a person were swapped with their counterpart in another world indistinguishable from their own, except for the conditions of stars light-years away, they couldn’t possibly discover the change in their world. As a result, from one’s subjective perspective, one could be in any number of worlds.

On the other hand, others might argue that this reference class is too restrictive: that one could be a person living in another time, and that, if the nuclear fine-tuning argument holds, we should find it surprising that we are living in the 2020s, nearly eight decades after the Trinity Test. I'll grant that one could be in another time; this is true to some degree. However, it isn't that surprising. If we assume that global nuclear catastrophe is the only relevant factor and that 95% of worlds experience a global nuclear catastrophe that eliminates 95% of the population before 2020, then the mean population of worlds in 2020 is about 0.78 billion. On the other hand, given that no worlds experience a global nuclear catastrophe before 1820, the mean population of worlds in 1820 is 1 billion. So there still wouldn't be that many more people living in the 19th century than in the 21st. As a result, the argument still holds if people from all times are included in the reference class. If one hundred years pass without either a reduction in nuclear risk or a global nuclear catastrophe, this particular argument will hold more weight. However, since this objection suffers from problems similar to those facing the next objection, it will be resolved in the next paragraph.
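As a rough check of the figures above, the sketch below computes the mean population across worlds under the same assumptions, taking roughly 8 billion people in an intact 2020 world and 1 billion in 1820 (those baseline populations are my own inputs; the 95%/95% assumptions come from the paragraph above):

```python
# Mean population across worlds under the reference-class objection's assumptions.
def mean_population(intact_pop_billions: float, x_catastrophe: float, survivor_share: float = 0.05) -> float:
    nuked_pop = intact_pop_billions * survivor_share
    return x_catastrophe * nuked_pop + (1 - x_catastrophe) * intact_pop_billions

print(mean_population(8.0, 0.95))  # 0.78 billion in 2020, the figure used above
print(mean_population(1.0, 0.00))  # 1.0 billion in 1820, since no worlds have yet had a catastrophe
```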

Seemingly Counter-Intuitive Results:

Some consequences of this argument might seem counterintuitive. For example, by this argument, if a given person lives in a world destroyed by a global nuclear catastrophe, as opposed to a world that hasn’t had a global nuclear catastrophe, they should find this fact surprising. However, the counterintuitive ring of this objection flows from how it has been framed. As stated, the objection merely reframes the idea that one should find it surprising to have survived a global nuclear catastrophe. I would be shocked to survive such a conflict, so I happily concede the objection as so stated. Similarly, the second objection gets stronger over time only because, as time passes, it gets more surprising that there hasn’t been a global nuclear catastrophe.


 

 

Conclusion

If there are many worlds, then many of them have been destroyed by nuclear fire. In the same way that the existence of many worlds undercuts the inference to God from universal fine-tuning, it also undercuts the inference from our survival thus far to global nuclear catastrophe being unlikely. This is because, just as no one lives in universes where life is impossible, few people live after nuclear armageddon. The existence of other worlds provides the only unified explanation for the set of anthropic coincidences surrounding nuclear weapons. Alternative explanations either fail to address the totality of coincidence or, in the case of divine intervention, could explain any conceivable set of events and, as a result, explain nothing. We should therefore take the threat of global nuclear catastrophe extremely seriously.

As I write, Russia is attempting to leverage the threat of nuclear war to gain control of Ukraine. The entire world holds its breath as the task of avoiding a global nuclear catastrophe is shifted to the United States and her allies. This is not the first time that our world has teetered on the brink of nuclear armageddon–but, for many worlds, it will be the last. 

At one time there was pressure on governments to move toward disarmament; however, this effort has lost steam. In my view this is largely because people, mostly subconsciously, believe in the teleological interpretation of the nuclear fine-tuning argument: that they have nothing to fear because they have survived thus far. We are not powerless; we can put pressure on our governments to give up their nuclear weapons. The many-worlds interpretation of the nuclear fine-tuning argument gives us another way to understand our history, and I am sure our children will find themselves in worlds in which we did.

 

Bibliography

Barnes, Luke A., 2012, “The fine-tuning of the universe for intelligent life”, Publications of the Astronomical Society of Australia, 29(4): 529–564. doi:10.1071/AS12015

Becker, Rachel. “Human Error in a Nuclear Facility Nearly Destroyed Arkansas.” The Verge, 10 Jan. 2017.

Bostrom, Nick (2001). “The Doomsday Argument, Adam & Eve, UN++, and Quantum Joe.” Synthese 127 (3): 359-387.

Bostrom, Nick. “The Vulnerable World Hypothesis.” Global Policy 10, no. 4 (2019): 455-476. URL = https://nickbostrom.com/papers/vulnerable.pdf 

Carter, Brandon, and W. H. McCrea (1983). “The anthropic principle and its implications for biological evolution.” Philosophical Transactions of the Royal Society of London.

Butler, Lee. “Address to the Canadian Network Against Nuclear Weapons.” Montreal, Canada, 11 March 1999.

Christ, Mark K. “Titan II Missile Explosion (1980).” Encyclopedia of Arkansas, 11 Feb. 2020, https://encyclopediaofarkansas.net/. Accessed 4 Aug. 2022. 

Friederich, Simon. “Reconsidering the Inverse Gambler’s Fallacy Charge Against the Fine-Tuning Argument for the Multiverse.” Journal for General Philosophy of Science 50 (2019): 29–41. https://doi.org/10.1007/s10838-018-9422-3

Glaser, Alex. “Plan A | Princeton Science & Global Security.” Princeton University, The Trustees of Princeton University, 6 Sept. 2019, https://sgs.princeton.edu/the-lab/plan-a.

Jones, Parker, “Goldsboro Revisited: Account of Hydrogen Bomb near-Disaster over North Carolina – Declassified Document.” The Guardian, Guardian News and Media, 20 Sept. 2013, https://www.theguardian.com/world/interactive/2013/sep/20/goldsboro-revisited-declassified-document. 

Henderson, L. J. The Fitness of the Environment. New York: Macmillan Company, 1913.

Kopparapu, Ravi Kumar, et al. “Habitable Zones around Main-Sequence Stars: New Estimates.” The Astrophysical Journal 765, no. 2 (2013): 131. https://doi.org/10.1088/0004-637x/765/2/131

Hoffman, David. “Cold-War Doctrines Refuse to Die.” Washington Post, 15 Mar. 1998. 

Mozgovi, Alexander. “The Cuban Samba of the Quartet of Foxtrots: Soviet Submarines in the Caribbean Crisis of 1962.” Military Parade, Moscow, 2002. Translated by Svetlana Savraskaya, The National Security Archive

Ord, Toby. The Precipice: Existential Risk and the Future of Humanity. London: Bloomsbury Publishing, 2021.

Pry, Peter Vincent. Statement for the Record Joint Hearing before the Subcommittee on National Security Subcommittee on the Interior House Committee on Oversight and Government Reform. 13 May 2015, https://republicans-oversight.house.gov/wp-content/uploads/2015/05/Pry-Statement-5-13-EMP.pdf 

Schlosser, Eric. Command and Control: Nuclear Weapons, the Damascus Accident, and the Illusion of Safety. New York: Penguin, 2013.

Siddiqa, Ayisha. “On Another Panel About Climate, They Ask Me to Sell the Future and All I’ve Got is a Love Poem.” On Being. June 10, 2022, https://onbeing.org/poetry/on-another-panel-about-climate-they-ask-me-to-sell-the-future-and-all-ive-got-is-a-love-poem/ 

Swinburne, Richard (2010). The Argument to God from Fine-Tuning. In Science and Religion in Dialogue. Wiley-Blackwell. pp. 223--233.

Turco RP, Toon OB, Ackerman TP, Pollack JB, Sagan C. Nuclear winter: global consequences of multiple nuclear explosions. Science. 1983 Dec 23;222(4630):1283-92. doi: 10.1126/science.222.4630.1283. PMID: 17773320.


 

Comments

Yes, agree. Two more points:

Not all population counts, but only those who can think about anthropics. A nuclear war would disproportionately destroy cities with universities, so the population of scientists could decline tenfold, while the rest of the population is only halved.

Anthropic shadow means higher fragility: we underestimate how easy it is to trigger a nuclear war. Escalation is much easier. Accidents are more likely to be misinterpreted. 

I am skeptical of this first claim about anthropics. To me, it seems like every observer type can be relevant. The particular type used in the context of anthropic reasoning is ultimately subjective and is chosen insofar as it is illuminating. I agree that people thinking about anthropics are particularly unlikely after nuclear war.

Yes, I agree. This paper is framed around the fine-tuning argument because the multiverse undermining the teleological interpretation is uncontroversial within the theory, and so extending this argument to nuclear war doesn't require someone who accepts anthropic shadow to accept the conclusion of this paper. I happen to believe in anthropic shadow, and such a belief implies our situation is worse than what is implied by this paper. 

Do you regard animals as observers? 

Yes. I am a random animal within the set of animals. I am also a random human,  a random American, a random anthropics enthusiast, a random person on the EA forum, a random non-binary person, a random Jewish person... etc 

When considering different problems I experience different forms of selection effects in different ways. For example, insofar as I am Jewish, I am more likely to exist in a world where the Nazis lost WW2.

I am unsure how these different categories interact. I imagine that I am more likely to live in a world with more humans, but fewer total animals than I am to live in a world with more animals but fewer humans. I take any category to be a legitimate starting point and am unsure how to weigh them against each other. 

If you were a random animal, you would be an ant with 99.999999% probability. So either anthropic reasoning is totally wrong, or animals are the wrong reference class.

The fact that I am not an ant doesn't undermine it because I know that I am human. Humans will always be humans, and so have a tendency to discover themselves to be humans. This selection effect is even more extreme than the tendency for humans to find themselves in worlds without a nuclear war. 

I could not be anything but what I am, as then I would not be myself. A reference class of just me is however not useful. So to do anthropic reasoning I conceive of myself as one of a set to which I belong and consider how general observation biases within that set might be misleading me. 

In the fine-tuned planet case, the fact that animals couldn't have occurred on a planet without liquid water is useful. The various contingencies around the seeming fine-tuning of the earth are explained by my being an animal. I am where animals, and by extension I, could exist.

I think that two different conjectures are presented here:

"I am animal" - therefore liquid water on the planets etc.

"I am randomly selected from all animals".

The first is true and the second is false.

"I am randomly selected from all animals" I don't endorse this claim. It implies that my essence is prior to my existence, and I disagree with this assumption. I do believe I was once a soul placed into a random body within a set.

My essence follows from my existence; if I were different, I would be someone else. I do stand by the claim "I can reason as if I am randomly selected from all animals." This is true for any set I am a part of: if you did select a random member of that set, I am a possible result. Some sets just give unintuitive results, but that's simply because reasoning from a particular positionality only gives part of the picture.

Anthropic shadow only requires the latter epistemic claim to be valid and is not dependent on the metaphysical claim.

I didn't try to make any metaphysical claims. I was just pointing to a conditional probability: if someone is writing comments on LW, (s)he is (with very high probability) not an animal. Therefore LW commentators are a special, non-random subset of all animals.


[anonymous]:

This is very interesting! Thanks for taking the time to write it up.

I have a concern regarding the B-52 crash that happened over North Carolina. It seems to me that, if it is true that this led to accidental nuclear detonations in most worlds, then we should expect to live in a world where the B-52 crash did lead to an accidental nuclear detonation but a nuclear holocaust was averted nevertheless.

My reasoning is as follows. In a fraction of worlds where nuclear detonation occurs, we can expect that the Americans would recognize these detonations as accidental, because the signs are there. For example, North Carolina does not seem like a prime target for a nuclear attack (though I'm not American, so maybe I'm mistaken about this). It would also seem odd that there were only two nukes: an attack from the Soviets would likely be bigger than that (reasoning along these lines is what caused Stanislav Petrov to recognize a false alarm in the Soviet missile detection system). The Americans might also remember that they had a B-52 flying over North Carolina. So in worlds where detonation occurs but the Americans pick up on these details, it is plausible that the Americans do not launch a retaliatory strike against the Soviets, and a nuclear war is averted so humanity can continue on without significant population loss.

Because of this, it seems like the observer selection effects should favor us living in a world where the B-52 crash led to two accidental nuclear detonations in North Carolina without causing nuclear war. The suggestion seems to be that in most worlds this crash did lead to accidental detonation, but not all of these worlds experienced significant population loss. So depending on how we crunch the numbers, we may even expect most observers to exist in worlds where the B-52 crash led to accidental nuclear detonation. But we do not live in one of these worlds. So arguably the observations we are making (of the B-52 crash not leading to detonation) are observations we are not likely to be making.

So here's my question: do you think the fact that there wasn't a nuclear detonation in North Carolina should lower our confidence in the Nuclear Fine Tuning Hypothesis?

I suspect that you might say no because the Nuclear Fine Tuning Hypothesis does not rely on the self-sampling assumption. As you mentioned, it only requires that we make descriptive rather than predictive claims. But it's still not entirely clear to me how this distinction works. Because if we are explaining our current observations on account of them being probable on anthropic grounds, it seems unavoidable that we should predict that what we will observe will be probable on anthropic grounds too, simply because anthropic probability is on the table and the observations we make in the future are most likely going to be observations we are likely to make. Or at least that's how it seems to me. I'd be interested to hear if/why you disagree with this.

I don't maintain that the B-52 crash led to nuclear detonation in most worlds; the examples are treated in the paper as straight coin flips, but this is mostly to illustrate the general threat of accidental nuclear war. Zeroing in on this case, I'd be surprised if it were more likely than not for this crash to lead to a nuclear detonation: while the safety mechanisms were inadequate for something as dangerous as a nuclear device, they are still a relevant factor.

Throughout this whole project, this is a question that has continued to bug me. Would the US have had the ability to understand that it had nuked itself rather than blaming it on the Soviet Union? I honestly don't know how to go about answering this question; my gut says no, and this is why there's never been an accidental nuclear detonation on US soil. This is an assumption that carries through this paper, but I'd be interested to read a more nuanced take on it. It might also be that, while the B-52 crash leading to a nuclear detonation wouldn't have immediately led to nuclear war, it would have increased tensions during the Cuban missile crisis (i.e., it might not immediately lead to nuclear war, but it does decrease the odds of avoiding one). If the nuclear detonation (in the B-52 case) is even correlated with nuclear war, then we should expect persons to disproportionately live in worlds where it doesn't occur.

It is very weird that there has never been an accidental nuclear detonation on American soil; one accidental detonation would significantly change my assessment. As a result, I think the B-52 crash is still evidence in favor of my hypothesis.

This is a promising counterargument; I think ultimately making it would require a complex assessment of US cold-war propaganda and a deep dive into those in positions of power during different parts of the cold war.

 

I'm not exactly sure if my distinction between descriptive claims and predictive claims really works. My intuition is that descriptive claims are simply safer than predictive claims as they are less vulnerable to unseen factors. I will think about this more.

I think it is due to how nuclear war (in the past) would mean that I specifically wouldn't exist. On the other hand, if there's a trillion people in the future, I still would.

This is a really interesting topic.

I believe what you are describing here is the 'Anthropic Shadow' effect, which was described in this Bostrom paper: https://nickbostrom.com/papers/anthropicshadow.pdf

From what I can tell, your arguments are substantially the same as those in the paper, although I could be wrong?

Personally I've become pretty convinced that the anthropic shadow argument doesn't work. I think if you follow the anthropic reasoning through properly, a long period of time without a catastrophe like nuclear war is strong Bayesian evidence that the catastrophe rate is low, and I think this holds under pretty much whatever method of anthropic reasoning you favour.

I spelled out my argument in an EA forum post recently, so I'll link to that rather than repeating it here. It's a confusing topic and I'm not very sure of myself, so would appreciate your thoughts on whether I'm right, wrong, or whether it's actually independent of what you're talking about here: https://forum.effectivealtruism.org/posts/A47EWTS6oBKLqxBpw/against-anthropic-shadow 

This paper isn't using anthropic shadow as its framing as it's looking at the issue through the lens of the fine-tuning argument, rather than through the methodology in the anthropic shadow paper. I didn't bring in anthropic shadow as I didn't want to parse exactly how my view was the same or different from it and figured adding discussion of an additional paper would add unneeded complexity. 

Notably, my paper uses close calls that seem to have been avoided only by luck as evidence that risk is higher than it appears and then explains our luck through our position as observers, rather than simply using our position as observers as evidence that risk is higher than it appears.

I am curious to know how close calls avoided only through luck, like the Damascus missile explosion, affect your assessment. I will read and respond to your paper. (EDIT: I left a comment)

I've replied to your comment on the other post now.

I don't want to repeat myself here too much, but my feeling is that explaining our luck in close calls using our position as observers does have the same problems that I think the anthropic shadow argument does.

It was never guaranteed that observers would survive until now, and the fact that we have is evidence of a low catastrophe rate.

It was never guaranteed that observers would survive until now, and the fact that we have is evidence of a low catastrophe rate.

This is why this paper assumes there are an arbitrarily large number of worlds. If there are an arbitrarily large number of worlds, then if it is possible for a particular event to occur, it is guaranteed that it will occur in some world.

To formalize my claim if

  1. it is possible for observers to survive until now, and
  2. There are an arbitrarily large number of worlds, then
  3. All possibilities will occur in some world

Of course, if you don't believe in many worlds, this argument remains valid but isn't sound. This is the same line of argument that allows the many-worlds view to block the inference to God from universal fine-tuning. Do you have a view on the fine-tuning cases?

I will respond to your comment on the other post. We can move into DMs if you are interested in discussing this further, as that would consolidate the conversation.

I think I just found a way to roughly estimate nuclear risk by using the assumptions that parallel worlds exist and that I exist in a world that has never experienced nuclear war because it is more probable to be in such a world. Let x be the total number of worlds where I was born. Let's also assume that time flows at the same rate in all worlds, so in each world the current year is 2022. In some worlds I, for various reasons, died. In others I'm still alive. Obviously, I'm more likely to be alive in worlds where there are more people, and worlds where nuclear war happened will be significantly less populated. I read that a nuclear war in our world would kill 5 billion of the current 8 billion people. So, in the most generous case, a post-nuclear-war world would have about 3 billion people, while worlds that avoided nuclear war would have about 8 billion.

Now we can form an inequality, x*y*3 < x*(1-y)*8, where y is the risk of nuclear war, namely the share of worlds that experienced it between my birth and now. The total population of all post-nuclear-war worlds combined must be less than the total combined population of all no-nuclear-war worlds, as I judge my current existence in a no-nuclear-war world to be evidence that there is a higher chance for me to exist in a no-nuclear-war world this year. We can divide both sides by x to get rid of it, and then solve for y. https://www.wolframalpha.com/input?i=y*3%3C%281-y%29*8 As you can see, y < 8/11, or in other words, no more than 73% of all worlds (where I was born) got nuked since my birth, if we assume that this inequality is true.
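To spell out the algebra behind the WolframAlpha link (just the steps implicit above): dividing both sides by x gives

$$3y < 8(1-y) \;\Rightarrow\; 11y < 8 \;\Rightarrow\; y < \tfrac{8}{11} \approx 0.73$$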

We don't necessarily need to assume that time moves at the same rate in all worlds; we can simply use the relevant observer class of persons living 60 years into the era of nuclear deterrence.

I worry about the assumption that "it is more probable to be in a world that hasn't experienced a nuclear war". We can't know that it is more likely that we are in a world that hasn't experienced nuclear war; we are, however, justified in believing that it is more likely that we would be in a world that hasn't experienced nuclear war (as that's our immediate context). So if nuclear war kills 5 billion people, we can say that it is unlikely that nuclear war has destroyed more than 73% of worlds, but that figure shouldn't be treated as a hard upper limit.

EDIT: I initially thought this was a recreation of the "Playing with my Toy Model" section and then realized you were saying something different. I cut the part of this comment where I addressed that.

I don't know how different my calculations are from yours, as I haven't been able to work out how to use your formula. Can you give me an elaborate example of using it, step by step?

"We can't know that it is more likely that we are in a world that hasn't experienced nuclear war, we are however justified in believing that it is more likely that we would be in a world that hasn't experienced nuclear war." Sorry, but I fail to see the difference between "it is more likely that we are in a world that hasn't experienced nuclear war" and "it is more likely that we would be in a world that hasn't experienced nuclear war"

This is my bad; I have discovered that the area around the equation has a lot of typos. The formula is Y = AX/(AX + B(1-X)), with Y being the probability of an event being in one's history, X being the probability of the event occurring, A being the mean population of worlds within the set where the event occurs, and B being the mean population of worlds in which the event doesn't occur:

So let's say there are two islands: Island A has had a volcanic eruption and Island B hasn't. As a result, A has a population of 5 while B has a population of 10.

We can plug in 50% for X, as half the islands have had eruptions. So we get:

Y = (5*0.5)/((5*0.5) + (10*(1-0.5)))

So the probability of being on an island that has had a volcanic eruption is 1/3 

So in the "Playing with my Toy Model" section, I'm saying that if worlds without nuclear war have a population of 8 billion and worlds with nuclear war have a population of 0.4 billion (i.e. nuclear war kills 95% of people), then

1 - Y = (8*0.05)/((8*0.05)+(0.4*0.95)) ≈ 0.51

(This is 1 - Y rather than Y as defined above, because it is the fraction of people in worlds where the war doesn't occur.)

So if nuclear war kills 95% of people, you are still slightly more likely (about 51%) to be living in a world where nuclear war doesn't occur, even if nuclear war occurs in 95% of worlds.
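As a rough sanity check, here is a minimal Python sketch of these two calculations (the function name prob_event_in_history is my own label; the numbers are just the ones from the island and nuclear examples above):

```python
def prob_event_in_history(x, a, b):
    """Fraction of observers whose world has the event in its history.

    x: probability that a world experiences the event (share of worlds)
    a: mean population of worlds where the event occurs
    b: mean population of worlds where the event does not occur
    """
    return (a * x) / (a * x + b * (1 - x))

# Island example: half the islands erupted (x = 0.5), the erupted island has
# 5 people and the non-erupted island has 10, giving a 1/3 chance of being on
# an erupted island.
print(prob_event_in_history(0.5, 5, 10))        # ~0.333

# Nuclear toy example: war occurs in 95% of worlds, war-worlds have 0.4bn
# people and no-war worlds have 8bn, so the chance of being in a world
# *without* a war in its history is still about 51%.
print(1 - prob_event_in_history(0.95, 0.4, 8))  # ~0.513
```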

Basically, all I'm saying with "We can't know that it is more likely that we are in a world that hasn't experienced nuclear war, we are however justified in believing that it is more likely that we would be in a world that hasn't experienced nuclear war." is that it wouldn't be that surprising for us to be in a world where nuclear war hadn't occurred even if it turned out that there was only a 40% chance of being in a world where nuclear war didn't occur and 60% of people were in worlds where it had occurred. This point might be pedantic.

1."Y=AX/((AX)+(B(1-X))" How have you got this formula? Sorry, my probability math is a bit rusty, so maybe I'm missing something obvious.

2. I find it a poor choice to use two islands as your example. In the context of this problem we deal with two sets, each of size >1. Even more, such an example biases us to think that an observer is more likely to belong to the set of worlds where the catastrophe hasn't happened, since there are only two islands. That doesn't need to be the case: while each world that experienced the catastrophe is less populated, the combined population of post-catastrophe worlds can still be greater than the population of no-catastrophe worlds if there are many post-catastrophe worlds and few no-catastrophe worlds. IMHO, it would be better to use set A (islands where a volcanic eruption happened) and set B (islands where there were no volcanic eruptions).

3."So the probability of being on an island that has had a volcanic eruption is 1/3 " Why would we want to know this? I think that calculation of risk of catastrophe (i.e. share of ruined worlds/islands) is much more relevant.  

4. It's unclear to me how you select the worlds among which M.A.D. either happened or didn't. For example, in my comment I limited myself to all worlds where I was born. If you don't do the same, then you will run into problems. Consider this: there are parallel worlds where Roman civilization never crumbled, where they had time to achieve everything that current Western civilization has achieved, plus several additional centuries to go beyond it. In 2022 there would be Roman worlds that have colonized several planets of the Solar system, maybe even terraformed them, and found ways to sustainably support a much bigger population than 8 billion. It seems plausible for the population to be distributed heavily in favor of Roman worlds in the current year 2022. Yet you and I aren't in a Roman world. Curious, don't you think?

  1. The formula is just the fraction of people who live in a world where a given event happened. You take (the average number of persons in a world where the event took place * the probability of the event taking place) and divide it by (the average number of persons in a world where the event took place * the probability of the event taking place + the average number of persons in a world where the event didn't take place * the probability of the event not taking place); there is a short derivation spelled out after this list. Math is admittedly not my strong suit.
  2. The two-island example is just meant to give the simplest possible case; this is why there are only two. You are correct that there could be more people in post-catastrophe worlds overall.
  3. Yeah, certainly we might rather know the percentage of worlds where the catastrophe occurred. The formula is useful because it lets you convert between the percentage of worlds in which a thing happens and the percentage of persons that thing happens to (if you know the average populations of worlds where the thing happens and worlds where it doesn't).
  4. I imagine the set of worlds to be identical up until the precise moment of the Trinity Test (July 16th, 1945, 5:29 AM); however, this is just a narrative choice, and ultimately it's kind of arbitrary. My suspicion is that the doomsday argument is valid because space colonization is logistically impossible (or at least so implausible that it basically never happens, or, when it does happen, very few people actually live off-world due to logistical complications), and this is why we don't find ourselves in a Roman world. We might also just be in an unlikely circumstance.
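To make point 1 a little more explicit, here is one way to spell out the counting (just a restatement of the formula above, with N standing for the total number of worlds): if a fraction X of the N worlds have the event, with mean population A, and the remaining fraction 1-X have mean population B, then the fraction of people who live in an event-world is

$$Y = \frac{NXA}{NXA + N(1-X)B} = \frac{AX}{AX + B(1-X)}$$

The N cancels, which is why the total number of worlds never shows up in the formula.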

Thank you for the very thorough and well-researched essay.

You are very welcome :)

Great article. I had thought about similar reasoning in terms of X-risk, but thinking about how it applies to catastrophes that reduce the number of observers is important, too.

Hadn't thought about this in terms of nuclear risk, either.

I'm glad you enjoyed it :) 

If that B-52 had exploded, the death toll would probably have been smaller than that of the Hiroshima bomb (it landed in a random bit of countryside, not a city). Unless you think that accident would have triggered all-out nuclear war? Sure, it would have had a large impact on American politics, quite possibly leading to total nuclear disarmament, at least by America. But no anthropic or fine-tuning argument applies.

Suppose we were seeing lots of "near misses": near misses of events that, had they happened, would have destroyed a random American town. Clearly these aren't anthropic effects or anything similar. I would guess something about nuclear safety engineers being more or less competent in various ways. Or nuclear disarmament supporters in high places who want lots of near-miss scares. Or the bombs are mostly duds, but the government doesn't want to admit this.

To clarify, I'm of the view that during this time period the unplanned detonation of a nuclear weapon on American soil would have prompted a nuclear exchange between the US and the Soviet Union. I think this is a relatively safe assumption given how high tensions were and how gung-ho the US was with nuclear weapons. I think the coincidences surrounding nuclear weapons also support this interpretation.

The alternative explanations you suggest are ad hoc and don't hold up to scrutiny. Disarmament supporters wouldn't favor a system with more close calls just so that fear over those close calls might lead to disarmament; this only makes sense if they support disarmament for reasons other than being worried about nuclear risk. The bombs being duds also doesn't make sense: none of the bombs in these cases turned out to be duds, and if they were, that would be a massive conspiracy, and such a conspiracy would be hard to keep under wraps (it also conflicts with the US's general nuclear strategy).