
Update on 16 October 2023: I have now done my own in-depth analysis of nuclear winter.

This is a crosspost for Nuclear Winter by Bean from Naval Gazing, published on 24 April 2022. It argues Toon 2008 has overestimated the soot ejected into the stratosphere following a nuclear war by something like a factor of 191[1] (= 1.5*2*2*(1 + 2)/2*(2 + 3)/2*(4 + 13)/2). I have not investigated the claims made in Bean's post, but doing so seems worthwhile. If its conclusions hold, the soot ejected into the stratosphere following the 4,400 nuclear detonations analysed in Toon 2008 would be 0.942 Tg[2] (= 180/191) instead of "180 Tg". From Fig. 3a of Toon 2014, the lower soot ejection would lead to a reduction in temperature of 0.2 ºC, and in precipitation of 0.6 %. These would have a negligible impact on food security, and imply the deaths from the climatic effects would be dwarfed by the "770 million [direct] casualties" mentioned in Toon 2008.
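As a quick check, here is a minimal sketch of that calculation (the factors come from Bean's error table near the end of the post, with each interval converted to the mean of its bounds, per footnote 1):

```python
# Reproduce the overall error factor implied by Bean's table of error factors.
# Intervals are converted to the mean of their lower and upper bounds.
factors = [1.5, 2, 2, (1 + 2) / 2, (2 + 3) / 2, (4 + 13) / 2]

overall = 1.0
for f in factors:
    overall *= f

print(f"overall error factor: ~{overall:.0f}")            # ~191
print(f"implied soot ejection: ~{180 / overall:.3f} Tg")  # ~0.94 Tg instead of 180 Tg
```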

For context, Luísa Rodriguez estimated "30 Tg" of soot would be ejected into the stratosphere in a nuclear war between the United States and Russia. Nevertheless, Luísa notes the following:

As a final point, I’d like to emphasize that the nuclear winter is quite controversial (for example, see: Singer, 1985; Seitz, 2011; Robock, 2011; Coupe et al., 2019; Reisner et al., 2019; Pausata et al., 2016; Reisner et al., 2018; Also see the summary of the nuclear winter controversy in Wikipedia’s article on nuclear winter). Critics argue that the parameters fed into the climate models (like, how much smoke would be generated by a given exchange) as well as the assumptions in the climate models themselves (for example, the way clouds would behave) are suspect, and may have been biased by the researchers’ political motivations (for example, see: Singer, 1985; Seitz, 2011; Reisner et al., 2019; Pausata et al., 2016; Reisner et al., 2018). I take these criticisms very seriously — and believe we should probably be skeptical of this body of research as a result. For the purposes of this estimation, I assume that the nuclear winter research comes to the right conclusion. However, if we discounted the expected harm caused by US-Russia nuclear war for the fact that the nuclear winter hypothesis is somewhat suspect, the expected harm could shrink substantially.

Like Luísa, I have been assuming "the nuclear winter research comes to the right conclusion", but I suppose it is worth bringing more attention to potential concerns. I have also not flagged them in my posts, so I am crossposting Bean's analysis for some balance.

Nuclear Winter

When I took a broad overview of how destructive nuclear weapons are, one of the areas I looked at was nuclear winter, but I only dealt with it briefly. As such, it was something worth circling back to for a more in-depth look at the science involved.

First, as my opponent here, I’m going to take What the science says: Could humans survive a nuclear war between NATO and Russia? from the prestigious-sounding “Alliance For Science”, affiliated with Cornell University, and the papers it cites, in hopes of being fair to the other side. Things don’t start off well, as they claim that we’re closer to nuclear war than at any time since the Cuban Missile Crisis, which is clearly nonsense given Able Archer 83 among others. This is followed by the following gem: “Many scientists have investigated this question already. Their work is surprisingly little known, likely because in peacetime no one wants to think the unthinkable. But we are no longer in peacetime and the shadows of multiple mushroom clouds are looming once again over our planet.” Clearly, I must have hallucinated the big PR push around nuclear winter back in the mid-80s. Well, I didn’t because I wasn’t born yet, but everyone else must have.

Things don’t get much better. They take an alarmist look at the global nuclear arsenal, and a careful look at the casualties from Hiroshima and Nagasaki, bombs vastly smaller than modern strategic weapons and with rather different damage profiles. Hilariously, their ignorance of the nuclear war literature extends to the point of ignoring fallout because there was relatively little fallout from those two airbursts, although it’s well-known that groundbursts produce much more and are likely to be used in a modern nuclear war.

But now we get to the actual science, although before digging in I should point out that there are only a few scientists working on this area. The papers they cite include at least one of Rich Turco, Owen Toon or Alan Robock as an author, sometimes more than one. Turco was lead author on the original 1983 nuclear winter paper in Science and Toon was a co-author, while Robock has also been in the field for decades. The few papers I found elsewhere which do not include one or more of these three tend to indicate notably lower nuclear winter effects.

There are three basic links in the chain of logic behind the nuclear winter models here: how much soot is produced, how high it gets, and what happens in the upper atmosphere.

First, the question of soot production. Environmental Consequences of Nuclear War, by Toon, Robock and Turco gives the best statement of the methodology behind soot production that I’ve found. Essentially, they take an estimate of fuel loading based on population density, then assume that the burned area scales linearly with warhead yield based on the burned area from Hiroshima. This is a terrible assumption on several levels. First, Hiroshima is not the only case we have for a city facing nuclear attack, and per Effects of Nuclear Weapons p.300, Nagasaki suffered only a quarter as much burned area as Hiroshima thanks to differences in geography despite a similar yield. Taking only the most extreme case for burned area does not seem like a defensible assumption, particularly as Japanese cities at the time were unusually vulnerable to fire. For instance, the worst incendiary attack on a city during WWII was the attack on Tokyo in March 1945, when 1,665 tons of bombs set a fire that ultimately burned an area of 15.8 square miles, as opposed to 4.4 square miles burned at Hiroshima. To put this into perspective, Dresden absorbed 3,900 tons of bombs during the famous firebombing raids, which only burned about 2.5 square miles. Modern cities are probably even less flammable, as fire fatalities per capita have fallen by half since the 1940s.

Nor is the assumption that burned area will scale linearly with yield a particularly good one. I couldn’t find it in the source they cite, and it flies in the face of all other scaling relationships around nuclear weapons. Given that most of the burned area will result from fires spreading and not direct ignition, a better assumption is probably to look at the areas where fires will have an easy time spreading due to blast damage, which tends to rip open buildings and spread flammable debris everywhere. Per Glasstone p.108, blast radius typically scales with the 1/3rd power of yield, so we can expect damaged area from fire as well as blast to scale with yield^(2/3). Direct-ignition radius is more like yield^0.4, so for a typical modern strategic nuclear warhead (~400 kT), linear scaling will overstate burned area by a factor of 2 for direct ignition and a factor of 3 for blast radius.
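To make those overstatement factors concrete, here is a minimal sketch of the arithmetic, assuming a roughly 15 kT yield for Hiroshima, a 400 kT modern warhead, and burned area proportional to the square of the relevant radius:

```python
# How much linear burned-area scaling overstates area relative to power-law
# scaling, for a ~400 kT warhead against a ~15 kT (Hiroshima) baseline.
yield_ratio = 400 / 15  # ~26.7x

linear_area = yield_ratio             # linear scaling of burned area with yield
blast_area = yield_ratio ** (2 / 3)   # radius ~ Y^(1/3)  =>  area ~ Y^(2/3)
ignition_area = yield_ratio ** 0.8    # radius ~ Y^0.4    =>  area ~ Y^0.8

print(f"overstatement vs blast-damage scaling:    {linear_area / blast_area:.1f}x")     # ~3x
print(f"overstatement vs direct-ignition scaling: {linear_area / ignition_area:.1f}x")  # ~1.9x
```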

And then we come to their targeting assumptions, which are, if anything, worse. The only criteria for where weapons are placed are the country in question and how much flammable material is nearby, and they are carefully spaced to keep the burned areas from overlapping. This is obvious nonsense for any serious targeting plan. 100 kT weapons are spaced 15.5 km apart, far enough to spare even many industrial targets if we apply more realistic assumptions about the burned area. A realistic targeting plan would acknowledge that many hardened military targets are close together and would have overlap in their burned areas, and that a lot of nuclear warheads will be targeted at military facilities like missile silos which are in areas with far lower density of flammable materials.

Their inflation of soot production numbers is clearly shown by their own reference 9, which is a serious study of smoke/soot production following an attack on US military assets (excluding missile silos) by 3,030 500 kT warheads. This team estimated that about 21 Tg of soot would be produced after burning an area similar to what Toon, Robock and Turco estimated would be burned after an attack by only 1,000 100 kT weapons, which they claim would produce 28 Tg of smoke. They attribute this to redundancy in targeting on the part of the earlier study, rather than to their own repeated steps to inflate their soot estimates. They also assume 4,400 warheads from the US and Russia alone, significantly higher than current arsenals. Looking at their soot estimates more broadly, their studies consistently fail to reflect any changes from the world’s shrinking nuclear arsenals. A 2007 paper uses 150 Tg as the midpoint of a set of estimates from 1990, despite significant reductions in arsenals between those two dates.

One more issue before we leave this link is the assumption that all fuel within the burned area is consumed. This is probably a bad assumption, given that Glasstone mentions that collapsed buildings tend to shield flammable materials inside of them from burning. I don’t have a better assumption here, but it’s at least worth noting and adding to the pile of worst-case assumptions built into these models.

But what about soot actually getting to high altitudes? After all, if the soot stays in the lower atmosphere, it’s going to be rained out fairly quickly. Yes, the people downwind won’t have a great time of it for a few days, but we’re not looking at months or years of nuclear winter. As best I can tell, the result here is deafening silence. The only factor I can see at any point between stuff burning and things entering the upper atmosphere is a 0.8 in Box 1. Other than that, everything is going into the upper troposphere/stratosphere.

I am extremely skeptical of this assumption, and figured it was worth checking against empirical data from the biggest recent fire, the 2019-2020 Australian bushfires. These burned something like 400 Tg of wood[3] which in turn would produce somewhere between 1.3 Tg and 4 Tg of soot based on a 1990 paper from Turco and Toon, depending on how much the fires were like vegetation fires vs urban wood fires. Under the Toon/Robock assumptions, it sounds like we should have at least 1 Tg of soot in the stratosphere, but studies of the fires estimate between 0.4 and 0.9 Tg of aerosols reached the stratosphere, with 2.5% of this being black carbon (essentially another term for soot). This suggests that even being as generous as possible, the actual percentage of soot which reaches the stratosphere is something like 2%, not 80%. The lack of climatic impact of the Kuwait oil fires, which released about 8 Tg of soot in total, also strongly suggests that the relevant assumptions about soot transport into the upper atmosphere need to be examined far more closely. Robock attempts to deal with this by claiming that the Kuwait fires were too spread out and a larger city fire would still show self-lofting effects. The large area covered by the Australian bushfires and the low amount of soot reaching the stratosphere calls this into question.
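For transparency, here is the back-of-the-envelope arithmetic behind that ~2% figure, using the ranges quoted above and taking the most generous combination (highest stratospheric aerosol, lowest soot production):

```python
# Stratospheric soot fraction implied by the 2019-2020 Australian bushfires.
soot_produced_tg = (1.3, 4.0)   # estimated soot from ~400 Tg of burned wood
strat_aerosol_tg = (0.4, 0.9)   # aerosol estimated to reach the stratosphere
black_carbon_share = 0.025      # ~2.5% of that aerosol is black carbon (soot)

strat_soot_tg = strat_aerosol_tg[1] * black_carbon_share  # generous: ~0.0225 Tg
fraction = strat_soot_tg / soot_produced_tg[0]            # generous: ~1.7%

print(f"stratospheric soot: ~{strat_soot_tg:.3f} Tg")
print(f"fraction reaching the stratosphere: ~{fraction:.1%} (vs ~80% assumed)")
```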

More evidence of problems in this area comes from a paper by Robock himself, attempting to study the effects of the firebombings in WWII. Besides containing the best sentence ever published in a scientific paper in its plain-language abstract[4], it also fails to find any evidence that there was a significant drop in temperature or solar energy influx in 1945. Robock tries to spin this as positively as he can, but is forced to admit that it doesn’t provide evidence for his theory, blaming poor data, particularly around smoke production. I would add in potential issues around smoke transport. More telling, however, is that all of the data points to at most a very limited effect in 1945-1946, with no trace of signal surviving later, despite claims of multi-year soot lifetimes in the papers on nuclear winter.

Which brings us neatly to the last question, involving how long anything which does reach the stratosphere will last. A 2019 paper from Robock and Toon suggests that the e-folding life will be something like 3.5 years, while a paper published the same year and including both men as authors has smoke from the 2017 Canadian wildfires persisting in the stratosphere for a mere 8 months, with the authors themselves noting that this is 40% shorter than their model predicted. They attempt to salvage the thesis here, even suggesting that organic smoke will contribute more than expected, but this looks to me like reporting results different from what they actually got. They attempt to salvage this by claiming that the smoke will reach higher, but at this point, I simply don’t trust their models without a thorough validation on known events, and Kuwait is only mentioned in one paper, where they claim it doesn’t count.
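To illustrate how much rides on the lifetime estimate, here is a small sketch of simple exponential (e-folding) decay under the two figures above; note that reading the 8-month persistence as an e-folding time is my simplifying assumption for illustration, not something the paper states:

```python
import math

# Fraction of stratospheric soot remaining after t years, assuming simple
# exponential decay with e-folding lifetime tau: N(t)/N(0) = exp(-t / tau).
def remaining(t_years, tau_years):
    return math.exp(-t_years / tau_years)

tau_model = 3.5        # years, the e-folding lifetime suggested by the 2019 paper
tau_observed = 8 / 12  # years, IF the 8-month persistence is read as an e-folding time

for t in (1, 2, 5):
    print(f"after {t} yr: model {remaining(t, tau_model):.0%}, "
          f"short-lifetime assumption {remaining(t, tau_observed):.0%}")
```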

A few other aspects bear mentioning. First is the role of latitude. Papers repeatedly identify subtropical fires as particularly damaging, apparently due to some discontinuity in the effects of smoke at 30° latitude which greatly increases smoke persistence. This seems dubious, given that Kuwait falls just within this zone, as does most of Australia, where the wildfires have yet to show this kind of effect. Second, all of the nuclear winter papers are short on validation data, usually pointing only to modeling of volcanic aerosols, which they themselves usually admit are very different from the soot they’re modeling. There is generally no discussion of validation against more relevant data.

A few academic papers also call the Toon/Robock conclusion into question, most notably this one from a team at Los Alamos National Laboratory. They model one of the lower-end scenarios, an exchange of 100 15 kT warheads in a hypothetical war between India and Pakistan, which Toon and Robock model as producing 5 Tg of soot and a significant nuclear winter. This team used mathematical models of both the blast and the resulting fire, and even in a model specifically intended to overestimate soot production got only 3.7 Tg of soot, of which only about 25% ever reached above 12 km in altitude and persisted in the long term, with more typical simulations seeing only around 6% of the soot reaching that altitude. The other three-quarters stayed in the lower atmosphere, where it was rapidly removed by weather. This was then fed into the same climate models that were used by Robock and Toon, and the results were generally similar to earlier studies of a 1 Tg scenario, which showed some effect but nowhere near the impacts Robock predicted from the scale of the conflict. It’s worth noting that these models appear to have significantly overestimated soot lifetime in the stratosphere, as shown by the data from the Canadian fire.
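Putting the quoted numbers together (the comparison against the 5 Tg figure is my own framing, using the percentages reported above):

```python
# Stratospheric soot implied by the Los Alamos results, compared with the
# 5 Tg assumed by Toon and Robock for the same 100-weapon scenario.
total_soot_tg = 3.7   # deliberately pessimistic soot-production estimate
assumed_tg = 5.0      # Toon/Robock figure for the scenario

for label, lofted_fraction in [("worst case (~25% lofted)", 0.25),
                               ("typical run (~6% lofted)", 0.06)]:
    strat_tg = total_soot_tg * lofted_fraction
    print(f"{label}: ~{strat_tg:.2f} Tg, ~{assumed_tg / strat_tg:.0f}x below 5 Tg")
```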

Robock argued back against the paper, claiming that the area it looked at was not densely populated enough, lowering the production of soot and preventing a firestorm from forming. The Los Alamos team responded, running simulations with higher fuel density that showed a strongly nonlinear relationship between fuel density and soot production, with a factor of 4 increase in fuel density doubling soot and a factor of 72 increase raising soot production by a factor of only 6, as oxygen starvation limited the ability of the fire to burn. In both of these cases, the percentage of soot reaching an altitude where it could persist in the stratosphere was around 6%, and the authors clearly emphasize that their earlier work is a reasonable upper bound on soot production.
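The degree of nonlinearity can be summarised by fitting a power-law exponent to the two data points quoted above (a rough reconstruction from the stated factors, not from the paper's raw output):

```python
import math

# Fit soot ~ fuel_density ** k to the two quoted results:
# 4x fuel density -> 2x soot; 72x fuel density -> 6x soot.
k1 = math.log(2) / math.log(4)    # ~0.50
k2 = math.log(6) / math.log(72)   # ~0.42

print(f"implied scaling exponents: {k1:.2f} and {k2:.2f} (linear would be 1.0)")
```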

So what to make of all of this? While I can’t claim a conclusive debunking of the papers behind nuclear winter, it’s obvious that there are a lot of problems with the papers involved. Most of the field is the work of a tiny handful of scientists, two of whom were involved from its beginnings as an appendage of the anti-nuclear movement, while the last has also been a prominent advocate against nuclear weapons. And in the parts of their analysis that don’t require a PhD in atmospheric science to understand or a supercomputer simulation to check, we find assumptions that, applying a megaton or two of charity, bespeak a total unfamiliarity with nuclear effects and targeting, which is hard to square with the decades they have spent in the field[5]. A tabulation of the errors in their flagship paper is revealing:

Error factor | Cause
1.5 | Overestimating the number of warheads
2 | Targeting for flammability rather than efficacy
2 | Flammability for Hiroshima rather than normal city
1-2 | Flammability for 1940s city instead of 2020s city
2-3 | Linear burn area scaling
4-13 | 80% soot in the stratosphere vs 6%-20%
48-468 | Total

Even using the most conservative numbers here, an all-out exchange between the US and Russia would produce a nuclear winter that would at most resemble the one that Robock and Toon predict for a regional nuclear conflict, although it would likely end much sooner given empirical data about stratospheric soot lifetimes. Some of the errors are long-running, most notably assumptions about the amount of soot that will persist in the atmosphere, while others seem to have crept in more recently, contributing to a strange stability of their soot estimates in the face of cuts to the nuclear arsenal. All of this suggests that their work is driven more by an anti-nuclear agenda than the highest standards of science. While a large nuclear war would undoubtedly have some climatic impact, all available data suggests it would be dwarfed by the direct (and very bad) impacts of the nuclear war itself.

Acknowledgements

Thanks to Johannes Ackva for discussing concerns around the nuclear winter literature, which increased my interest in the topic.

  1. ^

    Calculated multiplying the factors given by Bean in the last table, converting intervals to the mean between their lower and upper bound.

  2. ^

    1 Tg corresponds to 1 million tonnes.

  3. ^

    Based on reported CO2 emissions of 715 Tg and a wood-to-CO2 ratio of 1 to 1.8.

  4. ^

    “We discovered that [the bombing of Hiroshima and Nagasaki] was actually the culmination of a genocidal U.S. bombing campaign.”

  5. ^

    A source I expect many of my readers will have looked at is Luisa Rodriguez’s writeup on nuclear winter for Effective Altruism organization Rethink Priorities. Her conclusion is that a US-Russia nuclear war is unlikely to be an existential risk, as she believes Robock and Toon, who form the basis of most of her analysis, have overestimated the soot from an actual war. It’s obvious that I agree that they have done so, and also think they have exaggerated the climatic consequences.

Comments

It argues Toon 2008 has overestimated the soot ejected into the stratosphere following a nuclear war by something like a factor of 191[1] (= 1.5*2*2*(1 + 2)/2*(2 + 3)/2*(4 + 13)/2).

I think a geometric mean would be more appropriate, so (48*468)^0.5 = 150. But I disagree with a number of the inputs.

They also assume 4,400 warheads from the US and Russia alone, significantly higher than current arsenals.

Current US + Russia arsenals are around 11,000 warheads, but current deployed arsenals are only about 3000. With Putin pulling out of New START, many nuclear weapons that are not currently deployed could become so. Also, in an all-out nuclear war, currently nondeployed nuclear weapons could be used (with some delay). Furthermore, even if only two thirds as many nuclear weapons are used, the amount of soot would not scale down linearly because of hitting higher average combustible loading areas.

I agree that targeting would likely not maximize burned material, and I consider that in my Monte Carlo analysis.

While it is true that most city centers have a higher percentage of steel and concrete than Hiroshima, at least in the US, suburbs are still built of wood, and that is the majority of overall building mass. So I don't think the overall flammability is that much different. There has also been the counteracting factor of much more building area per person, and taller average buildings in cities. Of course steel buildings can still burn, as shown by 9/11.

The linear burn area scaling is a good point. Do you have data for the 400 kT average? I think if you have multiple detonations in the vicinity, then you could have burned area beyond what one would calculate for independent detonations. This could be due to combined thermal radiation from multiple fireballs, but also the thermal radiation from multiple surrounding firestorms, so it creates one big firestorm. Also, because assuming a linear burn area means small/less dense cities would be targeted, correcting the linear burn area downward by a factor of 2-3 would not decrease the soot production by a factor of 2-3.

The large area covered by the Australian bushfires and the low amount of soot reaching the stratosphere calls this into question.

There is a fundamental difference between a moving front fire (conflagration) like a bushfire and a firestorm where it all burns at once. If you have a moving front, the plume is relatively narrow, so it gets heavily diluted and does not rise very high (also true for an oil well fire). Whereas if you have a large area burning at once, it gets much less diluted and will likely go into the upper troposphere. Then solar lofting typically takes it to the stratosphere. Nagasaki was a moving front fire, and I do give significant probability mass to moving front fires instead of firestorms in my analysis.

So overall I got a median of about 30 Tg to the stratosphere (Fig. 6) for a full-scale nuclear war, similar to Luísa's. I could see some small downward adjustment based on the linear burn area assumption, but significantly smaller than Bean's adjustment for that factor.

Added 20 September: though the blasted area goes with the 2/3 exponent of the yield because energy is dissipated in the shock wave, the area above the threshold thermal radiation for starting fires would be linear if the atmosphere were transparent. In reality, there is some atmospheric absorption, but it would be close to linear. So I no longer think there should be a significant downward adjustment from my model.
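A minimal sketch of that argument, under the idealised transparent-atmosphere assumption (the threshold value is arbitrary; only the scaling matters):

```python
import math

# In a transparent atmosphere, thermal fluence at range r scales as Y / (4*pi*r^2),
# so the radius at which a fixed ignition threshold is reached grows as Y**0.5 and
# the enclosed area as Y**1.0 -- compare Y**(2/3) for blast-damaged area.

def ignition_radius(yield_kt, fluence_threshold=1.0):
    # r such that yield_kt / (4 * pi * r**2) == fluence_threshold (arbitrary units)
    return math.sqrt(yield_kt / (4 * math.pi * fluence_threshold))

base_area = ignition_radius(15) ** 2
for y in (15, 100, 400):
    print(f"{y:>3} kT: relative ignition area ~{ignition_radius(y) ** 2 / base_area:.1f}x")
```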

>Current US + Russia arsenals are around 11,000 warheads, but current deployed arsenals are only about 3000. With Putin pulling out of New START, many nuclear weapons that are not currently deployed could become so.

Possibly the single most important goal of the deployed warheads is to stop the other side from deploying their warheads, both deployed and non-deployed.  Holding to deployed only is probably a reasonable assumption given that some of the deployed will not make it, and most of the non-deployed definitely won't.  And this was written a year before Putin pulled out of New START.  I have doubts that they'll be able to actually deploy all that many more without spending a lot of money that they frankly don't have.

>Do you have data for the 400 kT average? I think if you have multiple detonations in the vicinity, then you could have burn area outside the burn area that one would calculate for independent detonations. This could be due to combined thermal radiation from multiple fireballs, but also the thermal radiation from multiple surrounding firestorms so it creates one big firestorm.

400 kT is basically my guess looking at a list of strategic warheads, and it might well be lower.  As for scaling, that's going to be really complicated, but superlinear scaling seems unlikely.

>While it is true that most city centers have a higher percentage steel and concrete than Hiroshima, at least in the US, suburbs are still built of wood, and that is the majority of overall building mass. So I don't think the overall flammability is that much different.

Yeah, but it's clearly more fire-resistant than Japanese buildings in the 40s.  They burned really well compared to everywhere else that firebombing was tried.  US suburbs may have a lot of building mass in aggregate, but it's also really spread out and generally doesn't contain that much which is likely to draw nuclear attack.  And a lot of the outside material is reasonably flame-resistant in a way I'm pretty sure Hiroshima wasn't.

>There is a fundamental difference between a moving front fire (conflagration) like a bushfire and a firestorm where it all burns at once. 

Yeah, sorry, I've heard enough crying wolf on this (Sagan on Kuwait being the most prominent) that I don't buy it, at least not until I see good validation of the models in question on real-world events.  Which is notably lacking from all of these papers.  So I'll take the best analog, and go from there.  Also, note that your cite there is from 1990, when computers were bad and Kuwait hadn't happened yet.  Also note that the doommonger's best attempt to puzzle stratospheric soot out of atmospheric data from WWII didn't really show more than a brief gap at most.

US suburbs may have a lot of building mass in aggregate, but it's also really spread out and generally doesn't contain that much which is likely to draw nuclear attack.

There are only 55 metropolitan areas in the US with greater than 1 million population. Furthermore, the mostly steel/concrete city centers are generally not very large, so even with a nuclear weapon targeted at the city center, it would burn a significant amount of suburbs. So with 1500 nuclear weapons countervalue even spread across NATO, a lot of the area hit would be suburbs.

Yeah, sorry, I've heard enough crying wolf on this (Sagan on Kuwait being the most prominent) that I don't buy it, at least not until I see good validation of the models in question on real-world events.  Which is notably lacking from all of these papers.  So I'll take the best analog, and go from there.  Also, note that your cite there is from 1990, when computers were bad and Kuwait hadn't happened yet.

"As Toon, Turco, et al. (2007) explained, for fires with a diameter exceeding the atmospheric scale height (about 8 km), pyro-convection would directly inject soot into the lower stratosphere." Another way of getting at this is looking at the maximum height of buoyant plumes. It scales with the thermal power raised to the one quarter exponent. The Kuwait oil fires were between 90 MW and 2 GW. Whereas firestorms could be ~three orders of magnitude more powerful than the biggest Kuwait oil fire. So that implies much higher lofting. Furthermore, volcanoes are very high thermal power, and they regularly reach the stratosphere directly.

Also note that the doommonger's best attempt to puzzle stratospheric soot out of atmospheric data from WWII didn't really show more than a brief gap at most.

I don't see this as a significant update, because the expected signal was small compared to the noise.

Furthermore, the mostly steel/concrete city centers are generally not very large, so even with a nuclear weapon targeted at the city center, it would burn a significant amount of suburbs. So with 1500 nuclear weapons countervalue even spread across NATO, a lot of the area hit would be suburbs.

First, remind me why we're looking at 1500 countervalue weapons?  Do we really expect them to just ignore the ICBM silos?  Second, note that there's a difference between "a lot of the area hit would be suburbs" and "a lot of the suburbs would be hit".  The US has a vast amount of suburbs, and the areas damaged by nuclear weapons would be surprisingly small.

"As Toon, Turco, et al. (2007) explained,

Let me repeat.  I am not interested in anything Turco, Toon et al have to say.  They butchered the stuff I can check badly.  As such, I do not think it is good reasoning to believe them on the stuff I can't.  The errors outlined in the OP are not the sort of thing you can make in good faith.  They are the sort of thing you'd do if you were trying to keep your soot number up in the face of falling arsenals.

Re firestorms more broadly, I don't see any reason to assume those would routinely form.  It's been a while since I looked into this, but those are harder to generate than you might think when that's the goal, and I don't think it's likely to be a goal of any modern targeting plan.  The only sophisticated model I've seen is the one by the Los Alamos team, which got about 70% of the soot production that Robock et al did, and only 12% of that reached the stratosphere.  That's where my money is.

First, remind me why we're looking at 1500 countervalue weapons?  Do we really expect them to just ignore the ICBM silos?  

My understanding is that the warning systems are generally designed such that the ICBMs could launch before the attacking warheads reach the silos. I do have significant probability on counterforce scenarios, but I can't rule out countervalue scenarios, so I think it's an important question to estimate what would happen in these countervalue scenarios.

 

Possibly the single most important goal of the deployed warheads is to stop the other side from deploying their warheads, both deployed and non-deployed.  Holding to deployed only is probably a reasonable assumption given that some of the deployed will not make it, and most of the non-deployed definitely won't.

I would think the non-deployed warheads could just be stored deep underground so they would nearly all survive.

 

Second, note that there's a difference between "a lot of the area hit would be suburbs" and "a lot of the suburbs would be hit".  The US has a vast amount of suburbs, and the areas damaged by nuclear weapons would be surprisingly small.

Other people have probably done more rigorous analyses now, but my rough estimate in 2015 was that 1500 nukes to the US would destroy nearly all the suburb area of 100,000+ population metro areas. If they were spread over NATO, of course it would be a lower percentage, but I would estimate still the majority of wood in suburbs would be hit.

 

Let me repeat.  I am not interested in anything Turco, Toon et al have to say.  They butchered the stuff I can check badly.  As such, I do not think it is good reasoning to believe them on the stuff I can't.  The errors outlined in the OP are not the sort of thing you can make in good faith.  They are the sort of thing you'd do if you were trying to keep your soot number up in the face of falling arsenals.

I think we should try to evaluate this argument on its merits. If there is a fire 8 km wide, I would argue that it is intuitive that it could rise ~16 km to the stratosphere. It turns out it's a little more complicated than this, because it depends on the potential temperature. The potential temperature is the temperature a parcel of air would have to have at sea level in order to become neutrally buoyant at the particular altitude. A typical value for the potential temperature at the tropopause is about 100°C. Because combustion temperatures are more like 1000°C, if you don't have dilution, it would rise high in the stratosphere. And if the fire is very wide, there is not that much dilution. In reality, there is significant dilution, but another fire model generally found the smoke going to the upper troposphere (the Lawrence Livermore National Lab study).
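For concreteness, potential temperature is θ = T·(p0/p)^(R/cp); the tropopause values below are illustrative numbers I have assumed for a roughly tropical tropopause, not figures taken from the comment:

```python
# Illustrative potential-temperature calculation: theta = T * (p0 / p) ** (R_d / c_p).
# The tropopause values are assumed for illustration (roughly tropical tropopause).
T_tropopause_k = 195.0    # ~ -78 C (assumed)
p_tropopause_hpa = 100.0  # ~100 hPa (assumed)
p0_hpa = 1000.0           # reference pressure
kappa = 0.286             # R_d / c_p for dry air

theta_k = T_tropopause_k * (p0_hpa / p_tropopause_hpa) ** kappa
print(f"potential temperature: ~{theta_k:.0f} K (~{theta_k - 273.15:.0f} C)")
# ~377 K, i.e. roughly 100 C -- far below combustion temperatures of ~1000 C,
# which is the buoyancy point being made above.
```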

Re firestorms more broadly, I don't see any reason to assume those would routinely form.  

Some argue that Hiroshima was exceptional that it did firestorm, and some argue that Nagasaki was exceptional that it did not, but I just went with around 50/50 for my model.

It's been a while since I looked into this, but those are harder to generate than you might think when that's the goal, and I don't think it's likely to be a goal of any modern targeting plan.  The only sophisticated model I've seen is the one by the Los Alamos team, which got about 70% of the soot production that Robock et al did, and only 12% of that reached the stratosphere.

As noted above, the Livermore model does generally support Robock's estimates of lofting of particles. However, the Livermore model did have a shorter particle lifetime in the stratosphere (~4 years vs 8-15 for Robock), partly because the Livermore particles were not mostly soot (black carbon). So I think this is a potentially important possibility. However, inclusion of the non-soot smoke may actually make the sunlight reduction greater than Robock's estimate (for the same amount burned).

My understanding is that the warning systems are generally designed such that the ICBMs could launch before the attacking warheads reach the silos. I do have significant probability on counterforce scenarios, but I can't rule out counter value scenarios, so I think it's an important question to estimate what would happen in these counter value scenarios.

Even leaving aside the ICBMs, "countervalue" was one of McNamara's weird theories, and definitely wouldn't be implemented as a pure thing.  If nothing else, a lot of those warheads are going after military targets, not cities.

Other people have probably done more rigorous analyses now, but my rough estimate in 2015 was that 1500 nukes to the US would destroy nearly all the suburb area of 100,000+ population metro areas.

Maybe if they were targeted specifically with that goal in mind, but again, that seems unlikely, particularly with modern guidance systems.  You'll do better for yourself shooting at specific things rather than asking "how many civilians can we kill"?  A lot of those will be far away from cities, or will have overlap with something else nearby that is reasonably hard and also needs to die.

In reality, there is significant dilution, but another fire model generally found the smoke going to the upper troposphere (the Lawrence Livermore National Lab study).

I might be misreading it, but that paper seems to bury a lot of the same assumptions that I'm objecting to.  They assume a firestorm will form as part of the basis of how the fire is modeled, and then explicitly take the 5 Tg of stratospheric soot per 100 fires number and use that as the basis for further modeling.  For other fuel loadings, the amount of soot in the stratosphere is linear with fuel loading, which is really hard to take seriously in the face of the "wildfires are different" assertion.  Sure, they accurately note that there are a lot of assumptions in the usual Turco/Toon/Robock model and talk a good game about trying to deal with all four parts of the problem, then go and smuggle in the same assumptions.  Points for halving the smoke duration, I guess.

Edit:

I would think the non-deployed warheads could just be stored deep underground so they would nearly all survive.

Deep bunkers like that are expensive and rare, and even if the bunker itself survived, ground bursts are messy and would likely leave it inaccessible.  Also, there's the problem of delivering the warheads to the target in an environment where a lot of infrastructure is gone.  Missile warheads are only of use as a source of raw materials, and while you might be able to get gravity bombs to bombers, you wouldn't get many, and probably couldn't fly all that many sorties anyway.  It's a rounding error, and I'm probably being generous in using that to cancel out the loss of deployed warheads.  (Why do we keep them, then, you ask?  Good question.  Some of it is in case we need to up deployed warheads quickly.  A lot is that various politicians don't want to be seen as soft on defense.)

Deep bunkers like that are expensive and rare, and even if the bunker itself survived, ground bursts are messy and would likely leave it inaccessible. 

There are thousands of underground mines in the US (14000 active mines, but many are surface mines), and I think it would only require 1 or a few to store thousands of nuclear weapons. Maybe the weapons would be spread out over many mines. It would not be feasible to make thousands of mines inaccessible. 

Missile warheads are only of use as a source of raw materials, and while you might be able to get gravity bombs to bombers, you wouldn't get many, and probably couldn't fly all that many sorties anyway. 

Are you saying that missile warheads could not be quickly configured to be used as a gravity bomb? I'm not claiming that most of the non-deployed nuclear weapons could be used in a few days, but I would think it would be feasible in a few months (it would only take a few surviving bombers if the warheads could be used as gravity bombs).

OK, remember that we're dealing with nuclear weapons, which inspire governments to levels of paranoia you maybe see when dealing with crypto.  Dropping a dozen nukes down a mine somewhere is not going to happen without a lot of paperwork and armed guards and a bunch of security systems.  And those costs don't really scale with number of warheads.  Sure, if you were trying to disperse the stockpile during a period of rising tension, you could take a few infantry companies and say "hang the paperwork".  But that requires thinking about the weapons in a very different way from how they actually do, and frankly they wouldn't be all that useful even if you did do that, because of the other problems with this plan.

Are you saying that missile warheads could not be quickly configured to be used as a gravity bomb?

Yes, I am.  The first commandment of nuclear weapons design since 1960 or so has been "it must not go off by accident".  So a modern missile warhead has an accelerometer which will not arm it unless it is pretty sure it has been fired by the relevant type of missile.  And trying to bypass it is probably a no-go.  The design standard is that one of the national labs couldn't set a US warhead off without the codes, so I doubt you can easily bypass that.

it would only take a few surviving bombers if the warheads could be used as gravity bombs

A modern bomber is a very complex machine, and the US hasn't set ours up to keep working out of what could survive a nuclear exchange.  (This is possible, but would require mobile servicing facilities and drills, which we do not have.)  Not to mention that they can't make a round-trip unrefueled from CONUS to any plausible enemy, and the odds of having forward tankers left are slim to none.

Thanks for the engagement, David and Bean!

I might be misreading it, but that paper seems to bury a lot of the same assumptions that I'm objecting to.  They assume a firestorm will form as part of the basis of how the fire is modeled, and then explicitly take the 5 Tg of stratospheric soot per 100 fires number and use that as the basis for further modeling.

For reference, here is what they say about their fire modelling:

The goal of the fire simulations in this work is to better characterize the spatial and temporal distribution of smoke from a mass urban fire resulting from a 15 kt nuclear detonation. Therefore, our modeling is informed by the Hiroshima firestorm, and the Hamburg firestorm, due to its rough similarity to the Hiroshima firestorm in size and duration. The assumption that all 100 detonations cause fires, and that these fires are more like the Hiroshima firestorm than Nagasaki, is a worst-case scenario. The studies of Penner et al. (1986) and Toon et al. (2007) also use fire parameters based on these historical cases (Hiroshima and Hamburg), so our fire parameters have the additional benefit of being similar to these previous studies. To produce simulations of fires similar to Hiroshima and Hamburg, it is assumed that the terrain is flat (i.e., topography does not provide shielding of thermal radiation) and there is uniform fuel loading over the area where thermal radiation is sufficient to ignite standard construction materials, such as wood. The WRF model source code is modified to allow for specification of surface fluxes of heat, water vapor and smoke (or black carbon), requiring quantification of these three fluxes, as well as the fire shape, size and duration.

The Hiroshima firestorm burned an area of about 11 to 13 km2 in 4 to 9 h, taking 20 to 30 min to develop into a firestorm (Glasstone, 1962; Rodden et al., 1965). The Hamburg firestorm burned a comparable 12 km2 in about 6 h (Carrier et al., 1985). Therefore, we specify a circular area with a 2 km radius (12.57 km2) for our fires. Each fire has a 30 min ramp-up period as surface fluxes increase linearly from zero, followed by a 4 h fire duration where surface fluxes are constant. The 4 h duration is chosen because it is the shortest time estimate for the fire in Hiroshima, and releasing a given mass of emissions and burning a given fuel amount over the shorter time period will result in higher smoke concentrations and heat fluxes, thus providing a worst-case estimate.

I very much agree that a larger area burning all at once will loft soot higher on expectation than a thinner moving flame front, because the hot gases at the center of the burning mass essentially have nowhere to diffuse but up. That specific argument isn't really a matter of crying wolf; it was just very silly for people to claim that oil well fires could have such big effects in the first place.

That said, it doesn't really matter in terms of the estimates here, because the soot loft estimate is based on the Los Alamos model, which is also where I'd place my bet for accuracy. Also, the initial Los Alamos model was based on Atlanta's suburbs, which means their outputs are highly relevant for burning in U.S. cities and cover some of Dave's objections as well.

Difficult to interpret a lot of this, as it seems to be a debate between potentially biased pacifists and a potentially biased military blogger. As with many disagreements, the truth is likely in the middle somewhere (as Rodriguez noted). We need new independent studies on this that are divorced from the existing pedigrees. That said, much of the catastrophic risk from nuclear war may be in the more than likely catastrophic trade disruptions, which alone could lead to famines, given that nearly 2/3 of countries are net food importers, and almost no one makes their own liquid fuel to run their agricultural equipment.

Agreed, Matt!

That said, much of the catastrophic risk from nuclear war may be in the more than likely catastrophic trade disruptions, which alone could lead to famines, given that nearly 2/3 of countries are net food importers, and almost no one makes their own liquid fuel to run their agricultural equipment.

Makes sense. I suppose getting a handle on the climatic effects is mostly relevant for assessing existential risk. Assuming the climatic effects are negligible, my guess is that the probability of extinction until 2100 given a global thermonuclear war without other weapons of mass destruction (namely bio or AI weapons) is less than 10^-5.

[anonymous]

In addition to this, there was a post on LessWrong pointing out that the nuclear winter agricultural impact studies carried out by Robock, Toon, and others were based on the assumption that humans take no adaptive response to global cooling of ten degrees C. This study found 5 billion dead, but this is an obvious overestimate for a realistic response to massive global cooling. I suspect the death estimates they give are out by at least two orders of magnitude, given the various unrealistic assumptions they use.

Thanks for noting that, John!

Preliminary results from ALLFED suggest that (see Fig. 1), in a 150 Tg scenario[1] without food trade, stopping the consumption of animals and biofuels, reducing waste, and rationing stored food[2] would increase calorie supply by a factor of 3 (= 0.63/0.19). On the other hand, "failure of electrical grids, transportation infrastructure, telecommunications, or other infrastructure destruction due to the nuclear war is not considered" (neither in ALLFED's preliminary results nor Xia 2022).

  1. ^

    The one for which Xia 2022 report 5 billion deaths.

  2. ^

    Xia 2022 assumes "all stored food is consumed in Year 1".

This study found 5 billion dead, but this is an obvious overestimate for a realistic response to massive global cooling. I suspect the death estimates they give are out by at least two orders of magnitude, given the various unrealistic assumptions they use

50 million dying from starvation (more than 50 million would die from the direct impacts of the nuclear war) is possible with a ~90% reduction in non-adapted agriculture (with current applications of fertilizers, pesticides, etc.), but trade, resilient foods, and subsidies would have to go very well. Given current awareness and preparation, I have significant probability mass on most of all trade (not just food trade) breaking down, with loss of industry in most areas, and significantly more than 5 billion dead. This is partly because in overshoot scenarios, population can go to significantly below the equilibrium due to people "eating the seed corn."

[anonymous]8mo4
0
0

I was thinking of all of the assumptions, i.e. about the severity of the winter and the adaptive response. 

Sorry if I'm being thick, but what do you mean by 'eating the seed corn' here?

Thanks for clarifying. If instead one uses a mean (though I do think the tails should be weighted more heavily) closer to Luisa's and my analysis of 30 Tg, then Xia predicts about 1.6 billion starvation fatalities and about 110 million direct fatalities (though this latter number would probably be higher because Xia assumes that all areas hit would firestorm, which I don't, so I think more area would be hit to produce that amount of soot to the stratosphere). This is pessimistic in that it assumes no international food trade, no planting location adaptation, limited response in biofuels and animals, stored food runs out in a year, and no resilient foods. However, it is optimistic in assuming food only goes to the people who would survive. The extreme case would be if there is not enough food to go around: if the food is shared equally, then everyone would die. "Eating the seed corn" means that desperate people would eat the seeds and not be able to grow food in the future. This could apply to eating wild animals, including fish, to extinction, and then not being able to have food in the future. The Xia number is also optimistic in assuming that nonfood trade will continue, such that countries will still have the agricultural equipment, fuel, fertilizers, pesticides, etc. to produce the crops. There is also the chance of losing cooperation within countries, significantly increasing mortality. For instance, if farming is lost, there would be around 99.9% mortality returning to hunting and gathering with the current climate, worse in nuclear winter conditions (assuming we can figure out how to revert to hunting and gathering, which is not guaranteed). Overall, I think we should have large uncertainties in the response to a shock of this magnitude.

Vasco - thanks for a fascinating, illuminating, and skeptical review of the nuclear winter literature.

It seems like about 1000x as much effort has gone into modeling global warming as into modeling nuclear winter. Yet this kind of nuclear winter modelling seems very important -- it's large in scope, neglected, and tractable -- perfect for EA funding.

Given the extremely high stakes for understanding whether nuclear war is likely to be a 'moderately bad global catastrophe' (e.g. 'only' a few hundred million dead, civilization set back by decades or centuries, but eventual recovery possible), or an extinction-level event for humanity (i.e. everybody dies forever), clarifying the likely effects of 'nuclear winter' seem like a really good use of EA talent and money.

Thanks, Geoffrey! Credits go to Bean, the author of the post I shared!

I should note Open Phil and FLI have funded nuclear winter modelling:

  • Open Phil recommended a grant of 2.98 M$ in 2017, and 3 M$ in 2020 to support Alan Robock's and Brian Toon's research.
  • FLI has made a series of grants in 2022 funding nuclear war research totalling 3.563 M$, of which 1 M$ went to support Alan Robock's and Brian Toon's research.

So it looks like EA-aligned organisations have directed 9.543 M$ to nuclear war research between 2017 and 2022 (1.59 M$/year), of which 6.98 M$ (73.1 %) to support Alan Robock's and Brian Toon's research. Someone worried about how truth-seeking their past research was may have preferred more funding going to other groups.

[anonymous]

I think funding of the Robock, Toon, and Turco research group was a mistake, and little was gained from the research outputs of the funding, which were more of the same. I think it would be good if someone funded a physicist not associated with them to do an in-depth review of the topic.

It remains a really puzzling choice on the part of OP as well, given that the concerns around Robock are very easy to find and, indeed, were well known in the EA community by 2020.

Thanks for sharing your view on this, John!

I think funding of the Robock, Toon Turco research group was a mistake and little was gained from the research outputs of the funding, which was more of the same.

Do you have any guesses for why Open Phil and FLI funded Alan Robock's and Brian Toon's teams despite concerns around their past research?

I think it would be good if someone funded a physicist not associated with them to do an in-depth review of the topic.

For reference, a team of physicists (Wagman 2020) found a significantly shorter soot lifetime (BC below stands for black carbon, i.e. soot):

First, thank you for the informative post.

they claim that we’re closer to nuclear war than any time since the Cuban Missile Crisis, which is clearly nonsense

John Mearsheimer also believes P(nuclear war) is higher now than at any time during the Cold War. If you like, I can try to find where he says that in a video of one of his speeches and interviews.

His reasoning is that the US national-security establishment has become much less competent since the decisive events of the Cold War with the result that Russia is much more likely to choose to start a nuclear war than they would be if the security situation the US currently finds itself in were being managed by the US's Cold-War leaders. I'm happy to elaborate if there is interest.

Writer of the original article here, and I am skeptical.  Largely on the grounds that I think the odds of someone using nukes deliberately are actually fairly low, and we have a lot less nukes sitting around where accidents can happen.  I also have a complicated relationship with IR Realism, and this sounds like it's part of a theory that ends with "therefore, we should let the Russians do whatever they want in Ukraine."  Which I am very much not OK with.

Is this really a fair description of IR Realism?

Mearsheimer, to his credit, was able to anticipate the Russian invasion of Ukraine. If his prescriptions had been heeded sooner, perhaps this conflict could have been narrowly avoided.

You could just as easily argue that Mearsheimer's opponents have done more to enable the Russians.

I'm not saying I agree with Mearsheimer or understand his views fully, but I'm grateful his school of thought exists and is being explored.

This is a tangent, but I think it's important to consider predictors' entire track records, and on the whole I don't think Mearsheimer's is very impressive. Here's a long article on that.

Indeed. And there are other forecasting failures by Mearsheimer, including one in which he himself apparently admits (prior to resolution) that such a failure would constitute a serious blow to his theory. Here’s a relevant passage from a classic textbook on nuclear strategy:[1]

In an article that gained considerable attention, largely for its resolute refusal to share the general mood of optimism that surrounded the events of 1989, John Mearsheimer assumed that Germany would become a nuclear power. Then, as the Soviet Union collapsed, he explained why it might make sense for Ukraine to hold on to its nuclear bequest. In the event Germany made an explicit renunciation of the nuclear option at the time of the country’s unification in 1990, while Japan, the other defeated power of 1945, continued to insist that it had closed off this option. Nor in the end did Kiev agree that the nuclear component of Ukraine’s Soviet inheritance provided a natural and even commendable way of affirming a new-found statehood. Along with Belarus and Kazakhstan, Ukraine eased out of its nuclear status. As it gained its independence from the USSR, Ukraine adopted a non-nuclear policy. The idea that a state with nuclear weapons would choose to give them up, especially when its neighbour was a nuclear state with historic claims on its territory, was anathema to many realists. One of his critics claimed that when asked in 1992, ‘What would happen if Ukraine were to give up nuclear weapons?’ Mearsheimer responded, ‘That would be a tremendous blow to realist theory.’

  1. ^

    Lawrence Freedman & Jeffrey Michaels, The Evolution of Nuclear Strategy, 4th ed., London, 2019, pp. 579–580

Is this really a fair description of IR Realism?

It's not a fair description of all IR realism, which I think is a useful theory for illuminating certain interests of state actors.  But some proponents (and the article Stephen linked confirms that Mearsheimer is one of them) seem to elevate it to a universal theory, which I don't think it is.  Frankly, Realism is the intellectual home of Russia-apologism, and while I'll admit that Mearsheimer seems to come by this honestly (in that he honestly thinks Russia is likely to respond with nukes and isn't just a Putin fanboy) I take a rather different view of how things are likely to turn out if we keep backing Ukraine.

Realism as a school of IR thought far predates Russia-Ukraine, Wikipedia lists ~1600s as the origin of IR realism in the West but in college I was taught that the intellectual foundation originated with at least Thucydides. 

[anonymous]

I don't know whether this is getting too into the weeds on realism, but the claim that the US national security establishment is less competent since the end of the Cold War seems straightforwardly incompatible with realism as a theory anyway since realism assumes that states rationally pursue their national interest. I have found this in interviews with Mearsheimer where he talks about 'Russia's' only rational option given US policy towards Ukraine, but then says that the US is not acting in its own national interest. Why can't Russia also not be acting in its own national interest? 

Once you grant that the US isn't pursuing its national interest, aren't you down the road to a public choice account, not a realist account?

Mearsheimer does claim that states rationally pursue security. However, the assumption that states are rational actors--shared by most contemporary realists--is a huge stretch.  The original--and still most influential--statement of neorealist theory, Kenneth Waltz's Theory of International Politics, did not employ a rational actor assumption, but rather appealed to natural selection--states that did not behave as if they sought to maximize security would tend to die out (or, as Waltz put it, 'fall by the wayside'). In contrast to Mearsheimer, Waltz at least motivated his assumption of security-seeking, rather than simply assuming it.

In subsequent publications, Waltz argued that states would be very cautious with nuclear weapons, and that the risk of nuclear war was very low--almost zero. Setting aside the question of whether almost zero is good enough in the long term, this claim is very questionable. From outside the realist paradigm, Scott Sagan has argued that internal politics are likely to predispose some states--particularly new nuclear states with military-dominated governments--to risky policies. 

In a recent critique of both Waltz and Mearsheimer (https://journals.sagepub.com/doi/full/10.1177/00471178221136993), I myself argue that (a) on Waltz's natural selection logic, we should actually expect great powers to act as if they were pursuing influence, not security--which should make them more risk-acceptant; and (b) Sagan's worries about internal politics leading to risky nuclear policies should be plausible even within neorealist theory, properly conceived (for the latter argument, see my section 'Multilevel selection theory').

Bottom line: When you dig down into neorealist logic, the claim that states will be cautious and competent in dealing with nuclear weapons starts to look really shaky. Classical realists like Hans Morgenthau and John Herz had a better handle on the issue.

I find "realism for thee but not for me" to be a pretty common pattern in casual IR discussions, possibly as a subbranch of outgroup homogeneity

Thanks for commenting!

I'm pretty sure John Mearsheimer believes P(nuclear war) is higher now than at any time during the Cold War. If you like, I can try to find where he says that in a video of one of his speeches or interviews.

His reasoning is that the US national-security establishment has become much less competent since the decisive events of the Cold War. I'm happy to elaborate if there is interest.

Sure, feel free to elaborate. I would be curious to know a little more about why you think John Mearsheimer's views are relevant (not saying they are not; I had to google him!).

Note the post I am sharing was published on 24 April 2022. I have now added this important detail to the start of my post. At that time, Metaculus' community was predicting an 8 % chance of a global thermonuclear war by 2070[1], whereas it now forecasts 13 %, which suggests the chance has increased. On the other hand, Metaculus' community prediction for a nuclear weapon being detonated as an act of war by 2050 has remained at 33 %.

The members of the Bulletin of the Atomic Scientists believe it, too, as expressed in their collective decisions about the time shown on the "Doomsday Clock": source.

I am not a fan of that clock. As argued by Christian Ruhl, I think probabilistic forecasts are more informative.

  1. ^

    This question will resolve as Yes if there is an armed conflict before January 1, 2070 where either of the following conditions are true:

    - Three countries each detonate at least 10 nuclear devices of at least 10 kiloton yield outside of their own territory

    - Two countries each detonate at least 50 nuclear devices of at least 10 kiloton yield outside of their own territory.

After I wrote my paragraph about the clock (which you quoted) I noticed that the Bulletin has expanded the meaning of the clock to include risks from climate change, i.e., the clock is no longer about nuclear war specifically, so I deleted that paragraph.

Thanks for writing this - it seems very relevant for thinking about prioritization and more complex X-risk scenarios.

I haven't engaged enough to have a particular object-level take, but was wondering if you/others had a take on whether we should consider this kind of conclusion somewhat infohazardous? I.e., should we be making this research public if it at all increases the chance that nuclear war happens?

This feels like a messy thing to engage with, and I suppose it depends on beliefs around honesty and trust in governments to make the right call with fuller information (of course, there might be some situations where initiating a nuclear war is good).

I think this is not at all a reasonable place to worry about infohazards, for two reasons related to incentives.

First, the decisionmakers in a nuclear conflict are very likely going to die regardless - they are located in large cities or other target areas. Whether or not there is a nuclear winter is irrelevant.

Second, the difference between hundreds of millions dead and billions dead is morally critical, but strategically doesn't matter - if the US is involved in a large scale nuclear exchange with Russia or China, it's going to be a disaster for the US, and it doesn't matter to them whether it also devastates agriculture in Australia and Africa.

On top of this, I think this is a bad situation in which to argue for infohazard risk, since hiding the damage (or lack thereof) of an extant risk means imposing ignorance on an affected group. That would not be as critical if the information itself created the risk, but here it does not.

Thanks David, that all makes sense. Perhaps my comment was poorly phrased, but I didn't mean to argue for caring about infohazards per se; I was curious for opinions on it as a consideration (mainly poking to build my/others' understanding of the space). I agree that imposing ignorance on affected groups is bad by default.

Do you think the point I made below in this thread regarding pressure from third-party states is important? Your point "it doesn't matter to them whether it also devastates agriculture in Africa or Australia" doesn't seem obviously true, at least considering indirect effects. Presumably, it would matter a lot to Australia, African countries, and most third-party states, and they might apply relevant political pressure. It doesn't seem obvious that this would be strategically irrelevant in most nuclear scenarios.

Even if there is some increased risk, it is a confusing question how this trades off against being honest and having academic integrity. Perhaps the outside view dominates here enough to follow the general principles: in almost all other contexts I can think of, researchers being honest with governments seems good (perhaps the more relevant reference class is military-related research, which feels less obvious).

I don't think pressure from third-party states is geostrategically relevant for most near-term decisions, especially because there is tremendous pressure already around the norm against nuclear weapons usage.

I strongly agree that the default should be openness, unless there is a specific reason for concern. And I even more strongly agree that honesty is critical for government and academia - which is why I'm much happier with banning research because of publicly acknowledged hazards, and preventing the discovery of information that might pose hazards, rather than lying about risks if they are discovered.

Hi David,

if the US is involved in a large scale nuclear exchange with Russia or China, it's going to be a disaster for the US, and it doesn't matter to them whether it also devastates agriculture in Australia and Africa.

Unless the relevant decision makers are scope-sensitive to the point of caring about the indirect deaths in the US, which may be significant if there is a severe nuclear winter? I guess it would not matter, because the direct deaths and destruction would already be super bad!

Nitpick: Australia may have more food depending on the amount of soot ejected into the stratosphere. See Table S2 of Xia 2022:

Thanks for commenting!

Paul Ingram has done some relevant research on the effects of increasing awareness of the impacts of a nuclear war. The study involved 3 k people, half from the US (1.5 k) and half from the UK (1.5 k). The half of the participants in the treatment group (750 from the US, and 750 from the UK) were shown these infographics for 1 min:

The numbers from the 1st graphic are from Xia 2022, whose 150 Tg scenario (3rd row of the 1st graphic) considers the 4.4 k nuclear detonations studied in Toon 2008. These were the effects on opposition/support for nuclear retaliation:

I am not a fan of this way of analysing the results. I believe it would have been better to assign a score to each of the levels of opposition/support (e.g. ranging from -2 for strong opposition to 2 for strong support). Then one could calculate the effect size of the intervention as ("mean score in the treatment group" - "mean score in the control group")/"pooled standard deviation of the 2 groups", and report the p-value. By contrast, having lots of categories allows one to test many different hypotheses, and then selectively report the statistically significant ones.
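For illustration, here is a minimal sketch in Python of the analysis I have in mind, assuming the -2 to 2 scoring above and using made-up response frequencies (the study's raw data are not reproduced here):

```python
import numpy as np
from scipy import stats

# Hypothetical Likert-style responses, scored from -2 (strong opposition to
# retaliation) to 2 (strong support); the probabilities below are made up,
# and the group size of 1,500 matches the study's treatment/control split.
rng = np.random.default_rng(0)
control = rng.choice([-2, -1, 0, 1, 2], size=1500, p=[0.15, 0.20, 0.25, 0.25, 0.15])
treatment = rng.choice([-2, -1, 0, 1, 2], size=1500, p=[0.20, 0.25, 0.25, 0.20, 0.10])

# Effect size (Cohen's d): difference in mean scores over the pooled standard deviation.
pooled_sd = np.sqrt((control.var(ddof=1) + treatment.var(ddof=1)) / 2)
cohens_d = (treatment.mean() - control.mean()) / pooled_sd

# Two-sample t-test for the difference in mean scores.
t_stat, p_value = stats.ttest_ind(treatment, control)

print(f"Cohen's d: {cohens_d:.2f}, p-value: {p_value:.3g}")
```

This yields a single effect size and p-value for the overall shift in opposition/support, rather than many per-category comparisons.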

In any case, the 1st infographic contains information about both the direct and indirect deaths, so it is not straightforward to interpret whether the increase in opposition to retaliation illustrated just above was due to gaining awareness of the direct deaths from the detonations, or of the indirect ones from the nuclear winter. The survey asks one question to understand what motivated the change in the level of opposition/support (see Q2 on the last page). One of the potential answers was "Avoid risk of killing civilians in other countries, or triggering global famine" (answer 4), which refers to nuclear winter.

The study does not report the results of the answers to Q2. However, I speculate the difference between the fractions of people answering 4 in the control and treatment groups was not statistically significant at a 95 % confidence level (i.e. the p-value was higher than 0.05). In the section "Key findings", there is a box saying:

All statistics quoted in this section are statistically significant at a 95% confidence level.

The study is called "Public awareness of nuclear winter and implications for escalation control", so a statistically significant difference in the fraction of people answering 4 would presumably have been reported. I would say that, even without statistical significance, it is generally better to report all results to avoid selection bias.

I do not have any particular expertise in government decision-making. However, I personally think the chance of nuclear war would not increase much if it turned out that the climatic effects of nuclear war were negligible:

  • In the 150 Tg scenario of Xia 2022, which considers the 4.4 k nuclear detonations studied in Toon 2008, direct fatalities are estimated at 360 million, and the number of people without food at the end of year 2 at 5.08 billion (see Table 1).
  • My guess is that governments are not sufficiently scope-sensitive to think that 360 million direct deaths in the involved countries ("France, Germany, Japan, United Kingdom, United States, Russia and China") is much better than that plus 5.08 billion people globally without food 2 years after the nuclear war.

I suppose the chance of nuclear war would still increase, but I guess negligibly. 360 million deaths would be an unprecedentedly bad catastrophe!

Thanks for the reply and link to the study - I feel quite surprised by how minor the effect of impact awareness is, but I suppose nuclear war feels quite salient for most people. I wonder if this could be some kind of metric used for evaluating the baseline awareness of a danger (i.e. I would be very interested to see the same study applied to pandemics, AI, animals, etc.).

Re the effects on government decision-making, I think I agree intuitively that governments are sufficiently scope-insensitive (and self-interested in nuclear-war circumstances?) that it would not necessarily make a big difference to their own view.

However, it seems plausible to me that a global meme of "any large-scale nuclear war might kill billions globally" might mean that there is far greater pressure from third party states to avoid a full nuclear exchange. I might try thinking more about this and write something up, but it does seem like that situation could make a country far less likely to use nuclear weapons.

Obviously nuclear exchanges are not ideal for third parties even with no climate effect, and I feel unsure how much of a difference this might make. It also doesn't seem like the meme is currently strong enough to affect government stances on nuclear war, although that is a reasonably uninformed perspective.

You are welcome!

I feel quite surprised by how minor the effect of impact awareness is, but I suppose nuclear war feels quite salient for most people

It is worth keeping in mind that the intervention was only 1 min, so it is quite low-cost, and even a small effect size can result in high cost-effectiveness.

However, it seems plausible to me that a global meme of "any large-scale nuclear war might kill billions globally" might mean that there is far greater pressure from third party states to avoid a full nuclear exchange.

Right, to be honest, that sounds plausible to me too (although I would rather live in a world where nuclear winter was not a thing!). The countries with nuclear weapons "only" have 55.5 % of global GDP[1], so third parties should still exert some reasonable influence, even if it may be limited by alliances. In that case, finding out nuclear war had negligible climatic effects would counterfactually increase, in the sense of continuing to fail to decrease, the expected damage of nuclear war.

Another important dynamic is the different climatic impacts across countries. Here are the results from Fig. 4 of Xia 2022 for the 27 Tg scenario (closest to the 30 Tg expected by Luísa):

The climatic effects are smaller in the US than in China and Russia. So the US may have a military incentive to hide that nuclear winter is real, while secretly preparing for it, such that China and Russia are caught unprepared in case of a nuclear war. Tricky...

In general, I still abide by principles like:

  • Having fewer weapons of mass destruction (e.g. nuclear weapons) is almost always good.
  • Governments being honest with their citizens (e.g. about nuclear winter) is almost always good.
  1. ^

    From The World Bank, the countries with nuclear weapons except for North Korea had a combined GDP in 2022 of 55.8 T$ (25.46 T$ from the US, 17.96 T$ from China, 3.39 T$ from India, 3.07 T$ from the UK, 2.78 T$ from France, 2.24 T$ from Russia, 0.52203 T$ from Israel, and 0.37653 T$ from Pakistan), and the global GDP was 100.56 T$.
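    For what it is worth, here is a tiny Python snippet reproducing the arithmetic in this footnote; the figures are just the ones quoted above from The World Bank, not an independent estimate:

```python
# 2022 GDP figures (trillions of USD) quoted above for the nuclear-weapon
# states excluding North Korea, and the quoted global GDP.
gdp = {
    "US": 25.46, "China": 17.96, "India": 3.39, "UK": 3.07,
    "France": 2.78, "Russia": 2.24, "Israel": 0.52203, "Pakistan": 0.37653,
}
global_gdp = 100.56

combined = sum(gdp.values())
print(f"Combined GDP: {combined:.2f} T$ ({combined / global_gdp:.1%} of global GDP)")
# Prints roughly 55.80 T$ and 55.5 %.
```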
