The views expressed here are my own, not those of Alliance to Feed the Earth in Disasters (__ALLFED__), for which I work as a contractor. Please assume this is always the case unless stated otherwise.

# Summary

- The initial motivation for my analysis __was__ combining the results of __2 views__ about __nuclear winter__:
  - One linked to Alan Robock (Rutgers University), Michael Mills (National Center for Atmospheric Research), and Brian Toon (University of Colorado), which is illustrated in __Xia 2022__. “We estimate more than 2 billion people could die from nuclear war between India and Pakistan, and more than 5 billion could die from a war between the United States and Russia”.
  - Another linked to Jon Reisner (Los Alamos National Laboratory), which is illustrated in __Reisner 2018__. “Our analysis demonstrates that the probability of significant global cooling from a limited exchange scenario as envisioned in previous studies is highly unlikely, a conclusion supported by examination of natural analogs, such as large forest fires and volcanic eruptions”.

- I estimate __12.9 M__ expected famine deaths due to the climatic effects of nuclear war before 2050, multiplying:
  - __3.30 %__ probability of large nuclear war before 2050.
  - __392 M__ famine deaths due to the climatic effects of a large nuclear war, multiplying:
    - __4.43 %__ famine death rate due to the climatic effects for __22.1 Tg__ (22.1 trillion grams, i.e. million tonnes^{[1]}) of soot injected into the stratosphere in a large nuclear war, multiplying:
      - __2.09 k__ offensive nuclear detonations in a large nuclear war.
      - __21.5 %__ __countervalue__ nuclear detonations.
      - 0.0491 Tg per countervalue nuclear detonation, multiplying:
        - __189 kt__ of yield per countervalue nuclear detonation.
        - __2.60*10^-4 Tg/kt__ of soot injected into the stratosphere per countervalue yield.
    - __8.86 G__ people (8.86 billion^{[2]}).

- My expected annual famine deaths due to the climatic effects of nuclear war before 2050 __are__ 496 k, and my 5th and 95th percentiles are 0 and 30.9 M. My 95th percentile is 62.3 times my best guess, which means there is lots of uncertainty. Bear in mind my estimates only refer to the famine deaths due to the climatic effects. I exclude famine deaths resulting directly or indirectly from infrastructure destruction, and heat mortality.
- I obtained my best guess for the soot injected into the stratosphere per countervalue yield giving the same weight to results I inferred from Reisner’s and Toon’s views, but they differ substantially. If I attributed all weight to the result I deduced from Reisner’s (Toon’s) view, my estimates for the expected mortality __would__ become 0.121 (8.27) times as large. In other words, my best guess is hundreds of millions of famine deaths due to the climatic effects, but tens of millions putting all weight on the result I deduced from Reisner’s view, and billions putting all weight on the one I deduced from Toon’s view. Further research would be helpful to figure out which view should be weighted more heavily.
- My expected famine deaths due to the climatic effects of a large nuclear war __are__ 17.7 M/Tg (per soot injected into the stratosphere) and 0.992 M/Mt (per total yield). These are 32.3 % and 7.81 % of the 54.8 M/Tg and 12.7 M/Mt of __Xia 2022__, which I deem too pessimistic.
- My estimate of 12.9 M expected famine deaths due to the climatic effects of nuclear war before 2050 __is__ 2.05 % of the 630 M implied by Luisa Rodriguez’s __results__ __for__ nuclear exchanges between the United States and Russia, so I would say they are significantly pessimistic^{[3]}. I am also surprised by Luisa’s distribution for the famine death rate due to the climatic effects given at least one offensive nuclear detonation in the United States or Russia. Her 5th and 95th percentiles are 41.0 % and 99.6 %, which I think are too close and high.
- I __believe__ Mike underweighted Reisner’s view.
- I __guess__ the famine deaths due to the climatic effects of a large nuclear war would be 1.16 times the direct deaths. Putting all the weight on the soot injected into the stratosphere per countervalue yield I inferred from Reisner’s (Toon’s) view, the famine deaths due to the climatic effects would be 0.140 (9.59) times the direct deaths. In other words, my best guess is that famine deaths due to the climatic effects are within the same order of magnitude as the direct deaths, but 1 order of magnitude lower putting all weight on the result I inferred from Reisner’s view, and 1 higher putting all weight on the one I inferred from Toon’s view.
- Combining my mortality estimates with data from __Denkenberger 2016__, I __estimate__ the expected cost-effectiveness of planning, research and development of __resilient food__ solutions is 28.7 $/life, which is 2 orders of magnitude more cost-effective than GiveWell’s __top charities__. Nevertheless, I __suspect__ the values from __Denkenberger 2016__ are very optimistic, such that I am greatly overestimating the cost-effectiveness. I guess the true cost-effectiveness is within the same order of magnitude as that of GiveWell’s __top charities__, although this adjustment is not __resilient__. Furthermore, I have __argued__ corporate campaigns for chicken welfare are 3 orders of magnitude more cost-effective than GiveWell’s __top charities__.
- I do not __think__ activities related to resilient food solutions are cost-effective at increasing the longterm value of the future. By not cost-effective, I mostly mean I do not see those activities being competitive with the best opportunities to decrease __AI risk__, and improve __biosecurity and pandemic preparedness__ __at the margin__, like the __Long-Term Future Fund__’s __marginal grants__.
- It __is__ often hard to find interventions which are robustly beneficial. In my mind, decreasing the famine deaths due to the climatic effects of nuclear war is no exception, and I __think__ it is unclear whether that is beneficial or harmful from both a nearterm and longterm perspective (although I strongly oppose killing people, including via nuclear war).
- Feel free to check my personal __recommendations__ for funders.

# Introduction

I __have__ __been__ __assuming__ the importance of the climatic effects of nuclear war is roughly in agreement with __Denkenberger 2018__ and Luisa’s __post__, but I had not looked much into the relevant literature myself. I got interested in doing so following some of the discussion in my global warming __post__, and __Bean’s__ and __Mike’s__ analyses.

The initial motivation for my analysis was combining the results of __2 views__ about __nuclear winter__:

- One linked to Alan Robock (Rutgers University), Michael Mills (National Center for Atmospheric Research), and Brian Toon (University of Colorado), which is illustrated in __Xia 2022__. “We estimate more than 2 billion people could die from nuclear war between India and Pakistan, and more than 5 billion could die from a war between the United States and Russia”.
- Another linked to Jon Reisner (Los Alamos National Laboratory), which is illustrated in __Reisner 2018__. “Our analysis demonstrates that the probability of significant global cooling from a limited exchange scenario as envisioned in previous studies is highly unlikely, a conclusion supported by examination of natural analogs, such as large forest fires and volcanic eruptions”.

__Denkenberger 2018__ did not integrate the results of __Reisner 2018__, which was published afterwards^{[4]}. Luisa __says__:

As a final point, I’d like to emphasize that the nuclear winter is quite controversial (for example, see:

Singer, 1985; Seitz, 2011; Robock, 2011; Coupe et al., 2019; Reisner et al., 2019; Pausata et al., 2016; Reisner et al., 2018; Also see the summary of the nuclear winter controversy in Wikipedia’s article on nuclear winter). Critics argue that the parameters fed into the climate models (like, how much smoke would be generated by a given exchange) as well as the assumptions in the climate models themselves (for example, the way clouds would behave) are suspect, and may have been biased by the researchers’ political motivations (for example, see: Singer, 1985; Seitz, 2011; Reisner et al., 2019; Pausata et al., 2016; Reisner et al., 2018). I take these criticisms very seriously — and believe we should probably be skeptical of this body of research as a result. For the purposes of this estimation, I assume that the nuclear winter research comes to the right conclusion. However, if we discounted the expected harm caused by US-Russia nuclear war for the fact that the nuclear winter hypothesis is somewhat suspect, the expected harm could shrink substantially.

I also felt like Bean’s __analysis__ underweighted Rutgers’ view, and __Michael Hinge’s__ underweighted Los Alamos’ (see my comments).

My goal is estimating the famine deaths due to the climatic effects of nuclear war, not all famine deaths, nor heat mortality (related to hot or cold exposure). I also:

- Do a very shallow analysis of the cost-effectiveness of activities related to resilient food solutions.
- Discuss potential negative effects of decreasing famine deaths.

# Famine deaths due to the climatic effects

## Overview

I arrived at 12.9 M (= 0.0330*392*10^6) famine deaths due to the climatic effects of nuclear war before 2050, multiplying:

- 3.30 % probability of a large nuclear war before 2050.
- 392 M famine deaths due to the climatic effects of a large nuclear war, which I determined by multiplying:
  - Famine death rate due to the climatic effects of a large nuclear war, which I obtained from the soot injected into the stratosphere in a large nuclear war^{[5]}. I calculated this from the product between:
    - Offensive nuclear detonations in a large nuclear war.
    - Countervalue nuclear detonations as a fraction of the total.
    - Soot injected into the stratosphere per countervalue nuclear detonation.
  - Global population.
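Put together, the factor tree above amounts to a few multiplications. A minimal sketch, using the point estimates quoted in the summary (this is just the arithmetic, not a simulation):

```python
# Headline expectation, multiplying the factors listed above (values from the text).
soot_per_detonation = 189 * 2.60e-4            # Tg per countervalue detonation
soot = 2.09e3 * 0.215 * soot_per_detonation    # ≈ 22.1 Tg in a large nuclear war
deaths_large_war = 0.0443 * 8.86e9             # ≈ 392 M famine deaths (4.43 % of 8.86 G people)
expected_deaths = 0.0330 * deaths_large_war    # ≈ 12.9 M expected famine deaths before 2050
```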

Unlike __Denkenberger 2018__ and __Luisa__, I did not run a __Monte Carlo simulation__ modelling all non-probabilistic variables as distributions, but I do not think that would meaningfully move my estimate of the expected deaths:

- Assuming all 4 factors describing the soot injected into the stratosphere before 2050 given at least one offensive nuclear detonation before 2050 are __independent__, as I would do for simplicity anyway in a __Monte Carlo simulation__, the product between their expected values would be the expected product (__E__(X Y) = E(X) E(Y) if X and Y are independent).
- From __Fig. 5b__ of __Xia 2022__, the number of people without food in year 2 is roughly __proportional__ to the soot injected into the stratosphere^{[6]}.
  - To be precise, from the data on __Table 1__, the __linear regression__ with null __intercept__ of the former on the latter has a __coefficient of determination__ (R^2) of __96.8 %__.
  - Therefore, since the mean is a linear operator (E(a X + b) = a E(X) + b), one can obtain the expected number of people without food in year 2 from the expected soot injected into the stratosphere.
- Christian Ruhl argues for the __non-linearity of nuclear war effects__. I agree, as I guess starvation deaths increase __logistically__ with the soot injected into the stratosphere, but I believe injections of soot into the stratosphere for __large nuclear wars__ fall in its roughly linear part.
  - I defined such wars as having at least 1.07 k offensive nuclear detonations, and Figure 2b of __Toon 2008__, presented below, suggests emitted soot increases linearly with the number of detonations in that case.
  - If the linear part of the logistic curve starts sooner/later, the starvation resulting from small nuclear wars will tend to be larger/smaller, and therefore I would be underestimating/overestimating expected mortality.
- My point estimates respect the expected values, not medians, of the variables to which the result of interest is proportional.

## Probability of large nuclear war

I put the probability of large nuclear war before 2050 at 3.30 % (= 0.32*0.103), which is the product between:

- 32 % probability of at least one offensive nuclear detonation before 2050.
- 10.3 % probability of large nuclear war conditional on the above.

I motivate these values below.

### Probability of at least one offensive nuclear detonation

I placed the probability of at least one offensive nuclear detonation before 2050 at 32 %, in agreement with Metaculus’ community __prediction__ on 31 August 2023^{[7]}. This is reasonable based on:

- The base rate:
  - There have been offensive nuclear detonations in 1 year (1945) over the 79 (= 2023 - 1945 + 1) during which they could occur. This suggests an annual probability of at least one offensive nuclear detonation of 1.27 % (= 1/79).
  - There are still 26 years (= 2050 - 2024) before 2050.
  - So the base rate implies a probability of at least one offensive nuclear detonation before 2050 of 28.3 % (= 1 - (1 - 0.0127)^26), which is 88.4 % (= 28.3/32) of Metaculus’ community __prediction__.
- Luisa’s __prediction__^{[8]}:
  - 1.1 %/year (see table).
  - 25.0 % (= 1 - (1 - 0.011)^26) before 2050, which is 78.1 % (= 25.0/32) of Metaculus’ community __prediction__.
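The two extrapolations above follow the same pattern, assuming independent years. A quick sketch with the numbers from the text:

```python
# Base-rate extrapolations for the probability of at least one offensive
# nuclear detonation before 2050, assuming independent years (values from the text).
years_left = 2050 - 2024                       # 26 years
p_base = 1 - (1 - 0.0127) ** years_left        # ≈ 28.3 % from the 1/79 base rate
p_luisa = 1 - (1 - 0.011) ** years_left        # ≈ 25.0 % from Luisa's 1.1 %/year
```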

### Probability of escalation into large nuclear war

I presupposed a __beta distribution__ for the fraction of nuclear warheads being detonated before 2050 given at least one offensive nuclear detonation before then. I defined it from 61st and 89th percentiles equal to 1.06 % (= 100/(9.43*10^3)) and 10.6 % (= 1*10^3/(9.43*10^3)), given:

- Metaculus’ community __predictions__ on 26 September 2023 of 39 % (= 1 - 0.61) and 11 % (= 1 - 0.89) for the probability of at least 100 and 1 k offensive nuclear detonations before 2050 given at least one offensive nuclear detonation before then.
- 9.43 k (= (9.50 + (9.22 - 9.50)/(2052 - 2032)*(2037 - 2032))*10^3 - 1) expected __nuclear warheads__ minus 1^{[9]}, which I obtained:
  - For 2037 (= 2024 + (2050 - 2024)/2), which is midway between now and 2050^{[10]}.
  - Linearly interpolating between the __mean__ of Metaculus’ 25th and 75th percentile community __predictions__ on 11 September 2023 for^{[11]}:
    - 2032, 9.50 k (= (8.29 + 10.7)*10^3/2).
    - 2052, 9.22 k (= (4.84 + 13.6)*10^3/2).

The alpha and beta parameters of the beta distribution __are__ 0.189 and 5.03, and its cumulative distribution function (__CDF__) __is__ below. The horizontal axis is the fraction of nuclear warheads being detonated, and the vertical one the probability of less than a certain fraction being detonated. The probability of escalation into a large nuclear war, which I defined as at least __1.07 k__ offensive nuclear detonations, corresponding to 11.3 % (= 1.07*10^3/(9.43*10^3)) of nuclear warheads being detonated, is 10.3 %^{[12]}.
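For readers who want to reproduce the fit, the following self-contained sketch of the beta CDF (a standard Numerical Recipes-style continued fraction for the regularized incomplete beta function; the parameters 0.189 and 5.03 are the fitted values above) recovers the two target quantiles and the 10.3 % escalation probability:

```python
import math

def betacf(a, b, x, max_iter=200, eps=1e-12):
    # Continued fraction for the regularized incomplete beta function
    # (standard Numerical Recipes-style evaluation).
    qab, qap, qam = a + b, a + 1.0, a - 1.0
    c = 1.0
    d = 1.0 - qab * x / qap
    d = 1e-30 if abs(d) < 1e-30 else d
    d = 1.0 / d
    h = d
    for m in range(1, max_iter + 1):
        m2 = 2 * m
        aa = m * (b - m) * x / ((qam + m2) * (a + m2))
        d = 1.0 + aa * d
        d = 1e-30 if abs(d) < 1e-30 else d
        c = 1.0 + aa / c
        c = 1e-30 if abs(c) < 1e-30 else c
        d = 1.0 / d
        h *= d * c
        aa = -(a + m) * (qab + m) * x / ((a + m2) * (qap + m2))
        d = 1.0 + aa * d
        d = 1e-30 if abs(d) < 1e-30 else d
        c = 1.0 + aa / c
        c = 1e-30 if abs(c) < 1e-30 else c
        d = 1.0 / d
        delta = d * c
        h *= delta
        if abs(delta - 1.0) < eps:
            break
    return h

def beta_cdf(x, a, b):
    # CDF of a beta distribution, i.e. the regularized incomplete beta I_x(a, b).
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    front = math.exp(math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
                     + a * math.log(x) + b * math.log(1.0 - x))
    if x < (a + 1.0) / (a + b + 2.0):
        return front * betacf(a, b, x) / a
    return 1.0 - front * betacf(b, a, 1.0 - x) / b

a, b = 0.189, 5.03
p_100 = 1 - beta_cdf(100 / 9.43e3, a, b)   # ≈ 39 %, probability of at least 100 detonations
p_1k = 1 - beta_cdf(1e3 / 9.43e3, a, b)    # ≈ 11 %, probability of at least 1 k detonations
p_large = 1 - beta_cdf(0.113, a, b)        # ≈ 10.3 %, probability of a large nuclear war
```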

## Soot injected into the stratosphere

I expect 22.1 Tg (= 2.09*10^3*0.215*0.0491) of soot being injected into the stratosphere in a large nuclear war. This is the product between:

- 2.09 k offensive nuclear detonations in a large nuclear war.
- 21.5 % __countervalue__ nuclear detonations^{[13]}.
- 0.0491 Tg (= 189*2.60*10^-4) per countervalue nuclear detonation, multiplying:
  - 189 kt yield per countervalue nuclear detonation.
  - 2.60*10^-4 Tg/kt of soot injected into the stratosphere per countervalue yield.

I explain the above estimates in the next sections. I neglected __counterforce__ nuclear detonations because:

- From Figure 4 of __Wagman 2020__, the soot injected into the stratosphere for an available fuel per area of 5 g/cm^2 is negligible^{[14]}.
- I estimated an available fuel per area for counterforce nuclear detonations of __3.07 g/cm^2__, which is lower than the above 5 g/cm^2.

### Offensive nuclear detonations

I expect 2.09 k (= 1 + 0.221*9.43*10^3) offensive nuclear detonations in a large nuclear war. This is 1 plus the product between:

- 22.1 % of nuclear warheads being offensively detonated in a large nuclear war, which I __computed__:
  - Generating 1 M __Monte Carlo__ samples of the beta distribution describing the fraction of nuclear warheads being detonated before 2050 given at least one offensive nuclear detonation before then.
  - Taking the mean of the above samples larger or equal to __11.3 %__, which is the minimum fraction for a large nuclear war.
- __9.43 k__ expected __nuclear warheads__ minus 1.
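The procedure just described can be sketched in a few lines (the beta parameters, the 11.3 % threshold, and the 9.43 k warheads are the values derived above):

```python
import random

# Monte Carlo sketch of the escalation calculation: sample the fitted beta
# distribution, condition on the large-war threshold, and take the mean.
random.seed(0)
samples = [random.betavariate(0.189, 5.03) for _ in range(1_000_000)]
large_war = [s for s in samples if s >= 0.113]  # fractions of warheads detonated

frac = sum(large_war) / len(large_war)       # ≈ 22.1 % of warheads detonated
detonations = 1 + frac * 9.43e3              # ≈ 2.09 k offensive detonations
p_large = len(large_war) / len(samples)      # ≈ 10.3 % escalation probability
```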

The 5th and 95th percentiles of the fraction of nuclear warheads being detonated in a large nuclear war are 11.8 % and 43.6 %, which correspond to 1.11 k (= 1 + 0.118*9.43*10^3) and 4.11 k (= 1 + 0.436*9.43*10^3) offensive nuclear detonations.

I compared the offensive nuclear detonations, given at least one before 2050, implied by __my beta distribution__ with those of a Metaculus’ __question__ whose predictions I ended up not using. The 5th, 50th and 95th percentiles of the beta distribution are 1.84*10^-6 %, 0.362 % and 19.2 %^{[15]}, and the respective detonations given at least one are:

- 1.00 (= 1 + 1.84*10^-8*9.43*10^3), which is 90.1 % (= 1.00/1.11) of Metaculus’ 5th percentile community __prediction__ of 1.11.
- 35.1 (= 1 + 0.00362*9.43*10^3), which is 3.97 (= 35.1/8.84) times Metaculus’ median community __prediction__ of 8.84^{[16]} (= (8.56 + 9.11)/2).
- 1.81 k (= 1 + 0.192*9.43*10^3), which is 21.5 % (= 1.81/8.42) of Metaculus’ 95th percentile community __prediction__ of 8.42 k^{[17]} (= (7.18 + 9.66)/2*10^3).

The mean of my beta distribution is 3.62 % (= 0.189/(0.189 + 5.03)), and therefore I expect 342 (= 1 + 0.0362*9.43*10^3) offensive nuclear detonations given one offensive nuclear detonation before 2050, which is 9.74 (= 342/35.1) times my median detonations. Additionally, my 95th percentile is 1.81 k (= 1.81*10^3/1.00) times my 5th percentile. Such high ratios illustrate nuclear war is predicted to be __heavy-tailed__, as __has__ been the case for non-nuclear wars.

From the above bullets, the predictions for the number of detonations I arrived at fitting a beta distribution to the forecasts for 2 Metaculus’ __questions__ about the probability of escalation to large nuclear wars (100 and 1 k detonations) are not quite in line with the forecasts for another Metaculus’ __question__ explicitly about the number of detonations. The large difference for the 95th percentile is relevant because the right tail has a significant influence on the expected detonations, as can be seen from the high ratio between my mean and median detonations. I decided to rely on the 2 Metaculus’ __questions__ about escalation because:

- Of the importance of the right tail. The __other__ requires forecasters to estimate the entire probability distribution, which I expect to lead to less accurate forecasts for the right tail.
- I would have to arbitrarily select 2 quantiles from the __other__ in order to define the beta distribution.

### Countervalue nuclear detonations

I assumed 21.5 % of the offensive nuclear detonations to respect __countervalue__ targeting. This was Metaculus’ median community __prediction__ on 30 September 2023 for the fraction of offensive nuclear detonations before 2050 which will be countervalue.

I presumed 100 % total burned area as a fraction of the burned area assuming different detonations did not compete for fuel, i.e. that overlapping between burned areas is negligible. David Denkenberger commented that some additional area would be burned thanks to the combined effects of multiple detonations. I tend to agree, but:

- This is not discussed in __Reisner 2018__ nor __Toon 2008__.
- For this effect to be significant, I guess there would have to be a meaningful overlap between the burned areas of __countervalue__ detonations, whereas I am assuming it is negligible.
- I think the areas which would burn thanks to the combined effects of __countervalue__ detonations would have low fuel load, thus not emitting much soot, because they would tend to be far away from the city centre:
  - The detonation points would presumably be near the dense city centres, and therefore population density and fuel load would tend to decrease with the distance from the detonation point.
  - The radius of my burned area __is__ 7.23 km.

### Yield

I considered a yield per countervalue nuclear detonation of 189 kt (= (600*335 + 200*300 + 1511*90 + 25*8 + 384*455 + 500*(5*150)^0.5 + 288*400 + 200*(0.3*170)^0.5)/3708). This is the mean yield of the United States nuclear warheads in 2023 (deployed or in reserve, but not retired), which I got from data in Table 1 of __Kristensen 2023__. For the rows for which a range was provided for the yield, I used the __geometric mean__ between its lower and upper bound^{[18]}.
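The mean yield can be reproduced as follows (warhead counts and kt yields as I read them from Table 1 of __Kristensen 2023__; the two yield ranges use the geometric mean of their bounds):

```python
# Mean yield of US nuclear warheads in 2023 (deployed or in reserve).
# Each entry is (count, yield in kt); ranges use geometric means of their bounds.
warheads = [
    (600, 335), (200, 300), (1511, 90), (25, 8),
    (384, 455), (500, (5 * 150) ** 0.5),      # 5-150 kt range
    (288, 400), (200, (0.3 * 170) ** 0.5),    # 0.3-170 kt range
]
total = sum(n for n, _ in warheads)                   # 3708 warheads
mean_yield = sum(n * y for n, y in warheads) / total  # ≈ 189 kt
```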

For context, my yield of 189 kt is:

- 47.2 % (= 189/400) of the 400 kt __mentioned__ by Bean “for a typical modern strategic nuclear warhead”.
- 1.14 (= 189/166) times the yield of 166 kt (= 30.2^(3/2)) linked to the mean yield to the power of 2/3 implied by the data in Table 1 of __Kristensen 2023__^{[19]}. Bean argues for an exponent of 2/3, but the difference does not seem to matter much, as 1.14 is a small factor.
- 1.89 (= 189/100) times that of __Toon 2008__.
- 12.6 (= 189/__15__) times that of Hiroshima’s nuclear detonation.

For the __2.09 k__ offensive nuclear detonations I expect in a large nuclear war, the minimum and maximum mean yield are 66.1 kt (= (200*(0.3*170)^0.5 + 25*8 + 500*(5*150)^0.5 + 1365*90)/(2.09*10^3)) and 290 kt (= (384*455 + 288*400 + 600*335 + 200*300 + 618*90)/(2.09*10^3)).

I investigated the relationship between the burned area and yield a little, but, as I said just above, I do not think it is that important whether the area scales with yield to the power of 2/3 or 1. Feel free to skip to the next section. In short, an exponent of:

- 2/3 makes sense if the energy released by the detonation is uniformly distributed in a spherical region (centred at the detonation point). This is apparently the case for blast/pressure energy, so an exponent of 2/3 is appropriate for the blasted area.
- 1 makes sense if the energy released by the detonation propagates outwards with negligible losses, like the Sun's energy radiating outwards into space. This is seemingly the case for thermal energy, so an exponent of 1 is appropriate for the burned area.

The emitted soot is proportional to the burned area. So using the mean yield as I did presupposes burned area is proportional to yield, which is what is supposed in __Toon 2008__. “In particular, since the area within a given thermal energy flux contour varies linearly with yield for small yields, we assume linear scaling for the burned area”. I guess this is based on the following passage of __this__ chapter of __The Medical Implications of Nuclear War__ (the source provided in __Toon 2008__):

Thermal energy, unlike blast energy [which “fills the volume surrounding it”], instead radiates out into the surroundings. Thermal energy from a detonation will therefore be distributed over a hypothetical sphere that surrounds the detonation point. If the sphere's area is larger in direct proportion to the yield of a detonation, then the amount of energy per unit area passing through its surface would be unchanged. The radius of this hypothetical sphere varies as the square root of its area. Hence, the range at which a given amount of thermal energy per unit area is deposited varies as the square root of the yield.

Presumably, __Toon 2008__ assumes the burned area is defined by this range, and therefore it is proportional to yield (since a circular area is proportional to the square of its radius). With respect to this, Bean __said__:

Nor is the assumption that burned area will scale linearly with yield a particularly good one. I couldn’t find it in the source they cite, and it flies in the face of all other scaling relationships around nuclear weapons.

[...]

per Glasstone p.108, blast radius typically scales with the 1/3rd power of yield, so we can expect damaged area from fire as well as blast to scale with the yield^2/3 [since area is proportional to the square of the radius].

According to __The Medical Implications of Nuclear War__ (see quotation above), the blasted area is indeed proportional to yield to the power of 2/3, but the same may not apply to burned area (see quotation above starting with “Thermal energy”). In fact, the results of __Nukemap__ seem to be compatible with the assumption that the ground area enclosed by a spherical surface of a given energy flux is proportional to yield. For 0.1, 1 and 10 times my yield of 189 kt, i.e. 18.9, 189 and 1.89 k kt, the ground area enclosed by a spherical surface whose energy flux is 146 J/cm^2, for which “dry wood usually burns”, are:

- For an airburst height of 0 (just above the surface), __4.11__, __37.1__ and __317__ km^2. Based on the 1st and last pair of these estimates, burned area would be proportional to yield to the power of 0.956 (= log10(37.1/4.11)) and 0.928 (= log10(314/37.1)).
- For airburst heights of 0.832, 1.83 and 3.93 km, which maximise the radius of the __overpressure__ ring of 5 __psi__^{[20]} (0.34 __atm__) of each yield, __1.94__, __26.6__ and __268__ km^2. Based on the 1st and last pair of these estimates, burned area would be proportional to yield to the power of 1.14 (= log10(26.6/1.94)) and 1.00 (= log10(268/26.6)).

The mean of the above 4 exponents is 1.01^{[21]} (= (0.956 + 0.928 + 1.14 + 1.00)/4), which suggests a value of 1 is appropriate. Nevertheless, I do not know how the above areas are estimated in __Nukemap__.
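The exponent fits above amount to the following sketch (areas as quoted from __Nukemap__; each step multiplies the yield by 10, so the exponent is the log10 of the corresponding area ratio):

```python
import math

# Burned-area scaling exponents from the Nukemap areas quoted above.
surface_burst = [4.11, 37.1, 317]   # km^2 at 18.9, 189 and 1.89 k kt, airburst height 0
optimal_burst = [1.94, 26.6, 268]   # km^2 at the 5 psi-maximising airburst heights

exponents = [math.log10(b / a)
             for areas in (surface_burst, optimal_burst)
             for a, b in zip(areas, areas[1:])]
mean_exponent = sum(exponents) / len(exponents)  # ≈ 1.01, supporting area ∝ yield
```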

Energy flux following an __inverse-square law__, as __described__ in __The Medical Implications of Nuclear War__, makes sense if atmospheric losses are negligible, like with the Sun’s energy radiating outwards into space. Intuitively, I would have thought the losses were sufficiently high for the exponent to be lower than 1, and GPT-4 also guessed an exponent of 2/3 would be a better approximation. However, __Nukemap__’s results do support an exponent of 1.

### Soot injected into the stratosphere per countervalue yield

I set the soot injected into the stratosphere per countervalue yield to 2.60*10^-4 Tg/kt (= (3.15*10^-5*0.00215)^0.5). This is the geometric mean between 3.15*10^-5 and 0.00215 Tg/kt^{[18]}, which I arrived at by adjusting results from __Reisner 2018__ and __Reisner 2019__, and __Toon 2008__ and __Toon 2019__. I describe how I did this in the next 2 sections, and discuss some considerations I did not cover in these sections in the one after them.
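Combining the two views is just a geometric mean of the two estimates (both derived in the next sections):

```python
# Geometric mean of the soot-per-countervalue-yield estimates from the two views.
reisner = 3.15e-5   # Tg/kt, inferred from Reisner 2018 and Reisner 2019
toon = 0.00215      # Tg/kt, inferred from Toon 2008 and Toon 2019
best_guess = (reisner * toon) ** 0.5  # ≈ 2.60e-4 Tg/kt
```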

There are other studies which have analysed how much of the emitted soot is injected into the stratosphere, but I think only __Reisner 2018__, __Reisner 2019__ and __Wagman 2020__ modelled the whole causal chain. From __Wagman 2020__:

An analysis of whether fires ignited by a nuclear war will cause global climatic and environmental consequences must address the following:

- The characteristics of the fires ignited by nuclear weapons (e.g., intensity, spread, and whether they generate sufficient buoyancy for lofting emissions to high altitudes); these are a function of many factors, including number and yield of weapons, target type, fuel availability, meteorology, and geography.
- The composition of the fire emissions (whether emissions include significant amounts of black carbon [BC] and organic carbon [OC] aerosols, and gases affecting atmospheric chemistry); these are a function of the fuel type, carbon loading, oxygen availability, and other factors.
- Whether the emissions are self-lofted by the absorption of solar radiation and to what heights; this is a function primarily of meteorology and particle size, composition, and absorption of solar radiation.
- The physical and chemical evolution of BC and other aerosol species in the stratosphere; this is a function of stratospheric chemistry and dynamics.
[...]

The Reisner et al. (2018) approach deviates from previous efforts by modeling aspects of all four bullet points above

[...]

Motivated by the different conclusions that have been reached for this scenario, we make our own assessment, which also uses numerical models to address aspects of all four factors bulleted above.

I did not integrate evidence from __Wagman 2020__ (whose main author is affiliated with Lawrence Livermore National Laboratory), because, rather than estimating the emitted soot as __Reisner 2018__ and __Reisner 2019__, it sets it to the soot injected into the stratosphere in __Toon 2007__:

Finally, we choose to release 5 Tg (5·10^12 g) BC into the climate model per 100 fires, for consistency with the studies of Mills et al. (2008, 2014), Robock et al. (2007), Stenke et al. (2013), Toon et al. (2007), and Pausata et al. (2016). Those studies use an emission of 6.25 Tg BC and assume 20% is removed by rainout during the plume rise, resulting in 5 Tg BC remaining in the atmosphere.

I did not include direct evidence from the __atomic bombings of Hiroshima and Nagasaki__ because I did not find empirical data about the resulting injections of soot into the stratosphere. Relatedly, __Robock 2019__ says:

- Between 3 February and 9 August 1945, an area of 461 km2 in 69 Japanese cities, including Hiroshima and Nagasaki, was burned during the U.S. B-29 Superfortress air raids, producing massive amounts of smoke
- Because of multiple uncertainties in smoke injected to the stratosphere, solar radiation observations, and surface temperature observations, it is not possible to formally detect a cooling signal from World War II smoke
- These results do not invalidate nuclear winter theory that much more massive smoke emissions from nuclear war would cause large climate change and impacts on agriculture

I also excluded evidence from Tambora’s eruption. There were global impacts according to __Oppenheimer 2003__, but their magnitude is unclear, and I think the world has evolved too much in the last 200 years for me to extrapolate.

**Reisner 2018 and Reisner 2019**

I estimated a soot injected into the stratosphere per countervalue yield of 3.15*10^-5 Tg/kt (= 0.0473/(1.50*10^3)) for __Reisner 2018__ and __Reisner 2019__. I calculated it from the ratio between:

- 0.0473 Tg (= 0.224*0.211) of soot injected into the stratosphere, multiplying:
  - 0.224 Tg of emitted soot.
  - 21.1 % of emitted soot being injected into the stratosphere.
- Total yield of 1.50 k kt (= 100*15), given “100 low-yield weapons of 15 kilotons”.

I got 0.224 Tg (= 12.3*0.855*0.0213) of emitted soot, multiplying:

- 12.3 Tg (= 8.454 + (23.77 - 8.454)/(72.62 - 5.24)*(22.1 - 5.24)) of emitted soot if there was no rubble, which I determined:
  - For my available fuel per area for countervalue nuclear detonations of __22.1 g/cm^2__.
  - Linearly interpolating the no-rubble results of __Reisner 2019__ (see Table 1): for 5.24 and 72.62 g/cm^2, 8.454 and 23.77 Tg.
- 85.5 % (= 3.158/3.692) to adjust for the presence of rubble. This is the ratio between the emitted soot of the rubble and no-rubble results of __Reisner 2018__ (see Table 1 of __Reisner 2019__).
- 2.13 % to account for the overestimation of emitted soot per burned fuel. __Reisner 2019__ says their “BC [black carbon, i.e. soot] emission factor is high by a factor of 10–100”, and __Denkenberger 2018__ models the “percent of combustible material that burns that turns into soot” as a lognormal distribution with 2.5th and 97.5th percentiles equal to 1 % and 4 % (see Table 2), whose mean __is__ 2.13 %^{[22]}. The production of soot would ideally be determined via chemical modelling of the combustion of fuel in the conditions of a firestorm, but I do not think we have that^{[23]}.
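Chaining these adjustments, together with the 21.1 % stratospheric injection fraction and the 1.50 k kt total yield from the start of this section, gives the 3.15*10^-5 Tg/kt figure:

```python
# Soot injected into the stratosphere per countervalue yield, inferred from
# Reisner 2018 and Reisner 2019 via the adjustments described above.
no_rubble = 8.454 + (23.77 - 8.454) / (72.62 - 5.24) * (22.1 - 5.24)  # ≈ 12.3 Tg
emitted = no_rubble * (3.158 / 3.692) * 0.0213  # rubble and emission-factor adjustments, ≈ 0.224 Tg
injected = emitted * 0.211                      # 21.1 % reaches the stratosphere, ≈ 0.0473 Tg
soot_per_yield = injected / (100 * 15)          # ≈ 3.15e-5 Tg/kt for 100 15-kt weapons
```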

I concluded 21.1 % (= 0.0621*3.39) of emitted soot is injected into the stratosphere, multiplying:

- 6.21 % (= 0.196/3.158) of emitted soot being injected into the stratosphere in the 1st 40 min, which is implied by the results of __Reisner 2018__ (see Table 1 of __Reisner 2019__). I estimated it from the ratio between the 0.196 Tg of soot injected into the stratosphere in the 1st 40 min, and 3.158 Tg of emitted soot in the rubble case. I must note:
  - The 0.196 Tg is referred to in __Reisner 2019__ as being injected “above 12 km”, not into the stratosphere. Nonetheless, I am assuming the stratosphere starts there, as __Reisner 2018__ attributes that height to the __tropopause__ (which marks the start of the stratosphere). “Note that a majority of black carbon is found significantly below the tropopause (roughly 12 km) and hence can be easily washed away by precipitation produced by the climate model”. Interestingly, the stratosphere only starts at 16.6 km according to Figure 4 of __Wagman 2020__^{[24]} (eyeballing the dashed black lines).
  - __Reisner 2019__ does not explicitly say the 0.196 Tg refers to the 1st 40 min, but I think it does^{[25]}:
    - __Reisner 2018__’s discussion of the fire simulation for the no-rubble case is compatible with 0.23 Tg (= 3.69 - 3.46) of soot being injected into the stratosphere in the 1st 40 min, which is quite similar to the 0.236 Tg in Table 1 of __Reisner 2019__. “The total amount of BC produced is in line with previous estimates (about 3.69 Tg from no-rubble simulation); however, the majority of BC resides below the stratosphere (3.46 Tg below 12 km) and can be readily impacted by scavenging from precipitation either via pyrocumulonimbus produced by the fire itself (not modeled) or other synoptic weather systems”.
    - __Reisner 2019__ only discusses the fire simulations, which only last 40 min. From __Reisner 2018__, “HIGRAD-FIRETEC simulations for this domain used 5,000 processors and took roughly 96 h to complete for 40 min of simulated time”.
- 3.39 (= 0.8/0.236) times as much soot being injected into the stratosphere in total as in the 1st 40 min. This respects the no-rubble case of __Reisner 2018__, and is the ratio between:
  - 0.8 Tg of soot injected into the stratosphere in total. “The BC aerosol that remains in the atmosphere, lifted to stratospheric heights by the rising soot plumes, undergoes sedimentation over a time scale of several years (Figures 8 and 9). This mass represents the effective amount of BC that can force climatic changes over multiyear time scales. In the forced ensemble simulations, it is about 0.8 Tg after the initial rainout, whereas it is about 3.4 Tg in the simulation with an initial soot distribution as in Mills et al. (2014)”.
  - 0.236 Tg of soot injected into the stratosphere in the 1st 40 min, in line with the last row of Table 1 of __Reisner 2019__.
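Numerically (the small difference from the 21.1 % above comes from rounding the two factors before multiplying):

```python
# Fraction of emitted soot injected into the stratosphere, per the two factors above.
first_40_min = 0.196 / 3.158  # injected in the 1st 40 min per emitted soot (rubble case)
total_ratio = 0.8 / 0.236     # total injection over 1st-40-min injection (no-rubble case)

injected_fraction = first_40_min * total_ratio
print(f"{injected_fraction:.3f}")  # 0.210, i.e. about 21 %
```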

The estimate of 6.21 % of emitted soot being injected into the stratosphere in the 1st 40 min is derived from the rubble case of __Reisner 2018__, which did not produce a firestorm. However, in response to __Robock 2019__, __Reisner 2019__ ran:

Two simulations at higher fuel loading that are in the firestorm regime (Glasstone & Dolan, 1977): the first simulation (4X No-Rubble) uses a fuel load around the firestorm criterion (4 g/cm2) and the second simulation (Constant Fuel) is well above the limit (72 g/cm2).

These simulations led to a soot injected into the stratosphere in the 1st 40 min per emitted soot of 5.45 % (= 0.461/8.454) and 6.44 % (= 1.53/23.77), which are quite similar to the 6.21 % of __Reisner 2018__ I used above. __Reisner 2019__ also notes:

Of note is that the Constant Fuel case is clearly in the firestorm regime with strong inward and upward motions of nearly 180 m/s during the fine-fuel burning phase. This simulation included no rubble, and since no greenery (trees do not produce rubble) is present, the inclusion of a rubble zone would significantly reduce BC production and the overall atmospheric response within the circular ring of fire.

This suggests a firestorm is not a sufficient condition for a high fraction of the emitted soot being injected into the stratosphere.

**Toon 2008 and Toon 2019**

I deduced a soot injected into the stratosphere per countervalue yield of 0.00215 Tg/kt (= 945/(440*10^3)) for __Toon 2008__ and __Toon 2019__. I computed it from the ratio between:

- 945 Tg (= 1.35*10^3*0.700) of soot injected into the stratosphere, multiplying:
  - 1.35 k Tg of emitted soot.
  - 70.0 % of emitted soot being injected into the stratosphere.
- “440-Mt total yield [4.4 k detonations of 100 kt]”.

I got 1.35 k Tg (= 180*7.52) of emitted soot, multiplying:

- “180 Tg of [“generated”] soot”.
- 7.52 (= 22.1/2.94) to adjust for the available fuel per area:
  - Emitted soot is proportional to burned area, in agreement with the 2nd equation of __Toon 2008__.
  - I estimated an available fuel per area for countervalue nuclear detonations of __22.1 g/cm^2__.
  - I think the results of __Toon 2008__ imply 0.0294 Tg/km^2 (= 11.2*10^3/(4.4*10^3*86.6)) of available fuel per area, i.e. 2.94 g/cm^2 (= 0.0294*10^(12 - 5*2)), given:
    - 11.2 k Tg (= 180/0.016) of fuel, which is the ratio between the above soot and “0.016 kg of soot per kg of fuel”.
    - “A SORT conflict with 4400 nuclear explosions”.
    - A burned area per detonation of 86.6 km^2. “In our model we considered 100-kt weapons, since that is the size of many of the submarine-based weapons in the US, British, and French arsenals. In that case we assume a burned area of 86.6 km2 per weapon”.

I concluded 70.0 % (= (1 - 0.20)*(1 - 0.125)) of emitted soot is injected into the stratosphere, in agreement with __Toon 2019__. This stems from:

- “On the basis of limited observations of pyrocumulus clouds (16) [__Toon 2007__], we assume that 20% of the BC is removed by rainfall during injection into the upper troposphere”.
- “Further smoke is rained out by the climate model before the smoke is lofted into the stratosphere by solar heating of the smoke. The fraction of the injected mass that is present in the model over 15 years is shown in fig. S5. In the first few days after the injection, 10 to 15% of the smoke is removed in the climate model before reaching the stratosphere”. So I considered an additional soot removal of 12.5 %^{[21]} (= (0.10 + 0.15)/2).
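Putting the Toon 2008 and Toon 2019 figures together in a Python sketch (names are mine; all inputs are the ones quoted above):

```python
# Soot injected into the stratosphere per countervalue yield implied by
# Toon 2008 and Toon 2019, using the figures quoted in the text.
base_soot = 180                  # Tg of "generated" soot in Toon 2008
fuel_adjustment = 22.1 / 2.94    # my fuel per area over that implied by Toon 2008
emitted_soot = base_soot * fuel_adjustment    # about 1.35 k Tg

injected_fraction = (1 - 0.20) * (1 - 0.125)  # 70.0 %, per Toon 2019
total_yield_kt = 440e3                        # "440-Mt total yield"

soot_per_yield = emitted_soot * injected_fraction / total_yield_kt
print(f"{soot_per_yield:.2e} Tg/kt")  # 2.15e-03 Tg/kt
```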

You might have noticed that I discounted the results of __Reisner 2018__ to account for their overestimation of the emitted soot per burned fuel, but that I did not do that for __Toon 2008__. I think this is right because, right after “how much of the fuel is converted into soot”, there is a reference to __Turco 1990__, which estimates an emitted soot per burned fuel very similar to what I assumed in the previous section^{[22]}.

__Toon 2019__ justifies the 20 % soot removal during injection into the upper troposphere citing __Toon 2007__, which in turn backs it up citing __Turco 1990__^{[26]}, but I noted this does not justify the value that well. From the header of Table 2 of __Turco 1990__, “the prompt soot removal efficiency [i.e. soot removal during injection into the upper troposphere^{[27]}] is taken to be 20% (range of 10 to 25%)”, which checks out, but it is mentioned that:

Originally, we (2) [__Turco 1983__] estimated that 25 to 50% of the smoke mass would be immediately scrubbed from urban fires by induced precipitation. However, based on current data, it is more reasonable to assume that, on average, <=10 to 25% of the soot emission is likely to be removed in such a manner.

Nevertheless, as far as I can tell, the “current data” is not discussed in __Turco 1990__. I would have expected to see a justification for the update, as the 20 % prompt soot removal assumed in __Turco 1990__ is lower than the lower bound of 25 % attributed to __Turco 1983__. In addition, I was not able to confirm the soot removal of 25 % to 50 % quoted above, searching __Turco 1983__ for “%”, “25 percent”, “50 percent”, “0.25”, “0.5” and “rain”. A soot removal of 25 % to 50 % could be implied by the assumptions or results of __Turco 1983__ without being explicitly mentioned, but it looks like this might not be so. __Turco 1983__ appears to have used a soot removal of 20 %, as did __Turco 1990__. From Table 2, “80 percent [of the soot was assumed to be injected] in the stratosphere”. I did not find an explanation of this value searching for “80 percent” and “0.8”.

Brian Toon, the 1st author of __Toon 2007__, __Toon 2008__ and __Toon 2019__, and 2nd of __Turco 1983__ and __Turco 1990__, clarified the 20 % prompt soot removal in __Toon 2007__ was calculated from (1 minus) the ratio between the smoke-to-carbon-monoxide concentration ratio in the stratosphere and that near natural fires. I tried to obtain the 20 % with this approach, but did not succeed. I assume Brian’s clarification refers to the following passage of __Toon 2007__:

According to Andreae et al. (2001) in natural fires the ratio of injected smoke aerosol larger than 0.1 µm to enhanced carbon monoxide concentrations is in the range 5–20 cm^3/ppb near the fires. Jost et al. (2004) found ratios ∼7 [cm^3/ppb] in smoke plumes deep within the stratosphere over Florida that had originated a few days earlier in Canadian fires, implying that the smoke particles had not been significantly depleted during injection into the stratosphere (or subsequent transport over thousands of kilometers in the stratosphere). Such evidence is consistent with the choice of R=0.8 for smoke removal in pyroconvection.

On the one hand, I agree with the last sentence, as the quoted evidence is consistent with a smoke removal in pyroconvection between 0 (7 > 5) and 65 % (= 1 - 7/20), which encompasses 20 % (= 1 - 0.8). On the other hand, this value seems to be pessimistic. Assuming a ratio between the concentration of smoke and carbon monoxide near the fires of 12.5 cm^3/__ppb__^{[21]} (= (5 + 20)/2), R = 56.0 % (= 7/12.5) of smoke would be injected into the upper troposphere, which suggests a prompt soot removal of 44.0 % (= 1 - 0.560), 2.20 (= 0.440/0.20) times as high as the value supposed in __Toon 2007__.
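The back-of-the-envelope above can be sketched in Python (the ranges are the ones quoted from __Toon 2007__; the midpoint choice is mine, as in the text):

```python
# Prompt soot removal implied by the smoke-to-CO ratios quoted in Toon 2007:
# 5-20 cm^3/ppb near natural fires (Andreae et al. 2001), and about 7 cm^3/ppb
# deep within the stratosphere (Jost et al. 2004).
near_fires = (5 + 20) / 2   # midpoint, 12.5 cm^3/ppb
in_stratosphere = 7         # cm^3/ppb

surviving_fraction = in_stratosphere / near_fires  # R = 56.0 %
prompt_removal = 1 - surviving_fraction            # 44.0 %
print(f"{prompt_removal:.1%} removal vs the 20% assumed in Toon 2007")  # 44.0% ...
```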

I shared the above reasoning with Brian, but his best guess continues to be 20 % soot removal during the injection into the upper troposphere. So I relied on that value to estimate the soot injected into the stratosphere per countervalue yield at the start of this section.

As a side note, __Turco 1983__ presents an emitted soot per yield of land near-surface and surface detonations of 1.0*10^-4 and 3.3*10^-4 Tg/kt (see Table 2), which are 3.26 % (= 1.0*10^-4/0.00307) and 10.7 % (= 3.3*10^-4/0.00307) of the 0.00307 Tg/kt (= 0.00215/0.7) I inferred from __Toon 2008__^{[28]}. Brian Toon clarified the lower soot emissions in __Toon 2008__ are explained by this study considering less fuel per area, owing to more detonations with larger yields, which imply a larger burned area with lower population density. I think this makes sense.

**Considerations influencing the soot injected into the stratosphere**

There are a number of considerations I have not covered influencing the soot injected into the stratosphere per countervalue yield. I have little idea about their net effect, but I point out some of them below. Relatedly, feel free to check __Hess 2021__, and the comments on __Bean’s__ and __Mike’s__ post.

__Overestimating soot injected into the stratosphere__

Besides the pessimistic assumption regarding the soot emissions per burned area, which I corrected for, __Reisner 2018__ says:

For the vertical transport of the BC, very calm ambient winds are assumed in the model, so to prevent rapid dispersion of the BC in the plume. The height of burst is determined as twice the fallout-free height, so to minimize building damage and to maximize the number of ignited locations. Fire propagation in the model occurs primarily via convective heat transfer and spotting ignition due to firebrands, and the spotting ignition model employs relatively high ignition probabilities as another worst case condition

[...]

The wind speed profile was chosen to be high enough to maintain fire spread but low enough to keep the plume from tilting too much to prevent significant plume rise (worst case). Wind direction is set as 270° (west-to-east, +x direction) for all heights, with no directional shear, and a weakly stable atmosphere was used below the tropopause to assist plume rise (worst case).

David:

- Thinks one does not need wind to maintain fire spread if one includes secondary ignitions, or if the fireball ignites everything at once.
- Commented that the worst case would be an unstable atmosphere (rather than a “weakly stable” one), like in a thunderstorm.

__Underestimating soot injected into the stratosphere__

Secondary ignitions were neglected in __Reisner 2018__:

The impact of secondary ignitions, such as gas line breaks, is not considered and research is still needed to determine their impact on a mass fire's intensity. For example, evidence of secondary ignitions in the Hiroshima conflagration ensuing the nuclear bombing (National Research Council, 1985), or utilization of incendiary bombs in Dresden and Hamburg (Hewitt, 1983), led to unique conditions that resulted in significantly enhanced fire behavior.

David __commented__ “existing heating/cooking fires spreading” “is all that was required for the San Francisco earthquake firestorm”. Bean noted “urban fires are down 50% since the 1940s and way more since 1906”, when the __San Francisco earthquake and firestorm__ happened. GPT-4 very much agreed urban fires are now less likely to occur^{[29]}. On the other hand, David commented:

- Urban fires have decreased mostly due to the installation of __sprinkler systems__, smoke detectors, and reductions in smoking and the combustibility of certain materials (e.g. mattresses).
- The above would not help much to mitigate the house fires caused by nuclear detonations, which have multiple ignition points.

As noted in __Robock 2019__, fires, and therefore soot production and elevation, were only modelled for 40 min:

Reisner et al. stated that their fires were of surprisingly short duration, “because of low wind speeds and hence minimal fire spread, the fires are rapidly subsiding at 40 min.” However, they do not show the energy release rate so that we can tell if the fuel has been consumed within 40 minutes. And their claims of low wind speed are erroneous, as they choose wind speeds higher than typically observed in Atlanta. Real-world experience with firestorms such as in Hiroshima or Hamburg during World War II or in San Francisco after the 1906 earthquake (London, 1906), and of conflagrations, such as after the bombing of Tokyo during World War II (Caidan, 1960), suggests that a 40-minute mass fire is a dramatic underestimate; most of these fires last for many hours. A longer fire would make available more heat and buoyancy to inject soot to higher altitudes. If their fire had a short duration, and did not simply blow off their grid, it was likely due to the low fuel load assumed in their target area and combustion that did not consume all of the available fuel.

__Reisner 2019__ replied that:

Another important point concerning these simulations is that the rapid burning of the fine fuels leads to both a reduction in oxygen that limits combustion and a large upward transport of heat and mass that stabilizes the upper atmosphere above and downwind of the firestorm. These dynamical and combustion processes help limit fire activity and BC production once the fine material has been consumed (timescale < 30 min). Hence, the primary time period for BC injection that could impact climate occurs during a relatively short time period compared to the entirety of the fire or the continued burning and/or smoldering of thicker fuels.

[...]

While the full duration is not modeled, we argue that the primary atmospheric response from a nuclear detonation is the rapid burning of the fine fuels. Thick fuels will take longer to burn but will induce less atmospheric response and produce and inject less BC to upper atmosphere. Further, during the later time period, the upper atmosphere stabilizes from the large injection of heat and mass. Firestorms such as Dresden were maintained not only by burning of thick fuels but also by the injection of highly flammable fuel from the incendiary bombs, which we believe acted as fine fuel replacement.

In any case, it still seems to me __Robock 2019__ might have a valid point:

- From the legend of Figure 6 of __Reisner 2018__, the soot emissions in the rubble case for 40 min are 1.32 (= 3.16/2.39) times those for 20 min, so it is not obvious that soot emissions after 40 min would be negligible.
- From Figure 7, soot continues to be injected into the stratosphere in the climate simulation (run after the fire simulation), which means soot not injected into the stratosphere in the 1st 40 min can still do so afterwards. Nevertheless, I guess the initial conditions of the climate simulation, which I think are supposed to represent a random typical atmosphere, are less favourable to soot being injected into the stratosphere than the final ones of the fire simulation. If true, this would result in underestimating the injection of soot into the stratosphere.

I guess these 2 arguments are stronger for firestorms, which were not produced in __Reisner 2018__. The 2 simulations of __Reisner 2019__ concern firestorms, but I would like to see:

- On the 1st point above, data on soot emissions for a longer fire simulation demonstrating they are negligible after 40 min.
- On the 2nd, climate simulations demonstrating the soot injected into the stratosphere in total as a fraction of that in the 1st 40 min is similar to the ratio of __3.39__ respecting the no-rubble case of __Reisner 2018__.

__Overestimating/Underestimating soot injected into the stratosphere__

__Robock 2019__ contended that:

Water vapor allows for latent heat release when clouds form. Numerous studies have shown that sensible and latent heat release is essential to lofting smoke in either firestorms (e.g., Penner et al., 1986) or conflagrations (Luderer et al., 2006). Reisner et al. stated “A dry atmosphere was utilized, and pyrocumulus impacts or precipitation from pyro-cumulonimbus were not considered. While latent heat released by condensation could lead to enhanced vertical motions of the air, increased scavenging of soot particles by precipitation is also possible. These processes will be examined in future studies using HIGRAD-FIRETEC.” By not considering pyrocumulonimbus clouds, which by the latent heat of condensation can inject soot into the stratosphere, they have eliminated a major source of buoyancy that would loft the soot. They seem to suggest that any lofting of soot would be balanced by significant precipitation scavenging, but there is no evidence for that assumption. In fact, forest fires triggered pyrocumulonimbus clouds that lofted soot into the lower stratosphere in August 2017 over British Columbia, Canada. Over the succeeding weeks, the soot was lofted many more kilometers, as observed by satellites, because it was heated by the Sun (Yu et al., 2019). This fire is direct evidence of the self-lofting process Robock et al. (2007) and Mills et al. (2014) modeled before. It also shows that precipitation in the cloud still allowed massive amounts of smoke to reach the stratosphere.

__Reisner 2019__ replied that:

The latent heat release may or may not lead to enhanced smoke lofting depending on the complex microphysical and mesoscale processes. Robock et al. (2019) cite wildfires in extremely dry conditions that prevent precipitation formation and do not model the process. Precipitation scavenging of BC can be much higher than is currently assumed (20%) (Yu 2018). We and the community agree that research is needed to quantify the role latent heat plays in BC movement and washout.

Meanwhile, __Tarshish 2022__ concluded:

Direct numerical and large-eddy simulations indicate that dry firestorm plumes possess temperature anomalies that are less than the requirements for stratospheric ascent by a factor of two or more. In contrast, moist firestorm plumes are shown to reach the stratosphere by tapping into the abundant latent heat present in a moist environment. Latent heating is found to be essential to plume rise, raising doubts about the applicability of past work [namely, __Reisner 2018__ and __Reisner 2019__] that neglected moisture.

Nonetheless, as hinted by __Reisner 2019__, moisture not only helps the emitted soot reach the stratosphere, but it also contributes to it being rained out. This latter process is not modelled in __Tarshish 2022__:

A limitation of the theory and simulations presented here is the absence of soot microphysics. Soot aerosols provide cloud condensation nuclei that may alter the drop size distribution and impact auto-conversion. This aerosol effect is expected to invigorate convection (Lee et al., 2020), lofting the plume higher. Coupling soot to microphysics, however, also enables soot to rain out, which could remove much of the soot from the rising plume as suggested in Penner et al. (1986). Given the essential role of moisture in lofting firestorm plumes we identified here, future research should investigate how these second-order microphysical effects impact firestorm soot transport. Another aspect not addressed here and deserving of future study is the radiative lofting of plumes, which has been observed to substantially lift wildfire plume soot for months after the fire (Yu et al., 2019).

### Available fuel

**Available fuel for counterforce**

For counterforce, I calculated an available fuel per burned area of 3.07 g/cm^2 (= (11*10^6*2.06*10^3 + 8*10^9)*10^(-5*2)). I got this from the 1st equation in Box 1 of __Toon 2008__:

- The equation respects a linear regression of the fuel load (available fuel per area) on population density, relying on 1 data point for __San Jose__, 5 for the United States, and 3 for __Hamburg__ (see Fig. 9 of __Toon 2007__).
  - The slope is 11 Mg/person.
  - The fuel load for null population density is 8 Gg/km^2.

- I used a population density of 2.06 k person/km^2 (= ((0.492*1.69 + 0.675*2.90 + 0.921*2.21 + 0.492*2.02 + 0.860*1.47)/(0.492 + 0.675 + 0.921 + 0.492 + 0.860))*10^3). This is a __weighted mean__ with:
  - Weights proportional to the counterforce nuclear detonations in each of 5 countries as a fraction of the total. I guess the vast majority of offensive nuclear detonations will be (launched) by these countries. I obtained the weights supposing the offensive nuclear detonations by each country are the same, and using Metaculus’ median community __predictions__ on 30 August 2023 for the fraction of countervalue offensive nuclear detonations before 2050 by these countries^{[30]}. I got the following weights^{[31]}:
    - 49.2 % (= (1 - 0.0154)/2) for China, considering it is targeted by half of the counterforce nuclear detonations by the United States.
    - 67.5 % (= 1 - 0.325) for India, considering it is targeted by all of the counterforce nuclear detonations by Pakistan.
    - 92.1 % (= 1 - 0.079) for Pakistan, considering it is targeted by all of the counterforce nuclear detonations by India.
    - 49.2 % (= (1 - 0.0154)/2) for Russia, considering it is targeted by half of the counterforce nuclear detonations by the United States.
    - 86.0 % (= 1 - 0.118 - 0.0218) for the United States, considering it is targeted by all of the counterforce nuclear detonations by China and Russia.
  - The following urban population densities:
    - China, 1.69 k person/km^2 (= 883*10^6/(522*10^3)), respecting an urban population in 2021 of __883 M__, and an urban land area in 2015^{[32]} of __522 k km^2__.
    - India, 2.90 k person/km^2 (= 498*10^6/(172*10^3)), respecting an urban population in 2021 of __498 M__, and an urban land area in 2015 of __172 k km^2__.
    - Pakistan, 2.21 k person/km^2 (= 86.6*10^6/(39.1*10^3)), respecting an urban population in 2021 of __86.6 M__, and an urban land area in 2015 of __39.1 k km^2__.
    - Russia, 2.02 k person/km^2 (= 107*10^6/(52.9*10^3)), respecting an urban population in 2021 of __107 M__, and an urban land area in 2015 of __52.9 k km^2__.
    - The United States, 1.47 k person/km^2 (= 275*10^6/(187*10^3)), respecting an urban population in 2021 of __275 M__, and an urban land area in 2015 of __187 k km^2__.
- Relying on the urban population density presupposes the burned area by counterforce nuclear detonations is uniformly distributed across urban land area, which I guess makes sense a priori.
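The counterforce fuel load can be checked with a short Python sketch (weights and densities as in the bullets above; the small difference from 3.07 g/cm^2 comes from rounding the mean density to 2.06 k person/km^2 first):

```python
# Counterforce available fuel per area from the 1st equation in Box 1 of Toon 2008.
weights = {"China": 0.492, "India": 0.675, "Pakistan": 0.921,
           "Russia": 0.492, "United States": 0.860}
densities_k = {"China": 1.69, "India": 2.90, "Pakistan": 2.21,
               "Russia": 2.02, "United States": 1.47}  # k person/km^2

mean_density = (sum(weights[c] * densities_k[c] for c in weights)
                / sum(weights.values())) * 1e3         # about 2.06 k person/km^2

fuel_g_per_km2 = 11e6 * mean_density + 8e9  # 11 Mg/person slope, 8 Gg/km^2 intercept
fuel_g_per_cm2 = fuel_g_per_km2 * 1e-10     # 1 km^2 = 10^10 cm^2
print(f"{fuel_g_per_cm2:.2f} g/cm^2")       # close to the 3.07 g/cm^2 in the text
```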

**Available fuel for countervalue**

For countervalue, I considered an available fuel per burned area of 21.1 g/cm^2 (= (0.00770*34.6 + 0.325*27.9 + 0.079*13.9 + 0.00770*13.0 + 0.140*8.95)/(0.00770 + 0.325 + 0.079 + 0.00770 + 0.140)). This is a __weighted mean__ with:

- Weights proportional to the countervalue nuclear detonations in each of the aforementioned 5 countries as a fraction of the total. Once again, I obtained the weights supposing the offensive nuclear detonations by each country are the same, and using Metaculus’ median community __predictions__ on 30 August 2023 for the fraction of countervalue offensive nuclear detonations before 2050 by these countries. I got the following weights^{[33]}:
  - 0.770 % (= 0.0154/2) for China, considering it is targeted by half of the countervalue nuclear detonations by the United States.
  - 32.5 % for India, considering it is targeted by all of the countervalue nuclear detonations by Pakistan.
  - 7.9 % for Pakistan, considering it is targeted by all of the countervalue nuclear detonations by India.
  - 0.770 % (= 0.0154/2) for Russia, considering it is targeted by half of the countervalue nuclear detonations by the United States.
  - 14.0 % (= 0.118 + 0.0218) for the United States, considering it is targeted by all of the countervalue nuclear detonations by China and Russia.

- Available fuel per burned area adjusting the values in Table 13 of __Toon 2007__ for population density and burned area:
  - __Toon 2007__ used population density data from 2003^{[34]}, but it has generally been increasing due to population growth and urbanisation, thus increasing fuel load. So I multiplied the values in Table 13 by the ratio between the fuel loads computed with the 1st equation in Box 1 of __Toon 2008__ (see previous section) for urban population densities:
    - In 2023 (numerator), given by the ones I determined in the previous section.
    - In 2003 (denominator), dividing __urban population__ in 2003 by urban land area in 2000^{[35]}. For:
      - China, 1.78 k person/km^2 (= 776*10^6/(437*10^3)), respecting an urban population of __776 M__, and an urban land area of __437 k km^2__.
      - India, 2.53 k person/km^2 (= 319*10^6/(126*10^3)), respecting an urban population of __319 M__, and an urban land area of __126 k km^2__.
      - Pakistan, 3.06 k person/km^2 (= 56.0*10^6/(18.3*10^3)), respecting an urban population of __56.0 M__, and an urban land area of __18.3 k km^2__.
      - Russia, 2.00 k person/km^2 (= 106*10^6/(52.9*10^3)), respecting an urban population of __106 M__, and an urban land area of __52.9 k km^2__.
      - The United States, 1.39 k person/km^2 (= 231*10^6/(166*10^3)), respecting an urban population of __231 M__, and an urban land area of __166 k km^2__.

- In addition, __Toon 2007__ refers to a yield per detonation of 15 kt, and a burned area of 13 km^2^{[36]}, whose radius (R) is 2.03 km (= (13/3.14)^0.5). I assumed burned area is proportional to yield, so it is 164 km^2 (= 13*189/15) for my yield of __189 kt__, and the respective radius is 7.23 km (= (164/3.14)^0.5). Since population density decreases as distance to the city centre increases, the fuel load has to be adjusted downwards. As I believe is usually the case in __urban economics__, I presumed population density (D) decreases exponentially with the distance to the city centre (d) according to a certain __density gradient__ (g), such that D = D_0*e^(-g*d), where D_0 is the population density at the city centre^{[37]}. Consequently, the mean population density in a circle of radius R centred at the city centre equals D_0*2/(g*R^2)*(1/g - e^(-g*R)*(R + 1/g))^{[38]}. I set the density gradient to 0.1, which is the mean of those of the 47 cities analysed in __Bertaud 2003__ (see pp. 96 and 97 of the PDF). As a result, the population densities for the smaller and larger radii of 2.03 and 7.23 km are 0.874 (= 2/0.1/2.03^2*(1/0.1 - e^(-0.1*2.03)*(2.03 + 1/0.1))) and 0.627 (= 2/0.1/7.23^2*(1/0.1 - e^(-0.1*7.23)*(7.23 + 1/0.1))) times that at the city centre. So I also multiplied the values in Table 13 by 0.717 (= 0.627/0.874).
- I ended up with the following fuel loads:
- 34.6 g/cm^2 (= 50*0.964*0.717) for China, updating the original 50 g/cm^2 by a factor of 0.964 (= (11*1.69 + 8)/(11*1.78 + 8)) to account for population growth and urbanisation, and 0.717 to correct for different burned area.
- 27.9 g/cm^2 (= 35*1.11*0.717) for India, updating the original 35 g/cm^2 by factors of 1.11 (= (11*2.90 + 8)/(11*2.53 + 8)) and 0.717.
- 13.9 g/cm^2 (= 25*0.776*0.717) for Pakistan, updating the original 25 g/cm^2 by factors of 0.776 (= (11*2.21 + 8)/(11*3.06 + 8)) and 0.717.
- 13.0 g/cm^2 (= 18*1.01*0.717) for Russia, updating the original 18 g/cm^2 by factors of 1.01 (= (11*2.02 + 8)/(11*2.00 + 8)) and 0.717.
- 8.95 g/cm^2 (= 12*1.04*0.717) for the United States, updating the original 12 g/cm^2 by factors of 1.04 (= (11*1.47 + 8)/(11*1.39 + 8)) and 0.717.
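The burned-area adjustment of 0.717 follows from averaging the exponential density profile over a disk; a minimal Python check (the 0.1 density gradient is per km, as in the text):

```python
import math

# Mean population density within radius R (km) of the city centre, as a fraction
# of the central density, for an exponential profile with density gradient g (1/km).
def mean_density_fraction(R, g=0.1):
    return 2 / (g * R**2) * (1 / g - math.exp(-g * R) * (R + 1 / g))

r_small = (13 / 3.14) ** 0.5    # 2.03 km radius of a 13 km^2 burned area (15 kt)
r_large = (164 / 3.14) ** 0.5   # 7.23 km radius of a 164 km^2 burned area (189 kt)

adjustment = mean_density_fraction(r_large) / mean_density_fraction(r_small)
print(f"{adjustment:.3f}")  # about 0.717
```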

For context, my available fuel per area for countervalue nuclear detonations is:

- 1.32 (= 21.1/16) times the 16 g/cm^2 used in the “base case simulations” of __Wagman 2020__.
- 7.18 (= 21.1/2.94) times the 2.94 g/cm^2 I think is implied by __Toon 2008__.
- 20.1 (= 21.1/1.05) and 16.1 (= 21.1/1.31) times the 1.05 and 1.31 g/cm^2 related to the rubble and no-rubble cases of __Reisner 2018__ (see Table 1 of __Reisner 2019__).

## Famine deaths due to the climatic effects

I expect 392 M deaths (= 0.0443*8.86*10^9) following a nuclear war which resulted in __22.1 Tg__ of soot being injected into the stratosphere. I found this multiplying:

- 4.43 % famine death rate due to the climatic effects.
- 8.86 billion people.

I explain these estimates in the next sections.

### Famine death rate due to the climatic effects

**Defining large nuclear war**

I agree with Christian __that__ deaths in a nuclear war increase superlinearly with offensive nuclear detonations. __Like__ Luisa, I guess famine deaths due to the climatic effects increase __logistically__ with the soot injected into the stratosphere. For simplicity, I approximate the logistic function as a piecewise linear function which is 0 for low levels of soot.

The minimum offensive nuclear detonations based on which I define a large nuclear war marks the end of the region for which famine deaths due to the climatic effects are 0. From __Fig. 5b__ of __Xia 2022__, for the case in which there is no international food trade, all livestock grain is fed to humans, and there is no household food waste (top line), adjusted to include international food trade without equitable distribution by dividing by the 94.8 % food support “when food production does not change [0 Tg] but international trade is stopped”, there are no deaths for 10.5 Tg^{[39]}. I guess the societal response will have an effect equivalent to assuming international food trade, all livestock grain being fed to humans, and no household food waste (see next section), so I supposed the famine deaths due to the climatic effects are negligible up to the climate change induced by 10.5 Tg of soot being injected into the stratosphere in __Xia 2022__.

I believe __Xia 2022__ overestimates the duration of the climatic effects, so I considered the linear part of the logistic function starts at 11.3 Tg (instead of 10.5 Tg):

- My estimate is that the __e-folding__ time of stratospheric soot is 4.72 years (= (2*(1.4 + 2.3)/2 + 6 + 6.5 + (4.0 + 4.6)/2 + (8.4 + 8.7)/2 + 4)/(2 + 5)). This is a weighted mean of the estimates provided in Table 3 of __Wagman 2020__ for 6 different climate models^{[21]}, and a stratospheric soot injection of 5 Tg^{[40]}. For the cases in which an interval was provided, I used the mean between the lower and upper bound^{[21]}. I attributed 2 times as much weight to the “EAMv1” model introduced in that study as to each of the other models, because it sounds like it should be expected to be more accurate. “In this study, the global climate forcing and response is predicted by combining two atmospheric models, which together span the micro-scale to global scale processes involved”.
- In __Xia 2022__, “the atmospheric model is the Whole Atmosphere Community Climate Model version 4 [WACCM4]”, whose e-folding time is 8.55 years^{[21]} (= (8.4 + 8.7)/2) according to Table 3 of __Wagman 2020__.
- If stratospheric soot decays exponentially with an e-folding time τ, the mean stratospheric soot over a time T, as a fraction of the initial soot, is (τ/T)*(1 - e^(-T/τ))^{[41]}.
- In __Xia 2022__, “in all the simulations, the soot is arbitrarily injected during the week starting on May 15 of Year 1”, and 2010 is the baseline year. So the time from this week until the end of year 2 is T = 1.62 years (= (7.5 + 12)/12).
- For the e-folding time of __Xia 2022__ of 8.55 years, the mean stratospheric soot over the above time, as a fraction of the initial stratospheric soot, is 91.1 % (= 8.55/1.62*(1 - e^(-1.62/8.55))). So an initial stratospheric soot of 10.5 Tg results in a mean stratospheric soot over the above time of 9.57 Tg (= 0.911*10.5).
- For my e-folding time of 4.72 years, the mean stratospheric soot over the above time, as a fraction of the initial stratospheric soot, is 84.6 % (= 4.72/1.62*(1 - e^(-1.62/4.72))). So 11.3 Tg (= 9.57/0.846) of soot have to be injected into the stratosphere to induce the climate change associated with 10.5 Tg in __Xia 2022__.
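The adjustment above can be reproduced with a short Python sketch (the function and variable names are mine):

```python
import math

def mean_soot_fraction(tau: float, t: float) -> float:
    """Mean stratospheric soot over a time t, as a fraction of the initial
    injection, assuming exponential decay with e-folding time tau."""
    return tau / t * (1 - math.exp(-t / tau))

T = 1.62  # years from the injection week until the end of year 2

frac_xia = mean_soot_fraction(8.55, T)   # Xia 2022 (WACCM4), about 0.911
frac_mine = mean_soot_fraction(4.72, T)  # my weighted-mean estimate, about 0.846

# Injection needed under my faster decay to match the mean soot implied
# by Xia 2022's 10.5 Tg no-deaths threshold.
threshold_tg = 10.5 * frac_xia / frac_mine  # about 11.3 Tg
```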

The similarity between the 2 soot injections just above means my shorter e-folding time makes only a minor difference. What matters is the severity of the worst initial years, and my e-folding time is still long enough for these to be roughly as bad.

I estimated __0.0491 Tg__ of soot injected into the stratosphere per countervalue nuclear detonation, so I expect an injection of 11.3 Tg requires 230 (= 11.3/0.0491) countervalue nuclear detonations. Since I only expect __21.5 %__ of offensive nuclear detonations to be countervalue, I defined a large nuclear war as having at least 1.07 k (= 230/0.215) offensive nuclear detonations, and assume no famine deaths due to the climatic effects for less than that.
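The threshold just defined amounts to 2 divisions (using my 0.0491 Tg per countervalue detonation and 21.5 % countervalue share):

```python
soot_threshold_tg = 11.3            # Tg, start of the linear part
soot_per_countervalue_det = 0.0491  # Tg per countervalue detonation
countervalue_fraction = 0.215       # share of offensive detonations

countervalue_dets = soot_threshold_tg / soot_per_countervalue_det  # about 230
offensive_dets = countervalue_dets / countervalue_fraction         # about 1.07 k
```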

David thinks having famine deaths due to the climatic effects starting to increase linearly after an injection of soot into the stratosphere of 0 Tg is much more accurate than after 11.3 Tg, because there is already significant famine now. The deaths from nutritional deficiencies and protein-energy malnutrition __were__ 252 k and 212 k in 2019, and I suspect the real death toll is about 1 order of magnitude higher^{[42]}. Nevertheless, I am not trying to estimate all famine deaths. I am only attempting to arrive at the famine deaths due to the climatic effects, not those resulting directly or indirectly from infrastructure destruction. I expect this will cause substantial disruptions to international food trade. As Matt Boyd __commented__:

Much of the catastrophic risk from nuclear war may be in the more than likely catastrophic trade disruptions, which alone could lead to famines, given that nearly 2/3 of countries are net food importers, and almost no one makes their own liquid fuel to run their agricultural equipment.

Relatedly, from __Xia 2022__:

Impacts in warring nations are likely to be dominated by local problems, such as infrastructure destruction, radioactive contamination and supply chain disruptions, so the results here apply only to indirect effects from soot injection in remote locations.

**Famine death rate due to the climatic effects of large nuclear war**

I would say the famine death rate due to the climatic effects of a large nuclear war would be 4.43 % (= 1 - (0.993 + (0.902 - 0.993)/(24.6 - 14.6)*(18.7 - 14.6))). I calculated this:

- For __22.1 Tg__ of soot injected into the stratosphere, i.e. a mean of 18.7 Tg (= __0.846__*22.1) until the end of year 2.
- Supposing the famine death rate due to the climatic effects equals 1 minus the fraction of people with food support (1,911 kcal/person/d), which is plotted in __Fig. 5b__ of __Xia 2022__.
- Getting the fraction of people with food support by linearly interpolating between the scenarios of __Fig. 5b__ of __Xia 2022__ in which there is no international food trade, all livestock grain is fed to humans, and there is no household food waste (top line), adjusted to include international food trade without equitable distribution by dividing by the 94.8 % food support “when food production does not change [0 Tg] but international trade is stopped”^{[39]}:
  - 99.3 % (= 0.941/0.948) for an injection of soot into the stratosphere of 16 Tg, which corresponds to a mean of 14.6 Tg (= __0.911__*16) until the end of year 2.
  - 90.2 % (= 0.855/0.948) for an injection of soot into the stratosphere of 27 Tg, which corresponds to a mean of 24.6 Tg (= __0.911__*27) until the end of year 2.
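The interpolation can be sketched as follows (using unrounded intermediates, which land slightly above the 4.43 % quoted in the text):

```python
def interpolate(x, x0, y0, x1, y1):
    """Linear interpolation between (x0, y0) and (x1, y1)."""
    return y0 + (y1 - y0) / (x1 - x0) * (x - x0)

mean_soot = 0.846 * 22.1  # mean soot until the end of year 2, about 18.7 Tg

# Fractions of people with food support (Fig. 5b of Xia 2022, top line),
# divided by the 94.8 % baseline, against mean soot until the end of year 2.
support = interpolate(mean_soot,
                      0.911 * 16, 0.941 / 0.948,  # 16 Tg scenario
                      0.911 * 27, 0.855 / 0.948)  # 27 Tg scenario

death_rate = 1 - support  # about 4.4 %
```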

Some reasons why my famine death rate due to the climatic effects may be too:

- Low:
  - There would be disruptions to international food trade. I only assumed there would not be in order to compensate for other factors, and because I guess they would mostly be a direct or indirect consequence of infrastructure destruction, not the climatic effects I am interested in. __Xia 2022__ assumes there is no disruption of national trade, nor of international non-food trade. This includes important inputs to agriculture, such as agricultural machinery, fertilisers, fuel, pesticides, and seeds.
  - Not all livestock grain would be fed to humans. I only assumed it would in order to compensate for other factors.
  - There would be some household food waste, but arguably not much. I also assumed there would be none in order to compensate for other factors.
  - Some food would go to people who would die. I assumed it would not (by getting the famine death rate due to the climatic effects from 1 minus the fraction of people with food support), for simplicity, and in order to compensate for other factors.
  - Lower consumption of healthy food. “While this [__Xia 2022__’s] analysis focuses on calories, humans would also need proteins and micronutrients to survive the ensuing years of food deficiency (we estimate the impact on protein supply in Supplementary Fig. 3)”. On this topic, you can check __Pham 2022__.

- High:
  - __Foreign aid__ to the more affected countries, including international food assistance.
  - __Increase__ in __meat production__ per capita from 2010, which is the reference year in __Xia 2022__, to 2037^{[43]}.
  - Increase in __real GDP per capita__ from 2010 to 2037 (see graph below).
  - In __Xia 2022__:
    - “Scenarios assume that all stored food is consumed in Year 1”, i.e. no __rationing__.
    - “We do not consider farm-management adaptations such as changes in cultivar selection, switching to more cold-tolerating crops or greenhouses31 and alternative food sources such as mushrooms, seaweed, methane single cell protein, insects32, hydrogen single cell protein33 and cellulosic sugar34”.
    - “Large-scale use of alternative foods, requiring little-to-no light to grow in a cold environment38, has not been considered but could be a lifesaving source of emergency food if such production systems were operational”.
    - “Byproducts of biofuel have been added to livestock feed and waste27. Therefore, we add only the calories from the final product of biofuel in our calculations”. However, it would have been better to redirect to humans the crops used to produce biofuels.
  - The minimum calorie supply is 1,911 kcal/person/d. In reality, lower values are possible with an apparently tiny famine death rate due to the climatic effects from malnutrition:
    - The calorie supply in the __Central African Republic__ (CAR) in 2015 __was__ 1,729 kcal/person/d.
    - The disease burden from nutritional deficiencies in that year __was__ 143 k __DALY__, which corresponds to 2.80 k deaths (= 143*10^3/51) based on the 51 DALY/life implied by GiveWell’s moral weights^{[44]}.
    - The above number of deaths amounts to 0.0581 % (= 2.80*10^3/(4.82*10^6)) of CAR’s __population__ in 2015.
  - Lower consumption of unhealthy food.

I stipulate the above roughly cancel out, although I am not so confident. I think high income countries without significant infrastructure destruction would respond particularly well. Historically, __famines__ have only affected countries with low real __GDP__ per capita.

On the topic of lower consumption of healthy and unhealthy food, __Alexander 2023__ studies the effect of energy and export restrictions on deaths due to changes in the consumption of red meat, fruits and vegetables, and in the fraction of the population which is underweight, overweight and obese. Lower red meat consumption, and fewer people being overweight and obese, decrease deaths. Lower consumption of fruits and vegetables, and more people being underweight, increase deaths. The results of the study are below.

The figure suggests the net effect corresponds to an increase in deaths. I am confident this would be the case for Sub-Saharan Africa, but not so much for other regions. The fraction of calories coming from animals __increases__ with GDP per capita, so cheaper diets have a lower fraction of calories coming from meat, and the relative reduction in meat consumption would be higher than that in fruits and vegetables. I think __Alexander 2023__ takes this into account:

As prices increase, the model represents a consumption shift away from ‘luxury’ goods such as meat, fruit, and vegetables back towards staple crops, as well as lower consumption overall.

__Alexander 2023__ still concludes higher prices would lead to more deaths, but I wonder whether __rationing__ efforts would ensure sufficient consumption of fruits and vegetables. I sense the deaths owing to decreased consumption of fruits and vegetables are overestimated in the figure above, but I have barely looked into the question.

### Population

I considered a global population of 8.86 __G__ (= (8.61 + (9.59 - 8.61)/(2052 - 2032)*(2037 - 2032))*10^9):

- For 2037 (= (2024 + 2050)/2), which is midway from now until 2050.
- Linearly __interpolating__ between Metaculus’ median community __predictions__ on 3 September 2023 for:
  - 2032, 8.61 G.
  - 2052, 9.59 G.
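A minimal sketch of this interpolation:

```python
def interpolate(x, x0, y0, x1, y1):
    """Linear interpolation between (x0, y0) and (x1, y1)."""
    return y0 + (y1 - y0) / (x1 - x0) * (x - x0)

# Metaculus' median community predictions on 3 September 2023.
population_2037 = interpolate(2037, 2032, 8.61e9, 2052, 9.59e9)  # about 8.86 G
```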

## Uncertainty

To obtain a distribution for the famine death rate due to the climatic effects of a large nuclear war, without running a Monte Carlo simulation, I assumed a __beta distribution__ with a ratio between the 95th and 5th percentiles equal to 702 (= e^((ln(3.70)^2 + ln(4.39)^2 + ln(68.3)^2 + ln(100)^2)^0.5)). This is the result of supposing the following follow independent __lognormal distributions__ with ratios between the 95th and 5th percentile equal to^{[45]}:

- 3.70 (= 4.11*10^3/(1.11*10^3)), which is the ratio between my 95th and 5th percentile __offensive nuclear detonations__ for a large nuclear war.
- 4.39 (= 290/66.1), which is the ratio between the maximum and minimum mean __yield__ of the United States nuclear warheads in 2023 for a large nuclear war.
- 68.3 (= 0.00215/(3.15*10^(-5))), which is the ratio between the soot injected into the stratosphere per countervalue yield __I inferred__ for (not directly retrieved from) __Reisner 2018__ and __Reisner 2019__, and __Toon 2007__ and __Toon 2008__.
- 100, which is my out of thin air guess for the ratio between the 95th and 5th percentile famine death rate due to the climatic effects for an actual (not expected) injection of soot into the stratosphere of __22.1 Tg__. A key contributing factor to such a high ratio is __uncertainty in societal response__. If I changed the ratio to:
  - 10 (10 % as large), the overall ratio would become 181, i.e. 25.8 % (= 181/702) as large.
  - 1 k (10 times as large), the overall ratio would become 4.16 k, i.e. 5.93 (= 4.16*10^3/702) times as large.
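Since the log of a product of independent lognormals is a sum of independent normals, the log-ratios add in quadrature. A sketch of the combination (the function name is mine):

```python
import math

def combine_ratios(ratios):
    """95th-to-5th percentile ratio of a product of independent lognormal
    factors with the given individual ratios: log-ratios add in quadrature."""
    return math.exp(math.sqrt(sum(math.log(r) ** 2 for r in ratios)))

overall = combine_ratios([3.70, 4.39, 68.3, 100])  # about 702
lower = combine_ratios([3.70, 4.39, 68.3, 10])     # about 181
higher = combine_ratios([3.70, 4.39, 68.3, 1e3])   # about 4.16 k
```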

Simpler approaches to determine the ratio would lead to significantly different results:

- The maximum of the above ratios is 14.2 % (= 100/702) of my ratio. Using the maximum would only be fine if the factors were more like normal distributions.
- The product of the above ratios is 158 (= 3.70*4.39*68.3*100/702) times as large as mine. Using this product would only be correct if all the factors were perfectly correlated.

Ideally, I would have run a __Monte Carlo simulation__ with my best guess distributions, instead of assuming just lognormals. Regardless, I would have used independent distributions for simplicity, so the results would arguably be similar.

For an expected famine death rate due to the climatic effects of __4.43 %__, a beta distribution with 95th percentile 702 times the 5th percentile __has__ alpha and beta parameters equal to 0.522 and 11.3. The respective __CDF__ __is__ below. The horizontal axis is the famine death rate due to the climatic effects, and the vertical one the probability of less than a certain death rate. The 5th and 95th percentile famine death rate due to the climatic effects __are__ 0.0233 % and 16.4 %, which correspond to 2.06 M (= 2.33*10^-4*8.86*10^9) and 1.45 G (= 0.164*8.86*10^9) deaths given at least one offensive nuclear detonation before 2050.

Given my __3.30 %__ probability of a large nuclear war before 2050, there is a 96.7 % (= 1 - 0.0330) chance of negligible famine deaths due to the climatic effects before then, thus my 5th percentile deaths before 2050 are 0 (0.05 < 0.967). My 95th percentile respects the 84.4th percentile (= 1 - (1 - 0.95)/0.32) famine death rate due to the climatic effects given at least one offensive nuclear detonation before 2050^{[46]}, which is 9.06 %^{[47]}, equivalent to 803 M (= 0.0906*8.86*10^9) deaths.

Summarising, since there are 26 years (= 2050 - 2024) before 2050, my best guess for the annual famine deaths due to the climatic effects of nuclear war before then is 496 k (= __12.9*10^6__/26), and my 5th and 95th percentile are 0 and 30.9 M (= 803*10^6/26). My 95th percentile is 62.3 (= 30.9*10^6/(496*10^3)) times my best guess, which means there is lots of uncertainty.

For context, my best guess for the annual famine deaths due to the climatic effects is similar to the 415 k deaths __caused__ by homicides in 2019, and my 95th percentile is close to the 28.6 M (= (18.56 + 10.08)*10^6) caused by cardiovascular diseases and cancers in 2019.

Bear in mind my estimates only refer to the famine deaths due to the climatic effects. I exclude famine deaths resulting directly or indirectly from infrastructure destruction, and heat mortality.

# Cost-effectiveness of activities related to resilient food solutions

I calculated the expected cost-effectiveness of activities related to __resilient food__ solutions, at decreasing famine deaths due to the climatic effects of nuclear war, from the ratio between^{[48]}:

- Expected lives saved, given by multiplying:
  - Effectiveness, the relative decrease in deaths.
  - Horizon of effectiveness, the time during which the above applies.
  - Age adjustment factor, the ratio between the years of healthy life which the mean person saved would live, and the 51 DALY/life implied by GiveWell’s moral weights^{[44]}.
  - Annual famine deaths due to the climatic effects of nuclear war before 2050, __496 k__.
- __Reciprocal__ of the expected reciprocal of the cost.

I arrived at the following values:

- For planning, 0.0341 life/$ (= 0.0338*11.3*0.825*496*10^3/(4.59*10^6)), i.e. 29.3 $/life (= 1/0.0341).
- For research, 0.0321 life/$ (= 0.113*22.5*0.825*496*10^3/(32.4*10^6)), i.e. 31.2 $/life (= 1/0.0321).
- For planning, research and development, 0.0349 life/$ (= 0.263*22.5*0.825*496*10^3/(69.4*10^6)), i.e. 28.7 $/life (= 1/0.0349).
- For planning, research, development and training, 1.04*10^-4 life/$ (= (0.500*10 + 0.263*12.5)*0.825*496*10^3/(32.5*10^9)), i.e. 9.62 k$/life (= 1/(1.04*10^-4)).
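The 4 estimates can be reproduced as follows (variable names are mine):

```python
AGE_ADJUSTMENT = 0.825  # ratio between healthy years saved and 51 DALY/life
ANNUAL_DEATHS = 496e3   # annual famine deaths due to the climatic effects

def lives_per_dollar(effectiveness, horizon_years, cost_dollars):
    """Expected lives saved per $ spent on an activity."""
    return effectiveness * horizon_years * AGE_ADJUSTMENT * ANNUAL_DEATHS / cost_dollars

planning = lives_per_dollar(0.0338, 11.3, 4.59e6)  # about 0.0341 life/$
research = lives_per_dollar(0.113, 22.5, 32.4e6)   # about 0.0321 life/$
prd = lives_per_dollar(0.263, 22.5, 69.4e6)        # about 0.0349 life/$
# Training runs for 10 years; the other 3 activities cover the remaining
# 12.5 years of their 22.5 year horizon.
prdt = (0.500 * 10 + 0.263 * 12.5) * AGE_ADJUSTMENT * ANNUAL_DEATHS / 32.5e9
# about 1.04*10^-4 life/$, i.e. roughly 9.62 k$/life
```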

The effectiveness, horizon of effectiveness, age adjustment factor, and cost are defined below.

Decreasing famine deaths due to the climatic effects would arguably shorten the recovery period, thus increasing cumulative economic output. I have not analysed this indirect effect, hence underestimating cost-effectiveness, for consistency with neartermist cost-effectiveness analyses. These typically focus on the benefits to the people who were saved, not on how they change economic growth via their children.

## Effectiveness

Based on __Denkenberger 2016__, I set the effectiveness to:

- For planning, 3.38 % (= 0.0376 - 0.00376). This is the difference between the __means__ of lognormal distributions with 2.5th and 97.5th percentile equal to:
  - 1 % and 10 %. “A lognormal distribution is assumed with a 95 % credible interval of 1 to 10 % chance of feeding everyone [who would otherwise starve] with alternate foods in this case”.
  - 0.1 % and 1 %. “A lognormal probability distribution is assumed with a 95 % credible interval of 0.1–1 % chance of alternate foods working as planned with current preparation”.
- For research, 11.3 %. This is the __mean__ of a lognormal distribution with 2.5th and 97.5th percentile equal to 3 % and 30 %. “A lognormal distribution with a 95 % credible interval of 3–30 % chance of feeding everyone with alternate foods is assumed with both a plan and experiments”.
- For planning, research and development, 26.3 %. This is the __mean__ of a lognormal distribution with 2.5th and 97.5th percentile equal to 7 % and 70 %. “A lognormal distribution is assumed with a 95 % credible interval of 7–70 % chance of feeding everyone with alternate foods with a plan, research, and development approach”.
- For planning, research, development and training, 50.0 % (= 2/(2 + 2)). This is the mean of a __beta distribution__ with alpha and beta parameters of 2. “A beta distribution (to avoid truncation) is assumed with a 95 % credible interval of 9–90 % chance of feeding everyone with alternate foods with a plan, research, development, and training”. “Beta parameters: X = 2, Y = 2, minimum = 0, maximum = 1”.
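The lognormal means can be recovered from the 2.5th and 97.5th percentiles, using the fact that the mean of a lognormal is exp(mu + sigma^2/2) (a sketch; the function name is mine):

```python
import math

Z_975 = 1.959964  # standard normal score of the 97.5th percentile

def lognormal_mean(p025, p975):
    """Mean of a lognormal distribution with 2.5th and 97.5th percentiles
    p025 and p975."""
    mu = (math.log(p025) + math.log(p975)) / 2
    sigma = (math.log(p975) - math.log(p025)) / (2 * Z_975)
    return math.exp(mu + sigma ** 2 / 2)

planning = lognormal_mean(0.01, 0.10) - lognormal_mean(0.001, 0.01)  # about 3.38 %
research = lognormal_mean(0.03, 0.30)  # about 11.3 %
prd = lognormal_mean(0.07, 0.70)       # about 26.3 %
```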

__Denkenberger 2016__ truncates the difference between the 2 lognormals of the 1st bullet, and the lognormals of the 2nd and 3rd, at 1 % (and David thinks at 100 % too). For simplicity, I used the means of non-truncated lognormals, but I do not think this matters.

## Horizon of effectiveness

Based on __Denkenberger 2016__, I assumed the horizon of effectiveness to be:

- For planning, 11.3 years. This is the __mean__ of a lognormal distribution with 2.5th and 97.5th percentile equal to 3 and 30 years. “The time horizon of the effectiveness of the plan is assumed to be lognormally distributed and have a 95 % credible interval of 3–30 years”.
- For research, 22.5 years. This is the __mean__ of a lognormal distribution with 2.5th and 97.5th percentile equal to 6 and 60 years. “Research is generally longer-lived than planning, so the time horizon of the effectiveness of the plan [actually, research^{[49]}] is estimated to be lognormally distributed and have a 95 % credible interval of 6 to 60 years”.
- For planning, research and development, 22.5 years, like for research. “The same time horizon is used as for research”.
- For planning, research, development and training:
  - 10 years for all together. “In this case, it is assumed that the training is over a specific period of 10 years”.
  - 12.5 years (= 22.5 - 10) for the 1st 3 together, which is the difference between the effectiveness horizons of the 1st 3 and training.

## Age adjustment factor

I estimated an age adjustment factor of 82.5 % (= 42.1/51). I got the 42.1 years (= 48.4*0.869) of healthy life which the mean person saved would live from the product between:

- 48.4 years (= 81.8 - 33.4) of life which the median person saved would live^{[50]}. I determined this from the difference between:
  - 81.8 years (= 75.6 + (78.4 - 75.6)/(15 - 0)*(33.4 - 0)) of life expectancy at the median age in 2037. I got this:
    - Considering the 33.4 years old __median age__ projected for 2037.
    - Linearly extrapolating from the life expectancy in 2037:
      - At birth, 75.6 years.
      - At 15 years old, 78.4 years.
  - The 33.4 years old __median age__ projected for 2037.
- 86.9 % (= 0.8737 + (0.8709 - 0.8737)/(2016 - 1990)*(2037 - 1990)) healthy life expectancy at birth^{[51]}. I computed this:
  - For 2037.
  - Linearly extrapolating the healthy life expectancy at birth as a __fraction__ of the life expectancy at birth:
    - In 1990, 87.37 %.
    - In 2016, 87.09 %.
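The steps above amount to (a sketch with my variable names):

```python
def extrapolate(x, x0, y0, x1, y1):
    """Linear extrapolation from the line through (x0, y0) and (x1, y1)."""
    return y0 + (y1 - y0) / (x1 - x0) * (x - x0)

median_age = 33.4  # projected median age in 2037

# Life expectancy at the median age, from values at birth and at age 15.
life_expectancy = extrapolate(median_age, 0, 75.6, 15, 78.4)  # about 81.8 years
years_left = life_expectancy - median_age                     # about 48.4 years

# Healthy fraction of life expectancy, extrapolated from 1990 and 2016.
healthy_fraction = extrapolate(2037, 1990, 0.8737, 2016, 0.8709)  # about 86.9 %

healthy_years = years_left * healthy_fraction  # about 42.1 years
age_adjustment = healthy_years / 51            # about 82.5 %
```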

For simplicity, I am:

- Stipulating the age distribution of the people who die is the same as the age distribution of the global population in 2037. In reality, I expect there will be more deaths in low income countries. People are younger there, but life expectancy is also lower.
- Neglecting changes in life expectancy resulting from the nuclear war. If this decreases, I would be overestimating cost-effectiveness.

## Cost

Based on __Denkenberger 2016__, I determined the reciprocal of the expected reciprocal of the cost to be:

- For planning, 4.59 M$ (= __1.22__/(0.266*10^-6)). In the calculation here, the numerator is the ratio between the value of 1 $ in 2016 and 2022, and the denominator is the __mean__ of a lognormal distribution with 2.5th and 97.5th percentile equal to 1/30 and 1 M$^-1. “The cost of the plan is assumed to be lognormally distributed and have a 95 % credible interval of USD 1 million–USD 30 million”.
- For research, 32.4 M$ (= __1.22__/(0.0376*10^-6)). The denominator is the __mean__ of a lognormal distribution with 2.5th and 97.5th percentile equal to 1/100 and 1/10 M$^-1. “It is assumed that the cost of the research is lognormally distributed and has a 95 % credible interval of USD 10 million–USD 100 million”.
- For planning, research and development, 69.4 M$ (= (4.59 + 2*32.4)*10^6):
  - 4.59 M$ for planning (see above).
  - 32.4 M$ for research (see above).
  - 32.4 M$ for development, like for research. “The cost of the development is assumed to be lognormally distributed and has a 95 % credible interval of USD 10 million–USD 100 million, the same as for research”.
- For planning, research, development and training, 32.5 G$ (= (69.4*10^-3 + 32.4)*10^9):
  - 69.4 M$ for planning, research and development (see above).
  - 32.4 G$ (= __1.22__/(0.0376*10^-9)) for training. The denominator is the __mean__ of a lognormal distribution with 2.5th and 97.5th percentile equal to 1/100 and 1/10 G$^-1. “The cost of the training is assumed to be lognormally distributed and has a 95 % credible interval of USD 10 billion–USD 100 billion”.
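The reciprocals of the expected reciprocals can be sketched with the same lognormal-mean formula as for the effectiveness (percentile units are M$^-1 or G$^-1 as noted):

```python
import math

Z_975 = 1.959964  # standard normal score of the 97.5th percentile

def lognormal_mean(p025, p975):
    """Mean of a lognormal with 2.5th and 97.5th percentiles p025, p975."""
    mu = (math.log(p025) + math.log(p975)) / 2
    sigma = (math.log(p975) - math.log(p025)) / (2 * Z_975)
    return math.exp(mu + sigma ** 2 / 2)

INFLATION = 1.22  # value of 1 $ in 2016 relative to 2022

planning = INFLATION / lognormal_mean(1 / 30, 1)        # about 4.59 M$
research = INFLATION / lognormal_mean(1 / 100, 1 / 10)  # about 32.4 M$
prd = planning + 2 * research                           # about 69.4 M$
training = INFLATION / lognormal_mean(1 / 100, 1 / 10)  # about 32.4 G$
total = prd / 1e3 + training                            # about 32.5 G$
```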

# Results

The results are summarised in the tables below.

## Probability of nuclear war

Probability of… | Value |
---|---|
At least one offensive nuclear detonation before 2050 | 32 % |
Large nuclear war conditional on the above | 10.3 % |
Large nuclear war before 2050 (product of the above) | 3.30 % |

## Soot injected into the stratosphere

Metric | Expected value |
---|---|
Offensive nuclear detonations in a large nuclear war | 2.09 k |
Yield per countervalue nuclear detonation (kt) | 189 |
Soot injected into the stratosphere per countervalue yield (Tg/kt) | 2.60*10^-4 |
Soot injected into the stratosphere per countervalue nuclear detonation (Tg) | 0.0491 |
Soot injected into the stratosphere in a large nuclear war (Tg; product of the above) | 22.1 |

## Famine deaths due to the climatic effects

Metric | Expected value (5th to 95th percentile) |
---|---|
Famine death rate due to the climatic effects in a large nuclear war | 4.43 % (0.0233 % to 16.4 %) |
Famine deaths due to the climatic effects in a large nuclear war | 392 M (2.06 M to 1.45 G) |
Famine deaths due to the climatic effects of nuclear war before 2050 | 12.9 M (0 to 803 M) |
Annual famine deaths due to the climatic effects of nuclear war before 2050 | 496 k (0 to 30.9 M) |

## Cost-effectiveness of activities related to resilient food solutions

Activity | Cost to save a life ($/life) |
---|---|
Planning | 29.3 |
Research | 31.2 |
Planning, research and development | 28.7 |
Planning, research, development and training | 9.62 k |

# Discussion

## 2 views on soot injected into the stratosphere

My best guess for the soot injected into the stratosphere per countervalue yield __is__ 2.60*10^-4 Tg/kt. I obtained this giving the same weight to results I inferred from Reisner’s and Toon’s views, but they differ by a factor of __68.3__:

- The 3.15*10^-5 Tg/kt I deduced from __Reisner 2018__ and __Reisner 2019__ is 12.1 % (= 3.15*10^-5/(2.60*10^-4)) of my best guess.
- The 0.00215 Tg/kt I deduced from __Toon 2007__ and __Toon 2008__ is 8.27 (= 0.00215/(2.60*10^-4)) times my best guess.

Consequently, if I attributed all weight to the result I deduced from Reisner’s (Toon’s) view, my estimates for the expected mortality would become 0.121 (8.27) times as large. In other words, my best guess is hundreds of millions of famine deaths due to the climatic effects, but tens of millions putting all weight in the result I deduced from Reisner’s view, and billions putting all weight in the one I deduced from Toon’s view. Further research would be helpful to figure out which view should be weighted more heavily.

## Xia 2022

I calculated __392 M__ famine deaths due to the climatic effects of a large nuclear war for:

- An injection of soot into the stratosphere of __22.1 Tg__, i.e. 17.7 M/Tg (= 392*10^6/22.1).
- A total yield of 395 Mt (= __2.09*10^3__*__189*10^3__), i.e. 0.992 M/Mt (= 392*10^6/395).

The __results__ of __Table 1__ of __Xia 2022__, which are in the table below, imply:

- For my injection of soot into the stratosphere, by linear interpolation, 1.21 G (= (0.926 + (1.43 - 0.926)/(27 - 16)*(22.1 - 16))*10^9) people without food at the end of year 2, i.e. 54.8 M/Tg (= 1.21*10^9/22.1).
- For my total yield, by linear extrapolation, 5.01 G (= (2.51 + (5.34 - 2.51)/(440 - 50.0)*(395 - 50.0))*10^9) people without food at the end of year 2, i.e. 12.7 M/Mt (= 5.01*10^9/395).
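These interpolations can be checked with:

```python
def interpolate(x, x0, y0, x1, y1):
    """Linear interpolation (or extrapolation) through (x0, y0) and (x1, y1)."""
    return y0 + (y1 - y0) / (x1 - x0) * (x - x0)

# People without food at the end of year 2 (Table 1 of Xia 2022), in G.
by_soot = interpolate(22.1, 16, 0.926, 27, 1.43)    # about 1.21 G
by_yield = interpolate(395, 50.0, 2.51, 440, 5.34)  # about 5.01 G

per_tg = by_soot * 1e3 / 22.1  # about 55 M/Tg
per_mt = by_yield * 1e3 / 395  # about 12.7 M/Mt
```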

Soot injected into the stratosphere (Tg) | Total yield (Mt) | Number of people without food at the end of Year 2 (M) | Number of people without food at the end of Year 2 per soot injected into the stratosphere (M/Tg) | Number of people without food at the end of Year 2 per total yield (M/Mt) |
---|---|---|---|---|
5 | 1.50 | 255 | 51.0 | 170 |
16 | 3.75 | 926 | 57.9 | 247 |
27 | 12.5 | 1.43 k | 52.8 | 114 |
37 | 25.0 | 2.08 k | 56.2 | 83.2 |
47 | 50.0 | 2.51 k | 53.4 | 50.2 |
150 | 440 | 5.34 k | 35.6 | 12.1 |

So my famine deaths due to the climatic effects of a large nuclear war of 17.7 M/Tg (per soot injected into the stratosphere) and 0.992 M/Mt (per total yield) are 32.3 % (= 17.7/54.8) and 7.81 % (= 0.992/12.7) those of __Xia 2022__, which I therefore deem too pessimistic.

## Luisa’s analyses

I have updated one parameter of Luisa’s nuclear winter Guesstimate __model__ to make its results more comparable with mine. Whereas it considers a “world population, excluding Australia and New Zealand”^{[52]}, of 7.5 G, I have used 8.83 G (= 8.86*10^9*(1 - 0.00391)). I computed this from the product between:

- My estimate for the global population of 8.86 G.
- 1 minus 0.391 % (= (26.0 + 5.12)/(7.95*10^3)), which was the __population__ of Australia and New Zealand in 2022 as a fraction of the global one. This factor is roughly 1, but it matters because Luisa obtains population losses close to 100 % in her worst case scenarios.

The 5 k ordered samples are __here__, and have a mean of 6.69 G deaths. Luisa __estimated__ an annual probability of 0.38 % for a nuclear war between the United States and Russia, i.e. 9.42 % (= 1 - (1 - 0.0038)^(2050 - 2024)) before 2050. Luisa does not explicitly define nuclear war, but my interpretation of the __post__ is that it means at least one offensive nuclear detonation, which Luisa confirmed^{[53]}. Similarly, I take Luisa’s nuclear winter __post__ to be conditional on at least one offensive nuclear detonation in the United States or Russia, which Luisa also confirmed^{[54]}.
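The compounding of Luisa's annual probability can be written as (a sketch, assuming independent years):

```python
ANNUAL_PROB = 0.0038  # Luisa's annual probability of a US-Russia nuclear war

# Probability of at least one such war from 2024 until 2050.
prob_before_2050 = 1 - (1 - ANNUAL_PROB) ** (2050 - 2024)  # about 9.42 %

expected_deaths = 6.69e9 * prob_before_2050  # about 630 M
```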

As a consequence, Luisa’s expected deaths before 2050 would be 630 M (= 6.69*10^9*0.0942) accounting for nuclear wars between the United States and Russia, and arguably significantly more if others are included^{[55]}. My estimate of __12.9 M__ deaths is 2.05 % (= 12.9*10^6/(630*10^6)) of Luisa’s, so I would say her results are significantly pessimistic. I end up agreeing with Luisa __that__:

If we discounted the expected harm caused by US-Russia nuclear war for the fact that the nuclear winter hypothesis is somewhat suspect, the expected harm could shrink substantially.

I am also surprised by Luisa’s distribution for the famine death rate due to the climatic effects. Her 5th and 95th percentile are __41.0 %__ and __99.6 %__, which I think are too close and high. According to __my distribution__, the probability of the famine death rate due to the climatic effects being at least 41.0 % given one offensive nuclear detonation before 2050 is 0.00718 %^{[56]}. The probability is actually higher due to __model uncertainty__^{[57]}. In any case, Luisa’s 5 % chance of a population loss greater than 41.0 %, conditional on one offensive nuclear detonation in the United States or Russia, does seem off. So much so that it prompted me to recheck her Guesstimate __model__.

The 5th percentile death rate is 41.1 % (= 3.63/8.83), which checks out. I guess this super pessimistic result has gone unnoticed because people think “US-Russia nuclear exchange” refers to thousands of detonations, but it is only supposed to refer to at least one.

## Michael’s analysis

Mike says __that__:

If firestorms do occur in any serious numbers, for example in half of cases as with the historical atomic bombings, a nuclear winter is still a real threat. Even assuming lower fuel loads and combustion, you might get 3 degrees centigrade cooling from 750 detonations; you do not need to assume every weapon leads to a firestorm to be seriously concerned.

However, the above, which is illustrated in Mike’s graph below, only holds under __Toon’s view__, not __Reisner’s__. As I __discussed__, the 2nd simulation of __Reisner 2019__ has high fuel load, and produces a firestorm, but results in basically the same fraction of emitted soot being injected into the stratosphere in the 1st 40 min as the simulations of __Reisner 2018__, which have low fuel load, and did not produce firestorms. The soot injected into the stratosphere per countervalue yield I inferred from Toon’s view is __68.3__ times the one I deduced from Reisner’s view, and I think one should give some weight to both.

Having in mind the graph above, Mike __says__:

To stress, this argument [“nuclear winter is still a real threat”] isn’t just drawing two lines at the high/low estimates, drawing one between them and saying that is the reasonable answer. This is an argument that any significant targeting of cities (for example 250+ detonations) with high yield strategic weaponry presents a serious risk of a climate shock, if at least some of them cause firestorms.

Since the above is only true under Toon’s view, I believe Mike is in effect drawing a line (in light red and orange) between the bottom and top lines (in yellow and dark red), thus underweighting Reisner’s view. Giving the same weight to Toon’s and Reisner’s view implies drawing a line between the bottom and top lines, but not on a linear scale as above. Since the results I deduced for the views differ by 2 orders of magnitude, I think one should draw that line on a logarithmic scale, i.e. combine the views using the geometric mean instead of the mean, as I __did__.

One may argue the geometric mean is not adequate based on the following. If the values of soot injected into the stratosphere per countervalue yield I deduced from Reisner’s and Toon’s views match the 5th and 95th percentiles of a __lognormal distribution__, the geometric mean is the median of the distribution, but what matters is its mean. This __would__ be 5.93*10^-4 Tg/kt, i.e. 2.28 (= 5.93*10^-4/(__2.60*10^-4__)) times my best guess. I did not follow this approach because:

- It is quite easy for an apparently reasonable distribution to have a nonsensical right tail which drives the expected value upwards. For instance, setting the soot injected into the stratosphere per countervalue yield I deduced from Reisner’s and Toon’s view to the 25th and 75th percentile of a lognormal distribution, its mean would be 0.0350 Tg/kt, which is 16.3 (= 0.0350/0.00215) times the 0.00215 Tg/kt I deduced for Toon’s view, i.e. apparently too high.
- I do not have a good sense of the quantiles corresponding to the results I calculated based on Reisner’s and Toon’s views.

I guess it is better to treat the results I inferred from Reisner’s and Toon’s views as random samples from a lognormal distribution, as opposed to matching them to specific quantiles. I used the geometric mean, which is the __MLE__ of the median of a lognormal distribution^{[18]}.
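As a minimal sketch of the quantile-matching calculations above, assuming Reisner’s value is the roughly 3.15*10^-5 Tg/kt implied by Toon’s 0.00215 Tg/kt and the ratio of 68.3:

```python
from math import exp, log, sqrt
from statistics import NormalDist

# Soot injected into the stratosphere per countervalue yield (Tg/kt).
# Reisner's value is backed out from Toon's value and the 68.3 ratio (an assumption).
toon = 2.15e-3
reisner = toon / 68.3

# Geometric mean, i.e. the median of the fitted lognormal (my best guess).
geo_mean = sqrt(reisner * toon)  # ≈ 2.60*10^-4 Tg/kt

def lognormal_mean(low, high, low_q, high_q):
    """Mean of the lognormal whose low_q and high_q quantiles equal low and high."""
    z_low = NormalDist().inv_cdf(low_q)
    z_high = NormalDist().inv_cdf(high_q)
    sigma = (log(high) - log(low)) / (z_high - z_low)
    mu = (log(high) + log(low)) / 2  # holds for symmetric quantiles
    return exp(mu + sigma**2 / 2)

print(lognormal_mean(reisner, toon, 0.05, 0.95))  # ≈ 5.93*10^-4 Tg/kt
print(lognormal_mean(reisner, toon, 0.25, 0.75))  # ≈ 0.0350 Tg/kt, seemingly too high
```

The 25th/75th variant fits a much larger sigma, which is what drives its mean up to the apparently nonsensical 0.0350 Tg/kt.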

Note that, before getting my best guess using the geometric mean, I adjusted Reisner’s and Toon’s views based on my available fuel per area for countervalue nuclear detonations, and Reisner’s view for the emitted soot per burned fuel. I ultimately obtained famine deaths due to the climatic effects of a large nuclear war per total yield equal to __7.81 %__ of those of __Xia 2022__, which relies on Toon’s view.

I also noted linearly extrapolating the top line of Mike’s graph would lead to 30 Tg for 0 detonations. In reality, there would be 0 Tg for 0 detonations, so one cannot linearly extrapolate. The reason is that, under Toon’s view, the soot injected into the stratosphere increases sublinearly for small numbers of detonations, as illustrated in the figure __here__. This is because __Toon 2008__:

Assumed regions were targeted in decreasing order of population [and therefore soot injected into the stratosphere] within 5.25 km of ground zero

I do not endorse this assumption.

## Comparison with direct deaths

My analysis does not cover direct deaths, but I guess they would be 337 M (= (164 + (360 - 164)/(440 - 50)*(395 - 50))*10^6) in a large nuclear war:

- Considering my expected total explosive yield of __395 Mt__ for a large nuclear war. Using this __makes__ sense if direct deaths are proportional to burned area, which is larger than the blasted area.
- Linearly interpolating the results of __Table 1__ of __Xia 2022__ for a nuclear war between India and Pakistan. For:
  - 50 Mt (500 times 100 kt), 164 M.
  - 440 Mt (4.4 k times 100 kt), 360 M.
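The interpolation above can be sketched as follows (`direct_deaths` is a hypothetical helper name; the endpoints are the two India-Pakistan scenarios from Table 1 of Xia 2022):

```python
# Linear interpolation of direct deaths (in millions) against total yield (in Mt),
# between the 50 Mt (164 M deaths) and 440 Mt (360 M deaths) scenarios.
def direct_deaths(total_yield_mt, x0=50, y0=164, x1=440, y1=360):
    return y0 + (y1 - y0) / (x1 - x0) * (total_yield_mt - x0)

print(round(direct_deaths(395)))  # ≈ 337 M for my expected total yield of 395 Mt
```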

I expect __392 M__ famine deaths due to the climatic effects of a large nuclear war, which suggests they would be 1.16 (= 392*10^6/(337*10^6)) times the direct deaths. So I disagree with Bean __that__:

All available data suggests it [“climatic impact”] would be dwarfed by the direct (and very bad) impacts of the nuclear war itself.

Putting all weight on the soot injected into the stratosphere per countervalue yield I deduced from Reisner’s or Toon’s view, the famine deaths due to the climatic effects would be 14.0 % (= 1.16*__0.121__) or 9.59 (= 1.16*__8.27__) times the direct deaths. In other words, my best guess is that famine deaths due to the climatic effects are within the same order of magnitude as the direct deaths, but 1 order of magnitude lower when putting all weight on the result I inferred from Reisner’s view, and 1 higher when putting all weight on the one I inferred from Toon’s view.

## Cost-effectiveness of activities related to resilient food solutions

### Nearterm perspective

The median cost to save a life among GiveWell’s 4 __top charities__ is 5 k$/life. The ratio between this and the cost to save a life via the activities related to resilient food solutions is:

- For planning, 171 (= 5*10^3/29.3).
- For research, 160 (= 5*10^3/31.2).
- For planning, research and development, 174 (= 5*10^3/28.7).
- For planning, research, development and training, 52.0 % (= 5*10^3/(9.62*10^3)).

This suggests planning, research and development related to resilient food solutions is 2 (= log_{10}(174)) orders of magnitude more cost-effective than GiveWell’s top charities. The above results are based on my estimates for the expected famine deaths due to the climatic effects of nuclear war, and the guesses provided in __Denkenberger 2016__ for the cost and effectiveness of activities related to resilient food solutions. Their cost-effectiveness would tend to be higher due to also decreasing deaths from other severe food shocks, such as those resulting from abrupt climate change, engineered crop pathogens, or other abrupt sunlight reduction scenarios (ASRSs), namely __volcanic__ or __impact__ winters.

On the other hand, I suspect the values from __Denkenberger 2016__ are very optimistic, such that I am greatly overestimating the cost-effectiveness. My reasons for this are similar to the ones given by Joel Tan in the context of __concluding__ arsenal limitation is 5 k times as effective as GiveWell’s top charities:

The headline cost-effectiveness will almost certainly fall if this cause area is subjected to deeper research: (a) this is empirically the case, from past experience; and (b) theoretically, we suffer from optimizer's curse (where causes appear better than the mean partly because they are genuinely more cost-effective but also partly because of random error favouring them, and when deeper research fixes the latter, the estimated cost-effectiveness falls). As it happens, CEARCH intends to perform deeper research in this area, given that the headline cost-effectiveness meets our threshold of 10x that of a GiveWell top charity.

I guess the true cost-effectiveness of planning, research and development related to resilient food solutions is 2 orders of magnitude lower than I estimated, i.e. within the same order of magnitude as that of GiveWell’s top charities. Consequently, instead of expecting these 3 activities to reduce famine deaths at 0.379 %/M$ (= __0.264__/(__69.4*10^6__)), as suggested by __Denkenberger 2016__, I think their effectiveness to cost ratio is more like 0.00379 %/M$. Note this adjustment is not __resilient__.

Furthermore, I have __argued__ corporate campaigns for chicken welfare are 1.71 k times as cost-effective as GiveWell’s top charities, i.e. 3 orders of magnitude more cost-effective. If so, such campaigns would also be 3 orders of magnitude more cost-effective than activities related to resilient food solutions.

### Longterm perspective

I am open to the idea that nuclear war can have longterm implications. As William MacAskill __argued__ on The 80,000 Hours Podcast:

It’s quite plausible, actually, when we look to the very long-term future, that that’s [whether artificial general intelligence is developed in “liberal democracies” or “in some dictatorship or authoritarian state”] the biggest deal when it comes to a nuclear war: the impact of nuclear war and the distribution of values for the civilisation that returns from that, rather than on the chance of extinction [which is very low].

Nonetheless, I believe it would be a __surprising and suspicious convergence__ if broadly decreasing starvation due to the climatic effects of nuclear war was among the most cost-effective interventions to increase __democracy levels__, or positively shape the development of transformative artificial intelligence (__TAI__). At least a priori:

- I feel there are better ways of achieving these via __AI safety technical research__, __AI governance and coordination__, __information security in high-impact areas__, __AI hardware__, __China-related AI safety and governance paths__, __understanding India and Russia better__, or __improving China-Western coordination on global catastrophic risks__.
  - The shorter the __TAI timelines__, the more cost-effective I expect interventions in these areas to be relative to broadly decreasing starvation due to the climatic effects of nuclear war.
  - In the cases where prevention is less cost-effective than response and resilience (although they __all matter__), I would argue working on response and resilience in the context of the above areas would still be preferable. This would be by understanding how __great power conflict__, __nuclear war__, __catastrophic pandemics__, and especially __AI catastrophes__ would affect post-catastrophe democracy levels and development of TAI. __AGI lock-in__ may be the closest mechanism available to ensure __value lock-in__ (for better or worse), although I have __doubts__.
- Mitigating starvation after a population loss of 50 % does not seem that different from saving a life now, and I estimate a probability of 3.29*10^-6 of such a loss due to the climatic effects of nuclear war before 2050^{[58]}.
  - In reality, the probability of such population loss is higher due to __model uncertainty__. However, __human extinction__ would be very __unlikely__ to happen soon even in that case. As Carl Shulman __said__, “alone and directly (not as a contributing factor to something else later), enough below 0.1% that I evaluate nuclear interventions based mainly on their casualties and disruption, not extinction. I would (and have) support them in the same kind of metric as GiveWell, not in extinction risk”.
  - So, although I guess it is possible to improve the longterm future even if the risk of worse than 50 % population losses is negligible, I would like to see more specific arguments about how less starvation __at the margin__ results in better transformative AI.

For these reasons, I think activities related to resilient food solutions are not cost-effective at increasing the longterm value of the future, neither via decreasing the risk of __human extinction__^{[59]}, nor improving the values of __TAI__. By not cost-effective, I mostly mean I do not see those activities being competitive with the best opportunities to decrease __AI risk__, and improve __biosecurity and pandemic preparedness__ __at the margin__, like __Long-Term Future Fund__’s __marginal grants__.

As another factor informing my view, I conclude in the next section that the expected __importance__ of accelerating __economic growth__ via decreasing famine deaths due to the climatic effects of nuclear war decreases with mortality^{[60]}. Some important caveats:

- I am underestimating the expected importance by excluding deaths due to non-climatic effects, which make the population lower, thus increasing the value of saving lives.
- The expected cost-effectiveness may well increase with mortality due to higher __tractability__ times __neglectedness__.
- Economic growth may not contribute __at the margin__ to a better future overall. I judge __differential progress__ to be a better proxy for that.

**Rapid diminution of the longterm value of accelerating economic growth**

Under my assumptions, the longterm value of accelerating __economic growth__ via decreasing deaths due to the climatic effects of nuclear war presents what I think David Thorstad __calls__ rapid diminution. In essence, the right tail of the probability density function (__PDF__) of the famine death rate due to the climatic effects decays much faster than the growth in the longterm value of saving lives due to accelerating __economic growth__, so the expected value of saving lives also decreases for higher famine death rates due to the climatic effects. To illustrate, the 90th, 99th and 99.9th percentile famine deaths due to the climatic effects of a large nuclear war have:

- Famine death rate due to the climatic effects of 11.9 %, 26.4 % and 39.2 %, whereas the median deaths are 2.21 %^{[61]}.
- If the longterm value of saving lives __is__ __inversely proportional__ to population size due to accelerating economic growth^{[62]}, the values of saving an additional life are 1.11 (= (1 - 0.0221)/(1 - 0.119)), 1.33 (= (1 - 0.0221)/(1 - 0.264)) and 1.61 (= (1 - 0.0221)/(1 - 0.392)) times that of the median deaths^{[63]}.
- The probability densities are 15.3 %, 1.65 % and 0.192 % as high as that of the median deaths^{[64]}.
- The expected value densities^{[65]} of saving an additional life are 17.0 % (= 0.153*1.11), 2.19 % (= 0.0165*1.33), and 0.309 % (= 0.00192*1.61) that for the median deaths.
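The steps above can be reproduced with a short sketch (the numbers are those in the text; it assumes, as above, that the value of saving a life is inversely proportional to the surviving population):

```python
# Expected value density of saving a life at a given famine death rate,
# relative to the median outcome.
median_rate = 0.0221

def relative_value(rate):
    # Value of saving an additional life relative to the median outcome,
    # assuming value inversely proportional to surviving population.
    return (1 - median_rate) / (1 - rate)

# (famine death rate, probability density relative to the density at the median)
percentiles = [(0.119, 0.153), (0.264, 0.0165), (0.392, 0.00192)]  # 90th, 99th, 99.9th
for rate, rel_density in percentiles:
    # ≈ 17.0 %, 2.19 % and 0.309 % of the median's expected value density
    print(f"{rate:.1%}: {rel_density * relative_value(rate):.3%}")
```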

Therefore improving worst case outcomes does not appear to be the driver of the overall expected value. In addition, my expected famine death rate due to the climatic effects of __4.43 %__ corresponds to the 66.8th percentile outcome of a large nuclear war^{[66]}. These suggest maximising the number of (expected) lives saved is a better proxy for maximising longterm value due to accelerating economic growth than the heuristic of minimising the probability of a given population loss^{[67]}.

Relatedly, there is a __case__ for longtermists to use standard cost-benefit analyses in the political sphere. __Denkenberger 2016__ and __Denkenberger 2018__ are examples of following such an approach in the context of activities related to resilient food solutions.

For reference, improving worst case outcomes is also not the driver of the longterm value of accelerating __economic growth__ based on Luisa’s results. Her expected famine death rate due to the climatic effects of 75.5 % matches the __47.1st percentile__ outcome given at least one offensive nuclear detonation in the United States or Russia, and there is rapid diminution too. Her 90th, 99th and 99.9th percentile deaths have:

- Famine death rate due to the climatic effects of __99.3 %__, __99.660 %__ and __99.661 %__, whereas the median deaths are __78.4 %__.
- If the longterm value of saving lives is inversely proportional to population size, the values of saving an additional life are 30.9 (= (1 - 0.784)/(1 - 0.993)), 63.5 (= (1 - 0.784)/(1 - 0.99660)) and 63.7 (= (1 - 0.784)/(1 - 0.99661)) times that of the median deaths.
- The probability densities are 0.0613 (= 0.0974/1.59), 6.73*10^-6 (= 1.07*10^-5/1.59) and 3.35*10^-5 (= 5.33*10^-5/1.59) times as high as that of the median deaths^{[68]}.
- The expected value densities of saving an additional life are 1.89 times (= 0.0613*30.9), 0.0427 % (= 6.73*10^-6*63.5), and 0.213 % (= 3.35*10^-5*63.7) that for the median deaths.

I see some potential red flags above. I expected:

- The famine death rate due to the climatic effects to increase for high percentiles, but Luisa’s 99.9th percentile is 1.00 (= 0.99660/0.99661) times her 99th percentile.
  - These percentiles respect a death toll of 8.83 G, which __is__ the “world population, excluding Australia and New Zealand” I inputted into Luisa’s model. So the famine death rate due to the climatic effects does not increase for high percentiles because it is rapidly approaching extinction levels outside of these countries.
  - For Luisa’s 90th, 99th and 99.9th percentile famine death rate due to the climatic effects, the surviving population outside of Australia and New Zealand is __36.2 M__, __160 k__ and __1.77 k__.
- The probability density to decrease for high percentiles, but Luisa’s 99.9th percentile famine death rate due to the climatic effects is 4.98 (= 5.33*10^-5/(1.07*10^-5)) times as likely as her 99th percentile.
  - I repeated the calculation for another 2 runs of Luisa’s Guesstimate model. These resulted in the 99.9th percentile being 4.51 (= 1.51*10^-3/(3.35*10^-4)) and 0.260 (= 1.91*10^-4/(7.34*10^-4)) times as likely as the 99th.
  - Ideally, the __Monte Carlo simulation__ would have been run with more samples.

## Left tails

It __is__ often hard to find interventions which are robustly beneficial. In my mind, decreasing the famine deaths due to the climatic effects of nuclear war is no exception, and I think it is unclear whether that is beneficial or harmful from both a nearterm and longterm perspective.

The benevolence, intelligence, and power (BIP) __framework__ suggests how saving human lives may not be sufficient for an intervention to be beneficial. According to it:

It’s likely good to:

- Increase actors’ benevolence.
- Increase the intelligence of actors who are sufficiently benevolent
- Increase the power of actors who are sufficiently benevolent and intelligent

And that it may be bad to:

- Increase the intelligence of actors who aren’t sufficiently benevolent
- Increase the power of actors who aren’t sufficiently benevolent and intelligent

I see saving human lives, and the capability __approach__ to human welfare more broadly, as mostly about increasing power, which goes to 0 if one dies. However, I am not confident increasing power in an untargeted way is good. I must emphasise not saving lives has drastically different consequences from killing people, which is much more anti-cooperative. I strongly oppose killing people, including via nuclear war^{[69]}.

All things considered, my intuition is that __at the margin__ it would be good if interventions which are mainly cost-effective at saving lives, not at increasing longterm value, focussed more on actively minimising harmful effects on animals, and ensuring beneficial longterm effects.

### Nearterm perspective

From a nearterm perspective, I am concerned with the __meat-eater problem__, and believe it can be a __crucial consideration__. The people whose lives were saved thanks to resilient food solutions would go on to eat factory-farmed animals, which may well have sufficiently bad lives for the decrease in human mortality to cause net suffering. In fact, net global welfare __may__ be negative and declining.

I __estimated__ the annual welfare of all farmed animals combined is -12.0 times that of all humans combined^{[70]}, which suggests not saving a random human life might be good (-12 < -1). Nonetheless, my estimate is not __resilient__, so I am mostly agnostic with respect to saving random human lives. There is also a potentially dominant beneficial/harmful __effect__ on wild animals.

Accordingly, I am uncertain about whether decreasing famine deaths due to the climatic effects of nuclear war would be beneficial or harmful. I think the answer would depend on the country, with saving lives being more beneficial in (usually low income) countries with lower consumption per capita of farmed animals with bad lives. __I calculated__ the cost-effectiveness of saving lives in the countries targeted by GiveWell’s top charities only decreases by 22.4 % accounting for negative effects on farmed animals, which means it would still be beneficial (0.224 < 1).

Some hopes would be:

- Resilient food solutions mostly save lives in countries where there is low consumption per capita of animals with bad lives.
- The conditions of animals significantly improving, or the consumption of animals with bad lives majorly decreasing in the next few decades^{[71]}, before an eventual nuclear war starts.
- The decreased consumption of animals in high income countries during the 1st few years after the nuclear war persisting to some extent^{[72]}.

Bear in mind price-, taste-, and convenience-competitive plant-based meat __would not__ currently replace meat.

Another downside I am not too worried about is the __moral hazard__ of preparing for the climatic effects of nuclear war. This would tend to increase the probability of a large nuclear war, and number of offensive nuclear detonations conditional on its occurrence. In the survey (S) and Anders Sandberg’s (E) model of __Denkenberger 2022__, it is guessed such hazard would only decrease longterm cost-effectiveness by 4 % and 0.4 % for a full scale nuclear war, and 2 % and 0.04 % for a 10 % agricultural shortfall, thus not making preparation harmful. I intuitively agree the moral hazard would not be a major effect. Nonetheless, I welcome further research like that of __Ingram 2023__, which investigated the public awareness of nuclear winter, and its implication for escalation control^{[73]}.

### Longterm perspective

It is somewhat unclear to me whether generally mitigating the food shocks caused by nuclear war would change values for the better. I __concluded__ it would in expectation if they were fully mitigated everywhere, but that there would still be a 1/3 chance of an overall negative effect in that case^{[74]}. More importantly, nationally mitigating food shocks would be harmful not only in pessimistic cases, but also in expectation in 40.7 % (= 59/145) of the countries I analysed. “All results should be taken with a big grain of salt, as they rely on quite __speculative assumptions__”, but I would still say the sign of the longterm impact is unclear.

It also looks like there is a potential trade-off between maximising nearterm and longterm effects. Saving lives in low income countries tends to be cheaper, and consumption per capita of animals with bad lives is lower there. Nonetheless, to the extent GDP per capita is a good proxy for influence per person on the longterm future, targeting high income countries __may__ be better if reducing famine there does lead to sufficiently better democracy levels or __TAI__, and is sufficiently cheap.

Nevertheless, resilient food solutions potentially having a beneficial impact on the longterm future would not automatically render the uncertainty around the nearterm effects irrelevant. Although I subscribe to __expectational__ __total__ __hedonistic__ __utilitarianism__, and agree the expected value of the future is way higher than that of this century^{[75]}, interventions usually __do not__ differ astronomically in expected cost-effectiveness:

- If it is possible to majorly improve the longterm future by decreasing the __4.43 %__ starvation famine deaths due to the climatic effects of a large nuclear war, interventions which increase resilience to smaller food shocks would presumably not be many orders of magnitude less effective.
- There are various potential such interventions which would not classically be identified as longtermist. For example, __increasing__ agricultural productivity across Sub-Saharan Africa, or accelerating __economic growth__ in low income countries^{[76]}, which can also be achieved by __global health and development__ interventions.
- Yet, interventions aiming to decrease starvation famine deaths due to the climatic effects of nuclear war are much more neglected than the above^{[77]}, which contributes to them being more effective.

## My personal recommendations for funders

I encourage funders who have been supporting efforts to decrease nuclear risk (improving prevention, response or resilience) to do the following. If they aim to:

- Decrease the risk of human extinction, or improve the longterm future, support interventions to decrease __AI risk__ by donating to the __Long-Term Future Fund__ (LTFF), as I personally do with my donations.
- Increase nearterm welfare, support interventions to improve __farmed animal welfare__ by donating to the __Animal Welfare Fund__, or __ACE’s Recommended Charity Fund__.
- Increase nearterm human welfare with high confidence, and put low weight on effects on animals, support interventions in __global health and development__ by donating to __GiveWell’s Top Charities Fund__.
- Continue in the nuclear space, support __Longview’s Nuclear Weapons Policy Fund__, which “directs funding to under-resourced and high-leverage opportunities to reduce the threat of large-scale nuclear warfare”. It is the only fund I am aware of that is solely focussed on nuclear risk and aligned with effective altruism, and I like the 4 components of their grantmaking strategy:
  - Understanding the new nuclear risk landscape.
  - Reduce the likelihood of accidental and inadvertent nuclear war.
  - Educate policymakers on these issues.
  - Strengthen fieldwide capacity.

These are my personal recommendations __at the margin__. I am not arguing for interventions decreasing nuclear risk to receive zero resources, nor for all these to be funded via __Longview’s Nuclear Weapons Policy Fund__.

I agree with Giving What We Can’s __recommendation__ for most people to donate to expert-managed funds, and have not recommended any specific organisations above.

# Acknowledgements

Thanks to Anonymous Person 1, Anonymous Person 2, Anonymous Person 3, Anonymous Person 4, Anonymous Person 5, Anonymous Person 6, Anonymous Person 7, Anonymous Person 8, Anonymous Person 9, Fin Moorhouse, Stan Pinsent and Stephen Clare for feedback on the draft^{[78]}. Thanks to GPT-4 for: coding the __Colab__ to calculate the parameters of a __beta distribution__ given 2 quantiles, and the __Colab__ to obtain the parameters of a beta distribution from its mean and ratio between 2 quantiles; explaining how to estimate the ratio between the 95th and 5th percentile of the product of independent lognormal distributions given the ratios between the 95th and 5th percentile of the various factors; and feedback on the draft.

^{^}__G__ means 1 billion.

^{^}Nonetheless, Luisa acknowledges that (see next section):

If we discounted the expected harm caused by US-Russia nuclear war for the fact that the nuclear winter hypothesis is somewhat suspect, the expected harm could shrink substantially.

^{^}David Denkenberger commented:

Though this is true, my analysis had assumptions between the extremes.

^{^}I presume all the soot comes from the same nuclear war.

^{^}In all the simulations, the soot is arbitrarily injected during the week starting on May 15 of Year 1.

^{^}“This question will resolve as Yes if there is any nuclear detonation as an act of war between January 1, 2020 and January 1, 2050. Resolution will be by credible media reports. The detonation must be deliberate; accidental, inadvertent, or testing/peaceful detonations will not qualify (see fine print). Attacks using strategic and tactical nuclear weapons are both sufficient to qualify”. I assume the detonations can be by both state and non-state actors, as nothing is said otherwise.

^{^}Luisa does not explicitly define nuclear war, but my interpretation of the __post__ is that it means at least one offensive nuclear detonation. Luisa confirmed: “Yes, I was considering just 1 nuclear detonation”.

^{^}Such that the beta distribution has minimum 0.

^{^}Assuming the annual probability of one offensive nuclear detonation does not change before 2050, and that one such detonation does occur before 2050, it is expected to happen 13 years (= 2037 - 2024) from now.

^{^}Metaculus’ community predictions for 2032 and 2052 approximately follow a __normal distribution__, whose mean can be computed from the mean between the 25th and 75th percentiles. As a side note, Metaculus’ 90th percentile community predictions for 2032, 2052 and 2122 are 12 k, 21 k, and 40 k. These point towards __dramatic order of magnitude increases__ in nuclear warheads being unlikely.

^{^}Calculated __here__ from 1 - beta.cdf(0.113, alpha, beta_).

^{^}Jeffrey Lewis __clarified__ on The 80,000 Hours Podcast there is not a sharp distinction between __counterforce__ and __countervalue__:

And so just to explain that a little bit, or unpack that: if you look at what the United States says about its nuclear weapons today, we are explicit that we target things that the enemy values, and we are also explicit that we follow certain interpretations of the law of armed conflict. And it is absolutely clear in those legal writings that the United States does not target civilians intentionally, but that in conducting what you might call “counterforce,” there is a list of permissible targets. And they include not just nuclear forces. I think often in the EA community, people assume counterforce means nuclear forces, because it’s got the word “force,” right? But it’s not true. So traditionally, the US targets nuclear forces and all of the supporting infrastructure — including command and control, it targets leadership, it targets other military forces, and it targets what used to be called “war-supporting industries,” but now are called “war-sustaining industries.”

^{^}The green line in the 3rd subfigure is 0 above the dashed black line marking the start of the stratosphere.

^{^}Calculated __here__ via beta.ppf(“quantile (0.05, 0.5 or 0.95)”, alpha, beta_). The 5th percentile might look strangely low, but I think it is fine. A null value would only mean at least 5 % chance of no more offensive nuclear detonations after the 1st one.

^{^}Mean between the lowest and highest values shown on the graph of the __CDF__ of Metaculus’ predictions for the 50th percentile.

^{^}Mean between the lowest and highest values shown on the graph of the __CDF__ of Metaculus’ predictions for the 90th percentile.

^{^}For the same reasons that the mean __is__ the maximum likelihood estimator (__MLE__) of the mean of a __normal distribution__, the geometric mean is the MLE of the median of a __lognormal distribution__, which I think describes the estimates well. There is a large difference between them (otherwise I would have considered a __normal distribution__), and they are not limited to range from 0 to 1 (otherwise I would have used a __beta distribution__).

^{^}The mean yield to the power of 2/3 is 30.2 kt^(2/3) (= (600*335^(2/3) + 200*300^(2/3) + 1511*90^(2/3) + 25*8^(2/3) + 384*455^(2/3) + 500*(5^(2/3)*150^(2/3))^0.5 + 288*400^(2/3) + 200*(0.3^(2/3)*170^(2/3))^0.5)/3708).

^{^}From __Nukemap__:

At 5 psi overpressure, most residential buildings collapse, injuries are universal, fatalities are widespread. The chances of a fire starting in commercial and residential damage are high, and buildings so damaged are at high risk of spreading fire. Often used as a benchmark for moderate damage in cities. Optimal height of burst to maximize this effect is 1,830 m.

^{^}The mean __is__ the __MLE__ of the mean of a __normal distribution__, which I think describes the estimates well. There is not a large difference between them (otherwise I would have considered a __lognormal distribution__), and they are not limited to range from 0 to 1 (otherwise I would have used a __beta distribution__).

^{^}__Denkenberger 2018__ argues the above quantiles are a reflection of __Turco 1990__. I agree. From the emitted soot and burned fuel of 105 and 5,075 Tg given in Table 2 of __Turco 1990__, one infers an emitted soot per burned fuel of 2.07 % (= 105/5075), which is very similar to 2.13 %.

^{^}__Reisner 2018__ notes that:

Although FIRETEC does not presently include this capability, it does have the ability to simulate combustion of fuel and fire spread th[r]ough heat transfer, while other fire-modeling tools, such as WRF-FIRE (Coen et al., 2013) [used in __Wagman 2020__], employ prescribed fire spread approximations typically based on wind speed and direction.

There is ongoing work to upgrade the models of __Reisner 2018__ to integrate chemical combustion modelling of soot production. From __Hess 2021__:

Jon Reisner gave a seminar at the National Center for Atmospheric Research on 12 November 2019 in which he discussed the need to reduce the uncertainties and appealed to the community for help to do this (Reisner 2019). Work is underway at LANL [Los Alamos National Laboratory] to upgrade HIGRAD-FIRETEC to run faster, and to include detailed chemical kinetics (the formation of black carbon), probability density functions for the mean temperature and its variation within a grid cell, pyro-cumulus formation and the release of latent heat. Validation tests with other fire models and field data are being carried out, as well as tests on modern building materials to see if they will burn.

^{^}The __tropopause__ can be between 9 and 17 km, which encompasses both __Reisner 2018__’s 12 km and __Wagman 2020__’s 16.6 km, so there is not necessarily a contradiction. Nevertheless, I suspect these studies are using different definitions of the tropopause. I would have expected the soot injected into the stratosphere to be the most relevant proxy for the climatic effects, and the fraction of emitted soot being injected into the stratosphere of __Wagman 2020__ to be higher than that of __Reisner 2018__. Nonetheless, eyeballing the 3rd subfigure of Figure 4 of __Wagman 2020__, it looks like less than 10 % of emitted soot is injected into the stratosphere for a fuel load of 16 g/cm^2 (see area between the vertical axis and the black line), which is less than the 21.1 % implied by __Reisner 2018__.

^{^}I contacted Jon Reisner, the 1st author of __Reisner 2018__ and __Reisner 2019__, on October 11 to get confirmation, and had already asked for feedback on the draft on September 22, but have not heard back.

^{^}We adopt a baseline value for the rainout parameter, R (the fraction of the smoke emission not removed), of 0.8 [= 1 - 0.20], following Turco et al. (1990).

^{^}Thanks to Brian Toon for clarifying this.

^{^}Thanks to Bean for suggesting I looked into this.

^{^}Urban Fires and Trends:

- Early 20th Century: The early 1900s, especially before the 1940s, witnessed significant urban fires. Factors like wooden constructions, crowded urban spaces, and inadequate firefighting equipment and techniques contributed. The 1906 reference you mentioned might be related to the famous San Francisco earthquake and subsequent fires. Many cities during this era suffered large fires, prompting a push for better urban planning and fire safety.

- Mid 20th Century: With the advent of modern building materials and techniques, fires decreased in frequency. The establishment of national fire codes and standards, and the professionalisation of firefighting, also played a significant role.

- Late 20th Century to Present: Continued advancements in fire detection (like smoke alarms) and suppression systems (like sprinklers), coupled with public awareness campaigns, have further reduced urban fires. However, while the number of fires has generally decreased, the economic damage per fire incident (adjusted for inflation) might have increased due to the value of modern urban infrastructure.

^{^}For reference, Metaculus defines countervalue as follows:

A detonation will be considered "countervalue" if credible media reporting does not widely consider a military or industrial target as the primary target of the detonation (except for detonations on capital cities, which will always be considered countervalue without exception).

^{^}Note the fraction of counterforce nuclear detonations by a country equals 1 minus the fraction of countervalue nuclear detonations by that country. The weights add up to 3.44 (= 0.492 + 0.675 + 0.921 + 0.492 + 0.860), but this being higher than 1 is not a red flag. What has to sum to less than 1 are the counterforce detonations, as a fraction of the total counterforce detonations, which are detonated

**in** each country, not the counterforce detonations **by** each country as a fraction of their offensive detonations. Since I considered 5 countries, the sum of the weights only has to add up to less than 5, and it does (3.44 < 5).^{^}Last year for which The World Bank

__has__data on urban land area.^{^}The mean weight of 11.2 % (= (0.00770 + 0.325 + 0.079 + 0.00770 + 0.140)/5) being 52.1 % (= 0.112/

__0.215__) of the fraction I supposed for the offensive nuclear detonations which will be countervalue suggests only half of them will be in the 5 aforementioned countries. I guess more than this will be, in which case Metaculus’ community predictions may not be internally consistent, but there might be many detonations in other countries too. Alternatively, it may be that offensive nuclear detonations by each of the 5 countries will be significantly different. In any case, none of these potential sources of error lead in an obvious way to underestimating/overestimating the fuel load, as it is a weighted mean. The potential error is also very much bounded, as the lowest and highest fuel loads are 0.427 (= 7.08/16.6) and 1.64 (= 27.3/16.6) times my estimate of 16.6 g/cm^2.^{^}“We use the LandScan (2003) population density database as a fuel-loading database”.
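A quick check of the weighted-mean arithmetic above, using the weights and the 21.5 % countervalue fraction from the text:

```python
# Weights for the 5 countries, from the footnote above.
weights = [0.00770, 0.325, 0.079, 0.00770, 0.140]

mean_weight = sum(weights) / len(weights)
print(round(mean_weight, 3))  # 0.112

# Share of the supposed 21.5 % countervalue fraction covered by these
# countries, using the rounded mean weight as in the footnote.
share_covered = round(mean_weight, 3) / 0.215
print(round(share_covered, 3))  # 0.521
```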

^{^}Year closest to 2003 for which The World Bank

__has__data on urban land area.^{^}“For a 15-kt explosion [what was analysed], we assume the fire zone area is equal to that of the Hiroshima firestorm – 13 km2 – ignited by a weapon of about the same yield”.

^{^}Arguably a good model if the countervalue detonations target city centres.

^{^}^{^}I obtained high precision based on the pixel coordinates of the relevant points, which I retrieved with Paint.

^{^}I suppose the e-folding time of stratospheric soot does not depend on the initial amount of soot.

^{^}^{^}These numbers underestimate the death toll linked to undernutrition and micronutrient deficiencies.

__Ahmed 2013__ says these “are responsible directly or indirectly for more than 50% of all under-5 deaths globally”. Given __5.02 M__ under-5 deaths in 2021, it sounds like more than 2.51 M (= 0.5*5.02*10^6) under-5 deaths are connected to undernutrition and micronutrient deficiencies, i.e. at least 5.41 (= 2.51/0.464) times the 464 k (= (252 + 212)*10^3) deaths caused by nutritional deficiencies and protein-energy malnutrition in 2019.^{^}Assuming such meat comes from farmed animals.

^{^}__According__to Open Philanthropy:GiveWell uses moral weights for child deaths that would be consistent with assuming 51 years of foregone life in the DALY framework (though that is not how they reach the conclusion).

^{^}Given 2 lognormal distributions X_1 and X_2, and Y = X_1 X_2, the ratio between the 95th and 5th percentile of Y is e^((ln(r_1)^2 + ln(r_2)^2)^0.5), where r_1 and r_2 are the ratios between the 95th and 5th percentile of X_1 and X_2. To explain, if there is a probability of p_1 and p_2 that ln(X_i) is no larger than ln(x_i_1) and ln(x_i_2), the

__z-scores__ of these are z_1 = (ln(x_i_1) - E(ln(X_i)))/V(ln(X_i))^0.5 and z_2 = (ln(x_i_2) - E(ln(X_i)))/V(ln(X_i))^0.5. Consequently, z_2 - z_1 = (ln(x_i_2) - ln(x_i_1))/V(ln(X_i))^0.5, i.e. V(ln(X_i)) = (ln(x_i_2/x_i_1)/(z_2 - z_1))^2. Since the sum of 2 independent normal distributions is also normal, Y = X_1 X_2 is lognormal. So, if there is also a probability of p_1 and p_2 that ln(Y) is no larger than ln(y_1) and ln(y_2), V(ln(Y)) = (ln(y_2/y_1)/(z_2 - z_1))^2. Since V(ln(Y)) = V(ln(X_1)) + V(ln(X_2)) if X_1 and X_2 are independent, denoting by r_i the ratio between x_i_2 and x_i_1, (ln(y_2/y_1)/(z_2 - z_1))^2 = (ln(r_1)/(z_2 - z_1))^2 + (ln(r_2)/(z_2 - z_1))^2, i.e. y_2/y_1 = e^((ln(r_1)^2 + ln(r_2)^2)^0.5). As a side note, if Y = X_1 X_2 … X_N, and r_i = r, y_2/y_1 = r^(N^0.5).^{^}“Probability of having more than N deaths before 2050” = “probability of at least one offensive nuclear detonation before 2050”*“probability of having more than N deaths before 2050 given at least one offensive nuclear detonation before 2050” => 1 - “quantile of N deaths before 2050” = “probability of at least one offensive nuclear detonation before 2050”*(1 - “quantile of N deaths given at least one offensive nuclear detonation before 2050”) <=> “quantile of N deaths given at least one offensive nuclear detonation before 2050” = 1 - (1 - “quantile of N deaths before 2050”)/“probability of at least one offensive nuclear detonation before 2050”.
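The closed-form ratio derived above can be sanity-checked with a quick Monte Carlo simulation; the lognormal parameters below are arbitrary illustrations, not values from my analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two arbitrary independent lognormal distributions.
x1 = rng.lognormal(mean=0.0, sigma=0.5, size=1_000_000)
x2 = rng.lognormal(mean=1.0, sigma=0.8, size=1_000_000)
y = x1 * x2

def ratio_95_5(samples):
    """Ratio between the 95th and 5th percentiles of a sample."""
    p5, p95 = np.percentile(samples, [5, 95])
    return p95 / p5

r1, r2 = ratio_95_5(x1), ratio_95_5(x2)
# Closed form from the footnote: y_2/y_1 = e^((ln(r_1)^2 + ln(r_2)^2)^0.5).
predicted = np.exp((np.log(r1) ** 2 + np.log(r2) ** 2) ** 0.5)
observed = ratio_95_5(y)
print(predicted, observed)  # the two values agree to within sampling error
```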

^{^}^{^}Because

__E__(“cost-effectiveness”) = E(“lives saved”/“cost”) = E(“lives saved”)/(1/E(1/“cost”)) if lives saved and cost are independent, as assumed in__Denkenberger 2016__.^{^}David confirmed it should be research.
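The identity in the footnote above can be checked by simulation; the two distributions here are arbitrary placeholders, not the ones in __Denkenberger 2016__:

```python
import numpy as np

rng = np.random.default_rng(1)

# Arbitrary independent positive distributions for lives saved and cost.
lives_saved = rng.lognormal(2.0, 0.5, size=1_000_000)
cost = rng.lognormal(0.0, 0.7, size=1_000_000)

lhs = np.mean(lives_saved / cost)               # E("lives saved"/"cost")
rhs = np.mean(lives_saved) * np.mean(1 / cost)  # E("lives saved")*E(1/"cost")
print(lhs, rhs)  # agree to within sampling error

# Note E(1/"cost") is the reciprocal of the harmonic mean of cost,
# not 1/E("cost"), which is why the expected cost-effectiveness differs
# from the naive ratio of expectations.
```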

^{^}Ideally, I should have relied on healthy life expectancy at the mean age (not median), but I did not easily find data for it.

^{^}Ideally, I should have focussed on healthy life expectancy at 33.4 years old (

__median age__projected for 2037), but I did not easily find data for global healthy life expectancy at adult ages.^{^}From

__Table S2__of__Xia 2022__, calorie production in Australia “from the major food crops (maize, rice, soybean and spring wheat) and marine fish in Year 2” for 150 Tg of soot injected into the stratosphere would be 24.2 % higher than without any soot. This illustrates the comparatively high resilience of Australia against abrupt sunlight reduction scenarios.^{^}Yes, I was considering just 1 nuclear detonation.

^{^}Me: “Is

__this__ post also conditional on at least one offensive nuclear detonation in the US or Russia?”. Luisa: “Yes”.^{^}__Luisa__ attributed an expected harm of 12 (on her scale) to nuclear wars between not only __NATO__ (including the United States) and Russia, but also India and Pakistan. The expected harm was calculated from the sum of 5 factors, each ranging from 1 to 3: the number of nuclear warheads of countries 1 and 2, the populations of countries 1 and 2, and the median probability of nuclear war between countries 1 and 2 over the next 20 years.^{^}^{^}In addition, I am overstating the difference between mine and Luisa’s results because her estimates are conditional on at least one offensive nuclear detonation in the United States or Russia, which arguably reflects higher escalation potential than at least one offensive nuclear detonation globally (what I considered).

^{^}^{^}Including by decreasing the risk of

__civilisational collapse__.^{^}By accelerating economic growth, I mean increasing longterm cumulative economic output.

^{^}Calculated

__here__ via beta.ppf(“quantile (0.5, 0.9, 0.99 or 0.999)”, alpha, beta_).^{^}If this is the case, the longterm value of saving a life after a population loss of 90 % is 10 times that of doing it now, and so on. Consequently, the decrease in longterm value due to lost economic output for a certain population loss is proportional to log_10(“initial population”/“remaining population”). In other words, going from 8 billion people to 800 million is as bad as going from that to 80 million, and so on. Analogously, marginal increases in wealth leading to marginal increases in welfare which are inversely proportional to wealth (and proportional to the increase in wealth) implies that going from 1 k$/year to 10 k$/year is as good as going from 10 k$/year to 100 k$/year. If roughly all longterm value is lost in the process of going from 8 billion to 800 people, there would be an absolute reduction of 1/7 (= 1/log_10(8*10^9/800)) of the initial longterm value for each decrease by a factor of 10 of the population. So 90 %, 99 % and 99.9 % population losses would imply a decrease in longterm value of 14.3 % (= 1/7), 28.6 % (= 2/7), and 42.9 % (= 3/7). The assumption of the longterm value of saving lives being inversely proportional to population size is informed by the following passage of Carl Shulman’s __post__ on the flow-through effects of saving a life:For example, suppose one saved a drowning child 10,000 years ago, when the human population was

__estimated__ to be only in the millions. For convenience, we'll posit a little over 7 million, 1/1000th of the current population. Since the child would add to population pressures on food supplies and disease risk, the effective population/economic boost could range from a fraction of a lifetime to a couple of lifetimes (via children), depending on the frequency of famine conditions. Famines were not annual and population fluctuated on a time scale of decades, so I will use 20 years of additional life expectancy. So, for ~ 20 years the ancient population would be 1/7,000,000th greater, and so would economic output/technological advance. We might cut this to 1/10,000,000 to reflect reduced availability of other inputs, although increasing returns could cut the other way. Using 1/10,000,000, cumulative world economic output would reach the same point ~ 1/500,000th of a year faster. An extra 1/500,000th of a year with around our current population of ~7 billion would amount to an additional ~14,000 life-years, 700 times the contemporary increase in life years lived. Moreover, those extra lives on average have a higher standard of living than their ancient counterparts.

Readers familiar with Nick Bostrom's paper on

__astronomical waste__will see that this is a historical version of the same logic: when future populations will be far larger, expediting that process even slightly can affect the existence of many people. We cut off our analysis with current populations, but the greater the population this growth process will reach, the greater long-run impact of technological speedup from saving ancient lives.^{^}In reality, the longterm value of saving lives due to accelerating economic growth is also proportional to the longterm annual value. This would presumably decrease for higher famine death rate due to the climatic effects, since full recovery is not guaranteed, so I am overestimating the value of accelerating growth.
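As a numerical check of the earlier footnote's model, in which the longterm value lost is proportional to the number of factors of 10 by which population falls (with, as assumed there, roughly all value gone after falling from 8 billion to 800 people):

```python
import math

# Assumption from the footnote: all longterm value is lost going from
# 8 billion to 800 people, i.e. over log10(8e9/800) = 7 factors of 10.
total_factors_of_10 = math.log10(8e9 / 800)

def longterm_value_lost(population_loss):
    """Fraction of longterm value lost for a given population loss (e.g. 0.9)."""
    surviving_fraction = 1 - population_loss
    return math.log10(1 / surviving_fraction) / total_factors_of_10

for loss in (0.9, 0.99, 0.999):
    # Reproduces the 14.3 %, 28.6 % and 42.9 % figures (1/7, 2/7, 3/7).
    print(f"{loss:.1%} population loss -> {longterm_value_lost(loss):.1%} of longterm value lost")
```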

^{^}Calculated

__here__via beta.pdf(“90th/99th/99.9th famine death rate due to the climatic effects”, alpha, beta_)/beta.pdf(“median famine death rate due to the climatic effects”, alpha, beta_).^{^}Probability density times value.

^{^}Calculated

__here__from beta.cdf(0.0443, alpha, beta_).^{^}If the expected value density of saving an additional life increased with mortality, improving worst case outcomes would be a comparatively better proxy for maximising the overall expected value of improving the longterm future via accelerating economic growth, and therefore the

__maxipok rule__would be more applicable.^{^}Calculated from the data

__here__taking the derivative of the famine death rate due to the climatic effects with respect to the quantile. For example, to obtain the PDF for the 90th percentile deaths, I used (“90.01th percentile famine death rate due to the climatic effects” - “89.99th percentile famine death rate due to the climatic effects”)/(0.9001 - 0.8999).^{^}I am against violence to the point that I wonder whether it would be good to not only stop militarily supporting Ukraine, but also impose economic sanctions on it proportional to the deaths in the

__Russo-Ukrainian War__. I guess supporting __Ukrainian nonviolent civil resistance in the face of war__ might be better to minimise both nearterm and longterm war deaths globally, although I have barely thought about this. If you judge my views on this to be super wrong, please beware the __horn effect__ before drawing conclusions about other points I have made.^{^}My number is based on the conditions of broilers in a reformed scenario.

^{^}I still believe it would be desirable to eventually stop factory-farming. Even if the animal lives had become good, there would arguably be more effective ways of increasing welfare.

^{^}Animals are not an efficient way of producing food. Consequently, to increase food supply, their consumption would be reduced, and animal feed directed to humans.

^{^}I

__shared__my thoughts on the study.^{^}^{^}It could be worth as much as the equivalent of 10^54 human lives according to Table 1 of

__Newberry 2021__.^{^}Famines tend to happen in low income countries (see chart

__here__).^{^}Nevertheless, current spending

__may__overestimate the neglectedness of decreasing starvation famine deaths due to the climatic effects of nuclear war.^{^}The names are ordered alphabetically.

I thought this was comprehensive, and it was clever how you avoided doing a Monte Carlo simulation for most of the variables. The expected amount of soot to the stratosphere was similar to my and Luisa's numbers for a large-scale nuclear war. So the main discrepancies are the expected number of fatalities and the impact on the long-term future.

At 5 g/cm^2, still most of the soot makes it into the upper troposphere, so I think much of that would eventually go to the stratosphere. Furthermore, forest fires are typically less than 5 g/cm^2, and they are moving-front fires rather than firestorms, and yet still some of the soot makes it into the stratosphere. In addition, some countervalue targets would be in cities with more than 5 g/cm^2. Since you found the counterforce detonations were ~4x as numerous with ~1/7 the fuel loading, if the soot-to-stratosphere percentage were ~1/3x, counterforce would contribute ~20% (≈ 4*1/7*1/3) as much soot to the stratosphere as countervalue.

I do think there will be significant disruptions in trade due to the infrastructure destruction. But I also think perhaps the majority of the disruption to food trade in particular would be due to the climate impacts on the nontarget countries, which is the majority of the food production. Furthermore, the climate impacts make the overall catastrophe significantly worse, so I think they will increase the chances significantly of the loss of nearly all trade (not just food). This is a major reason why I expect significantly higher mortality due to climate impacts.

Why do you not endorse this for countervalue targeting?

Your model of the long-term future impact does not incorporate potential cascading impacts associated with catastrophes, which is why you find the marginal value of saving a life in a catastrophe not very different than saving a single life with mosquito bed nets. This is probably the largest crux. With the potential for collapse of nearly all trade (not just food), I think there is potential for collapse of civilization, from which we may not recover. But even if there is not collapse of civilization, I think there's a significant chance that worse values end up in AGI.

I think there is a high correlation between saving lives in a catastrophe and improving the long run future. This is probably clearest in the case of reducing the probability of collapse of civilization. Though resilient foods have a longer causal chain to democracy than working directly on democracy, resilient foods are many orders of magnitude more neglected, so it seems at least plausible to me. As for TAI, resilient foods are still orders of magnitude more neglected, which is why my

paper indicates they likely have higher long-term cost effectiveness compared to direct work on TAI (or competitive even if one reduced the cost effectiveness of resilient foods by 3 orders of magnitude).

Because that kind of countervalue targeting isn't a thing. I intend to write on this more, but there tends to be a lot of equivocation here between countervalue as "nuclear weapons fired at targets which are not strictly military" and countervalue as "nuclear weapons fired to kill as many civilians as possible". The first kind absolutely exists, although I find the countervalue framing unhelpful. The second doesn't in a large-scale exchange, because frankly there's no world in which you aren't better off aiming those same weapons at industrial targets. You get a greater effect on the enemy's ability to make war, and because industrial targets tend to be in cities and have a lot of people around them, you will undoubtedly kill enough civilians to accomplish whatever can be accomplished by killing civilians, and the other side knows it.

The partial exception to this is if you're North Korea or equivalent, and don't have enough weapons to make a plausible dent in your opponent's industry. In that case, deterrence through "we will kill a lot of your civilians" makes sense, but note that the US was pretty safely deterred by 6 weapons, which is way less than discussed here.

Both sides targeted civilians in WWII. Hopefully that is not the case now, but I'm not sure.