Bio

Participation: 4

How others can help me

You can give me feedback here (anonymous or not). You are welcome to answer any of the following:

  • Do you have any thoughts on the value (or lack thereof) of my posts?
  • Do you have any ideas for posts you think I would like to write?
  • Are there any opportunities you think would be a good fit for me which are either not listed on 80,000 Hours' job board, or are listed there, but you guess I might be underrating them?

How I can help others

Feel free to check my posts, and see if we can collaborate to contribute to a better world. I am open to part-time volunteering and paid work. For paid work, I typically ask for 20 $/h, which is roughly 2 times the global real GDP per capita.

Comments: 1411

Topic contributions: 25

Thanks for the post! I wonder whether it would also be good to have public versions of the applications (sensitive information could be redacted), as Manifund does, which would be even less costly than having external reviewers.

Thanks, Will! Relatedly, I noticed the import makes the text in tables go from centre-aligned in Docs to left/right-aligned in the EA Forum editor.

Hi Mike.

Los Alamos find that firestorms are highly unlikely to form under nuclear detonations, even at very high fuel loads, and so lofting is negligible. They only look at fission scale weaponry.

I think this may well misrepresent Los Alamos' view, as Reisner 2019 does not find significantly more lofting, and they did model firestorms. I estimated 6.21 % of the emitted soot is injected into the stratosphere in the 1st 40 min based on the rubble case of Reisner 2018, which did not produce a firestorm. Robock 2019 criticised this study, as you did, for not producing a firestorm. In response, Reisner 2019 ran:

Two simulations at higher fuel loading that are in the firestorm regime (Glasstone & Dolan, 1977): the first simulation (4X No-Rubble) uses a fuel load around the firestorm criterion (4 g/cm2) and the second simulation (Constant Fuel) is well above the limit (72 g/cm2).

Crucially, they say (emphasis mine):

Of note is that the Constant Fuel case is clearly in the firestorm regime with strong inward and upward motions of nearly 180 m/s during the fine-fuel burning phase. This simulation included no rubble, and since no greenery (trees do not produce rubble) is present, the inclusion of a rubble zone would significantly reduce BC production and the overall atmospheric response within the circular ring of fire.

These simulations led to fractions of emitted soot injected into the stratosphere in the 1st 40 min of 5.45 % (= 0.461/8.454) and 6.44 % (= 1.53/23.77), which are quite similar to the 6.21 % of Reisner 2018 for no firestorm I mentioned above. This suggests that, under Reisner's view, a firestorm is not a sufficient condition for a high fraction of emitted soot being injected into the stratosphere.

In my analysis, I multiplied the 6.21 % of emitted soot injected into the stratosphere in the 1st 40 min from Reisner 2018 by 3.39 in order to account for soot injected afterwards, but this factor is based on estimates which do not involve firestorms. Are you implying the corrective factor should be higher for firestorms? I think Reisner 2019 implicitly argues against this. Otherwise, they would have been dishonest in replying to Robock 2019 with an incomplete simulation whose results differ from those of the full simulation. In my analysis, I only adjusted the results from Reisner's and Toon's views when there was explicit information to do so[1], i.e. I did not assume they concealed key results.
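As a rough check, here is a minimal sketch of the arithmetic above; the soot masses in Tg are the figures quoted from Reisner 2019, the 6.21 % and 3.39 are the values described above, and the variable names and structure are purely illustrative, not the original analysis.

```python
# Minimal sketch of the arithmetic above. The soot masses (Tg) are the figures quoted from
# Reisner 2019, and 6.21 % and 3.39 are the values described above; variable names are
# illustrative, not the original analysis.

# (stratospheric soot in the 1st 40 min, emitted soot), in Tg.
reisner_2019_cases = {
    "4X No-Rubble": (0.461, 8.454),
    "Constant Fuel": (1.53, 23.77),
}

for name, (stratospheric, emitted) in reisner_2019_cases.items():
    fraction = stratospheric / emitted
    print(f"{name}: {fraction:.2%} of emitted soot injected into the stratosphere in the 1st 40 min")
# About 5.45 % and 6.44 %, similar to the 6.21 % of Reisner 2018 for no firestorm.

# Corrective factor for soot injected after the 1st 40 min (based on estimates without firestorms).
fraction_reisner_2018 = 0.0621
correction = 3.39
print(f"Reisner 2018, rubble case, corrected: {fraction_reisner_2018 * correction:.2%}")
```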

As a result, blending together the Los Alamos model with that of Rutgers doesn’t really work as a baseline, they’re based on a very different binary concerning firestorms and lofting and you exclude other relevant analysis, like that of Lawrence Livermore.

In my analysis, I also did not integrate evidence from Wagman 2020 (whose main author is affiliated with Lawrence Livermore National Laboratory) to estimate the soot injected into the stratosphere per countervalue yield. As far as I can tell, they do not offer evidence independent of Toon's view. Rather than estimating the emitted soot as Reisner 2018 and Reisner 2019 did, they set it to the soot injected into the stratosphere in Toon 2007:

Finally, we choose to release 5 Tg (5×10^12 g) BC into the climate model per 100 fires, for consistency with the studies of Mills et al. (2008, 2014), Robock et al. (2007), Stenke et al. (2013), Toon et al. (2007), and Pausata et al. (2016). Those studies use an emission of 6.25 Tg BC and assume 20% is removed by rainout during the plume rise, resulting in 5 Tg BC remaining in the atmosphere.

  1. ^

    For example, I adjusted downwards the soot injected into the stratosphere from Reisner 2019 (based on data from Denkenberger 2018), as it says (emphasis mine):

    Table 1. Estimated BC Using an Idealized Diagnostic Relationship (BC Estimates Need to be Reduced by a Factor of 10–100) and Fuel Loadings From the Simulations Shown in Reisner et al. and Two New Simulations for 100 15-kt Detonations

Great points, Stan!

Yes, I agree that the crux is whether firestorms will form.

I am not confident this is the crux.

The main thing I would like people to take away is that we remain uncertain what would be more damaging about a nuclear conflict: the direct destruction, or its climate-cooling effects.

I arrived at the same conclusion in my analysis, where I estimated the famine deaths due to the climatic effects of a large nuclear war would be 1.16 times the direct deaths.

Thanks for making this! I think it is valuable, although I should basically just donate to whatever I think is the most cost-effective given my strong endorsement of expected total hedonistic utilitarianism and maximising expected choice-worthiness.

Thanks for the relevant points, Joshua. I strongly upvoted your comment.

Could you please expand on why you think a Pareto distribution is appropriate here?

I did not mean to suggest a Pareto distribution is appropriate, just that it is worth considering.

Tail probabilities are often quite sensitive to the assumptions here, and it can be tricky to determine if something is truly power-law distributed.

Agreed. In my analysis of conflict deaths, for the method where I used fitter:

The 5th and 95th percentile annual probability of a conflict causing human extinction are 0 and 5.02 % [depending on the distribution]


When I looked at the same dataset, albeit processing the data quite differently, I found that a truncated or cutoff power-law appeared to be a good fit. This gives a much lower value for extreme probabilities using the best-fit parameters. In particular, there were too few of the most severe pandemics in the dataset (COVID-19 and 1918 influenza) otherwise; this issue is visible in fig 1 of Marani et al. Could you please add the data to your tail distribution plot to assess how good a fit it is?

I did not quite get what you would like me to add to my tail distribution plot. However, I have added here the coefficients of determination (R^2) of the regressions I ran.
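For illustration, here is a minimal sketch of why the choice between a pure and a truncated power law matters so much for these extreme probabilities; the scale and tail exponent below are hypothetical placeholders, not fitted values from my analysis.

```python
# Minimal sketch (hypothetical parameters, not fitted values) of how the annual probability of
# human extinction can be read off a power-law (Pareto) tail fitted to annual deaths as a
# fraction of the global population. Extinction corresponds to the fraction reaching 1.

x_min = 1e-6   # hypothetical lower bound of the fitted tail
alpha = 0.5    # hypothetical tail exponent

# Pure Pareto tail: P(X >= x) = (x_min / x)**alpha for x >= x_min.
p_extinction_pareto = (x_min / 1.0)**alpha

# Truncated power law with an upper cutoff below 1: the tail probability at 1 is exactly 0,
# which is one way a truncated fit yields much lower values for the extreme probabilities.
x_max = 0.5    # hypothetical upper cutoff (50 % of the global population)
p_extinction_truncated = 0.0 if x_max < 1 else p_extinction_pareto

print(f"Pure Pareto tail: {p_extinction_pareto:.2e}")        # 1.00e-03 with these placeholders
print(f"Truncated power law: {p_extinction_truncated:.2e}")  # 0.00e+00
```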

A final note, I think you're calculating the probability of extinction in a single year but the worst pandemics historically have lasted multiple years. The total death toll from the pandemic is perhaps the quantity most of interest.

Focussing on the annual deaths as a fraction of the global population is useful because a value of 1 is equivalent to human extinction. In contrast, the total epidemic/pandemic deaths as a fraction of the global population in the year in which the epidemic/pandemic started being equal to 1 does not imply human extinction. For example, a pandemic could kill 1 % of the population each year for 100 years while the population remained constant, due to births being equal to the pandemic deaths plus other deaths.

However, I agree interventions should be assessed based on standard cost-effectiveness analyses. So I believe the quantity of most interest which could be inferred from my analysis is the expected annual epidemic/pandemic deaths. These would be 2.28 M (= 2.87*10^-4*7.95*10^9), multiplying (see the sketch below):

  • My annual epidemic/pandemic deaths as a fraction of the global population based on data from 1900 to 2023. Earlier years are arguably not that informative.
  • The population in 2021.

The above expected death toll would rank 6th among the causes of death in 2021.
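Here is a minimal sketch of the multiplication above; the fraction and the population are the figures quoted in this comment, and everything else is illustrative.

```python
# Minimal sketch of the multiplication above. The fraction and the population are the figures
# quoted in this comment; variable names are illustrative.

annual_deaths_fraction = 2.87e-4  # expected annual epidemic/pandemic deaths as a fraction of
                                  # the global population, based on data from 1900 to 2023
population_2021 = 7.95e9          # global population in 2021

expected_annual_deaths = annual_deaths_fraction * population_2021
print(f"Expected annual epidemic/pandemic deaths: {expected_annual_deaths:.3g}")  # about 2.28 M
```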

For reference, based on my analysis of conflicts, using historical data from 1900 to 2000 (also adjusted for underreporting) and the population in 2021, I get an expected conflict death toll of 3.83 M, which would rank 5th, just above the expected epidemic/pandemic deaths.

Here is a graph with the top 10 actual causes of death and expected conflict and epidemic/pandemic deaths:

Thanks for the comment, Jeff.

Are the high numbers of deaths in the 1500s old world diseases spreading in the new world?

Yes, and deaths are especially high in the 1500s given my assumption of high underreporting then.

If so, that seems to overestimate natural risk: the world's current population isn't separated from a larger population that has lots of highly human-adapted diseases.

Agreed. Personally, I guess the annual probability of a natural pandemic causing human extinction is lower than 10^-10.

In the other direction, this kind of analysis doesn't capture what I personally see as a larger worry: human-created pandemics. I know you're extrapolating from the past, and it's only very recently that these would even have been possible, but this seems at least worth noting.

I think it is interesting that:

There has been a downward trend in the logarithm of the annual epidemic/pandemic deaths as a fraction of the global population, with the R^2 of the linear regression of it on the year being 38.5 %. I guess the sign of the slope is resilient against changes to my modelling of the underreporting. One may argue the aforementioned logarithm will increase in the next few decades based on inside view factors, such as technology becoming cheaper and more powerful. Nevertheless, technology also became cheaper and more powerful during the period of 1500 to 2023 covered by my data, and the data still suggest a decreasing logarithm of the annual epidemic/pandemic deaths as a fraction of the global population.
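For reference, here is a minimal sketch of the regression described above; the series below is a hypothetical placeholder, not my underreporting-adjusted data.

```python
# Minimal sketch of the regression described above: a linear regression of the logarithm of the
# annual epidemic/pandemic deaths as a fraction of the global population on the year.
# The series below is a hypothetical placeholder, not the underreporting-adjusted data.
import numpy as np
from scipy.stats import linregress

years = np.arange(1500, 2024)
log_deaths_fraction = -3.0 - 0.001*(years - 1500) + np.random.normal(0, 0.5, years.size)  # placeholder

result = linregress(years, log_deaths_fraction)
print(f"Slope: {result.slope:.5f} per year")  # a negative slope indicates a downward trend
print(f"R^2: {result.rvalue**2:.1%}")         # coefficient of determination of the regression
```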

Are you saying:

  1. There are no conceivable interventions someone could make with p non-tiny.
  2. U, expected utility in a year in 500 years time, is approximately 0.
  3. Something else... my setup of the situation is wrong, or unrealistic..?

1, in the sense that I think the change in the immediate risk of human extinction per unit cost is astronomically low for any conceivable intervention. Relatedly, you may want to check my discussion with Larks in the post I linked to.

Thanks for the feedback on the votes and animal welfare comparison!

Thanks for the kind words. I was actually unsure whether I should have followed up, given that my comments in this thread had been downvoted (all else equal, I do not want to annoy readers!), so it is good to get some information.

Thanks for the detailed reply on that! You've clearly thought about this a lot, and I'm very happy to believe you're right on the impact of nuclear war, but it sounds like you are more or less opting for what I called option 1? In which case, just substitute nuclear war for a threat that would literally cause extinction with high probability (say release of a carefully engineered pathogen with high fatality rate, long incubation period, and high infectiousness). Wouldn't that meaningfully affect utility for more than a few centuries? Because there would be literally no one left, and that effect is guaranteed to be persistent! Even if it "just" reduced the population by 99%, that seems like it would very plausibly have effects for thousands of years into the future.

I think the effect of the intervention will still decrease to practically 0 in at most a few centuries in that case, such that reducing the nearterm risk of human extinction is not astronomically cost-effective. I guess you are imagining that humans either go extinct or have a long future where they go on to realise lots of value. However, this is overly binary in my view. I elaborate on this in the post I linked to at the start of this paragraph, and its comments.

It seems to me that to avoid this, you have to either say that causing extinction (or near extinction level catastrophe) is virtually impossible, through any means, (what I was describing as option 1) or go the other extreme and say that it is virtually guaranteed in the short term anyway, so that counterfactual impact disappears quickly (what I was describing as option 2). Just so I understand what you're saying, are you claiming one of these two things? Or is there another way out that I'm missing?

I guess the probability of human extinction in the next 10 years is around 10^-7, i.e. very unlikely, but far from impossible.
