
This post outlines an idea for a research project. I think someone should do something like this, and there’s a ~50% chance I will at some point. I’m publicly sharing this idea in order to:

  • Highlight this potentially important question/crux
  • Get feedback
  • See if there’s anyone else who might want to do a project along these lines[1]
  • Make coordination, and avoiding duplication of work, easier[2]

Feel very free to comment here, message me, and/or schedule a call with me.


Some experts and EAs appear to think there’s a nontrivial chance of civilizational collapse. Some further argue that reducing the risk of collapse, or increasing the chance of recovery, should be a top priority. Others argue that that wouldn’t be worth prioritising even if there’s a nontrivial chance of collapse, based on the premise that the chance of recovery is already high. The latter group might suggest instead prioritising reducing risks of extinction or dystopia, or prioritising things unrelated to existential risks. Indeed, views on the odds of recovery from civilizational collapse seem to be pushing large (and growing) pools of money and talent either towards or away from work aimed at reducing collapse risks or increasing chances of recovery.

Advancing our thinking on that matter thus appears highly valuable. Furthermore, I see a way to do that that seems tractable and neglected. Specifically, I propose modelling the likelihood of various types of recovery from various types of collapse scenarios, following roughly the following steps:

  1. Think about how to carve up the possible causes of collapse (e.g. impact winter, pandemics), types of collapse (e.g. loss of population, industry, both), and types of recovery (e.g. recovery of GWP, political institutions, values), and think about what to focus on.
    1. “Likelihood of recovery from collapse” is underspecified, and some types of recovery from some scenarios may be unlikely even if others are likely.
  2. Begin to construct in Guesstimate one or more models of the odds of various types of recovery, given various collapse scenarios.
  3. Begin to estimate the parameters of this model (or models).
  4. Iteratively refine the model(s), estimates, and conceptual underpinnings.
  5. Present the results in EA Forum posts, and possibly as a paper.
  6. Perhaps iterate further based on new feedback from the EA and academic communities.
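To make step 2 concrete: the kind of model I have in mind could be prototyped in a few lines before moving to Guesstimate. The sketch below is purely illustrative; the scenario names and the probability intervals are placeholder guesses I've made up for this example, not actual estimates.

```python
import random

# Hypothetical collapse scenarios mapped to guessed (low, high) intervals
# for the probability of one type of recovery (say, recovery of industry).
# All numbers are illustrative placeholders.
SCENARIO_PARAMS = {
    "impact_winter":    (0.30, 0.80),
    "pandemic":         (0.50, 0.95),
    "nuclear_exchange": (0.20, 0.70),
}

def sample_recovery(scenario: str, rng: random.Random) -> bool:
    """One Monte Carlo sample: draw a recovery probability uniformly
    from the guessed interval, then draw a recovery outcome."""
    low, high = SCENARIO_PARAMS[scenario]
    p_recover = rng.uniform(low, high)
    return rng.random() < p_recover

def estimate(scenario: str, n: int = 100_000, seed: int = 0) -> float:
    """Estimate P(recovery | scenario) by simple Monte Carlo."""
    rng = random.Random(seed)
    return sum(sample_recovery(scenario, rng) for _ in range(n)) / n

for s in SCENARIO_PARAMS:
    print(f"{s}: ~{estimate(s):.2f} chance of recovery (toy numbers)")
```

A real version would replace the uniform intervals with elicited distributions and break each scenario's recovery probability into underlying parameters (surviving population, retained knowledge, climate effects, etc.), which is what steps 3-4 would iterate on.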

Steps 1-4 could involve armchair reasoning; reading relevant academic and/or EA writings; and getting ideas and input from EAs and academics. I’ve previously collected sources relevant to this matter, and, if I were doing this project, I’d likely start by (re-)reading some of those sources and contacting some of their authors. I’d also read works from and talk to experts outside the global catastrophic risk field (e.g., experts on minimum viable populations or the history of the industrial revolution), especially when estimating parameters. I expect the output would resemble that of Rethink Priorities’ series on nuclear risks, and it might be useful to in some ways emulate how that research was conducted.

This project would be unlikely to provide definitive answers, but could plausibly inform our point estimates, narrow our uncertainty, indicate what further research would be most valuable, and suggest points for intervention.

It may also end up seeming worthwhile to (also) model the odds of various collapse scenarios occurring in the first place. However, that topic seems to me somewhat less neglected than the odds of recovery.

(See here for further thoughts.)

Grateful acknowledgements and delicious disclaimers

My thanks to David Denkenberger, Aron Mill, Luke Kemp, Siebe Rozendal, Ozzie Gooen, Seth Baum, and Luisa Rodriguez for their useful input on this research project idea. Also, this idea might have originally been inspired by Rethink Priorities indicating interest in “Analyzing the likelihood of civilization recovering from a population collapse” (I can’t remember for sure). This does not necessarily imply any of these people’s endorsement of this idea or this post.

This post doesn’t necessarily reflect the views of my past or present employers.


    1. I have other project ideas I could pursue instead of this. And someone else might have a more fitting skillset for this than I do. So I’m potentially happy for someone else to take this project on instead of me. ↩︎

    2. Maybe someone else is already doing, or would by default do, something similar to this. And/or maybe someone is doing something vaguely related, and they or I could benefit from us talking. ↩︎

Comments (10)



I'm a bit skeptical about the value of formal modelling here. The parameter estimates would be almost entirely determined by your assumptions, and I'd expect the confidence intervals to be massive.

I think a toy model would be helpful for framing the issue, but going beyond that (to structural estimation) seems not worth it.

Also, if you're aware of Rethink Priorities/Luisa Rodriguez's work on modelling the odds and impacts of nuclear war (e.g., here), I'd be interested to hear whether you think making parameter estimates was worthwhile in that case. (And perhaps, if so, whether you think you'd have predicted that beforehand, vs being surprised that there ended up being a useful product.)

This is because that seems like the most similar existing piece of work I'm aware of (in methodology rather than topic). And to me it seems like that project was probably worthwhile, including the parameter estimates, and that it provided outputs that are perhaps more useful and less massively uncertain than I would've predicted. And that seems like weak evidence that parameter estimates could be worthwhile in this case as well.

Thanks for the comment. That seems reasonable. I myself had been wondering if estimating the parameters of the model(s) (the third step) might be: 

  • the most time-consuming step (if a relatively thorough/rigorous approach is attempted)
  • the least insight-providing step (since uncertainty would likely remain very large)

If that's the case, this would also reduce the extent to which this model could "plausibly inform our point estimates" and "narrow our uncertainty". Though the model might still capture the other two benefits (indicating what further research would be most valuable and suggesting points for intervention).

That said, if one goes to the effort of building a model of this, it seems to me like it's likely at least worth doing something like: 

  1. surveying 5 GCR researchers or other relevant experts on what parameter estimates (or confidence intervals or probability distributions for parameters[1]) seem reasonable to them
  2. inputting those estimates
  3. seeing what outputs those estimates suggest, and, more importantly, performing sensitivity analyses
  4. thereby gaining knowledge of what the cruxes of disagreement appear to be and which parameters most warrant further research, further decomposition, and/or input from more experts

And then perhaps this project could stop there, or perhaps it could then involve somewhat deeper/more rigorous investigation of the parameters where that seems most valuable.
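For what it's worth, the sensitivity-analysis step (3-4 above) is mechanical once a model exists. Here's a minimal one-at-a-time sketch over a toy two-parameter recovery model; the parameter names and baseline values are invented for illustration, not claims about the real world.

```python
# One-at-a-time sensitivity analysis over a toy recovery model.
# Parameter names and numbers are illustrative placeholders.

def p_recovery(p_survivors_sufficient: float, p_knowledge_retained: float) -> float:
    # Toy model: recovery requires both a sufficient surviving population
    # and retention of key knowledge (assumed independent).
    return p_survivors_sufficient * p_knowledge_retained

baseline = {"p_survivors_sufficient": 0.8, "p_knowledge_retained": 0.6}

def sensitivity(params, low=0.5, high=1.5):
    """Scale each parameter down/up by 50% (capped at 1.0) in turn,
    holding the others fixed, and report the swing in the output."""
    base_out = p_recovery(**params)
    swings = {}
    for name in params:
        lo = dict(params, **{name: min(1.0, params[name] * low)})
        hi = dict(params, **{name: min(1.0, params[name] * high)})
        swings[name] = p_recovery(**hi) - p_recovery(**lo)
    return base_out, swings

base, swings = sensitivity(baseline)
print(f"baseline P(recovery) = {base:.2f}")
for name, swing in sorted(swings.items(), key=lambda kv: -kv[1]):
    print(f"{name}: output swing {swing:.2f}")
```

The parameter with the largest swing is the natural candidate for deeper investigation or for surveying more experts, which is exactly the prioritisation step 4 is after.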

Any thoughts on whether that seems worthwhile?

[1] Perhaps this step could benefit from use of Elicit; I should think about that if I pursue this idea further.

I think a moderate amount of this work has actually already been done - I'd be happy to help connect you.

Thanks, I've sent you a PM :)

ETA: Turns out I was aware of the work Peter had in mind; I think it's relevant, but not so similar as to strongly reduce the marginal value this project could provide.

ETA (even later): Peter had in mind Luisa Rodriguez's work, some of which was published as What is the likelihood that civilizational collapse would directly lead to human extinction (within decades)? and some of which made its way into Will MacAskill's What We Owe the Future.

I hadn't seen this post before, but this is basically a summary of the project I've just started doing with a LTFF grant.

Can we see it yet?

It ended up being more of a coding project that would enable us to ask the question than a research project looking into the answer. The post I wrote on the approach I'm using is here. I'm still working on giving the code a UI, but you can see it at https://github.com/Arepo/lrisk_calculator - and, in theory at least, the two calculators will work if you insert a breakpoint and query the Markov Chain object they generate.

Did you ever start/do this project, as per your linked G-doc?

No, I didn't - I ended up getting hired by Rethink Priorities and doing work on nuclear risk instead, among other things.
