
Section 1: Strength of Evidence

The EA community has identified many sources of extinction risk. However, most efforts to mitigate extinction risk from one source will not mitigate risk from other sources. This means we must prioritise between extinction risks to maximise impact.

Many factors should be considered in this prioritisation. This post will focus on only one factor: evidence quality. All else equal, we should prioritise problems where there is stronger evidence that they pose an extinction risk.

In evidence-based medicine, there is a well-known “pyramid of evidence” which ranks the strength of different types of evidence. This pyramid is not applicable to extinction risk studies, where all available evidence is of much lower quality than in medicine. We also face observation selection effects: since we could only be asking this question in a world where humanity survived, we will never find past examples of human extinction.

This post introduces a potential approach to ranking evidence strength for extinction risks. I’d love to see people build on this and offer better-seeming alternatives.

I haven’t included “expert opinion” and “superforecaster estimates” in this ranking, since I think experts and superforecasters should be weighing evidence using this ranking to arrive at their opinions and estimates.

 

Proposed Levels of Evidence

 

1) Precedent of extinction of multiple species

Asteroids (Cretaceous-Paleogene Extinction)

Supervolcanoes / non-anthropogenic climate change (Permian-Triassic Extinction, Triassic-Jurassic Extinction)

 

2) Precedent of extinction of a single species 

Infectious Disease (Tasmanian Tiger, Golden Toad, Christmas Island Pipistrelle)

 

3) Precedent of a human societal collapse

Various causes (Bronze Age Collapse, Classic Maya Collapse)

 

4) Precedent of an extremely large number of human deaths in an extremely short period of time 

War (World War 2, Taiping Rebellion)

Famine (Great Chinese Famine)

 

5) Clear mechanism of extinction

Nuclear War

Gamma Ray Bursts

Biodiversity Loss

 

6) Unclear mechanism of extinction

AGI

Nanotechnology

Particle Physics Experiments

Global Systemic Risks / Cascading Risks

Geoengineering
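
To make the ranking concrete, here is a minimal sketch in Python (my illustration, not part of the framework itself) of how the levels could be encoded as data and used to order risks from strongest to weakest evidence. The risk names and the default level assigned to unlisted risks are assumptions.

```python
# A minimal sketch: the proposed evidence levels encoded as data, so that
# risks can be sorted by evidence strength (1 = strongest, 6 = weakest).
# Risk names and the default for unlisted risks are illustrative assumptions.
EVIDENCE_LEVEL = {
    "Asteroids": 1,                      # multi-species extinction precedent
    "Supervolcanoes": 1,
    "Infectious Disease": 2,             # single-species extinction precedent
    "Societal Collapse": 3,
    "War": 4,                            # mass-death precedent
    "Famine": 4,
    "Nuclear War": 5,                    # clear mechanism, no precedent
    "Gamma Ray Bursts": 5,
    "Biodiversity Loss": 5,
    "AGI": 6,                            # unclear mechanism
    "Nanotechnology": 6,
    "Particle Physics Experiments": 6,
    "Global Systemic Risks": 6,
    "Geoengineering": 6,
}

def rank_by_evidence(risks):
    """Order risks from strongest (level 1) to weakest (level 6) evidence."""
    return sorted(risks, key=lambda risk: EVIDENCE_LEVEL.get(risk, 7))

print(rank_by_evidence(["AGI", "Nuclear War", "Asteroids", "War"]))
# ['Asteroids', 'War', 'Nuclear War', 'AGI']
```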

 

Section 2: Uncertainties and Open Questions

What are the correct reference classes? Should we treat past societal collapses induced by technological change as a reason to prioritise nanotechnology and AI Safety, even if the technology that induced the collapse looked very different?

Will more research into exoplanets allow us to learn about past events which turned habitable planets into uninhabitable ones? I would put this type of evidence at the top of the ranking.

I’m unsure where to place “precedent of a human societal collapse” relative to “precedent of extinction of a single species”. I think this depends heavily on how special you think humans are relative to other animals; clearly, we’re much better able to co-ordinate and respond to emerging threats.

 

Section 3: Implications for Open Philanthropy, 80K and individual EAs

I don’t think Open Philanthropy, 80K, or most individual EAs have given enough consideration to strength of evidence when prioritising between extinction risks.

Based on this framework, I think all of these groups should allocate more resources (money, careers, etc.) towards planetary defence and supervolcanoes, and fewer resources towards nuclear war and AI Safety, relative to the status quo.
