
I've spent quite a bit of time over the last few years trying to answer two questions:

  1. Will World War III break out this century? 
  2. If it does, could it be so devastating that it causes an existential catastrophe, threatening humanity’s long-term future?

It's been fun.

Last week I published an in-depth problem profile for 80,000 Hours that sums up what I’ve found so far: that World War III is perhaps surprisingly likely and we can’t rule out the possibility of a catastrophic escalation. It also discusses some ideas for how you might be able to help solve this problem.

This post includes the top-line summary of the profile. Following that, I draw out the highlights that seem particularly relevant for the EA community.

[Figure: The heavy tail of war battle deaths (note the log-log scale). We just don't know how far into the tail of terrible outcomes a modern great power war could take us.]

Profile Summary

Economic growth and technological progress have bolstered the arsenals of the world’s most powerful countries. That means the next war between them could be far worse than World War II, the deadliest conflict humanity has yet experienced.

Could such a war actually occur? We can’t rule out the possibility. Technical accidents or diplomatic misunderstandings could spark a conflict that quickly escalates. Or international tension could cause leaders to decide they’re better off fighting than negotiating.

It seems hard to make progress on this problem, and it’s less neglected than some of the problems we think are most pressing. Still, certain issues, like making nuclear weapons or military artificial intelligence systems safer, seem promising to work on, although it may be more impactful to work on reducing risks from AI, bioweapons, or nuclear weapons directly. You might also be able to reduce the chances of misunderstandings and miscalculations by developing expertise in one of the most important bilateral relationships (such as that between the United States and China).

Finally, by making conflict less likely, reducing competitive pressures on the development of dangerous technology, and improving international cooperation, you might be helping to reduce other risks, like the chance of future pandemics.

Overall view 

Working on this issue seems to be among the best ways of improving the long-term future we know of, but all else equal, we think it’s less pressing than our highest priority areas (primarily because it seems less neglected and harder to solve).

Importance:

There's a significant chance that a new great power war occurs this century. 

Although the world's most powerful countries haven't fought directly since World War II, war has been a constant throughout human history. There have been numerous close calls, and several issues could cause diplomatic disputes in the years to come. 

These considerations, along with forecasts and statistical models, lead me to think there's about a one-in-three chance that a new great power war breaks out in roughly the next 30 years.
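
For intuition on what that forecast implies year to year, here is a minimal sketch in Python. It assumes, purely for illustration, a constant annual probability of a great power war breaking out; that constant-hazard simplification is mine, not a model the profile relies on.

```python
# A minimal sketch (my own illustration, not from the profile) of how a
# roughly one-in-three chance over 30 years relates to a constant annual
# hazard: P(war within n years) = 1 - (1 - p_annual) ** n

# Solve for the annual probability implied by P(30 years) = 1/3:
p_30yr = 1 / 3
n = 30
p_annual = 1 - (1 - p_30yr) ** (1 / n)
print(f"Implied constant annual probability: {p_annual:.2%}")  # ~1.34%
```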

Few wars cause more than a million casualties, and the next great power war would probably be smaller than that. However, there's some chance it could escalate massively. Today the great powers have much larger economies, more powerful weapons, and bigger military budgets than they did in the past. An all-out war could kill far more people than even World War II, the worst war we've yet experienced.
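
The heavy-tailed distribution of battle deaths shown in the figure above is what drives this concern. As a rough illustration of how power-law-distributed war sizes behave, here is a short sketch; the tail exponent (alpha = 1.5, broadly in line with published estimates of battle-death distributions) and the minimum war size are illustrative assumptions, not values from the profile or fitted to the underlying data.

```python
# A rough illustration (my own, not from the profile) of why heavy-tailed
# war sizes make the worst case so hard to bound. War battle deaths are
# often modelled as power-law distributed; alpha = 1.5 and the minimum
# size below are illustrative assumptions, not fitted values.
import numpy as np

rng = np.random.default_rng(0)
alpha, x_min, n_wars = 1.5, 10_000, 500

# Inverse-CDF sampling from a Pareto tail: P(X > x) = (x / x_min)^(-alpha)
deaths = x_min * (1 - rng.random(n_wars)) ** (-1 / alpha)

# Typical wars are modest, but rare tail draws are orders of magnitude
# larger: the largest draw is often tens to hundreds of times the median.
print(f"Median war:  {np.median(deaths):>14,.0f} deaths")
print(f"Largest war: {deaths.max():>14,.0f} deaths")
print(f"Largest war's share of all deaths: {deaths.max() / deaths.sum():.0%}")
```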

Could it become an existentially threatening war — one that could cause human extinction or significantly damage the prospects of the long-term future? It's very difficult to say. But my best current guess is that the chance of an existential catastrophe due to war in the next century is somewhere between 0.05% and 2%.

Neglectedness:

War is a lot less neglected than some of our other top problems. There are thousands of people in governments, think tanks, and universities already working on this problem. But some solutions or approaches remain neglected. One particularly promising approach is to develop expertise at the intersection of international conflict and another of our top problems. Experts who understand both geopolitical dynamics and risks from advanced artificial intelligence, for example, are sorely needed.

Solvability:

Reducing the risk of great power war seems very difficult. But there are specific technical problems that can be solved to make weapons systems safer or less likely to trigger catastrophic outcomes. And in the best case, working on this problem can have a leverage effect, making the development of several dangerous technologies safer by improving international cooperation and making them less likely to be deployed in war.

At the end of this profile, I suggest five issues which I’d be particularly excited to see people work on. These are:

  • Developing expertise in the riskiest bilateral relationships
  • Learning how to manage international crises quickly and effectively, and ensuring the systems for doing so are properly maintained
  • Doing research to improve particularly important foreign policies, like strategies for sanctions and deterrence
  • Improving how nuclear weapons and other weapons of mass destruction are governed at the international level
  • Improving how such weapons are controlled at the national level

Highlights for an EA audience

The full profile is a major update to 80k’s content on great power conflict. It’s also a kind of update to my Founders Pledge report and previous Forum posts on conflict as a risk factor, the likelihood of World War III, and the potential for catastrophic wars (co-authored with Rani Martin).

Some highlights and updates that seem particularly relevant to an EA audience are:

  • Revised forecasts of the likelihood of major conflict this century. I haven’t updated majorly since “How Likely is WWIII?”. Before 2050, I think there’s a 30-40% chance we see direct conflict between great powers and a 10% chance we see a war we could reasonably call World War III.
    • I’ve added a lot of detail in the profile about how a war could happen through rapid escalation following a technical or human error, or as a result of a bargaining failure (adopting the useful framework from Prof. Chris Blattman’s book Why We Fight).
  • Some new thinking about how likely war is to cause an existential catastrophe. I make rough estimates of the likelihood of both an extinction war and a civilizational collapse or trajectory-bending war. The latter is much more likely but its effects are much more uncertain. On the whole I estimate the amount of x-risk war poses this century to be between 0.05% and 2% (thanks to Benjamin Hilton for his help with this section).
    • I think I overestimated the risk of an extinction-level war in “How Likely is WWIII?”, but this is partly offset by accounting for other ways war could cause an existential catastrophe.
  • I’m more confident that enormous wars are possible and even worryingly likely. Rani Martin and I had previously speculated that enormous wars much larger than WWII are even less likely than statistical models imply. I now don’t weigh those considerations very highly.
    • Instead, I think the evidence from statistical analyses, theoretical models, and forecasts all suggest that a modern global war could indeed escalate to kill hundreds of millions or billions of people.
  • I raise the tricky issue of subtle trajectory changes. Any major great power war seems likely to have a bunch of important, long-lasting effects even if it doesn’t threaten us with extinction. Such wars shift global power balances, and in their aftermath borders can be redrawn, institutions created or destroyed, and technological development pathways altered.
    • I’m still very unsure how to think about the expected net effect of these factors and mostly skirt the issue. But I wanted to raise it as a potentially useful departure point for future work.
  • I think a bit more about how to prioritise work on this topic vs. 80,000 Hours’ other priorities. Ultimately I do think other existential risks seem potentially more important and more neglected than global conflict. But this is still a pathway some people in the EA community should pursue, for a few reasons:
    • There may be relatively low-hanging fruit to pick within the area that a scope-sensitive, impact-focused, altruistic person can work on. 
    • Personal fit factors could very plausibly overwhelm other factors. 
    • I think developing expertise at the intersection of geopolitics/diplomacy and some other important issue (like AI risk or US-China relations) seems highly and robustly valuable.
  • I suggest some places where people could work and issues that they could focus on. Those issues are:
    • The riskiest bilateral relationships. Become an expert on US-China, US-Russia, or China-India relations.
    • Crisis management. Think about how to avoid and de-escalate technical accidents, international misperceptions, or border conflicts.
    • Scrutinising important foreign policies, like US export control policy.
    • How to govern weapons of mass destruction and emerging weapons technologies at the international level.
    • How to control and keep such weapons safe at the national level.

I tried to load the profile with concrete examples and anecdotes, including the stories of the most devastating conflicts and near-misses throughout history.

Naturally, the Paraguayan War of 1864-70 warrants a paragraph of its own (real ones know).

[Figure: The Battle of Avaí by Pedro Américo, depicting one of the largest battles of the horrific Paraguayan War.]

I’m excited to have this published and hope to see more people work in this area!

Comments (6)



In the 80k article, I noticed the claim that the Ukraine war had killed 'hundreds of thousands'. 

Looking over various estimates collected on Wikipedia, as far as I can tell, it is around 100,000. Is there a reason to believe it is much higher than this?

I felt uncertain whether to write this comment as it feels very pedantic, but I think getting small details right is important for being credible about the speculative risk estimates that can't easily be looked up.

Thanks for catching that, you're absolutely right. That should read either "about 100,000 deaths" or "hundreds of thousands of casualties". I'll get that fixed.

Thank you, Stephen, for your long engagement with this topic, because I do think it is a very real risk that effective altruists should pay more attention to.

In addition to the actions you proposed, I also wanted to suggest there might be promising actions in reducing the conflicts of interest that incentivise conflict and escalate tensions. The high volume of political lobbying and sponsorship of think tanks and universities by weapons companies creates perverse incentives.

I have been very impressed by the work of the Quincy Institute to bring attention to this issue and to explore diplomatic options as alternatives to conflict. I would love to see 80,000 Hours promote them on their job board or interview them.

I've written to my local MPs about banning contributions from weapons makers (Lockheed Martin, Boeing, etc.) to the Australian Government's military think tank, ASPI. Here in Australia, the recent AUKUS security pact has seen an enormous increase in planned military spending and sparked some discussion on the Forum. I am trying to raise this as an issue/cause area to explore amongst Aussie EAs.

Thanks for this! I agree interventions in this direction would be worth looking into more, though I'd also say that tractability remains a major concern. I'm also just really uncertain about the long-term effects.

I think the Quincy Institute is interesting but want to note that it's also very controversial. Seems like they can be inflammatory and dogmatic about restraint policies. From an outside perspective I found it hard to evaluate the sign of their impact, much less its magnitude. I don't think I'd recommend 80K put them on the job board right now.

I largely agree with your assessment that Quincy is controversial and dogmatic about restraint/non-intervention.

That being said, they are a valuable source of disagreement in the wider foreign policy community, and are doing something very neglected (researching & advocating for restraint/non-intervention).

I know Quincy staff disagree with each other, coming from libertarian, leftist, and realist perspectives. So it is troubling that Cirincione departed, because that difference in perspective is needed. Although I do suspect Parsi is describing things accurately when he says Cirincione left because he wanted the Institute to adopt his position on the Russian-initiated war in Ukraine.

Quincy are exploring a controversial analysis of the current Russia-Ukraine conflict: asking whether Russia's invasion could have been avoided in the first place (e.g. by bringing Russia into NATO back when it wanted to join), and advocating that Ukraine and Russia compromise to reduce casualties (to be fair, it's reported the White House has also urged Ukraine to make compromises at times). Whilst controversial, I do think this is worthwhile; I myself might disagree with it (and I believe they all disagree amongst themselves), but I want to see this research/advocacy explored and debated. I had been nervous when the invasion started that Quincy's work could dip into Kremlin apologetics, but they seem to have steered away from that and have nuanced perspectives.

Their work on the Iran Nuclear Deal and the conflict in Yemen is far less controversial, and promising.

I find value in them being a counterbalance to the more hawkish think tanks which are much better resourced.

On the 80K job board, you have a few institutions (well respected and worthwhile, no doubt) like CSIS & RAND, which are more interventionist and/or funded by arms manufacturers (even RAND is indirectly funded by the grants it receives from AEI), so I do worry that there is a systemic bias toward interventionist views.

I hope people don't write off Quincy's work or other anti-interventionist/restraint-focused work entirely, but I certainly agree it should be taken with a grain of salt. I certainly do.

Thanks for this post. Reducing risks of great power war is important, but also consider reducing risks from great power war: in particular, working on how non-combatant nations can ensure their societies survive the potentially catastrophic ensuing effects on trade, food, fuel, etc. A disadvantage of this approach is that it does not prevent the massive global harms in the first place; an advantage is that building the resilience of, e.g., relatively self-sufficient island refuges may also reduce existential risk from other causes (bio-threats, nuclear war/winter, catastrophic solar storm, etc.).

One approach is our ongoing project the Aotearoa New Zealand Catastrophe Resilience Project.

Also, 100,000 deaths sounds about right for the current conflict in Ukraine, given that recent excess mortality analysis puts Russian deaths at about 50,000. 
