
Part 1: Is traction on nuclear risk possible?

This three-part post draws on a keynote speech[i] delivered by the European Leadership Network’s Director, Sir Adam Thomson, to EA Global in London[ii] last month, alongside reactions to our earlier post.[iii] This first part asks whether people outside government can achieve traction in reducing global nuclear risk, before we address neglect and some ideas for achieving impact.

Getting traction on existential and catastrophic nuclear risk looks much easier than on climate, AI or bio risks. The major nuclear powers have a well-established sense of the risk and, unlike with AI, know roughly how to do nuclear crisis management and risk reduction. They have also already negotiated many agreements and have an international framework[iv]. Verification of existential risk reduction agreements - essential for sustaining reduced risk internationally - seems likely to be less intrusive and thus far easier to negotiate for nuclear than for AI or bio[v]. Unlike climate or environmental negotiations, which generally have to involve the entire international community, major nuclear risk reduction could be achieved by agreement between just three capitals - Beijing, Moscow and Washington[vi]. Moreover, such agreement would change the weather on nuclear risk and make it far easier for other nuclear weapon states and the international community to follow suit.

This is not to say that getting traction on nuclear risk reduction is easy, nor that AI or bio risk reduction is impossible. But in the present dire state of great power relations, is it really more likely that the great powers will regulate their AI race more easily than their nuclear one, or add a verification component to the Biological and Toxin Weapons Convention more easily than find a way forward after the US-Russia New START treaty expires in February 2026?

On the contrary, it is surely more plausible that nuclear risk reduction is the key that could unlock international progress on the control of AI applications and bio risks. Climate risks aren’t yet integrated deeply enough into great power security establishments to provide the diplomatic bridge to military risk reduction. But great power security establishments ‘speak’ nuclear, have come to know that they cannot win nuclear arms races or sustain covert nuclear programmes, and are more confident about calculating their interests and then cutting deals on nuclear risk reduction than on AI or even bio. Beijing and Moscow have interests in constraining and mirroring the United States. Washington professes interest in strategic stability. The openings for traction on nuclear are clear.

If you believe, as we do, that great power competition is the single greatest driver of anthropogenic existential risk[vii], then nuclear risk reduction is not only desirable in itself but also the first step towards eliminating a massive chunk of wider existential risk. So, arguably, it has a double importance. In fact, a triple importance because, in the short term, nuclear catastrophe is still the most likely trigger to end humanity[viii].

 

Part 2: Is nuclear traction really a priority for the EA community?

Even if we accept that getting traction on nuclear risk is in principle possible and extremely important, much of the EA community seems to regard nuclear risk reduction as less neglected, and therefore sees nuclear projects as less likely to gain EA community traction than projects on AI, bio or climate. Maybe nuclear is an existential risk that can be left to governments?

On the face of it, all that seems reasonable. As argued above, governments already know roughly how to do nuclear risk reduction. AI, at least, is so far a relatively less studied risk. AI, bio and climate have more international momentum behind them right now than nuclear, which seems paralysed by US-Russia confrontation. So if you want to make a difference, nuclear might well seem a lower priority.

But these propositions deserve greater scrutiny. This section briefly examines ‘neglectedness’ as an EA traction issue before we assess some nuclear projects that stand at least as much chance of EA impact as projects on AI, bio or climate.

First, understanding traction is itself ‘neglected’ within EA: how to achieve it has not been well studied. While the EA community has paid useful attention to modelling great power war, there has been less attention to great power dynamics as a risk factor, and almost no attention to tractability.

Yet, whatever other tests you apply to a project, in the end all projects that aspire to reduce existential risk have to pass the great power traction test. That’s true for all existential risks, not just nuclear. Great powers are not only the countries most likely to generate and deploy capabilities that threaten catastrophe; they are also the countries whose buy-in is most needed if existential risk reduction is to be sustained internationally[ix]. As with AI, much of the catastrophic risk associated with nuclear weapons lies at the intersection of the technology itself and its impact on the strategic balance. The challenges of multipolar traps and collective action connected to great power competition make controls on AI at least as difficult as they are for nuclear. So getting traction on great power risk deserves more attention - and at the moment, getting traction on the newer risks looks less promising than on nuclear.

Second, nuclear is more neglected than you might think. It’s not just that arms control establishments have shrivelled[x], research funding has shrunk[xi], and governments and publics are complacent, or worse, so that we are going backwards both in terms of reduced capacity and in terms of increased risk. It’s that the risk has changed, is changing fast, and that this change is under-studied. It is significant (though we don’t intend this as a criticism) that both Toby Ord and William MacAskill, in their discussions of nuclear risk[xii], cite the Cuban Missile Crisis. That was certainly a terrifyingly close call. But in the 60 years since October 1962, the risks of nuclear miscalculation have developed substantially (and in the process have grown an order of magnitude worse). Then, four nuclear weapon states on two distinct sides; now nine on six or seven. Then, few exogenous technologies impacting nuclear decision-making; now many with the potential to do so. Then, only one potentially existential anthropogenic risk; now nuclear is entangled with at least climate and AI risks. Whilst the general approach has developed over decades of practice, much of it needs reinventing in the emerging context. Old-time practitioners are at a loss[xiii]. Fresh attention is urgently needed.

Third, if you want to reduce existential risk, it is vital not to leave it only to governments. True, governments are indispensable, especially when they are the ones actively manipulating the risk. But they also have vested interests in preserving their capabilities. And there is a widening gap between what they manage to do and what needs to be done. EAs aren’t the only community that can help fill this gap, but they are multidisciplinary, international, and focused on the long term rather than the quick fix - and therefore highly relevant.

 

Part 3: So what EA-type nuclear projects have tractability?  

If nuclear risk reduction is more important than it looks, more tractable than it seems, and more neglected than it should be, there is still a question of whether there are effective nuclear risk reduction projects out there that merit EA support. Existential diplomacy can seem so nebulous, so unquantifiable. Where are the rigorously researched designs, crunchy outputs, and measurable outcomes?

Part of our message is that no existential risk reduction can in the end avoid the diplomacy. After all, the international community, especially the great powers, will have to collaborate for all generations to come in managing existential risks. So existential diplomacy with traction is a vital, continuous part of the solution.

There is sometimes a tendency to be attracted to activities more susceptible to quantification and modelling. The Future of Life Institute is funding rigorous scientific analysis of nuclear winter so that the international community can better understand the degree to which nuclear risk really is existential. And ALLFED are working to make that risk less existential by ensuring that surviving humanity does not starve after a nuclear holocaust. Greater scientific understanding and improved resilience are both highly worthwhile, but activities to influence great power relations - the most critical dimension of preventing nuclear war in the first place - are harder to quantify.

Nevertheless, concrete research and action on great power traction are possible. For traction on nuclear risks, as on anything else, you need effective institutional capacity, infectious ideas, and visible practical steps that help build momentum. In each of these three categories, it’s not hard to identify structured, measurable projects with a decent chance of gaining traction.

On effective institutional capacity, we would offer the power of highly informed networks to fill the government gap - especially the European Leadership Network and its sister Asia-Pacific Leadership Network. They are free to do things that governments cannot. They can reach decision makers in every nuclear capital, except perhaps Pyongyang, using very senior influencers who speak the national language, understand the national policies, and instinctively know how the government works. They already make a concrete difference[xiv]. Researching influencers for network membership, building such networks for enduring effect, and polling to assess policy impact are all quite concrete and quantifiable.

Projects to develop and spread viral ideas might seem more evanescent. But it may be that only a handful of ideas are vital - for example, the scale of the risks of nuclear escalation under technological complexity, or the principle that negotiation of nuclear and other existential risk reduction must be insulated (“compartmentalised”) from the rest of great power competition[xv] - and progress in injecting such ideas into decision-making circles can be measured reasonably well using digital twins and big-data natural language analysis.
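As a very rough illustration of the natural-language-analysis end of this, the sketch below counts how often a few marker phrases for such ideas appear in a dated corpus of speeches and policy statements, quarter by quarter. This is our hypothetical example rather than an ELN tool: the documents, phrases and data structures are invented placeholders.

```python
from collections import Counter, defaultdict
from datetime import date

# Hypothetical corpus: (publication date, text) pairs drawn from speeches,
# communiqués and policy statements. In practice this would come from a much
# larger scraped or licensed document collection.
documents = [
    (date(2023, 1, 3), "The joint statement reaffirmed strategic stability..."),
    (date(2023, 5, 17), "Leaders were urged to insulate arms control from competition..."),
]

# Illustrative marker phrases for the ideas whose spread we want to track.
KEY_PHRASES = [
    "strategic stability",
    "fail-safe review",
    "insulate arms control",
]

def phrase_counts_by_quarter(docs, phrases):
    """Count occurrences of each marker phrase, grouped by publication quarter."""
    counts = defaultdict(Counter)
    for published, text in docs:
        quarter = f"{published.year}-Q{(published.month - 1) // 3 + 1}"
        lowered = text.lower()
        for phrase in phrases:
            counts[quarter][phrase] += lowered.count(phrase)
    return counts

for quarter, counter in sorted(phrase_counts_by_quarter(documents, KEY_PHRASES).items()):
    print(quarter, dict(counter))
```

In practice one would want far more robust matching (synonyms, translations, attribution to specific decision-making circles), but even a simple trend line of this kind makes the uptake of an idea trackable over time.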

Practical steps at present should start with the available options. These include unilateral initiatives that great powers can take to show seriousness (such as fail-safe reviews or tightening up their declaratory policy); use of existing formats, such as the P5 process within the framework of the NPT; and a return to established formats, such as Moscow resuming its Strategic Stability Dialogue with Washington. Such steps are quite binary: they either happen or they do not. And whilst it is governments that do the negotiating, it is very often non-governmental organisations that drive the ideas and establish the ground for progress.

This is a highly compressed set of possible projects that the EA community could get behind. Each has a decent chance of gaining traction. One thing in their favour is the dawning realisation among the great powers, as owners of capabilities that could spell catastrophe, that if they do not work together to manage those capabilities better and stop their spread, others could eventually turn them against them.

Adam Thomson, European Leadership Network

Paul Ingram, Centre for the Study of Existential Risk, Cambridge

June 2023

 

 

[i] https://www.youtube.com/watch?v=R8WIX1FOs0E  

[ii] 19-21 May 2023 at Tobacco Dock, London.

[iii] https://forum.effectivealtruism.org/posts/Dc5r4hM5pYbW9xvx2/philanthropy-and-nuclear-risk-reduction

[iv] This includes the 1968 Nuclear Non-Proliferation Treaty (NPT).

[v] It is surely the intrusiveness required for inspection of biohazards that has so far prevented the adoption of a verification regime under the Biological and Toxin Weapons Convention (BTWC). Consider China’s resistance to, and obstruction of, the WHO investigation into the origins of COVID-19.

[vi] Between them, allowing for China’s projected ‘modernisation’ to 1,500 warheads, they will hold over [95%] of all nuclear warheads.

[vii] The European Leadership Network (ELN) is planning a research project to test this hypothesis.  If it can be convincingly proved, it would be a powerful aid to diplomacy in the international community, especially the global south.  

[viii] As Toby Ord has noted in The Precipice, nuclear Armageddon might not terminate humanity but could leave continued human existence vulnerable to other disasters.  And nuclear Armageddon is available right now, whereas beyond-human AI is not, existential bio capabilities do not yet appear to be in terrorist hands, and existential climate destruction, while appallingly plausible, is not yet upon us.  

[ix] Truly existential nuclear risk is currently in the hands of just two countries – Russia and the United States.  To get a long-term handle on nuclear risk, just nine states (the nuclear weapon states) would have to agree to give up their weapons programmes.  The future of unaligned artificial intelligence is arguably in the hands of just two states at the moment – the United States and China.  To shape that future so that nobody takes advantage or loses control, Washington and Beijing would have to reach agreements.  Of the 59 category 4 biolabs in existence, half are in four countries: the USA (14), the UK (9), Germany (4), India (3). Stronger controls would need to be agreed between at least these four but really between all 23 countries with such facilities.  Just four countries (China, the USA, Russia and India) are responsible for over 55% of global carbon emissions.  So progress on emissions reductions must include traction with at least these four. 

[x] See CSS/ETHZ, Redesigning Nuclear Arms Control for New Realities (November 2021), a joint report by Anna Péczeli, Brad Roberts, Jonas Schneider, Adam Thomson, Oliver Thränert, and Heather Williams, edited by Névine Schepers. Since 2021, President Putin has done a good deal to spur renewed interest.

[xi] The withdrawal of regular funding by the MacArthur Foundation and by Warren Buffet has had a particular impact on the Anglo-American NGO community.

[xii] Ord, The Precipice.  MacAskill, What We Owe The Future.

[xiii] The ELN supports a fortnightly dialogue between senior Russian, American and European arms control experts that, after three years, is still struggling - not just between national groups but within them - with the taxonomy of the challenge.  

[xiv] The ELN can, for example, document how it played the leading NGO role in the 3 January 2022 joint statement by the leaders of China, France, Russia, the United Kingdom and the United States, which in turn has given added traction to the Nuclear Non-Proliferation Treaty community and the G20.

[xv] See the ELN-APLN 17 May 2023 global group appeal to G7 leaders to protect nuclear arms control from great power competition.
