
Note: the below is a chapter by Matthijs Maas, Kayla Matteucci, and Di Cooke (CSER), for a forthcoming book on new avenues of research on Global Catastrophic Risk. The chapter explores whether, or in what ways, the military use of AI could constitute or contribute to Global Catastrophic Risk. A PDF of the chapter is also available at SSRN (see also the CSER publication page).

TLDR: The chapter focuses primarily on two distinct proposed scenarios: (1) the use of swarms of Lethal Autonomous Weapons Systems, and the potential barriers and disincentives to such use; and (2) the intersection of military AI and nuclear weapons. We specifically outline six ways in which the use of AI systems in-, around-, or against- nuclear weapons and their command infrastructures could increase the likelihood of nuclear escalation and global catastrophe.

This research dovetails with themes explored by others in the community, such as Christian Ruhl's recent Founders Pledge report on 'risks from autonomous weapons and military AI'. We welcome your thoughts on the content of the chapter, as well as suggestions for what future research in this area could be valuable and interesting, as this chapter is a condensed version of a larger research project that dives into a broader set of ways in which military AI systems could intersect with, contribute to, or compound global catastrophic risk.

(Length note: half of this page's length consists of references & endnotes).


Abstract: Recent years have seen growing attention to the use of AI technologies in warfare, which has been advancing rapidly. This chapter explores in what ways such military AI technologies might contribute to Global Catastrophic Risks (GCR). After reviewing the GCR field’s limited previous engagement with military AI and giving an overview of recent advances in military AI, this chapter focuses on two risk scenarios that have been proposed. First, we discuss arguments around the use of swarms of Lethal Autonomous Weapons Systems, and suggest that while these systems are concerning, they do not yet appear likely to be a GCR in the near term, on the basis of current and anticipated production limits and costs, which leave these systems uncompetitive with extant systems for mass destruction. Second, we delve into the intersection of military AI and nuclear weapons, which we argue has a significantly higher GCR potential. We review historical debates over when, where, and why nuclear weapons could lead to GCR, along with recent geopolitical developments that could raise these risks further. We then outline six ways in which the use of AI systems in-, around-, or against- nuclear weapons and their command infrastructures could increase the likelihood of nuclear escalation and global catastrophe. The chapter concludes with suggestions for a research agenda that can build a more comprehensive and multidisciplinary understanding of the potential risks from military AI, both today and in the future.

 

1. Introduction

It should hardly be surprising that military technologies have featured prominently in public discussions of global catastrophic risk (GCR).1 The prospect of uncontrolled global war stands as one of the oldest and most pervasive scenarios of what total societal disaster would look like. Conflict has always been able to devastate individual societies: in the modern era, technological and scientific progress has steadily increased the ability of state militaries, and possibly others, to inflict catastrophic violence.2–4

There are many such technologies, with Artificial Intelligence (AI) becoming a particularly notable one in recent years. Increasingly, experts from numerous fields have begun to focus on AI technologies’ applications in warfare, considering how these could pose new risks, or even new GCRs. While the technological development of military AI, and the corresponding study of its impacts, are still at an early stage, both have progressed dramatically in the past decade. Most visibly, the development and use of Lethal Autonomous Weapons Systems (LAWS) has sparked a heated debate, spanning both academic and political spheres.5–7 However, in actuality, military applications of AI technology extend far beyond controversial ‘killer robots’, with diverse uses from logistics to cyberwarfare, and from communications to training.8

It is anticipated that these applications may lead to many novel risks for society. The growing trend of utilizing AI across defense-related systems creates new potential points for technical failure or operator error; it can result in unanticipated, wide-scale structural transformations in the decision environment, or may negatively influence mutual perceptions of strategic stability, exacerbating the potential for escalation with globally catastrophic impacts. Even in less directly kinetic or lethal roles, such as intelligence gathering or logistics, there is concern that the use of AI systems might still circuitously lead to GCRs. Finally, there are possible GCRs associated with the future development of more capable AI systems, such as artificial general intelligence (AGI); while these are not the direct focus of this chapter, it should be noted that such risks could be especially significant in the military context, and that this warrants caution rather than complacency.

Despite ongoing endeavours around the world to leverage more AI technology within the national security enterprise, current efforts to identify and mitigate risks resulting from military AI are still very much nascent. At a technical level, one of the most pressing issues facing the AI technical community today is that any AI system is prone to a wide array of performance failures, design flaws, unexpected behaviour, or adversarial attacks.9–11 Meanwhile, numerous militaries are devoting considerable time and resources towards deploying AI technology in a range of operational settings. Despite this, many still lack clear ethics or safety standards as part of their procurement and internal development procedures for military AI.12 Nor have most state actors actively developing and deploying such systems agreed to hard boundaries limiting the use of AI in defense, or engaged in establishing confidence-building measures with perceived adversaries.13,14

It is clear that military AI developments could significantly affect the potential for GCRs in this area, making the exploration of this technological progression and its possible impacts vital for the GCR community. Now that AI techniques are beginning to see real-world uptake by militaries, it is more crucial than ever that we develop a detailed understanding of how military AI systems might be considered GCRs in their own right, or how they might be relevant contributors to military GCRs. In particular, from a GCR perspective, further attention is needed to examine instances where AI intersects with military technologies as destructive as nuclear weapons, potentially producing catastrophic results. To enable a more cohesive understanding of this increasingly complex risk landscape, we explore the established literature and propose further avenues of research.

Our analysis proceeds as follows: after reviewing past military GCR research and recent pertinent advancements in military AI, this chapter devotes the majority of its focus to LAWS and to the intersection between AI and the nuclear landscape, both of which have received the most attention thus far in existing scholarship. First examining LAWS, we assess whether they might constitute GCRs, which we ultimately argue to be unlikely considering current and anticipated production limits and associated costs. We then delve into the intersection of military AI and nuclear weapons, which we argue has a significantly higher GCR potential. We first examine the GCR potential of nuclear war, briefly discussing the debates over when, where, and why it could lead to a GCR. After providing recent geopolitical context by identifying relevant converging global trends which may also independently raise the risks of nuclear warfare, the chapter turns its focus to existing research on specific risks arising at the intersection of nuclear weapons and AI. We outline six hypothetical scenarios where the use of AI systems in-, around-, or against- nuclear weapons could increase the likelihood of nuclear escalation and result in global catastrophe. Finally, the chapter concludes with suggestions for future directions of study, and sets the stage for a research agenda that can build a more comprehensive and multidisciplinary understanding of the potential risks from military AI, both today and in the future.

2. Risks from Military AI in the Global Catastrophic Risks field

Before understanding how military AI might be a GCR, it is important to understand how the GCR field has viewed risks from AI more broadly. Within the GCR field there has been growing exploration of the ways in which AI technology could one day pose a global catastrophic or existential risk.15–19 Such debates have generally not focused much on the military domain in the near term, however. Instead, they often focus on how such risks might emerge from future, advanced AI systems, developed in non-defense (or, at best, broadly ‘strategic’) contexts or sectors. These discussions have often focused on the development of Artificial General Intelligence (AGI) systems that would display “the ability to achieve a variety of goals, and carry out a variety of tasks, in a variety of different contexts and environments”20 with performance equivalent or superior to a human in many or all domains. These are of course not the only systems studied: more recent work has begun to explore the prospects for, and implications of, intermediate ‘High-Level Machine Intelligence’,21 or ‘Transformative AI’,22 terms referring to types of AI systems that would be sufficient to drive significant societal impacts, without making strong assumptions about the architecture, or ‘generality,’ of the system(s) in question.

Whichever term is used, across the GCR field, and particularly in the sub-fields of AI safety and AI alignment, there has been a long-running concern that if technological progress continues to yield more capable AI systems, such systems might eventually pose extreme risks to human welfare if they are not properly controlled or aligned with human values.18,19 Unfortunately, pop culture depictions of AI have fed some misperceptions about the actual nature of the concerns in this community.23 As this community itself notes, there is still deep uncertainty over whether existing approaches in AI might yield progress towards something like AGI,24 or when such advanced systems might be achieved.25 Nonetheless, researchers point to a range of peculiar failure modes in existing machine learning approaches,9,26 which often display unexpected behaviours, achieving their stated target goals in unintended (and at times hazardous) ways.27–29 Such incidents suggest that the safe alignment of even today’s machine learning systems with human values will be a very difficult task;30 that it is unlikely that this task will become easier if or when AI systems become highly capable; and that even minor failures to ensure such alignment could have significant, even globally catastrophic societal impacts.17

However, while the continued investigation of such future risks is critical, these are not strictly the focus of this chapter, which rather looks at the intersection of specifically military AI systems with GCRs, today or in the near-term. Indeed, with only a few exceptions,31–33 existing GCR research has paid relatively little attention to the ways in which military uses of AI could result in catastrophic risk. That is not to say that the GCR community has not been interested in studying military technologies in general. Indeed, there have been research efforts to learn from historical experiences with the safe development and responsible governance of high-stakes military technologies, to derive insights for critical questions around the development, deployment, or governance of advanced AI. This research includes, for example, analyses of historical scientific dynamics around (strategically relevant) scientific megaprojects,34 the plausibility of retaining scientific secrecy around hazardous information,35 or the viability of global arms control agreements for high-stakes military technologies.36,37 Other work in this vein has studied the development of, impacts of, and strategic contestation over previous ‘strategic general-purpose technologies’ with extensive military applications, such as biotechnology, cryptography, aerospace technology, or electricity.38–40 However, these previous inquiries work by analogy, and have not examined in detail the object-level question of whether or how existing or near-term military AI systems could themselves constitute a GCR.

Thus far, the predominant focus on military AI as a GCR has been on LAWS and on nuclear weapons. The former should not be surprising, given the strong resonance of ‘killer robots’ in the popular imagination. The latter should not be surprising, given that the GCR field’s examination of military technologies has its roots in original concerns about nuclear weapons. Indeed, over the past 75 years, since long before terms such as ‘GCR’ or ‘existential risk’ came to be, the threat of nuclear weapons has inspired a wave of work, study, and activism to reckon with the catastrophic threats posed by this technology.1 Still, at the present moment, the exploration of how military AI might intersect with or augment the dangers posed by destructive technologies such as nuclear weapons remains in its early stages.

Before delving into military AI as a potential GCR, it is also crucial to first define what we consider to be a GCR. ‘Global catastrophic risks’ (GCRs) are risks which could lead to significant loss of life or value across the globe, and which impact all or a large portion of humanity. There is not yet widespread agreement on what this exactly means, what threshold would count as a global catastrophe,41 or what the distinction is between GCRs and ‘existential risks’. For many discussions within the field of GCR, and for many of the risks discussed in other chapters in this volume, such ambiguity may not matter much, if the potential risks discussed are so obviously catastrophic in their impacts (virtually always killing hundreds of millions, or even resulting in extinction) that they would undeniably be a GCR. Yet in the domain of military AI (as with other weapons technologies), one may confront potential edge cases, involving the projected deaths of hundreds of thousands or even millions, where it is unclear whether the toll could plausibly rise higher.

Within our chapter, we therefore need some working threshold for what constitutes a GCR, even if any threshold is by its nature contestable. One early influential definition by Bostrom & Cirkovic holds that a catastrophe causing 10,000 fatalities (such as a major earthquake or nuclear terrorism) might not qualify as a global catastrophe, whereas one that “caused 10 million fatalities or 10 trillion dollars worth of economic loss (e.g., an influenza pandemic) would count as a global catastrophe, even if some region of the world escaped unscathed.”42 Given this definitional uncertainty, in this chapter we will utilize a lower bound for GCRs that lies in the middle of the range indicated by Bostrom & Cirkovic. To be precise, we understand a GCR to be an event or series of directly connected events which results in at least one million human fatalities within a span of minutes to several years, across at least several regions of the world.

To understand whether and in what ways military AI could contribute to GCRs of this level, we next sketch the speed and direction by which this technology has been developed and deployed for military purposes, both historically and in recent years.

3. Advances in Military AI: past and present 

The use of computing and automation technologies in military operations is itself hardly new. Indeed, the history of AI’s development has been closely linked to militaries, with many early advances in computing technologies, digital networks, and algorithmic tools finding their genesis in military projects and national strategic needs.43,44 During the Cold War, there were repeated periods of focus on the military applications of AI, from early RAND forecasts exploring long-range future trends in automation,45 to discussions of the potential use of AI in nuclear command and control (NC2) systems management.46 As such, military interest in AI technology has proven broadly robust, in spite of periods of occasional disillusionment during the ‘AI winters’. Even when individual projects failed to meet overambitious goals and were cancelled or scaled back, they still helped advance the state of the art; such was the case with the US’s 1980s Strategic Computing Initiative, a ten-year, $1 billion effort to achieve full machine intelligence.44 Moreover, by the 1990s, some of these investments were seen as beginning to pay off on the battlefield: for instance, during the first Gulf War, as a wide range of technologies contributed to a steeply one-sided Coalition victory over Iraqi forces,47 the US military’s use of the Dynamic Analysis and Replanning Tool (DART) for automated logistics planning and scheduling was allegedly so successful that DARPA claimed this single application had promptly paid back thirty years of investment in AI.48,49

This long-standing relation between militaries and AI technology also illustrates how, just as there is not a single ‘AI’ technology but rather a broad family of architectures, techniques and approaches, likewise there is not one ‘military AI’ use case (e.g. combat robots). Rather, weapons systems have for a very long time been positioned along a spectrum of various forms of automatic, automated, or autonomous operation.50 Many of these therefore are not new to military use: indeed, armies have been operating ‘fire and forget’ weapons for over 70 years, dating back to the acoustic ‘homing’ torpedoes that already saw use during the Second World War.51 In restricted domains such as at sea, fully autonomous ‘Close-In Weapon Systems’ (last-defense anti-missile cannons) have been used for years by dozens of countries to defend their naval vessels.51

Still, recent years have seen a notable acceleration in the militarisation of AI technology.7 The market for military applications of AI was estimated at USD 6.3 billion in 2020, and was then projected to double to USD 11.6 billion by 2025.52 Investments are led by the US, China, Russia, South Korea, the UK, and France,7 but also include efforts by India, Israel, and Japan.53

What is the exact appeal of AI capabilities for militaries? Generally speaking, AI has been described as a ‘general-purpose technology’ (GPT),38,54 suggesting that it is likely to see global diffusion and uptake, even if there may be shortfalls amid rushed applications.55,56 This also extends to the military realm. Although uptake of military AI differs by country, commonly highlighted areas of application include improved analysis and visualisation of large amounts of data for planning and logistics; pinpointing relevant data to aid intelligence analysis; cyber defence and identification of cyber vulnerabilities (or, more concerningly, cyber offence); early warning and missile defence; and autonomous vehicles for air, land, or sea domains.57 

Given this range of uses, there has been significant government attention to the strategic promise of the technology. US scholars describe AI as having prompted the latest “revolution in military affairs”;58 Chinese commentators project that virtually any aspect of military operations might be ‘intelligentized’59–that is, improved, made faster, or more accurate. In this way, AI could enable ‘general-purpose military transformations’ (GMT).40 Consequently, many anticipate far-reaching or even foundational changes in military practice. Even those with a more cautious outlook still agree that AI systems can serve as a potent ‘evolving’ and ‘enabling’ technology that will have diverse impacts across a range of military fields.60 This has led some to anticipate widespread and unconstrained proliferation of AI, on the assumption that “[t]he applications of AI to warfare and espionage are likely to be as irresistible as aircraft”.58 Still, this should come with some caveats.

In the first place, many applications of military AI may appear relatively ‘mundane’ in the near-term. As argued by Michael Horowitz, “[m]ost applications of AI to militaries are still in their infancy, and most applications of algorithms for militaries will be in areas such as logistics and training rather than close to or on the battlefield”.61 Indeed, early US military accounts of autonomy maintain that there are only particular battlefield conditions under which that capability adds tactical value.62 Despite the ambitious outlook and rhetoric of many national defense strategies around AI, in practice their focus appears to be more on rapidly maximizing the benefits from easily accessible or low-hanging AI applications in areas such as logistics and predictive maintenance, rather than working immediately towards epochal changes.63

Secondly, while there have been significant technological breakthroughs in AI, a number of technological and logistical challenges are likely to slow implementation across many militaries, at least over the next decade. All military technologies, no matter how powerful, face operational, organisational, and cultural barriers to adoption and deployment,64 and there is no reason to expect military AI will be immune to this. Indeed, militaries may face additional and unexpected hurdles when forced to procure such systems from private sector tech companies, because of mismatches in organisational processes, development approaches, and system requirements,65 or export control restrictions or military robustness expectations that go beyond consumer defaults.63 Finally, emerging technologies, in their early stages of development, often face acute trade-offs or brittleness in performance that limit their direct military utility.66 High-profile failures or accidents with early systems, which seem likely, could also temper early military enthusiasm for deployment, stopping or slowing development, especially where it concerns more advanced applications such as complex drone swarms with the capacity for algorithmically coordinated behaviour.67

Moreover, there are factors that may slow or restrict the proliferation of military AI technology, at least in the near term. Military-technological espionage and reverse engineering have proven valuable but ultimately limited tools for militaries seeking to keep pace with cutting-edge technologies developed by adversaries.68 In recent years, the training of cutting-edge AI systems has also begun to involve increasingly large computing hardware requirements,69 as well as important AI expert knowledge, which could ultimately restrict the straightforward proliferation of many types of military AI systems around the globe.70,71

Finally, and alongside all of this, there may be political brakes or even barriers to some (if not all) military uses of AI. It should be kept in mind that while the adoption of any military technology may be driven by military-economic selection pressures,72 its development or use by any given actor is certainly not as inevitable or foregone as it may appear in advance.36 Historically, states and activists have, by leveraging international norms, interests and institutions, managed to slow, contain, or limit the development of diverse sets of emerging weapons technologies–from blinding lasers to radiological weapons, and from environmental modification to certain nuclear programs–beyond a degree imaginable even years earlier.73,74 Accordingly, there is always the possibility that the coming decades will see invigorated opposition to military AI that imposes an effective brake; however, the success of any such efforts will depend sensitively on questions of issue framing, forum choice, and organization.75,76

As a result, the reality of military AI may appear relatively mundane at least for the next few years, even as it gathers pace below the surface. Nonetheless, even under excessively conservative technological assumptions–where we assume that AI performance progress slows down or plateaus in the coming years–AI appears likely to have significant military impacts. In fact, in many domains, it need not achieve further dramatic breakthroughs for existing capabilities to alter the international military landscape. As with conventional drone technologies, even imperfect AI capabilities (used in areas such as image recognition) could suffice to enable disruptive tactical and strategic effects, especially if they are pursued by smaller militaries or non-state actors.77 As such, even if we assume that more advanced AI capabilities remain out of reach or undesired (an assumption that may rest on thin ground), the development of autonomous systems could herald a wide range of tactical changes,77 including a shift in the so-called ‘offense-defence balance’78,79 due to the increased effectiveness of offensive capabilities, along with an increased use of deception and decoys, or changes in force operation and operator skill requirements, to name a few.80 But the question still remains: are any of these impacts plausibly globally catastrophic?

4. LAWS as GCR

Thus far, some of the most in-depth discussions of military AI systems as plausible GCRs have focused on the potential risks of LAWS. In this section, we examine existing research, and explore several proposed scenarios for ways by which LAWS might contribute to GCRs. Ultimately, we argue that the threshold of destruction (> one million human fatalities) necessary for a GCR leaves most, if not all, near-term LAWS unlikely to qualify as GCRs in isolation.

To pose a GCR, a technology has, at some point, to have lethal effects. To be sure, there are significant developments in directly lethal military AI. Of course, technical feasibility by itself does not mean the development of such systems is inevitable: the existence of LAWS–or their mass procurement and deployment beyond prototypes–hinges not just on questions of technological feasibility, but also on questions of governments’ willingness to deploy such systems. To take the technological developments as a starting point, LAWS are already being developed and deployed across militaries worldwide. Already in 2017, a survey identified 49 deployed weapon systems with autonomous targeting capabilities sufficient to engage targets without the involvement of a human operator.5 This number has grown substantially since.

Moreover, in the past few years the first fully autonomous weapons systems have reportedly begun to see actual (if limited) deployment. For instance, the South Korean military briefly deployed Samsung SGR-A1 sentry gun turrets to the Korean Demilitarized Zone, which came with an optional autonomous operation mode.81,82 Israel has begun to deploy the ‘Harpy’ loitering anti-radar drone,83 and various actors have begun to develop, sell, or use weaponised drones capable of autonomy.84 In 2019, the Chinese company Ziyan released the ‘Blowfish A3’, a machine-gun carrying assault drone that was allegedly marketed as sporting ‘full autonomy’.85 2020 saw claims that Turkey had developed (semi)autonomous versions of its ‘Kargu-2’ kamikaze drone.86 In the spring of 2021, a UN report suggested that this weapon had been used fully autonomously in the Libyan conflict, to attack soldiers fleeing battle.87,88 UAVs that are in principle capable of full autonomy have also reportedly seen use in the 2022 Russian invasion of Ukraine, although it remains difficult to ascertain whether any of these systems have been used in fully autonomous mode.89,90 Recent developments in autonomous weapons have also included the use of large numbers of small robotic drone platforms in interacting swarms.91 The Israel Defense Forces deployed such swarms in its May 2021 campaign in Gaza, to locate, identify and even strike targets.92

In other cases, AI has been used in ways that are less ‘autonomous’, but which certainly show the ‘lethality-enabling’ function of many AI technologies.93 For example, the November 2020 assassination of Mohsen Fakhrizadeh, Iran’s top nuclear scientist, relied upon a remotely-controlled machine gun. While the system was controlled by a human operator, it reportedly used AI to correct for more than a second and a half of input delay. This allowed the operator to fire highly accurately at a moving target, from a moving gun platform on a highway, while stationed more than 1,000 miles away.94 Other developments demonstrate the potential for more advanced autonomous behaviour. In 2020, DARPA ran its AlphaDogfight Trials, a simulated dogfight between a human F-16 pilot and a reinforcement learning-based AI system, which saw the AI defeat the human pilot in all five matches.95 In the past decade, the US and others have also experimented with a plane-launched swarm of 103 Perdix drones, which coordinated with one another to demonstrate collective decision-making, adaptive formation, and ‘self-healing’ behaviour.96 Experiments in swarming drones have continued apace since.

Perhaps unsurprisingly, given their earlier adoption relative to other high-risk military applications, LAWS have received sustained public scrutiny and scholarly attention, far more so than any other military AI use case. Consequently, efforts to develop governance approaches have arisen from multiple corners,75,97 including at the UN Convention on Certain Conventional Weapons (CCW) since 2014, as well as within arms control communities since 2013.98 However, it is notable that these debates have mostly examined qualitative characteristics of LAWS, rather than the potential quantitative upper limit on the scale of violence they might enable. Specifically, opposition to LAWS has focused primarily (but not exclusively) on their potential violation of various existing legal principles or regimes under international law, specifically International Humanitarian Law (IHL),99,100 or (when used in law enforcement outside of warzones) under international human rights law;101 other discussions have explored whether LAWS, even if they narrowly comply with cornerstone IHL principles, might still be held to undermine human dignity because they involve ‘machine killing’.75,102

Over time, however, some civil society actors have begun to attempt to understand and stigmatise LAWS swarms as a potential ‘weapon of mass destruction’,103 framing swarms of lethal drones as a weapon system that could easily fall into the hands of terrorist actors or unscrupulous states, allowing the infliction of massive violence. This framing has become more prominent within counter-LAWS disarmament campaigns,104 most viscerally in depictions such as the Future of Life Institute’s ‘Slaughterbots’ videos of 2017 and 2021.105,106 This is indicative of a growing concern with the ‘quantitative’ dimension and potential scale of mass attacks using autonomous weapons.

As a consequence, two distinct scenarios have often been proposed regarding LAWS technology as a significant global risk: terrorist use for mass attacks and state military use of massed LAWS forces.

Mass terror attacks on the public or on GCR-sensitive targets 

One hypothetical discussed by experts focuses on the use of LAWS not by state militaries, but by non-state actors such as terror groups.104 In theory, terrorists could leverage larger and larger swarms, either through direct acquisition of such militarized technology (if unregulated), or through remote subversion of existing fleets using cyberattacks. Turchin and Denkenberger argue that, as it becomes cheaper to build drones, increasingly large drone swarms could become feasible as a global catastrophic risk.107 While it is possible that this could enable mass casualty attacks, it seems unlikely that any non-state actor could scale such attacks up to the global level. Moreover, it would be hard for them to prepare attacks of such magnitude undetected.

Another less explored risk would involve the (terrorist) use of LAWS to deliver other GCR-capable weapons or agents. For instance, Kallenborn and Bleek have suggested that actors could use drone swarms to deliver existing chemical, biological, or radiological weapons;108 others have suggested that non-state actors could refit crop duster drones to disperse chemical or biological agents.109 In such cases, the level of risk is less clear: it might still be unlikely that these hypothetical events could be scaled up to result in a full GCR; however, this depends on the potency of the delivered agent in question. Ultimately, existing research is still very preliminary, and much further work is necessary to enable more concrete conclusions.

A third attack pathway could involve the malicious or terrorist use of autonomous weapons against sensitive critical infrastructures which, if damaged or compromised, would precipitate GCRs (or at least would instantly cripple our ability to respond to ongoing or imminent GCRs). Drone systems have been used by various non-state actors in recent years to mount effective attacks against critical infrastructures—as in the attacks on oil pipelines and national airports in the Yemen conflict.110 Moreover, across the world there are a wide range of vulnerable global infrastructural ‘pinch points’ (internet connection points, narrow shipping canals, breadbasket regions) which, if attacked or degraded, could precipitate major shocks in the global system.111 Many of these could conceivably be attacked with autonomous weapons, and even temporary disruption could result in regional or even global disaster through the resulting knock-on effects. For instance, AWS could be used to deliver coordinated attacks on nuclear power plants, potentially resulting in large fallout patterns and contamination of land and food.112 Alternatively, they could be used to attack and interrupt any future geo-engineering programs, potentially triggering climatic ‘termination shocks’ in which temperatures bounce back in ways that would be catastrophically disruptive to the global ecosystem and agriculture.113,114 However, these types of attack do not seem to necessarily require autonomous weapons, and while they could certainly result in widespread global chaos, it is again unclear whether they could be scaled up to the threshold of a global catastrophe involving over one million casualties.

State attacks with mass LAWS swarms

Within existing research, another frequently discussed hypothetical scenario is the idea of well-resourced actors using mass swarms of LAWS to carry out global attacks, allowing for “armed conflict to be fought at a scale greater than ever”.115 There is also a lively discussion about the possibility that mass attacks using swarms of ‘slaughterbots’ (fully autonomous microdrones that deliver small shaped charges) could allow small state actors to mount attacks that would kill as many as 100,000 people.116 

Turchin and Denkenberger have argued that in large enough quantities, drone swarms could be destructive enough to constitute a GCR, and that command errors could result in autonomous armies creating a similar level of damage. Still, they predict that, even in those scenarios, LAWS are likely to result in broad instability rather than destruction on the scale of a GCR.107 More recently, Anthony Aguirre has suggested that mass swarms of ‘anti-personnel AWS’ could deliver large-scale destruction at lower costs and lower access thresholds than would be required for an equivalently destructive nuclear strike (of a scale equivalent to the Hiroshima bombing), and that such weapons could be scaled up to inflict extreme levels of global destruction.117 Turchin has suggested that drone swarms could become catastrophic risks only under very specific conditions, in which more advanced (e.g. AGI) technologies are delayed, drone manufacturing costs fall to extremely low bounds, defensive counter-drone capabilities lag behind, and militaries adopt global postures that condone the development of drone swarms as a strategic offensive weapon.31 Even under these conditions, he suggests, drone swarms would be unlikely ever to rise to the level of an ‘existential risk’, though they could certainly contribute to civilizational collapse in the event of an extensive global war.31

Evaluating the feasibility of mass LAWS swarm-attack scenarios as GCRs

In both of the above cases, there is reason for concern and precautionary study and policy. However, there remain at least some practical reasons to doubt that LAWS lend themselves to precipitating catastrophes at a full GCR scale in the near term. 

For one, it is still unclear whether LAWS would be more cost-effective as a mass-attack weapon for states that have other established options. Aguirre has argued that ‘slaughterbots’ could be as inexpensive as $100 per unit, meaning that, even with a 50% unit attack success rate, and a doubling of cost to account for delivery systems, the shelf price of an attack inflicting 100,000 casualties would be $40 million.117 However, how does that actually compare to the costs of other mass-casualty weapons systems? While precise procurement costs remain classified, estimates have been given for various nuclear weapon assets: US B61 gravity bombs are estimated to cost $4.9 million each (with a B-52H bomber carrying 20 such bombs costing an additional $42 million); a Minuteman III missile costs $33.5 million apiece (or $48.5 million including the cost of 3 nuclear warheads).118 The cost of North Korean nuclear weapons has been estimated at between $18 million and $53 million per warhead.119 Accurate and up-to-date cost-effectiveness estimates for other weapons of mass destruction are hard to come by: in 1969, a UN study estimated that the costs of inflicting one civilian casualty per square kilometer were about $2,000 with conventional weapons, $800 with nuclear weapons, $600 with chemical weapons, and only $1 with biological weapons.120,121 However, these estimates are likely considerably outdated, and are unlikely to reflect the destructive efficiency of contemporary WMDs used against modern societies. So in principle, and viewed only from a narrowly ‘economic’ perspective, LAWS swarms might appear less ‘cost-effective’ than most existing WMDs, though not dramatically so. And they could theoretically be ‘competitive’ because they are seen as more achievable (in the sense that their production may be less reliant on globally controlled resources such as fissile materials or toxins).
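To make the cost comparison above concrete, the following minimal Python sketch reproduces Aguirre's arithmetic as cited here, and sets the result against the per-weapon nuclear cost figures quoted in this paragraph. The inputs are the figures cited in the text; the pairing of a single B61's cost with Hiroshima-scale fatalities (the lower-bound figure quoted later in this chapter) is a rough, purely illustrative assumption rather than a claim drawn from those sources.

```python
# Back-of-the-envelope reconstruction of the cost figures discussed above.
# Inputs are taken from the sources cited in this section; the per-fatality
# comparison at the end rests on an illustrative assumption (one crude
# fission weapon causing Hiroshima-scale fatalities).

def slaughterbot_attack_cost(target_casualties: int,
                             unit_cost: float = 100.0,         # Aguirre's assumed shelf price per drone
                             success_rate: float = 0.5,        # assumed fraction of units that kill one person
                             delivery_multiplier: float = 2.0  # doubling of cost to account for delivery
                             ) -> float:
    """Total cost of a hypothetical 'slaughterbot' attack, following Aguirre's arithmetic."""
    units_needed = target_casualties / success_rate
    return units_needed * unit_cost * delivery_multiplier

attack_cost = slaughterbot_attack_cost(100_000)
print(f"Slaughterbot attack inflicting 100,000 casualties: ${attack_cost / 1e6:.0f}M "
      f"(~${attack_cost / 100_000:.0f} per fatality)")

# Comparison point: one B61 gravity bomb at $4.9M, assuming (illustratively)
# Hiroshima-scale fatalities of ~140,000 per weapon.
b61_cost = 4.9e6
assumed_fatalities_per_weapon = 140_000
print(f"B61 at Hiroshima-scale lethality: ~${b61_cost / assumed_fatalities_per_weapon:.0f} per fatality")
```

On these stylized numbers, the ‘slaughterbot’ attack works out to roughly $400 per fatality, around an order of magnitude more than the nuclear comparison point, which is consistent with the suggestion above that LAWS swarms may appear less cost-effective than most existing WMDs, but not dramatically so.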

Moreover, there may be supply chain limitations, which could cap how many such drone swarm units could plausibly be produced or procured. To be sure, assuming very small drones, swarms could be scaled up to hundreds of thousands or millions of units. Some accounts of drone swarms have envisaged a future of ‘smart clouds’ of billions of tiny, insect-like drones.91 Yet this might trade off against effective lethality: it seems unlikely that ‘micro-drone’ systems will be able to do much more than reconnaissance, given limits in terms of power, range, processing, and/or payload capacity.122 By contrast, focusing on LAWS that are able to project lethal force at meaningful ranges, the production constraints seem more serious. We can compare the production lines for military drones, a technology with better-established supply chains: in 2019, the defense information group Jane’s estimated that more than 80,000 surveillance drones and 2,000 attack drones would be purchased around the world in the next decade.123 The civilian drone market is admittedly larger, with around 5 million consumer drones sold in 2020, a number expected to rise to 9.6 million by 2030.124

That suggests that if commercial supply chains were all dedicated to the production of LAWS, GCR-scale attacks could come into range. Yet the relatively small size of the military drone market is still suggestive of the challenges around procuring sufficient numbers of autonomous weapons to truly inflict global catastrophe in the next decade or so, and possibly beyond. 
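As a rough illustration of this production gap, the sketch below compares the fleet size implied by the chapter's one-million-fatality GCR threshold with the production figures just cited. It carries over the same simplifying assumptions used in the slaughterbot scenario above (one fatality per successful unit, a 50% success rate); these per-unit lethality figures are illustrative assumptions, not estimates from the cited sources.

```python
# Rough scale check: lethal units needed for a GCR-threshold attack versus
# the cited drone production figures. Per-unit lethality assumptions are
# illustrative only; actual lethality per platform could differ greatly.

GCR_THRESHOLD_FATALITIES = 1_000_000   # working threshold adopted in Section 2
SUCCESS_RATE = 0.5                     # assumed fraction of units that kill one person

units_needed = GCR_THRESHOLD_FATALITIES / SUCCESS_RATE   # ~2 million lethal drones

# Production figures cited above
military_drones_per_decade = 80_000 + 2_000   # Jane's 2019 estimate (surveillance + attack)
consumer_drones_per_year_2030 = 9_600_000     # projected annual consumer drone sales by 2030

print(f"Lethal units needed for a GCR-threshold attack: ~{units_needed:,.0f}")
print(f"Years of projected global military drone output required: "
      f"~{units_needed / (military_drones_per_decade / 10):,.0f}")
print(f"Share of one year's projected consumer drone sales (2030): "
      f"~{units_needed / consumer_drones_per_year_2030:.0%}")
```

Under these assumptions, a GCR-threshold fleet would absorb roughly a fifth of a single year's projected consumer drone sales in 2030, but several centuries' worth of projected military drone production, which is the sense in which commercial supply chains could bring such attacks into range while dedicated military production alone would not.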

Of course, there might also be counter-arguments that suggest these barriers could be overcome, making mass LAWS attacks (at GCR-scale) more feasible. For instance, it could be misleading to look at the raw number of platforms acquired and deployed, since individual autonomous weapons platforms might easily be equipped with weapons that would allow each platform to kill not one but dozens or thousands, depending on the weapon delivered or location of attacks. However, this is not the way that ‘slaughterbots’ are usually represented; and indeed, outfitting these systems with more ordnance would simply make the ordnance the bottleneck. 

In the second place, motivated states might be able to step up production and procure far larger numbers of these systems than is possible today, especially if the anticipated strategic context of their use is not counterinsurgency but a near-peer confrontation, where drone swarms might become perceived (whether or not accurately) as not just helpful or cost-saving, but instead providing a key margin of dominance. For instance, the US Navy in 2020 discussed offensive and defensive tactics for dealing with attacks of ‘super swarms’ of up to a million drones.125 Increased state attention and enthusiasm for this technology could change the industrial and technical parameters rapidly.

In the third place, economies of scale and advances in manufacturing capabilities could mean that unit production costs fall, or that mass production is otherwise facilitated, potentially enabling the targeting of many millions. It is unclear to what level costs would have to fall for GCR-scale fleets to become viable (let alone common), however, with Turchin suggesting unit costs of below $1.31 Even so, barring truly radical manufacturing breakthroughs, producing fleets at this scale would require quite significant investments. And the above does not even begin to address questions of delivery.

The overall point here is therefore not that states will remain uninterested in, or incapable of, building drone swarms of a size that would enable GCR-scale attacks. Indeed, states have often proven willing to invest huge sums in military technologies and their production infrastructures and industries.126 Still, even in those cases, LAWS swarms will likely not be as destructive as modern thermonuclear weapons: as argued by Kallenborn, “[w]hile they are unlikely to achieve the scale of harm as the Tsar Bomba, the famous Soviet hydrogen bomb, or most other major nuclear weapons, swarms could cause the same level of destruction, death, and injury as the nuclear weapons used in Nagasaki and Hiroshima.”103 That suggests that they might be seen by militaries to complement–rather than substitute for–existing deterrents.

The above suggests that LAWS are certainly a real concern: it appears possible that, if this technology is developed further, it could in principle be used to inflict mass casualty attacks on cities. At the same time, it implies that unless political, economic or technological conditions change, swarms of LAWS (whether operated by terrorists or states) will remain unlikely to be able to inflict GCR-level catastrophes for the near future. The scale-up that would be necessary to achieve destruction qualifying as a GCR does not presently seem to be a realistic outcome, industrially or politically, particularly given the host of similarly or more destructive weapons already available to states. All this suggests that while autonomous weapons would likely be disruptive, their use would not scale up to a full GCR under most circumstances. Nevertheless, there may be additional edge cases of risk, especially in under-explored scenarios such as the use of LAWS to deliver WMDs, and/or their use in mass-scale internal repression or genocide.122 This therefore is an area that will require further research.

5. Nuclear Weapons and AI 

There is a second way in which military AI systems could rise to the level of a GCR: through their interaction with one of the oldest anthropogenic sources of global catastrophic risk, nuclear weapons.

Nuclear war as GCR

To understand the way that AI systems might increase the risk of nuclear war in ways that could pose GCRs, it is first key to briefly review the ways in which nuclear war itself has become understood as a global catastrophic risk. 

Since the invention of atomic weapons, discussions of nuclear risk have often been characterised by sharply divergent frames and understandings, with many accounts focusing single-mindedly either on the perceived irreplaceable strategic and geopolitical benefits derived from possessing nuclear weapons, or on the absolutely intolerable humanitarian consequences of their use. The discourses surrounding nuclear weapons today often still fall within those categories.127 This is not new: early understandings of nuclear weapons vacillated between treating them as simply another weapon for tactical use on the battlefield,128 or as an atrocious “weapon of genocide”,129,130 potentially even capable of igniting the atmosphere, as some leading Manhattan Project scientists briefly worried might happen during the Trinity test.131,132

One fact which no one questions, however, is the historically unprecedented capability of nuclear weapons to inflict violence at a massive scale.133 The crude atomic bombs dropped by the U.S. on Hiroshima and Nagasaki killed at least 140,000 and 74,000 people respectively, yet nuclear weapons with similar destructive capacity have more recently been considered ‘low-yield’.134 In the decades following WWII, countries developed thermonuclear weapons, which, in some cases, were thousands of times more destructive than the first atomic bombs.135 Today, the use of a single nuclear weapon could kill hundreds of thousands of people, and a nuclear exchange, even one involving ‘only’ a few dozen nuclear weapons, could have devastating consequences for human civilization and the ecosystems upon which we depend.136

If the use of a single nuclear weapon would be a tragedy, the additional fact that these weapons would rarely be used in isolation highlights clear paths to global catastrophe. According to David Rosenberg, early US plans for a nuclear war, drawn up by the Strategic Air Command in 1955, were estimated to inflict a total of 60 million deaths and another 17 million casualties on the Soviet Union.137 Later plans would escalate even further. The 1962 U.S. nuclear war plan, utilizing the entire U.S. arsenal, would have killed an estimated 285 million people and harmed at least another 40 million in the targeted (Sino-Soviet bloc) countries alone.138 Daniel Ellsberg, then at RAND, later recounted war plans for a US first strike on the Soviet Union, Warsaw Pact satellites, and China which, including additional casualties from fallout in adjacent neutral (or even allied) countries, projected global casualties rising up to 600 million.132

These estimates proved to be not a ceiling but a potential lower bound, once scientists began to focus on the potential environmental interactions of nuclear war. In 1983, Carl Sagan famously embarked on a public campaign to raise awareness about the environmental impacts of nuclear weapons. Along with several colleagues, including some in the USSR, Sagan disseminated a theory of “nuclear winter,” which holds that fires caused by nuclear detonations would loft soot into the stratosphere, leading to cooler conditions, drought, famine, and wide-scale death.139,140 In response to Sagan’s campaign, the U.S. government attempted to play down public discussions of nuclear winter, with the Reagan administration stating publicly in 1985 that it had “…very little confidence in the near-term ability to predict this phenomenon quantitatively.”141 Still, archival materials reveal that internally, administration officials had strong feelings about nuclear winter. One employee of the Department of Defense noted at the time that the U.S. government and overall scientific community “ought to be a bit chagrined at not realizing that smoke could produce these effects.”139

Over time, accounts such as these have led to the creation of a “nuclear taboo” or norm of non-use,142 although it is unclear whether the taboo will stand amid a number of contemporary developments.143 Today, scholars continue to study the impacts of nuclear detonations, with some predicting that even a small nuclear exchange could result in nuclear winter. For instance, climate scientist Alan Robock and colleagues suggest that “…if 100 nuclear bombs were dropped on cities and industrial areas—only 0.4 percent of the world’s more than 25,000 warheads—[this] would produce enough smoke to cripple global agriculture.”144 Even in the limited scenario of such a ‘nuclear autumn’, it has been estimated that U.S. and Chinese agricultural production of corn and wheat would drop by about 20-40% in the first 5 years, putting as many as 2 billion people at risk of starvation.145 A larger exchange between the U.S. and Russia would have even more serious and “catastrophic” consequences, according to a 2019 analysis of long-term climatic effects.146

To be sure, there remains some dissent over models predicting these environmental impacts,147 the science of nuclear winter,148 or the status of nuclear war as a GCR.149 Assessments of nuclear risk are made more difficult still by uncertainty in not just the environmental models, but also the underlying strategic dynamics. There are deep methodological difficulties around quantifying nuclear risks, especially because an all-out nuclear war has never occurred. When examining nuclear risks, scholars face the formidable challenge of attempting to “understand an event that never happened.”150 Nonetheless, different approaches attempt to integrate historical base rates for intermediary steps (close calls and accidents) with expert elicitation, to arrive at imperfect background estimates.151–153

Yet (as even modellers note), such estimates remain subject to extreme uncertainty, given the unpredictability of strategy, targeting decisions, and complex socio-technical systems. A host of close calls during the Cold War show that carefully-designed systems are not impervious to accidents or immune from human error.154,155 As normal accident theory suggests, undesirable events and accident cascades are inevitable,156 and adding in automated components or fail-safe systems may sometimes counterintuitively increase overall risk, by increasing the system’s complexity, reducing its transparency, or inducing automation bias.36 The present era is now faced with the question of whether emerging technologies such as AI will be equally susceptible to risks from normal accidents;157 whether they will contribute to such risks in legacy technologies such as nuclear weapons—and whether they will make the impacts of already-destructive weapons more severe or increase the likelihood of their use.

Overall, the massive loss of life envisioned in nuclear war plans certainly qualifies nuclear weapons as a GCR. Whether they are considered to pose an ‘existential’ risk may depend on the number and yield of weapons used. Some analyses have suggested that, even in extreme scenarios of nuclear war that resulted in civilizational collapse and the deaths of very large (>90% or >99.99%) fractions of the world population, we might still expect humanity to survive.158,159 On the other hand, it has been countered that even if such a disaster would not immediately lead to extinction, it might still set the stage for a more gradual and eventual collapse or extinction over time, or at the very least for the recovery of a society with much worse prospects.160 However, for many commonly shared ethical intuitions, this distinction may be relatively moot.161 Whether or not it is a technical existential risk, any further study of nuclear weapons’ environmental and humanitarian impacts, including nuclear winter, will likely further corroborate their status as a major threat to humanity both today and into the future. 

Recent developments in nuclear risk and emerging technology

Today’s emergence of military AI therefore comes on top of a number of other disruptive developments that have impacted nuclear risk over the past decades, and which have already brought concern about nuclear GCRs to the forefront.

Notably, this attention comes after a period of relative inattention to nuclear risk. In the aftermath of the Cold War, the risks posed by the existence of nuclear weapons were seen to be less immediate and pronounced. Accordingly, discussions came to focus more on nuclear security, including efforts after the fall of the Berlin Wall to secure Soviet nuclear materials,162 as well as the challenges of preventing terrorist acquisition of WMDs, such as through UNSC Resolution 1540 and the Nuclear Security Summit initiatives. In the last decade, however, converging developments in geopolitics and military technology have brought military (and especially nuclear) GCRs back to the fore.

First, the relative peace that followed the Cold War has been replaced by competition between powerful states, rather than fully cooperative security (or hegemony) in many domains. Geopolitical tensions between major powers have been inflamed, visible in the form of flashpoints from Ukraine to the South China Sea. Meanwhile, the regimes for the control of WMDs have come under pressure.163 Nuclear arms control agreements between the U.S. and Russia (such as the Intermediate-Range Nuclear Forces Treaty and the Anti-Ballistic Missile Treaty) have been cancelled by Presidents Trump and Bush; other nuclear states such as the UK, France, or China are not restrained by binding nuclear arms control agreements. Although the U.S. and Russia extended the New Strategic Arms Reduction Treaty in March of 2021,164 the future of arms control is uncertain amid ongoing disputes between the owners of the world’s two largest nuclear arsenals,165 and tensions between the West and Russia over Putin’s invasion of Ukraine. In the absence of open channels of communication and risk reduction measures, the dangers of miscalculation are pronounced.166

Second, various states have undertaken programs of nuclear re-armament that reach beyond the maintenance and replacement of existing systems, opposing the spirit of the Nuclear Non-Proliferation Treaty’s commitment to continued disarmament.167 For example, the US recently deployed a new low-yield warhead on its submarine-launched ballistic missiles and requested funding for research and development on a new sea-launched cruise missile.168 Seeing its nuclear arsenal as a guarantor of its great-power status, Russia has modernised its nuclear arsenal,169 as well as investing in a new generation of exotic nuclear delivery systems, including Poseidon (an autonomous nuclear underwater drone),170 Burevestnik (a nuclear-powered cruise missile),171 Kinzhal (an air-launched ballistic missile) and Avangard (a hypersonic glide vehicle).172 While the Chinese nuclear force still lags substantially behind those of its rivals in size, it too has begun a program of nuclear force expansion; analysts estimate that its arsenal has recently surpassed France’s to become the world’s third largest,173 and there are concerns that the construction of new ICBM fields signals an expansion in force posture from ‘minimum’ to ‘medium deterrence’.174 In 2021, China also conducted an alleged test of a Fractional Orbital Bombardment System (FOBS).175,176 In its 2021 Integrated Review, the UK recommended raising the ceiling on its nuclear stockpile by over 40%, to 260 warheads.177

The third trend relates to the ways in which strategic stability is further strained by the introduction of new technologies, from the United States’ Conventional Prompt Global Strike to a range of programs aimed at delivering hypervelocity missiles, which risk exacerbating nuclear dangers by shortening decision timelines, or which introduce ‘warhead ambiguity’ around conventional strikes that could be mistaken for nuclear ones.178 New technologies will make states more adept at targeting one another’s nuclear arsenals, creating a sense of instability that could lead to pre-emption and/or arms racing.179 Not only are states engaging individually in the development of these technologies; the last few years have also seen an increasing number of strategic military partnerships that involve such technologies, and that shape and constrain their use.180–182

In sum, several external trends frame the intersection of nuclear risk with emerging military AI technologies: increased inter-state geopolitical tensions, nuclear rearmament (or armament), and the introduction of other novel adjacent technologies. These trends all intersect with advances in military AI, against the backdrop of an alleged 'AI Cold War'.183 

This brings us back to our preceding discussion: even if many military AI applications are not a direct GCR, there are concerns about their intersection with nuclear weapons. Yet how, specifically, could the use of AI systems to automate, support, attack, disrupt, or change nuclear decision-making interact with the already complex geometry of deterrence, creating new avenues for deliberate or inadvertent global nuclear catastrophe?

Nuclear Weapons and AI: usage- and escalation scenarios

As discussed, militaries have a long history of integrating computing technologies with their operations, and strategic and nuclear forces are no exception. This has led some to raise concerns about the potential risks of such integrations. In the late 1980s, Alan Borning noted that "[g]iven the devastating consequences of nuclear war, it is appropriate to look at current and planned uses of computers in nuclear weapons command and control systems, and to examine whether these systems can fulfil their intended roles."184 On the Soviet side, there were similar concerns over the possibility of triggering a 'computer war', especially in combination with launch-on-warning postures and the militarization of space. As Soviet scholar Boris Raushenbakh noted in a joint publication, "[t]otal computerization of any battle system is fraught with grave danger."185 Such scruples notwithstanding, during the late Cold War the Soviet Union did in fact develop and deploy the 'Perimeter' (or 'Dead Hand') system; while it still included a small number of human operators, once switched on during a crisis the system was configured to (semi-)automatically launch the USSR's nuclear arsenal if its sensors detected signs of a nuclear attack and contact with the Kremlin was lost.186 

As previously stated, concerns about the potentially escalatory effects of AI on the nuclear landscape have been examined somewhat more extensively than other possible military AI GCR scenarios. In this section, we examine established research on potential risk scenarios arising from the intersection between AI and nuclear weapons infrastructure. We therefore concern ourselves not only with the direct integration of AI into nuclear decision-making functions, such as launch orders, but also with the application of AI in supporting or tangentially associated systems, as well as with its indirect effects on the broader geopolitical landscape. Throughout the Cold War, US and Soviet nuclear command, control, and communications (NC3) featured automated components, but today there is an increasing risk that AI will begin to erode human safeguards against nuclear war. Although NC3 differs by country, we define it broadly as the combination of warning, communication, and weapon systems, together with the human analysts, decision-makers, and operators, involved in ordering and executing nuclear strikes, as well as in preventing unauthorized use of nuclear weapons.

NC3 systems can include satellites, early warning radars, command centers, communication links, launch control centers, and operators of nuclear delivery platforms. Depending on the country, individuals involved in nuclear decision-making might include operators of warning radars, analysts sifting through intelligence to provide information about current and future threats, authorities who authorize the decision to use nuclear weapons, or operators who execute orders.187 Differences in posture among nuclear weapon possessors mean that their NC3 varies considerably: for example, while China has dual-use land- and sea-based nuclear weapons,188 the United Kingdom has only a sea-based nuclear deterrent, and its NC3 systems do not support any conventional operations.189

To understand how AI could affect the risk of a global nuclear war, it is important to distinguish between distinct escalation routes. Following a typology by Johnson,190,191 we can distinguish intentional from unintentional escalation. Under (1) intentional escalation, one state has (or gains) a set of (AI + nuclear) strategic capabilities, as a result of which it knowingly takes an escalatory action for strategic gain (e.g. it perceives a first-strike advantage and launches a decapitation strike). This stands in contrast to various forms of (2) unintentional escalation: situations where "an actor crosses a threshold that it considers benign, but the other side considers significant".190 

Specifically, unintentional escalation can be further subdivided into (2a) inadvertent escalation (mistaken usage on the basis of incorrect information); (2b) catalytic escalation (nuclear war between actors A and B, triggered by the malicious actions of a third party C against either party's NC3 systems); or (2c) accidental escalation (nuclear escalation without a deliberate and properly informed launch decision, triggered by a combination of human and human-machine interaction failures, as well as background organizational factors).191
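As a purely illustrative aside, this typology can be sketched in code as a shared vocabulary for tagging hypothetical scenarios. The sketch below is an assumption for illustration only; the example scenario and its tags are hypothetical and do not reflect assessments from the cited literature.

```python
# Illustrative sketch only: encoding the escalation typology as a simple
# vocabulary for tagging hypothetical AI-nuclear risk scenarios.
from enum import Enum, auto

class Escalation(Enum):
    INTENTIONAL = auto()  # a knowing escalatory action taken for strategic gain
    INADVERTENT = auto()  # mistaken action on the basis of incorrect information
    CATALYTIC = auto()    # a third party C triggers escalation between A and B
    ACCIDENTAL = auto()   # human-machine interaction and organizational failures

# Hypothetical example: a scenario tagged with the routes it could plausibly trigger.
scenario = {
    "description": "spoofed early-warning data during a crisis",
    "possible_routes": {Escalation.INADVERTENT, Escalation.CATALYTIC},
}
print(scenario["description"], "->", sorted(r.name for r in scenario["possible_routes"]))
```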

Additionally, AI can be used in, around, and against NC3 in a number of ways, each of which can contribute to different combinations of escalation risk (and thereby GCR). We therefore review several uses of military AI below, and how each could increase the risk of one or more of these escalation routes being triggered.

1. Autonomised decision-making 

The first risk involves integrating AI directly into NC3 nuclear decision-making.192–195 This could involve giving AI systems the authority to authorize launches, and/or allowing them to compose lists of targets or attack patterns following a launch order, in ways that might not be subject to human supervision.

It should be noted immediately that few states currently appear interested in the outright automation of nuclear command and control in any serious way. While some commentators within the US defense establishment have called for the US to create its own AI-supported nuclear 'Dead Hand',196 senior defense officials have explicitly stated that they draw the line at such automation, insisting there will always be a human in the loop of nuclear decision-making.197 Likewise, Chinese programs on military AI do not currently appear focused on automated nuclear launch.198,199 

Indeed, in addition to this lack of interest, there may be outstanding technical limits and constraints posed by the current state of AI progress. For instance, it has been argued that current machine learning systems do not lend themselves well to integration in nuclear targeting, given the difficulty of collating sufficient (and sufficiently reliable) training datasets of imagery of nuclear targets (e.g. mobile launch vehicles); some have argued this will pose 'enduring obstacles' to implementation.200 If that is the case, some highly anticipated applications may remain beyond current AI capabilities. 
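The data-scarcity point can be illustrated with a toy example. The sketch below uses synthetic data and assumed parameters; it is not a model of any real targeting system, but it shows how a classifier trained on only a handful of noisily labelled examples yields weak and unstable held-out performance.

```python
# Toy illustration of the data-scarcity argument: a classifier trained on a
# small, noisily labelled synthetic dataset shows weak and unstable held-out
# accuracy. Synthetic data only; not a model of any real system.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def tiny_noisy_dataset(n_samples=40, n_features=64, label_noise=0.25):
    """Simulate a small, imagery-like dataset with unreliable labels."""
    X = rng.normal(size=(n_samples, n_features))
    true_w = rng.normal(size=n_features)
    y = (X @ true_w > 0).astype(int)
    flip = rng.random(n_samples) < label_noise  # unreliable labelling
    y[flip] = 1 - y[flip]
    return X, y

X, y = tiny_noisy_dataset()
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(f"held-out accuracy across folds: mean={scores.mean():.2f}, std={scores.std():.2f}")
```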

Nonetheless, even if no state is known to have pursued this directly today, and even if some technical barriers persist for some time, this avenue cannot be ruled out and should be watched cautiously. If configurations of AI decision-making with nuclear forces were developed, they could introduce considerable new risks of false alarms or of accidental escalation, especially given the history of cascading 'normal accidents' that have affected nuclear forces.36,154,155

2. Human decision-making under pressure 

More broadly, the inclusion of AI technology in NC3 may increase the pace of conflicts, reducing the time frame in which decisions can be made and increasing the likelihood of inadvertent or accidental escalation.201,202 Because perceptions of an adversary's capabilities matter as much in deterrence as the capabilities themselves, a military's misjudgment of what its own (or its adversaries') AI systems can in fact accomplish may also spur miscalculation and inadvertent escalation.201 AI systems therefore might not even need to be deployed to create a destabilising nuclear scenario, so long as they are perceived as creating additional pressures that can lead to miscalculation or to rushed and ill-informed actions.203

3. AI in systems peripheral to NC3 

Furthermore, AI does not need to be directly integrated into NC3 itself in order to affect the risks of nuclear war. As noted by Avin & Amadae, while there has been extensive attention to the 'first-order' effects of introducing technologies into nuclear command-and-control and weapon-delivery systems, there are also higher-order effects which "stem from the introduction of such technologies into more peripheral systems, with an indirect (but no less real) effect on nuclear risk."204 For instance, even if militaries believe that AI is not usable for direct nuclear targeting or command, AI systems can still produce cascading effects through their integration into systems that peripherally affect the safe and secure functioning of NC3; these might include electrical grids, computer systems providing access to relevant intelligence, or weapon platforms associated with the transportation, delivery, or safekeeping of nuclear warheads. 

4. AI as threat to the information environment and accurate intelligence

A fourth avenue of risk concerns AI's effects on the broader information environment surrounding, framing, and informing nuclear decision-making. In recent years, researchers have begun to explore the ways in which novel AI tools can enable disinformation,205 and how this may affect societies' 'epistemic security' in ways that make it harder to agree on truth and to take the coordinated actions that could be crucial for mitigating GCRs (whether coordinated de-escalation around nuclear risks, or other collective efforts against other GCRs). Epistemic security has been defined as the state which "ensures that a community's processes of knowledge production, acquisition, distribution, and coordination are robust to adversarial (or accidental) influence [such that] [e]pistemically secure environments foster efficient and well-informed group decision-making which helps decision-makers to better achieve their individual and collective goals".206 For instance, Favaro has mapped how a range of technologies, including AI, might serve as 'Weapons of Mass Distortion'.207 She distinguishes four clusters of technological effects on the information environment: those that 'distort', 'compress', 'thwart', or 'illuminate'. A more contested or unclear information environment would also open up new attack surfaces that third-party actors could exploit to trigger catalytic escalation between their adversaries.

5. AI as cyber threat to NC3 integrity 

Whereas some AI uses within NC3 might be dangerous because of the vulnerabilities they create (as failure points, compressors of human decision time, or attack surfaces), another channel involves the use of AI as a tool for attacking NC3 systems, regardless of whether those systems themselves incorporate AI. This could involve the use of AI-enabled cyber capabilities to attack and disrupt NC3.208 Experts are increasingly concerned that NC3 is vulnerable to cyberattacks, and that the resulting escalation or unauthorized launch could trigger a GCR scenario.209 AI technology has been shown to be capable of facilitating increasingly powerful and sophisticated cyberattacks, with greater precision, scope, or scale.210,211 Although there is no evidence to date of states systematically deploying AI-enabled offensive cyber weapons, the convergence of AI and offensive cyber tools could exacerbate the vulnerabilities of NC3.212 This could encourage deliberate escalation of offensive cyber strategies.208

Furthermore, cyberattacks can be hard to detect and attribute (quickly);213 they may therefore be misconstrued, leading to unintentional or catalytic escalation. For example, an offensive operation targeting dual-use conventional assets could be interpreted as an attack on NC3.214 It is also broadly agreed that AI acts as a force multiplier for offensive cyber capabilities.215 However, it is less clear whether AI will strengthen cyber defense to the same degree that it strengthens offense; the precise effect on the offense-defense balance may be critical to the overall picture.216 Stronger offensive capabilities could further increase the risk of pre-emptive cyberattacks and, subsequently, intentional escalation, which would be especially dangerous in the context of nuclear weapon systems.

6. Broader impacts of AI on nuclear strategic stability 

The sixth risk vector is broader: the deployment of military AI in many other areas could indirectly disrupt nuclear strategic stability, increasing the risk of intentional or inadvertent escalation. 

AI technology could be used to improve a state's capabilities for locating and monitoring an adversary's nuclear second-strike capabilities. For example, better and cheaper autonomous naval drones could track nuclear-armed submarines. This, in turn, could increase the state's perceived likelihood of successfully destroying those capabilities before its adversary is able to use them, and may therefore make a pre-emptive nuclear strike a more attractive strategy than before.192 Other risks could come from the integration of AI into novel autonomous platforms that are able to operate and loiter in sensitive areas for longer.217 Even if such platforms were deployed only to monitor rival nuclear forces, their pre-positioned presence close to those nuclear assets might prove destabilising, by convincing a defender that they are being deployed to 'scout out' or engage nuclear weapons in advance of a first strike. In these ways, autonomous systems could increase the risks of intentional escalation (when they give a genuine first-strike advantage to one state, or are perceived by another to do so), inadvertent escalation (when errors in their information streams lead to a misinformed decision to launch), or accidental escalation, starting the chain of escalation towards a nuclear GCR. Zwetsloot and Dafoe similarly argue that this increased perception of insecurity in nuclear systems could lead states to feel pressured, during times of unrest, into preemptive escalation.216 

Finally, in an effort to gain a real or perceived nuclear strategic advantage over their adversaries while engaged in an AI race, states may place less value on AI safety concerns and more on the pace of technological development.218 This could result in what Danzig has called a 'technology roulette'219 dynamic, with an increased risk of prematurely adopting unsafe AI technology in ways that could have profound impacts on the safety and stability of states' nuclear systems.

Contributing factors to AI-nuclear risks

It is important to keep in mind that the risks generated jointly by AI and nuclear weapons are a function of several factors. First, nuclear force posture differs by country, with some arsenals postured more aggressively so as to be usable more immediately. Second, depending on NC3 system design and the degree of force modernization, AI will interact differently with NC3's component parts, potentially dangerously so with brittle legacy systems. Third, the relative robustness or vulnerability of NC3 systems to cyberattacks will shape their resilience to malicious attacks; along the same lines, states' perceptions of their own vulnerability, as well as the aggressiveness of attackers, will affect stability. This is especially true given that, even with extensive red-teaming, it is impossible to identify every flaw in a complex system. Fourth, governments' willingness to prematurely deploy AI, whether within NC3 and surrounding systems or to augment offensive options for targeting NC3, will be a determinant of catastrophic risk. Fifth, open dialogue, arms control, and risk reduction measures can reduce the potential for nuclear escalation, and their absence can be detrimental. Lastly, luck and normal accidents will inevitably play a role, a fact which highlights the unpredictability of outcomes amid increased complexity.
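Purely to illustrate how such qualitative factors might be combined when comparing contexts, the toy sketch below assigns assumed scales and weights; it is an assumption for illustration only, not a calibrated model from this chapter or the cited literature, and it deliberately omits the irreducible role of luck.

```python
# Toy sketch only: combining the qualitative contributing factors above into a
# rough comparative score. Scales and weights are assumptions for illustration,
# not a calibrated risk model; luck and 'normal accidents' are omitted.
from dataclasses import dataclass

@dataclass
class AINuclearContext:
    aggressive_posture: float         # 0 (relaxed) .. 1 (launch-ready)
    legacy_nc3_brittleness: float     # 0 .. 1
    cyber_vulnerability: float        # 0 .. 1
    premature_ai_adoption: float      # 0 .. 1
    dialogue_and_arms_control: float  # 0 (absent) .. 1 (robust)

    def rough_score(self) -> float:
        """Average the risk drivers, then discount for dialogue and arms control."""
        drivers = (self.aggressive_posture + self.legacy_nc3_brittleness
                   + self.cyber_vulnerability + self.premature_ai_adoption) / 4
        return drivers * (1 - 0.5 * self.dialogue_and_arms_control)

# Hypothetical comparison of two illustrative contexts (all values are made up).
tense = AINuclearContext(0.8, 0.6, 0.6, 0.5, 0.1)
stable = AINuclearContext(0.3, 0.4, 0.3, 0.2, 0.8)
print(f"tense: {tense.rough_score():.2f}  stable: {stable.rough_score():.2f}")
```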

Conclusion: questions for the GCR community

The above discussion has covered a wide range of themes and risk vectors to explore whether, or in what ways, military AI technology is a GCR. Given this, what are the lessons and insights? What policies will be needed to mitigate the potential global catastrophic risks from military AI technology, especially at its intersection with nuclear risk? Finally, going forward as a field, what new lines of research are needed? 

There are lessons specific to the different communities, future questions they should take on, and the outline of an integrated research agenda into military technology, actors, and GCRs that warrants urgent further exploration. This chapter has highlighted the need for greater conversation between the different communities engaged on GCRs; on the ethics, safety, and implications of AI; and on nuclear weapons and their risks. We need cross-pollination between these fields, as well as contributions from people with robust expertise in both AI and nuclear policy. 

In the first place, scholars in defense should reckon with the safety and reliability risks of military AI in particular (especially insofar as it poses a GCR), including topics such as robustness, explainability, and susceptibility to adversarial input ('spoofing'). To mitigate these risks, there is value in working with defense industry stakeholders to draw red lines and clarify procurement processes.12

For nuclear thinkers, there should be greater understanding of the complexities and risks of introducing AI technologies into nuclear weapons systems. Practically, it will be critical to study how the changing risks of nuclear war, as mediated by AI and machine learning, will affect not just GCR, but also the established taboo on nuclear weapons use. How will these changing risks affect governments' calculus about maintaining nuclear arsenals? Are there grounds for optimism about whether or how the 'nuclear taboo' might be elaborated or even extended into a 'nuclear-AI taboo'? 

Finally, for experts in both the military and AI fields, more attention needs to be dedicated to the complex and quickly evolving environment that is military AI, and especially to the risks arising at the intersection between nuclear weapons and AI. As this chapter has made clear, the concerns here are not as clear-cut as one might believe at first glance. Instead, there are a number of possible risk vectors arising from the use of AI throughout the wider landscape, all of which could lead to different forms of nuclear escalation. 

In addition, while our analysis in this chapter suggests that, at present, there is only a small risk of LAWS becoming GCRs, this may not always be the case. It would be useful not only to continue monitoring the development of LAWS to assess whether the likelihood of their leading to global catastrophic events changes, but also to examine how they may interact with other potential GCRs. For example, how feasible might it be to use LAWS to deliver WMDs, and what kind of risk impact could the combination of the two have? This is another potentially worthwhile avenue for future research. 

It is clear that the question 'is military AI a GCR?' is not only complicated to address, but also a moving target, owing to the rapidly evolving technology and risk landscape. To be clear: our preliminary analysis in this chapter suggests that not all military AI applications qualify as GCRs; it also highlights, however, that there are distinct pathways of concern. This is especially the case where emerging military AI technologies intersect with the existing arsenals and command infrastructures of established GCR-level technologies, most notably nuclear weapons. All in all, we invite scholars and practitioners from across the defense studies, GCR, and AI fields (and beyond) to take up the aforementioned challenges, ensuring that this next chapter in global technological risk is not the final one.

 

Acknowledgements

The authors thank the editors, and in particular SJ Beard and Clarissa Rios Rojas, for their feedback and guidance, as well as Esme Booth for invaluable support. For additional and particularly detailed comments on earlier drafts of this chapter, we thank Haydn Belfield and Eva Siegmann. Matthijs Maas also thanks Seth Baum and Uliana Certan (Global Catastrophic Risk Institute), for conversations and parallel work that clarified some of the arguments and shape of this debate.

 

Bibliography

 

1.     Beard, S. J. & Bronson, R. The Story So Far: How Humanity Avoided Existential Catastrophe. in Cambridge Conference on Catastrophic Risk 2020 (2022).

2.     Picker, C. B. A View from 40,000 Feet: International Law and the Invisible Hand of Technology. Cardozo Law Rev. 23, 151–219 (2001).

3.     Allenby, B. Are new technologies undermining the laws of war? Bull. At. Sci. 70, 21–31 (2014).

4.     Deudney, D. Turbo Change: Accelerating Technological Disruption, Planetary Geopolitics, and Architectonic Metaphors. Int. Stud. Rev. 20, 223–231 (2018).

5.     Boulanin, V. & Verbruggen, M. Mapping the development of autonomy in weapon systems. 147 https://www.sipri.org/sites/default/files/2017-11/siprireport_mapping_the_development_of_autonomy_in_weapon_systems_1117_1.pdf (2017).

6.     Crootof, R. The Killer Robots Are Here: Legal and Policy Implications. CARDOZO LAW Rev. 36, 80 (2015).

7.     Haner, J. & Garcia, D. The Artificial Intelligence Arms Race: Trends and World Leaders in Autonomous Weapons Development. Glob. Policy 10, 331–337 (2019).

8.     De Spiegeleire, S., Maas, M. M. & Sweijs, T. Artificial Intelligence and the Future of Defense: Strategic Implications for Small- and Medium-Sized Force Providers. (The Hague Centre for Strategic Studies, 2017).

9.     Amodei, D. et al. Concrete Problems in AI Safety. (2016).

10.   Lehman, J. et al. The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities. Artif. Life 26, (2019).

11.   Michel, Arthur Holland. Known Unknowns: Data Issues and Military Autonomous Systems. 41 https://www.unidir.org/known-unknowns (2021).

12.   Belfield, H., Jayanti, A. & Avin, S. Written Evidence to the UK Parliament Defence Committee’s Inquiry on Defence industrial policy: procurement and prosperity. https://committees.parliament.uk/writtenevidence/4785/default/ (2020).

13.   Horowitz, M. C. & Kahn, L. How Joe Biden can use confidence-building measures for military uses of AI. Bull. At. Sci. 77, 33–35 (2021).

14.   Horowitz, M. C. & Scharre, P. AI and International Stability: Risks and Confidence-Building Measures. https://www.cnas.org/publications/reports/ai-and-international-stability-risks-and-confidence-building-measures (2021).

15.   Yudkowsky, E. Artificial Intelligence as a Positive and Negative Factor in Global Risk. in Global Catastrophic Risks 308–345 (Oxford University Press, 2008).

16.   Bostrom, N. Superintelligence: Paths, Dangers, Strategies. (Oxford University Press, 2014).

17.   Ngo, R. AGI Safety From First Principles. (2020).

18.   Russell, S. Human Compatible: Artificial Intelligence and the Problem of Control. (Viking, 2019).

19.   Burden, J., Clarke, S. & Whittlestone, J. From Turing’s Speculations to an Academic Discipline: A History of AI Existential Safety. in Cambridge Conference on Catastrophic Risk 2020 (2022).

20.   Goertzel, B. Artificial General Intelligence: Concept, State of the Art, and Future Prospects. J. Artif. Gen. Intell. 5, 1–48 (2014).

21.   Grace, K., Salvatier, J., Dafoe, A., Zhang, B. & Evans, O. When Will AI Exceed Human Performance? Evidence from AI Experts. J. Artif. Intell. Res. 62, 729–754 (2018).

22.   Gruetzemacher, R. & Whittlestone, J. The Transformative Potential of Artificial Intelligence. Futures 135, 102884 (2022).

23.   Future of Life Institute. AI Safety Myths. Future of Life Institute https://futureoflife.org/background/aimyths/ (2016).

24.   Cremer, C. Z. Deep limitations? Examining expert disagreement over deep learning. Prog. Artif. Intell. (2021) doi:10.1007/s13748-021-00239-1.

25.   Karnofsky, H. AI Timelines: Where the Arguments, and the ‘Experts,’ Stand. Cold Takes https://www.cold-takes.com/where-ai-forecasting-stands-today/ (2021).

26.   Hendrycks, D., Carlini, N., Schulman, J. & Steinhardt, J. Unsolved Problems in ML Safety. ArXiv210913916 Cs (2021).

27.   Amodei, D. & Clark, J. Faulty Reward Functions in the Wild. OpenAI https://openai.com/blog/faulty-reward-functions/ (2016).

28.   Krakovna, V. et al. Specification gaming: the flip side of AI ingenuity. Deepmind https://deepmind.com/blog/article/Specification-gaming-the-flip-side-of-AI-ingenuity (2020).

29.   Turner, A. M., Smith, L., Shah, R., Critch, A. & Tadepalli, P. Optimal Policies Tend to Seek Power. ArXiv191201683 Cs (2021).

30.   Cotra, A. Why AI alignment could be hard with modern deep learning. Cold Takes https://www.cold-takes.com/why-ai-alignment-could-be-hard-with-modern-deep-learning/ (2021).

31.   Turchin, A. Could Slaughterbots Wipe Out Humanity? Assessment of the Global Catastrophic Risk Posed by Autonomous Weapons. (2018).

32.   Turchin, A. & Denkenberger, D. Military AI as a Convergent Goal of Self-Improving AI. in Artificial Intelligence Safety and Security (ed. Yampolskiy, R.) (Louisville: CRC Press, 2018).

33.   Vold, K. & Harris, D. R. How Does Artificial Intelligence Pose an Existential Risk? in Oxford Handbook of Digital Ethics (ed. Veliz, C.) 34 (Oxford University Press, 2021).

34.   Levin, J.-C. & Maas, M. M. Roadmap to a Roadmap: How Could We Tell When AGI is a ‘Manhattan Project’ Away? in 7 (2020).

35.   Grace, K. Leó Szilárd and the Danger of Nuclear Weapons: A Case Study in Risk Mitigation. https://intelligence.org/files/SzilardNuclearWeapons.pdf (2015).

36.   Maas, M. M. How viable is international arms control for military artificial intelligence? Three lessons from nuclear weapons. Contemp. Secur. Policy 40, 285–311 (2019).

37.   Zaidi, W. & Dafoe, A. International Control of Powerful Technology: Lessons from the Baruch Plan. https://www.fhi.ox.ac.uk/wp-content/uploads/2021/03/International-Control-of-Powerful-Technology-Lessons-from-the-Baruch-Plan-Zaidi-Dafoe-2021.pdf (2021).

38.   Leung, J. Who will govern artificial intelligence? Learning from the history of strategic politics in emerging technologies. (University of Oxford, 2019).

39.   Ding, J. & Dafoe, A. The Logic of Strategic Assets: From Oil to AI. Secur. Stud. 1–31 (2021) doi:10.1080/09636412.2021.1915583.

40.   Ding, J. & Dafoe, A. Engines of Power: Electricity, AI, and General-Purpose Military Transformations. ArXiv210604338 Econ Q-Fin (2021).

41.   Baum, S. D. & Barrett, A. M. Global Catastrophes: The Most Extreme Risks. in Risks in Extreme Environments: Preparing, Avoiding, Mitigating, and Managing (ed. Bier, V.) 174–184 (Routledge, 2018).

42.   Bostrom, N. & Cirkovic, M. M. Introduction. in Global catastrophic risks (Oxford University Press, 2011).

43.   Weinberger, S. The Imagineers of War: The Untold Story of DARPA, the Pentagon Agency That Changed the World. (Random House LLC, 2017).

44.   Roland, A. & Shiman, P. Strategic computing: DARPA and the quest for machine intelligence, 1983-1993. (MIT Press, 2002).

45.   Gordon, T. J. & Helmer, O. Report on a Long-Range Forecasting Study. http://stat.haifa.ac.il/~gweiss/courses/OR-logistics/Rand.pdf (1964).

46.   Defense Science Board Task Force. Report of the Defense Science Board Task Force on Command and Control Systems Management. 49 (1978).

47.   Biddle, S. Victory Misunderstood: What the Gulf War Tells Us about the Future of Conflict. Int. Secur. 21, 139–179 (1996).

48.   Cross, S. E. & Walker, E. DART: Applying Knowledge Based Planning and Scheduling to CRISIS Action Planning. in Intelligent Scheduling (eds. Zweben, M. & Fox, M.) 711–29 (Morgan Kaufmann, 1994).

49.   Hedberg, S. R. DART: Revolutionizing Logistics Planning. IEEE Intell. Syst. 17, 81–83 (2002).

50.   Scharre, P. Autonomous Weapons and Operational Risk. https://s3.amazonaws.com/files.cnas.org/documents/CNAS_Autonomous-weapons-operational-risk.pdf (2016).

51.   Scharre, P. Autonomous Weapons and Stability. (King’s College, 2020).

52.   Research and Markets Ltd. Artificial Intelligence in Military Market by Offering (Software, Hardware, Services), Technology (Machine Learning, Computer vision), Application, Installation Type, Platform, Region - Global Forecast to 2025. https://www.researchandmarkets.com/reports/5306656/artificial-intelligence-in-military-market-by (2021).

53.   Haner, J. K. Dark Horses in the Lethal AI Arms Race. https://justinkhaner.com/aiarmsrace (2019).

54.   Trajtenberg, M. AI as the next GPT: a Political-Economy Perspective. http://www.nber.org/papers/w24245 (2018) doi:10.3386/w24245.

55.   Maas, M. M. Artificial Intelligence Governance Under Change: Foundations, Facets, Frameworks. (University of Copenhagen, 2020).

56.   Drezner, D. W. Technological change and international relations. Int. Relat. 33, 286–303 (2019).

57.   Morgan, F. E. et al. Military Applications of Artificial Intelligence: Ethical Concerns in an Uncertain World. 224 https://www.rand.org/content/dam/rand/pubs/research_reports/RR3100/RR3139-1/RAND_RR3139-1.pdf (2020).

58.   Allen, G. & Chan, T. Artificial Intelligence and National Security. http://www.belfercenter.org/sites/default/files/files/publication/AI%20NatSec%20-%20final.pdf (2017).

59.   Kania, E. AlphaGo and Beyond: The Chinese Military Looks to Future “Intelligentized” Warfare. Lawfare https://www.lawfareblog.com/alphago-and-beyond-chinese-military-looks-future-intelligentized-warfare (2017).

60.   Nelson, A. J. The Impact of Emerging Technologies on Arms Control Regimes. (2018).

61.   Horowitz, M. C. Do Emerging Military Technologies Matter for International Politics? Annu. Rev. Polit. Sci. 23, 385–400 (2020).

62.   Defense Science Board. Defense Science Board Summer Study on Autonomy. https://www.hsdl.org/?abstract&did=794641 (2016).

63.   Soare, S. R. Digital Divide? Transatlantic defence cooperation on Artificial Intelligence. https://www.iss.europa.eu/content/digital-divide-transatlantic-defence-cooperation-ai (2020).

64.   Adamsky, D. The Culture of Military Innovation: The Impact of Cultural Factors on the Revolution in Military Affairs in Russia, the US, and Israel. (Stanford University Press, 2010).

65.   Verbruggen, M. The Role of Civilian Innovation in the Development of Lethal Autonomous Weapon Systems. Glob. Policy 10, 338–342 (2019).

66.   Gilli, A. Preparing for “NATO-mation”: the Atlantic Alliance toward the age of artificial intelligence. http://www.ndc.nato.int/news/news.php?icode=1270 (2019).

67.   Verbruggen, M. Drone swarms: coming (sometime) to a war near you. Just not today. Bulletin of the Atomic Scientists https://thebulletin.org/2021/02/drone-swarms-coming-sometime-to-a-war-near-you-just-not-today/ (2021).

68.   Gilli, A. & Gilli, M. Why China Has Not Caught Up Yet: Military-Technological Superiority and the Limits of Imitation, Reverse Engineering, and Cyber Espionage. Int. Secur. 43, 141–189 (2019).

69.   Amodei, D. & Hernandez, D. AI and Compute. OpenAI Blog https://blog.openai.com/ai-and-compute/ (2018).

70.   Ayoub, K. & Payne, K. Strategy in the Age of Artificial Intelligence. J. Strateg. Stud. 39, 793–819 (2016).

71.   Horowitz, M. C. Artificial Intelligence, International Competition, and the Balance of Power. Texas National Security Review (2018).

72.   Dafoe, A. On Technological Determinism: A Typology, Scope Conditions, and a Mechanism. Sci. Technol. Hum. Values 40, 1047–1076 (2015).

73.   Bleek, P. C. When Did (and Didn’t) States Proliferate? Chronicling the Spread of Nuclear Weapons. 56 https://www.belfercenter.org/sites/default/files/files/publication/When%20Did%20%28and%20Didn%27t%29%20States%20Proliferate%3F_1.pdf (2017).

74.   Meyer, S., Bidgood, S. & Potter, W. C. Death Dust: The Little-Known Story of U.S. and Soviet Pursuit of Radiological Weapons. Int. Secur. 45, 51–94 (2020).

75.   Rosert, E. & Sauer, F. How (not) to stop the killer robots: A comparative analysis of humanitarian disarmament campaign strategies. Contemp. Secur. Policy 0, 1–26 (2020).

76.   Belfield, H. Activism by the AI Community: Analysing Recent Achievements and Future Prospects. in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society 15–21 (ACM, 2020). doi:10.1145/3375627.3375814.

77.   McDonald, J. What if Military AI is a Washout? https://jackmcdonald.org/book/2021/06/what-if-military-ai-sucks/ (2021).

78.   Garfinkel, B. & Dafoe, A. How does the offense-defense balance scale? J. Strateg. Stud. 42, 736–763 (2019).

79.   Lieber, K. A. Grasping the Technological Peace: The Offense-Defense Balance and International Security. Int. Secur. 25, 71 (2000).

80.   Payne, K. I, Warbot: The Dawn of Artificially Intelligent Conflict. (C Hurst & Co Publishers Ltd, 2021).

81.   Blain, L. South Korea’s autonomous robot gun turrets: deadly from kilometers away. http://newatlas.com/korea-dodamm-super-aegis-autonomos-robot-gun-turret/17198/ (2010).

82.   Velez-Green, A. The Foreign Policy Essay: The South Korean Sentry—A “Killer Robot” to Prevent War. Lawfare https://www.lawfareblog.com/foreign-policy-essay-south-korean-sentry%E2%80%94-killer-robot-prevent-war (2015).

83.   Israel Aerospace Industries. Harpy Loitering Weapon. https://www.iai.co.il/p/harpy.

84.   Future of Life Institute. 5 Real-Life Technologies that Prove Autonomous Weapons are Already Here. Future of Life Institute https://futureoflife.org/2021/11/22/5-real-life-technologies-that-prove-autonomous-weapons-are-already-here/ (2021).

85.   Tucker, P. SecDef: China Is Exporting Killer Robots to the Mideast. Defense One https://www.defenseone.com/technology/2019/11/secdef-china-exporting-killer-robots-mideast/161100/ (2019).

86.   Trevithick, J. Turkey Now Has Swarming Suicide Drones It Could Export. The Drive (2020).

87.   UN Panel of Experts on Libya. Letter dated 8 March 2021 from the Panel of Experts on Libya established pursuant to resolution 1973 (2011) addressed to the President of the Security Council. https://undocs.org/pdf?symbol=en/S/2021/229 (2021).

88.   Cramer, M. A.I. Drone May Have Acted on Its Own in Attacking Fighters, U.N. Says. The New York Times (2021).

89.   Kesteloo, H. Punisher Drones Are Positively Game-changing For Ukrainian Military In Fight Against Russia. DroneXL https://dronexl.co/2022/03/03/punisher-drones-ukrainian-military/ (2022).

90.   Trabucco, L. & Heller, K. J. Beyond the Ban: Comparing the Ability of ‘Killer Robots’ and Human Soldiers to Comply with IHL. Fletcher Forum World Aff. 46, (2022).

91.   Scharre, P. Robotics on the Battlefield, Part II: The Coming Swarm. 68 http://www.cnas.org/sites/default/files/publications-pdf/CNAS_TheComingSwarm_Scharre.pdf (2014).

92.   Hambling, D. Israel’s Combat-Proven Drone Swarm May Be Start Of A New Kind Of Warfare. Forbes https://www.forbes.com/sites/davidhambling/2021/07/21/israels-combat-proven-drone-swarm-is-more-than-just-a-drone-swarm/ (2021).

93.   Michel, A. H. The Killer Algorithms Nobody’s Talking About. Foreign Policy https://foreignpolicy.com/2020/01/20/ai-autonomous-weapons-artificial-intelligence-the-killer-algorithms-nobodys-talking-about/ (2020).

94.   Bergman, R. & Fassihi, F. The Scientist and the A.I.-Assisted, Remote-Control Killing Machine. The New York Times (2021).

95.   Knight, W. A Dogfight Renews Concerns About AI’s Lethal Potential. Wired (2020).

96.   US Department of Defense. Department of Defense Announces Successful Micro-Drone Demonstration. U.S. Department of Defense https://www.defense.gov/News/Releases/Release/Article/1044811/department-of-defense-announces-successful-micro-drone-demonstration/ (2017).

97.   Chavannes, E., Klonowska, K. & Sweijs, T. Governing autonomous weapon systems: Expanding the solution space, from scoping to applying. HCSS Secur. 39 (2020).

98.   Carpenter, C. ‘Lost’ Causes, Agenda Vetting in Global Issue Networks and the Shaping of Human Security. (Cornell University Press, 2014). doi:10.7591/9780801470363.

99.   Liu, H.-Y. Categorization and legality of autonomous and remote weapons systems. Int. Rev. Red Cross 94, 627–652 (2012).

100. Anderson, K., Reisner, D. & Waxman, M. Adapting the Law of Armed Conflict to Autonomous Weapon Systems. Int. Law Stud. 90, 27 (2014).

101. Human Rights Watch. Shaking the foundations: the human rights implications of killer robots. (2014).

102. Rosert, E. & Sauer, F. Prohibiting Autonomous Weapons: Put Human Dignity First. Glob. Policy 10, 370–375 (2019).

103. Kallenborn, Z. Meet the future weapon of mass destruction, the drone swarm. Bulletin of the Atomic Scientists https://thebulletin.org/2021/04/meet-the-future-weapon-of-mass-destruction-the-drone-swarm/ (2021).

104. Bahçecik, Ş. O. Civil Society Responds to the AWS: Growing Activist Networks and Shifting Frames. Glob. Policy 0, (2019).

105. Future of Life Institute. Slaughterbots. (2017).

106. Future of Life Institute. Slaughterbots - if human: kill(). (2021).

107. Turchin, A. & Denkenberger, D. Classification of Global Catastrophic Risks Connected with Artificial Intelligence. AI Soc. 35, 147–163 (2020).

108. Kallenborn, Z. & Bleek, P. C. Swarming destruction: drone swarms and chemical, biological, radiological, and nuclear weapons. Nonproliferation Rev. 25, 523–543 (2018).

109. Kunz, M. & Ó hÉigeartaigh, S. Artificial Intelligence and Robotization. in Oxford Handbook on the International Law of Global Security (eds. Geiss, R. & Melzer, N.) (Oxford University Press, 2021).

110. Rogers, J. The dark side of our drone future. Bulletin of the Atomic Scientists https://thebulletin.org/2019/10/the-dark-side-of-our-drone-future/ (2019).

111. Mani, L., Tzachor, A. & Cole, P. Global catastrophic risk from lower magnitude volcanic eruptions. Nat. Commun. 12, 4756 (2021).

112. Solodov, A., Williams, A., Hanaei, S. A. & Goddard, B. Analyzing the threat of unmanned aerial vehicles (UAV) to nuclear facilities. Secur. J. Lond. 31, 305–324 (2018).

113. Tang, A. & Kemp, L. A Fate Worse Than Warming? Stratospheric Aerosol Injection and Global Catastrophic Risk. Front. Clim. 3, 144 (2021).

114. Baum, S. D., Maher, T. M. & Haqq-Misra, J. Double catastrophe: intermittent stratospheric geoengineering induced by societal collapse. Environ. Syst. Decis. 33, 168–180 (2013).

115. Future of Life Institute. An Open Letter to the United Nations Convention on Certain Conventional Weapons. Future of Life Institute https://futureoflife.org/autonomous-weapons-open-letter-2017/ (2017).

116. Russell, S., Aguirre, A., Conn, A. & Tegmark, M. Why You Should Fear “Slaughterbots”—A Response. IEEE Spectr. (2018).

117. Aguirre, A. Why those who care about catastrophic and existential risk should care about autonomous weapons. EA Forum https://forum.effectivealtruism.org/posts/oR9tLNRSAep293rr5/why-those-who-care-about-catastrophic-and-existential-risk-2 (2020).

118. What Nuclear Weapons Delivery Systems Really Cost. Brookings https://www.brookings.edu/what-nuclear-weapons-delivery-systems-really-cost/ (2016).

119. Blumberg, Y. Here’s how much a nuclear weapon costs. CNBC https://www.cnbc.com/2017/08/08/heres-how-much-a-nuclear-weapon-costs.html (2017).

120. UN Secretary-General. Chemical and bacteriological (biological) weapons and the effects of their possible use. (1969).

121. Koblentz, G. D. Living Weapons: Biological Warfare and International Security. (Cornell University Press, 2011).

122. Baum, S. D., Barrett, A. M., Certan, U. & Maas, M. M. Autonomous Weapons and the Long-Term Future. (2022).

123. Sabbagh, D. Killer drones: how many are there and who do they target? The Guardian (2019).

124. Vailshery, L. S. Global consumer drone shipments 2020-2030. Statista https://www.statista.com/statistics/1234658/worldwide-consumer-drone-unit-shipments/ (2021).

125. Hambling, D. The U.S. Navy Plans To Foil Massive ‘Super Swarm’ Drone Attacks By Using The Swarm’s Intelligence Against Itself. Forbes (2020).

126. Kemp, L. Agents of Doom: Who is creating the apocalypse and why. BBC Future (2021).

127. Perkovich, G. Will You Listen? A Dialogue on Creating the Conditions for Nuclear Disarmament. Carnegie Endowment for International Peace https://carnegieendowment.org/2018/11/02/will-you-listen-dialogue-on-creating-conditions-for-nuclear-disarmament-pub-77614 (2018).

128. Lewis, J. Point and Nuke: Remembering the Era of Portable Atomic Bombs. Foreign Policy https://foreignpolicy.com/2018/09/12/point-and-nuke-davy-crockett-military-history-nuclear-weapons/ (2018).

129. Galison, P. L. & Bernstein, B. In Any Light: Scientists and the Decision to Build the Superbomb, 1952-1954. Hist. Stud. Phys. Biol. Sci. 19, 267–347 (1989).

130. Wellerstein, A. The leak that brought the H-bomb debate out of the cold. Restricted Data: The Nuclear Secrecy Blog http://blog.nuclearsecrecy.com/2021/06/14/the-leak-that-brought-the-h-bomb-debate-out-of-the-cold/ (2021).

131. Horgan, J. Bethe, Teller, Trinity and the End of Earth. Scientific American Blog Network https://blogs.scientificamerican.com/cross-check/bethe-teller-trinity-and-the-end-of-earth/ (2015).

132. Ellsberg, D. The Doomsday Machine: Confessions of a Nuclear War Planner. (Bloomsbury USA, 2017).

133. Scarry, E. Thermonuclear Monarchy: Choosing Between Democracy and Doom. (W. W. Norton & Company, 2016).

134. BBC. Hiroshima and Nagasaki: 75th anniversary of atomic bombings. BBC News (2020).

135. PBS News Hour. Types of Nuclear Bombs. PBS NewsHour https://www.pbs.org/newshour/nation/military-jan-june05-bombs_05-02 (2005).

136. Toon, O. B. et al. Rapidly expanding nuclear arsenals in Pakistan and India portend regional and global catastrophe. Sci. Adv. 5, eaay5478 (2019).

137. Rosenberg, D. A. & Moore, W. B. ‘Smoking Radiating Ruin at the End of Two Hours’: Documents on American Plans for Nuclear War with the Soviet Union, 1954-55. Int. Secur. 6, 3–38 (1981).

138. Rosenberg, D. Constraining Overkill: Contending Approaches to Nuclear Strategy, 1955–1965. in (Naval History and Heritage Command, 1994).

139. Badash, L. A Nuclear Winter’s Tale: Science and Politics in the 1980s. (MIT Press, 2009).

140. Sagan, C. Nuclear War and Climatic Catastrophe: Some Policy Implications. Foreign Aff. 62, 257–292 (1983).

141. Nuclear winter: The view from the US defense department. Survival 27, 130–134 (1985).

142. Tannenwald, N. The Nuclear Taboo: The United States and the Normative Basis of Nuclear Non-Use. Int. Organ. 53, 433–468 (1999).

143. Sauer, F. Atomic Anxiety: Deterrence, Taboo and the Non-Use of U.S. Nuclear Weapons. (Springer, 2015).

144. Robock, A. & Toon, O. B. Local Nuclear War, Global Suffering. Sci. Am. 302, 74–81 (2010).

145. Helfand, I. Nuclear Famine: Two Billion People At Risk? Global Impacts of Limited Nuclear War on Agriculture, Food Supplies, and Human Nutrition. https://www.psr.org/wp-content/uploads/2018/04/two-billion-at-risk.pdf (2013).

146. Coupe, J., Bardeen, C. G., Robock, A. & Toon, O. B. Nuclear Winter Responses to Nuclear War Between the United States and Russia in the Whole Atmosphere Community Climate Model Version 4 and the Goddard Institute for Space Studies ModelE. J. Geophys. Res. Atmospheres 124, 8522–8543 (2019).

147. Reisner, J. et al. Climate Impact of a Regional Nuclear Weapons Exchange: An Improved Assessment Based On Detailed Source Calculations. J. Geophys. Res. Atmospheres 123, 2752–2772 (2018).

148. Frankel, M., Scouras, J. & Ullrich, G. The Uncertain Consequences of Nuclear Weapons Use. https://apps.dtic.mil/sti/citations/ADA618999 (2015).

149. Scouras, J. Nuclear War as a Global Catastrophic Risk. J. Benefit-Cost Anal. 10, 274–295 (2019).

150. Gavin, F. J. We Need to Talk: The Past, Present, and Future of U.S. Nuclear Weapons Policy. War on the Rocks https://warontherocks.com/2017/01/we-need-to-talk-the-past-present-and-future-of-u-s-nuclear-weapons-policy/ (2017).

151. Rodriguez, L. How likely is a nuclear exchange between the US and Russia? https://rethinkpriorities.org/publications/how-likely-is-a-nuclear-exchange-between-the-us-and-russia (2019).

152. Baum, S., de Neufville, R. & Barrett, A. A Model for the Probability of Nuclear War. Glob. Catastrophic Risk Inst. Work. Pap. (2018) doi:10.2139/ssrn.3137081.

153. Baum, S. Reflections on the Risk Analysis of Nuclear War. in Proceedings of the Workshop on Quantifying Global Catastrophic Risks (ed. Garrick, B. J.) 19–50 (Garrick Institute for the Risk Sciences, University of California, 2018).

154. Schlosser, E. Command and Control: Nuclear Weapons, the Damascus Accident, and the Illusion of Safety. (Penguin Books, 2014).

155. Sagan, S. D. The Limits of Safety: Organizations, Accidents, and Nuclear Weapons. (Princeton University Press, 1993). doi:10.1515/9780691213064.

156. Sagan, S. D. Learning from Normal Accidents. Organ. Environ. 17, 15–19 (2004).

157. Maas, M. M. Regulating for ‘Normal AI Accidents’: Operational Lessons for the Responsible Governance of Artificial Intelligence Deployment. in Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society 223–228 (Association for Computing Machinery, 2018). doi:10.1145/3278721.3278766.

158. Rodriguez, L. What is the likelihood that civilizational collapse would directly lead to human extinction (within decades)? Effective Altruism Forum https://forum.effectivealtruism.org/posts/GsjmufaebreiaivF7/what-is-the-likelihood-that-civilizational-collapse-would (2020).

159. Wiblin, R. Luisa Rodriguez on why global catastrophes seem unlikely to kill us all.

160. Belfield, H. Collapse, Recovery and Existential Risk. in The End of the World as We Know It (eds. Callahan, P., Centeno, M., Larcey, P. & Patterson, T.) (Routledge Press, 2022).

161. Schubert, S., Caviola, L. & Faber, N. S. The Psychology of Existential Risk: Moral Judgments about Human Extinction. Sci. Rep. 9, 15100 (2019).

162. Kohler, S. Cooperative Security and the Nunn-Lugar Act. 4 (1989).

163. Wunderlich, C., Müller, H. & Jakob, U. WMD Compliance and Enforcement in a Changing Global Context. https://www.unidir.org/publication/wmd-compliance-and-enforcement-changing-global-context (2020) doi:10.37559/WMD/21/WMDCE02.

164. Reif, K. & Bugos, S. U.S., Russia Extend New START for Five Years. Arms Control Association https://www.armscontrol.org/act/2021-03/news/us-russia-extend-new-start-five-years (2021).

165. Kühn, U. Why Arms Control Is (Almost) Dead. Carnegie Europe https://carnegieeurope.eu/strategiceurope/81209 (2020).

166. Wan, W. Nuclear Escalation Strategies and Perceptions: The United States, the Russian Federation, and China. https://unidir.org/escalation (2021) doi:10.37559/WMD/21/NRR/02.

167. Matteucci, K. T. Signs of Life in Nuclear Diplomacy: A Look Beyond the Doom and Gloom. Georget. J. Int. Aff. (2019).

168. Kristensen, H. M. US Deploys New Low-Yield Nuclear Submarine Warhead. Federation Of American Scientists https://fas.org/blogs/security/2020/01/w76-2deployed/ (2020).

169. Fink, A. L. & Oliker, O. Russia’s Nuclear Weapons in a Multipolar World: Guarantors of Sovereignty, Great Power Status & More. Daedalus 149, 37–55 (2020).

170. Piotrowski, M. A. Russia’s Status-6 Nuclear Submarine Drone (Poseidon). https://pism.pl/publications/Russia_s_Status_6_Nuclear_Submarine_Drone__Poseidon_ (2018).

171. Vaddi, P. Bringing Russia’s New Nuclear Weapons Into New START. Carnegie Endowment for International Peace https://carnegieendowment.org/2019/08/13/bringing-russia-s-new-nuclear-weapons-into-new-start-pub-79672 (2019).

172. Edmonds, J. et al. Artificial Intelligence and Autonomy in Russia. 258 https://www.cna.org/CNA_files/centers/CNA/sppp/rsp/russia-ai/Russia-Artificial-Intelligence-Autonomy-Putin-Military.pdf (2021).

173. Kristensen, H. M. & Korda, M. Chinese nuclear weapons, 2021. Bull. At. Sci. 77, 318–336 (2021).

174. Kristensen, H. M. & Korda, M. China’s nuclear missile silo expansion: From minimum deterrence to medium deterrence. Bulletin of the Atomic Scientists https://thebulletin.org/2021/09/chinas-nuclear-missile-silo-expansion-from-minimum-deterrence-to-medium-deterrence/ (2021).

175. Wright, T. Is China gliding toward a FOBS capability? IISS https://www.iiss.org/blogs/analysis/2021/10/is-china-gliding-toward-a-fobs-capability (2021).

176. Acton, J. M. China’s Tests Are No Sputnik Moment. Carnegie Endowment for International Peace https://carnegieendowment.org/2021/10/21/china-s-tests-are-no-sputnik-moment-pub-85625 (2021).

177. Mills, C. Integrated Review 2021: Increasing the cap on the UK’s nuclear stockpile. (2021).

178. Acton, J. M. Is It a Nuke?: Pre-Launch Ambiguity and Inadvertent Escalation. https://carnegieendowment.org/2020/04/09/is-it-nuke-pre-launch-ambiguity-and-inadvertent-escalation-pub-81446 (2020).

179. Futter, A. & Zala, B. Strategic non-nuclear weapons and the onset of a Third Nuclear Age. Eur. J. Int. Secur. 6, 257–277 (2021).

180. Trabucco, L. & Maas, M. M. Into the Thick of It: Mapping the Emerging Landscape of Military AI Strategic Partnerships. in (2022).

181. Stanley-Lockman, Z. Military AI Cooperation Toolbox: Modernizing Defense Science and Technology Partnerships for the Digital Age. https://cset.georgetown.edu/wp-content/uploads/CSET-Military-AI-Cooperation-Toolbox.pdf (2021).

182. Bendett, S. & Kania, E. B. A new Sino-Russian high-tech partnership: authoritarian innovation in an era of great-power rivalry. 24 https://www.aspi.org.au/report/new-sino-russian-high-tech-partnership (2019).

183. Thompson, N. & Bremmer, I. The AI Cold War That Threatens Us All. Wired (2018).

184. Borning, A. Computer system reliability and nuclear war. Commun. ACM 30, 112–131 (1987).

185. Raushenbakh, B. V. Computer War. in Breakthrough: emerging new thinking: Soviet and Western scholars issue a challenge to build a world beyond war (eds. Gromyko, A. A. & Hellman, M.) (Walker, 1988).

186. Hoffman, D. The Dead Hand: The Untold Story of the Cold War Arms Race and Its Dangerous Legacy. (Anchor, 2010).

187. Harvey, J. R. U.S. Nuclear Command and Control for the 21st Century. Nautilus Institute for Security and Sustainability https://nautilus.org/napsnet/napsnet-special-reports/u-s-nuclear-command-and-control-for-the-21st-century/ (2019).

188. Cunningham, F. Nuclear Command, Control, and Communications Systems of the People’s Republic of China. Nautilus Institute for Security and Sustainability https://nautilus.org/napsnet/napsnet-special-reports/nuclear-command-control-and-communications-systems-of-the-peoples-republic-of-china/ (2019).

189. Gower, J. United Kingdom: Nuclear Weapon Command, Control, and Communications. https://securityandtechnology.org/wp-content/uploads/2020/07/gower_uk_nc3_report_IST.pdf (2019).

190. Johnson, J. Inadvertent escalation in the age of intelligence machines: A new model for nuclear risk in the digital age. Eur. J. Int. Secur. 1–23 (2021) doi:10.1017/eis.2021.23.

191. Johnson, J. ‘Catalytic nuclear war’ in the age of artificial intelligence & autonomy: Emerging military technology and escalation risk between nuclear-armed states. J. Strateg. Stud. 0, 1–41 (2021).

192. Geist, E. & Lohn, A. J. How Might Artificial Intelligence Affect the Risk of Nuclear War? 28 https://www.rand.org/pubs/perspectives/PE296.html (2018).

193. Fitzpatrick, M. Artificial Intelligence and Nuclear Command and Control. Survival 61, 81–92 (2019).

194. Field, M. Strangelove redux: US experts propose having AI control nuclear weapons. Bulletin of the Atomic Scientists https://thebulletin.org/2019/08/strangelove-redux-us-experts-propose-having-ai-control-nuclear-weapons/ (2019).

195. Johnson, J. Delegating strategic decision-making to machines: Dr. Strangelove Redux? J. Strateg. Stud. 0, 1–39 (2020).

196. Lowther, A. & McGiffin, C. America Needs a “Dead Hand”. War on the Rocks https://warontherocks.com/2019/08/america-needs-a-dead-hand/ (2019).

197. Freedberg, S. J. No AI For Nuclear Command & Control: JAIC’s Shanahan. Breaking Defense https://breakingdefense.sites.breakingmedia.com/2019/09/no-ai-for-nuclear-command-control-jaics-shanahan/ (2019).

198. Fedasiuk, R. We Spent a Year Investigating What the Chinese Army Is Buying. Here’s What We Learned. POLITICO (2021).

199. Fedasiuk, R., Melot, J. & Murphy, B. Harnessed Lightning: How the Chinese Military is Adopting Artificial Intelligence. https://cset.georgetown.edu/publication/harnessed-lightning/ (2021).

200. Loss, R. & Johnson, J. Will Artificial Intelligence Imperil Nuclear Deterrence? War on the Rocks https://warontherocks.com/2019/09/will-artificial-intelligence-imperil-nuclear-deterrence/ (2019).

201. Johnson, J. S. Artificial Intelligence:  A Threat to Strategic Stability. Strateg. Stud. Q. Spring 2020, 16–39 (2020).

202. Payne, K. Artificial Intelligence: A Revolution in Strategic Affairs? Survival 60, 7–32 (2018).

203. Amadae, S. M. et al. The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk. vol. 1 (SIPRI, 2019).

204. Avin, S. & Amadae, S. M. Autonomy and machine learning at the interface of nuclear weapons, computers and people. in The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk (ed. Boulanin, V.) (Stockholm International Peace Research Institute, 2019). doi:10.17863/CAM.44758.

205. Citron, D. & Chesney, R. Deepfakes and the New Disinformation War: The Coming Age of Post-Truth Geopolitics. Foreign Affairs vol. 98 (2019).

206. Seger, E. et al. Tackling threats to informed decisionmaking in democratic societies: promoting epistemic security in a technologically-advanced world. https://www.turing.ac.uk/research/publications/tackling-threats-informed-decision-making-democratic-societies (2020).

207. Favaro, M. Weapons of Mass Distortion: A new approach to emerging technologies, risk reduction, and the global nuclear order. 32 https://www.kcl.ac.uk/csss/assets/weapons-of-mass-distortion.pdf (2021).

208. Johnson, J. & Krabill, E. AI, Cyberspace, and Nuclear Weapons. War on the Rocks https://warontherocks.com/2020/01/ai-cyberspace-and-nuclear-weapons/ (2020).

209. Sharikov, P. Artificial intelligence, cyberattack, and nuclear weapons—A dangerous combination. Bull. At. Sci. 74, 368–373 (2018).

210. Schneier, B. The Coming AI Hackers. https://www.schneier.com/wp-content/uploads/2021/04/The-Coming-AI-Hackers.pdf (2021).

211. Brundage, M. et al. The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. http://arxiv.org/abs/1802.07228 (2018).

212. Futter, A. Hacking the bomb: cyber threats and nuclear weapons. (Georgetown University Press, 2018).

213. Eilstrup-Sangiovanni, M. Why the World Needs an International Cyberwar Convention. Philos. Technol. 31, 379–407 (2018).

214. Johnson, J. The AI-cyber nexus: implications for military escalation, deterrence and strategic stability. J. Cyber Policy (2019).

215. Gartzke, E. & Lindsay, J. R. Weaving Tangled Webs: Offense, Defense, and Deception in Cyberspace. Secur. Stud. 24, 316–348 (2015).

216. Zwetsloot, R. & Dafoe, A. Thinking About Risks From AI: Accidents, Misuse and Structure. Lawfare https://www.lawfareblog.com/thinking-about-risks-ai-accidents-misuse-and-structure (2019).

217. Kallenborn, Z. AI Risks to Nuclear Deterrence Are Real. War on the Rocks https://warontherocks.com/2019/10/ai-risks-to-nuclear-deterrence-are-real/ (2019).

218. Armstrong, S., Bostrom, N. & Shulman, C. Racing to the Precipice: a model of artificial intelligence development. AI Soc. 31, 201–206 (2016).

219. Danzig, R. Technology Roulette: Managing Loss of Control as Many Militaries Pursue Technological Superiority. 40 https://www.cnas.org/publications/reports/technology-roulette (2018).

 

