By Paul Scharre & Megan Lamberth, Center for a New American Security.
From the introduction:
Advances in artificial intelligence (AI) present immense opportunities for militaries around the world. As the potential of AI-enabled military systems grows, some activists are sounding the alarm, calling for restrictions or outright bans on some AI-enabled weapon systems.1 Conversely, skeptics of AI arms control argue that as a general-purpose technology developed in the civilian context, AI will be exceptionally hard to control.2 AI is an enabling technology with countless nonmilitary applications; this differentiates it from many other military technologies, such as landmines or missiles.3 Because of its widespread availability, an absolute ban on all military applications of AI is likely infeasible. There is, however, potential for prohibiting or regulating specific use cases.
The international community has, at times, banned or regulated weapons with varying degrees of success. In some cases, such as the ban on permanently blinding lasers, arms control has worked remarkably well to date. In others, such as attempted limits on unrestricted submarine warfare or aerial bombardment of cities, states failed to achieve lasting restraint in war. States' motivations for controlling or regulating weapons vary: a state may seek to limit the diffusion of a weapon that is particularly disruptive to political or social stability, contributes to excessive civilian casualties, or causes inhumane injury to combatants.
This paper examines the potential for arms control over military applications of AI by exploring historical cases of attempted arms control, analyzing both successes and failures. The first part of the paper surveys the existing academic literature on why some arms control measures succeed while others fail. The paper then proposes several criteria that influence the success of arms control.4 Finally, it analyzes the potential for AI arms control and suggests next steps for policymakers. Detailed historical cases of attempted arms control—from ancient prohibitions to modern agreements—can be found in appendix A.
History teaches us that policymakers, scholars, and members of civil society can take concrete steps today to improve the chances of successful AI arms control in the future. These include taking policy actions to shape the way the technology evolves and increasing dialogue at all levels to better understand how AI applications may be used in warfare. Any AI arms control will be challenging. There may be cases, however, where arms control is possible under the right conditions, and small steps today could help lay the groundwork for future successes.
See also summary Twitter thread by Paul Scharre.