
(Crossposted to LessWrong here.)

Although I have not seen the argument made in any detail or in writing, I and the Future of Life Institute (FLI) have gathered the strong impression that parts of the effective altruism ecosystem are skeptical of the importance of the issue of autonomous weapons systems. This post explains why we think those interested in avoiding catastrophic and existential risk, especially risk stemming from emerging technologies, may want to have this issue higher on their list of concerns.

We will first define some terminology and do some disambiguation, as there are many classes of autonomous weapons that are often conflated; all classes have some issues of concern, but some are much more problematic than others. We then detail three basic motivations for research, advocacy, coordination, and policymaking around the issue:

  1. Governance of autonomous weapon systems is a dry run, and precedent, for governance of AGI. In the short term, AI-enabled weapons systems will share many of the technical weaknesses and shortcomings of other AI systems, but, like general AI, they also raise safety concerns that are likely to increase rather than decrease with capability advances. The stakes are intrinsically high (literally life-or-death), and the context is an inevitably adversarial one involving states and major corporations. The sort of global coordination amongst potentially adversarial parties that will be required for governance of transformative/general AI systems will not arise from nowhere, and autonomous weapons offer an invaluable precedent and arena in which to build experience, capability, and best practices.
  2. Some classes of lethal autonomous weapon systems constitute scalable weapons of mass destruction (which may also have a much lower threshold for first use or accidental escalation), and hence a nascent catastrophic risk.
  3. By increasing the probability of the initiation and/or escalation of armed conflict, including catastrophic global armed conflict and/or nuclear war, autonomous weapons represent a very high expected cost that overwhelmingly offsets any gain in life from substituting autonomous weapons for humans in armed conflict.

Classes of autonomous weapons

Because many things with very different characteristics could fall under the rubric of “autonomous weapon systems” (AWSs) it is worth distinguishing and classifying them. First, let us split off cyberweapons – including AI-powered ones – as being an important but distinct issue. Likewise, we’ll set aside AI in other aspects of the military not directly related to the use of force, from strategy to target identification, where it serves to augment human action and decision-making. Rather, we focus on systems that have both (some form of) AI and physical armaments.

We now consider these armaments’ target types in turn, breaking them into three categories: anti-personnel weapons, force-on-force weapons (i.e. those attacking manned enemy vehicles or structures), and weapons targeting other autonomous weapon systems.

Anti-personnel AWSs can be further divided into lethal (or grossly injurious) ones versus nonlethal ones. While they are an interesting topic,[1] we leave aside here non-lethal anti-personnel autonomous weapon systems, which involve a somewhat distinct set of considerations.[2]

We regard force-on-force systems designed to attack manned military vehicles and installations as relatively less intrinsically concerning. The targets of such weapons will, with considerably higher probability, be valid military targets rather than civilian ones, and insofar as they scale to mass damage, that damage will be to an adversary’s military. Of course if these weapons are highly effective, the manned targets they are designed to attack may quickly be replaced with unmanned ones.[3]

This brings us to systems designed to attack other autonomous weapons (anti-AWSs). These exist now, for example in the form of automated anti-missile systems, and are likely to grow more prevalent and sophisticated. They raise a nuanced set of considerations, as we’ll see. Some types are quite uncontroversial: no one, to our knowledge, has advocated prohibiting, say, automated defenses on ships. On the other hand, very effective anti-ballistic missile systems could undermine the current nuclear equilibrium based on mutual assured destruction. And while the prospect of robots fighting robots rather than humans fighting humans is beguiling from the standpoint of avoiding the horrors of war, we’ll argue below that it is very unlikely to be a net positive.

This leads to a fairly complex set of considerations. FLI and other organizations have advocated for a prohibition on kinetic lethal anti-personnel autonomous weapons, with various degrees of distinction between anti-personnel and force-on-force lethal autonomous weapons, and various levels of concern and proposed regulation concerning some classes of force-on-force autonomous weapons. Motivations for this advocacy vary, but we start with one that is of particular importance to FLI and to the EA/long-termist community.

Lethal autonomous weapons systems are an early test for AGI safety, arms race avoidance, value alignment, and governance

There are a surprising number of parallels between the issue of autonomous weapons and some of the most challenging parts of the AGI safety issue. These parallels include:

  • In both cases, a race condition is both natural and dangerous;
  • Military involvement is possible in AGI and inevitable for AWSs;
  • Involvement by national governments is likely in AGI and inevitable for AWSs;
  • Secrecy and information hazards are likely in both;
  • Major ethical/responsibility concerns exist for both, perhaps more explicitly in AWSs;
  • In both cases, unpredictability and loss of control are key issues;
  • In both cases, early versions are potentially dangerous because of their incompetence; later versions are dangerous because of their competence.

The danger of arms races has long been recognized as a potentially existential threat in terms of AGI: if companies or countries worry that being second to realize a technology could be catastrophic for their corporate or national interest, then safety (and essentially all other) considerations will tend to fall by the wayside. When applied to autonomous weapons, “arms race” is literal rather than metaphorical, but similar considerations apply. The general problem with arms races is that they are very easy to lose but very difficult to win: you lose if you fail to compete, but you also lose if the competition leads to a situation that dramatically increases the risk to both parties, or to huge adverse side-effects; and this appears likely to be the case for autonomous weapons and AGI, just as it was for nuclear weapons.[4] Unfortunately, the current international and national security context includes multiple parties fomenting a “great powers” rivalry between the US and China that is feeding an arms race narrative in AI in general, including in the military and potentially extending to AGI.

Managing to avoid an arms race in autonomous weapons – via multi-stakeholder international agreement and other means – would set a very powerful precedent for avoiding one more generally. Fortunately, there is reason to believe that this arms race is avoidable.[5] The vast majority of AI researchers and developers are strongly against an arms race in AWSs,[6] and AWSs enjoy very little popular support.[7] Prohibition or strong governance of lethal autonomous weapons is thus a test case on which the overwhelming majority of AI researchers and developers agree. This presents an opportunity to draw at least some line, in a globally coordinated way, between what is and is not acceptable in delegating decisions, actions, and responsibility to AI. And doing so would set a precedent for avoiding a race by recognizing that each participant’s interests are better served by at least some coordination and cooperation.

Good governance of AWSs will take exactly the sort of multilateral cooperation, including getting militaries onboard, that is likely to be necessary with an overall AI/AGI (figurative) arms race. The methods, institutions, and ideas necessary to govern AGI in a beneficial and stable multilateral system are very unlikely to arise quickly or from nowhere. They might arise steadily from the growth of current AI governance institutions such as the OECD, international standards bodies, regulatory frameworks such as the one developing in the EU, etc. But these institutions tend to explicitly and deliberately exclude discussion of military issues so as to make reaching agreements easier – which avoids precisely the sorts of issues of national interest and military and geopolitical power that would be at the forefront of the most disastrous type of AGI race. Seeking to govern deeply unpopular AWSs (which also presently lack strong interest groups pushing for them) provides the easiest possible opportunity for a “win” in coordination amongst military powers.

Beyond race vs. cooperative dynamics, autonomous weapons and AGI present other important parallels at the level of technical AI safety and alignment, and multi-agent dynamics.

Lethal autonomous weapon systems are a special case of a more general problem in AI safety and ethics: the technical capability required to be effective may be far simpler to achieve than what is necessary to be moral, ethical, or legal. Indeed, the gap between making an autonomous weapon that is effective (successfully kills enemies) and one that is moral (in the sense of, at minimum, being able to act in accord with international law) may be larger than in any other AI application: the stakes are so high, and the situations so complex, that the problem may well be AGI-complete.[8]

In the short term, then, there are complex moral questions. In particular, who is responsible for the decisions made by an AI system when the moral responsibility cannot lie with the system? If an AI system is programmed to obey the “law of war,” but then fails, who is at fault? On the flip side, what happens if the AI system “disagrees” with a human commander directing an illegal act? Even a weapon that is very effective at obeying such rules is unlikely to (be programmed to be able to) disobey a direct “order” from its user: if such “insubordination” is possible, it raises the risk of incorrigible and intransigent intelligent weapons systems; but if not, it removes an existing barrier to unconscionable military acts. While these concerns are not foremost from the perspective of overall expected utility, for these and other reasons we believe that delegating the decision to take a human life to machine systems is a deep moral error, and doing so in the military sets a terrible precedent.

Things get even more complex when multiple cooperative and adversarial systems are involved. As argued below, the unpredictability and adaptability of AWSs are issues that will increase, rather than decrease, with better AI. And when many such agents interact, emergent effects are likely that are even less predictable in advance. This corresponds closely to the control problem in AI in general, and indicates a quite pernicious problem of AI systems leaving humans unable to predict what they will do, or to intervene effectively if what they do runs counter to the wishes of human overseers.

In advanced AI in general, one of the most dangerous dynamics is the unwarranted belief of AI developers, users, and funders that AI will – like most engineered technologies – by default do what we want it to do. It is important that those who would research, commission, or deploy autonomous weapons be fully cognizant of this issue; and we might hope that the cautious mindset this engenders would bleed into, or be transplanted into, safety considerations for powerful AI systems in and out of the military.

Lethal autonomous weapons as WMDs

There is a very strong case for classifying some anti-personnel AWSs as weapons of mass destruction. We regard the key defining characteristic of WMDs[9] to be that a single person’s agency, directed through the weapon, can directly cause many fatalities with very little additional support structure (such as an army to command). This is not possible with “conventional” weapons systems like guns, aircraft, and tanks, where the deaths caused scale roughly linearly with the number of people involved in causing those deaths.

With this definition, some anti-personnel lethal AWSs (such as microdrone munition-carrying “slaughterbots”) would easily qualify. These weapons are essentially (microdrone)+(bullet)+(smartphone components), and with near-future technology and efficiency of scale, slaughterbots could plausibly be as inexpensive as $100 each to manufacture en masse. Even with a 50% success rate and a doubling of the cost to account for delivery, this is $400/fatality. Nuclear weapons cost billions to develop, then tens to hundreds of millions per warhead. A nuclear strike against a major city is likely to cause hundreds of thousands of fatalities (for example, a 100 kiloton strike against downtown San Francisco would cause an estimated 200K fatalities and 400K injuries). 100,000 kills’ worth of slaughterbots, at a cost of $40M, would be just as cost-effective to manufacture and deploy, and dramatically cheaper to develop. They are bulkier than a nuclear warhead but could plausibly still fit in a 40’ shipping container (and, unlike nuclear, chemical, and biological weapons, they are safe to transport, hard to detect, and can easily be deployed remotely).
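Spelled out, the arithmetic is roughly as follows (a minimal back-of-the-envelope sketch using only the figures quoted above; the $50M warhead figure used for comparison is simply an assumed midpoint of the “tens to hundreds of millions” range, not an estimate):

```python
# Back-of-the-envelope cost comparison using the figures quoted above.
# All numbers are illustrative assumptions taken from the text, not estimates.

unit_cost = 100          # USD to manufacture one microdrone "slaughterbot" en masse
delivery_multiplier = 2  # doubling of cost to account for delivery
success_rate = 0.5       # assumed fraction of deployed units that cause a fatality

cost_per_fatality = unit_cost * delivery_multiplier / success_rate
print(f"Slaughterbot cost per fatality: ${cost_per_fatality:,.0f}")        # $400

fatalities = 100_000     # comparable to the quoted 100 kt strike on a major city
swarm_cost = fatalities * cost_per_fatality
print(f"Cost of a 100,000-fatality swarm: ${swarm_cost / 1e6:,.0f}M")      # $40M

# For comparison: the text quotes "tens to hundreds of millions" of USD per
# nuclear warhead (plus billions in development); $50M is an assumed midpoint.
warhead_cost = 50_000_000
print(f"Nuclear cost per fatality (warhead only): ${warhead_cost / fatalities:,.0f}")
```

The marginal cost per fatality is of the same order in both cases; the difference is that the swarm requires no fissile material, no delivery vehicle, and vastly less development cost.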

This is possible with near-future technology.[10] It is not hard to imagine even more miniaturized weaponry, in a continuum that could reach all the way to nanotechnology. And unlike the case for nuclear weapons (to first approximation), effectiveness and cost-efficiency are likely to increase significantly with technological improvement.[11] Thus if even a fraction of the resources that have been put into nuclear weapons were put into anti-personnel lethal AWSs, they could potentially become as large a threat. Consider that it took less than 20 years from the 1945 Trinity test to the Cuban Missile Crisis, which almost led to a global catastrophe, and that a determined but relatively minor program by a major military could likely develop a slaughterbot-type WMD within a handful of years.

One crucial difference between autonomous weapons and other WMDs is that the former’s ability to discriminate among potential targets is much better, and this capability should increase with time. A second is that autonomous WMDs would, unlike other WMDs, leave the targeted territory relatively undamaged and quickly inhabitable.

In certain ways these are major advantages: a (somewhat more) responsible actor could use this capability to target only military personnel insofar as they are distinguishable, or only the leadership structure of some rogue organization, without harming civilians or other bystanders. Even if such distinctions are difficult, such weapons could relatively easily be designed never to target children, the wounded, etc. And a military victory would not necessarily be accompanied by the physical destruction of an adversary’s infrastructure and economy.

The unfortunate flip-side of these differences, however, is that anti-personnel lethal AWSs are much more likely to be used. In terms of “bad actors,” along with the advantages of being safe to transport and hard to detect, the ability to selectively attack particular types of people who have been identified as worthy of killing will help assuage the moral qualms that might otherwise discourage mass killing. Particular ethnic groups, languages, uniforms, clothing, or individual identities (culled from the internet and matched using facial recognition) could all provide a basis for targeting and rationalization. And the ability to kill at scale without destroying physical assets would make autonomous WMDs far more strategically effective for seizing territory.

Autonomous WMDs would pose all of the same sorts of threats that other WMDs do,[12] from acts of terror to geopolitical destabilization to catastrophic conflict between major powers. Tens of billions of USD are spent by the US and other states to prevent terrorist actions using WMDs and to prevent the “wrong” states from acquiring them. And recall that a primary (claimed) reason for the Iraq war (at trillions of USD in total cost) was Iraq’s (claimed) possession of WMDs. It thus seems foolish in the extreme to allow – let alone implicitly encourage – the development of a new class of WMDs that could proliferate much more easily than nuclear weapons.

Lethal autonomous weapons as destabilizing elements in and out of war

On the list of most important things in the world, maintaining global peace and stability rates very highly; instability is a critical risk factor for global catastrophic or X-risk. Even nuclear weapons, probably the greatest current catastrophic risk, are arguably stabilizing against large-scale war. In contrast, there are many compelling reasons to see autonomous weapons as a destabilizing influence, perhaps profoundly so.[13]

For a start, AWSs like slaughterbots are ideal tools of assassination and terror, and hence deeply politically destabilizing. The usual obstacles to one individual killing another – technical difficulty, fear of being caught, physical risk during execution, and innate moral aversion – are all lowered or eliminated by using a programmable autonomous weapon. All else being equal, if lethal AWSs proliferate, this will inevitably make both political assassinations and acts of terror more feasible, and dramatically so if the current rate is limited by any of the above obstacles. Our sociopolitical systems react very strongly to both types of violence, and the consequences are unpredictable but could be very large-scale. Tallying up the economic cost of the largest terror attacks to date – those of 9/11 – surely reaches into the trillions of USD, with an accompanying social cost of surveillance, global conflict, and so on.

Second, like drone warfare, lethal AWSs are likely to further (and more widely) lower the threshold for state violence toward other states. The US, for one, has shown little reluctance to strike targets of interest in certain other countries, and lethal AWSs could diminish that reluctance even more by lowering the level of collateral damage.[14] This type of action might spread to other countries that currently lack the US’s technical ability to accomplish such strikes. Lethal (or nonlethal) AWSs could also increase states’ ability to perpetrate violence against their own citizens; whether this increases or decreases the stability of those states, however, seems unclear.

Third, AWSs of all types threaten to upset the status quo of military power. The advantage of major military powers rests on decades of technological advantage coupled with vast levels of spending on training and equipment. A significant part of this investment and advantage would be nullified by a new class of weapon that evolves on software rather than hardware timescales. Moreover, even if the current capability “ranking” of military powers were preserved, for a weapon that strongly favors offense (as some have argued is the case for anti-personnel AWSs) there may be no plausible technical advantage that suffices[15] – indeed this is a key reason that major military powers are so concerned about nuclear proliferation.

Finally, and probably most worrisome, if there is an open arms race in AWSs of all types, we see a dramatically increased risk of accidental triggering or escalation of armed conflict.[16] A crucial desirable feature of AWSs, from the military point of view, is that their operators can understand and predict[17] how they will operate in a given situation: under what conditions they will take action, on what sorts of targets, and how. This is a very difficult technical problem because, given the variety of situations in which an AWS might be placed, it could easily fall outside the context of its training data. But it is a crucial one: without such an understanding, fielding an AWS would raise a spectrum of potential unintended consequences.

But now consider a situation in which AWSs are designed to attack and defend against other AWSs. In this case, the predictability of a given AWS turns from a desirable feature (allowing military decision makers to understand how their weapon will function) into an exploitable liability.[18] There will then be a very strong conflict between the desire to make an AWS predictable to its user and the necessity of making it unpredictable and unexploitable to its adversary. This is likely to manifest as a parallel conflict between a simple set of clear and followable rules (making the AWS more predictable) and a high degree of flexibility and “improvisation” (making the AWS more effective but less predictable). This competition would happen alongside a competition in the speed of the OODA (Observe, Orient, Decide, Act) loop. The net effect seems to point inevitably to a situation in which AWSs react to each other in a way that is both unpredictable in advance and too fast for humans to intervene. There seems little opportunity for conflict between such weapons to de-escalate. Inadvertent military conflict is already a major problem when humans are involved who fully understand the stakes. It seems very dangerous to create a situation in which the ability to resist or forestall such escalation would be seen as a major and exploitable military disadvantage.

Keeping the threshold for war high is obviously very important, but it is worth looking at the numbers. A large-scale nuclear war is unbelievably costly: it would most likely kill 1-7 Bn people in the first year and wipe out a large fraction of Earth’s economic activity (i.e. of order one quadrillion USD or more, a decade’s worth of world GDP). Some current estimates of the likelihood of global-power nuclear war over the next few decades range from ~0.5-20%. So just a 10% increase in this probability, due to an increase in the probability of conflict that leads to nuclear war, costs in expectation ~500K - 150m lives and ~$0.1-10Tn (not counting huge downstream life-loss and economic losses). Insofar as saving the lives of soldiers is an argument in favor of deploying AWSs, it seems exceedingly unlikely that substituting lethal AWSs for soldiers will ever save this many lives or this much value: AWSs are unlikely to save any lives in a global thermonuclear war, and it is hard to imagine a conventional war of large enough scale that AWSs could substitute for this many humans without the war escalating into a nuclear one. In other words, imagine a war with N_h human combatants in which f·N_h are expected to die, with probability P of that or another related war escalating into a nuclear exchange costing N_n lives. We suppose that we might replace these human combatants with autonomous ones, but at the cost of increasing the escalation probability to P′. The expected deaths are f·N_h + P·N_n in the human-combatant case and P′·N_n in the autonomous-combatant case, with a difference in fatalities of (P′ − P)·N_n − f·N_h from making the substitution. Given how much larger N_n (~1-7 Bn) is than f·N_h (tens of thousands at most), it only takes a small increase P′ − P for this to be a very poor exchange.
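To make this concrete, here is a minimal sketch of the same expected-fatality arithmetic; the specific values of f·N_h, N_n, and P below are illustrative assumptions within the ranges quoted above, not estimates:

```python
# Expected-fatality comparison from the paragraph above. Notation follows the
# text; the specific values below are illustrative assumptions, not estimates.

f_Nh = 50_000            # conventional combat deaths averted ("tens of thousands at most")
Nn = 4_000_000_000       # deaths in a large-scale nuclear war (~1-7 Bn)
P = 0.01                 # assumed baseline probability of escalation to nuclear war

def expected_deaths_human(p):
    return f_Nh + p * Nn          # soldiers die, plus baseline nuclear risk

def expected_deaths_autonomous(p_prime):
    return p_prime * Nn           # soldiers spared, but elevated nuclear risk

# Break-even increase in escalation probability: (P' - P) = f_Nh / Nn.
print(f"Break-even increase: {f_Nh / Nn:.2e}")   # 1.25e-05, roughly a thousandth
                                                 # of a percentage point

delta = 0.001                     # assumed increase in escalation probability
extra = expected_deaths_autonomous(P + delta) - expected_deaths_human(P)
print(f"Extra expected deaths from substitution: {extra:,.0f}")   # 3,950,000
```

The point is that the break-even increase in escalation probability is of order 10⁻⁵: even a tiny rise in the chance of nuclear escalation swamps the lives saved by substituting machines for soldiers.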

What should be done?

We’ve argued above that the issue of autonomous weapons is not simply a matter of concerns about soulless robots killing people, or of discomfort with the inevitable applications of AI to military purposes. Rather, particular properties of autonomous weapons seem likely to lead, in expectation, to a substantially more dangerous world. Moreover, actions to mitigate this danger may even help – via precedent and capability-building – in mitigating others. The issue is also relatively tractable – at least for now, and in comparison to more intractable-but-important issues like nuclear accident risk or the problematic business model of certain big tech companies. Although the involvement of militaries makes it difficult, there is as yet relatively little strong corporate interest in the issue.[19] International negotiations exist and are underway (though struggling to make significant headway). It is also relatively neglected, with a small number of NGOs working actively on it and relatively little public awareness of the issue. It is thus a good target for action by the usual criteria.

Arguments against being concerned with autonomous weapons appear to fall into three general classes.[20] The first is that autonomous weapons are a net good. The second is that autonomous weapons are an inevitability, and there’s little or nothing to be done about it. The third is simply that this is “somebody else’s problem,” and low-impact relative to other issues to which effort and resources could be devoted.[21] We’ve argued above against all three positions: the expected utility of widespread autonomous weapons is likely to be highly negative (due to an increased probability of large-scale war, if nothing else); the issue is addressable (with multiple examples of past successful arms-control agreements) and currently tractable if difficult; and success would also improve the probability of positive results in even more high-stakes arenas, including global AGI governance.

If the issue of autonomous weapons is important, tractable and neglected, it is worth asking what success would look like. Many of the above concerns could be substantially mitigated via an international agreement governing autonomous weapons; unfortunately they are unlikely to be significantly impacted by lesser measures. Arguments against such an agreement tend to focus on how hard or effective it would be, or conflate very distinct considerations or weapons classes. But there are many possible provisions such an agreement could include that would be net-good and that we believe many countries (including major military powers) might agree on. For example:

  • Some particular, well-defined classes of weapons could be prohibited (as biological weapons, blinding laser weapons, space-based nuclear weapons, etc., are currently). Weapons with high potential for abuse and relatively little real military advantage to major powers (like slaughterbots) should be first in line. Primarily defensive automated weaponry targeting missiles or other unmanned objects, and non-injurious AWSs, very probably should not be prohibited in general. The grey area in the middle should be worked out in multilateral negotiation.
  • For whatever is not prohibited, there could be agreements (supplemented by internal regulations) regarding the proliferation, tracking, attribution, and human control of AWSs, among other things; for some examples see this “Roadmapping exercise,” which emerged as a sketch of consensus recommendations from a meeting of technical experts with a very wide range of views on autonomous weapons.

Highlighting the risks of autonomous weapons may also encourage militaries to invest substantially in effective defensive technologies (especially those that are non-AI and/or purely defensive rather than force-on-force) against lethal autonomous weapons, including the prohibited varieties. This could lead to a scenario – imperfect, but far less problematic than our current trajectory – in which anti-personnel AWSs are generally prohibited, yet defended against, and other AWSs are either prohibited or governed by a strong set of agreements aimed at maintaining a stable detente in AI weapons.

FLI has advanced the view – widely shared in the AI research community – that the world will be very ill-served by an arms race and unfettered buildup in autonomous weaponry. Our confidence in this is quite high. We have further argued here that the stakes are significantly greater than many have appreciated, which has motivated both FLI’s advocacy in this area and this post. Less clear is how much and what can be done about the dynamics driving us in that direction. We welcome feedback both regarding the arguments put forth in this piece, and more generally about what actions can be taken to best mitigate the long-term risks that autonomous weapons may pose.

I thank FLI staff and especially Jared Brown and Emilia Javorsky for helpful feedback and notes on this piece.

Notes


  1. As a major advantage, nonlethal autonomous weapons need not defend themselves and so can take on significant harm in order to prevent harm while subduing a human. On the other hand, if such weapons become too effective they may make it too easy and “low-cost” for authoritarian governments to subdue their populace. ↩︎

  2. Though we would note that converting a nonlethal autonomous weapon into a lethal one could require relatively small modification, as it would really amount to using the same software on different hardware (weapons). ↩︎

  3. Autonomy also creates capabilities – like swarms – that are wholly new and will subvert existing weapons categories. The versatility of small, scalable, lethal AWSs is of note here, as they might be quickly repurposed for a variety of target types, with many combining to attack a larger target. ↩︎

  4. The claim here is not that nuclear weapons are without benefit (they have arguably been a stabilizing influence so far), but that the arms race to weapons numbers far beyond what deterrence requires probably is. Understanding of nuclear winter laid bare the lose-lose nature of the nuclear arms race: even if one power were able to perform a magically effective first strike that eliminated all of the enemy’s weapons, that power would still find itself with a starving population. ↩︎

  5. AI will unavoidably be tied to military capability, as it has appropriate roles in the military that could not be prevented even if that were desirable. However, this is very different from an unchecked arms race, and de-linking AI and weaponry as much as possible seems a net win. ↩︎

  6. For example in polling for the Asilomar Principles among many of the world’s foremost AI researchers, Principle 18, “An arms race in lethal autonomous weapons should be avoided,” polled the very highest. ↩︎

  7. This survey shows about 61% opposed to and 22% in favor of their use. This article points to a more recent EU poll with high (73%) support for an international treaty prohibiting them. It should be noted that both surveys were commissioned by the Campaign to Stop Killer Robots. This study argues that opinions can easily change due to additional factors, and in general we should assume that public understanding of autonomous weapons and their implications is fairly low. ↩︎

  8. There is significant literature and debate on the difficulty of satisfying the requirements of international law regarding distinction and proportionality; see e.g. this general analysis, and this discussion of general issues of human vs. machine control. Beyond the question of legality are moral questions, as explored in detail here, for example. ↩︎

  9. The term “WMD” is somewhat poorly defined, sometimes conflated with the trio of chemical, biological and nuclear weapons. But if we define WMDs in terms of characteristics such that the term could at least in principle apply both to and beyond nuclear, chemical and biological weapons, then it’s hard to avoid including anti-personnel AWSs. One might include additional or alternative characteristics that (a) WMDs must be very destructive, and/or (b) that they are highly indiscriminate, and/or (c) that they somehow offend human sensibilities through their mode of killing. However, (a) chemical and biological weapons are not necessarily destructive (other than to life/humans); (b) if biological weapons are made more discriminate, e.g. to attack only people with some given set of genetic markers, they would almost certainly still be classed as WMDs and arguably be of even more concern; (c) “offending sensibilities” is rather murkily defined. ↩︎

  10. It has been argued that increasing levels of autonomy in loitering munition systems represent a slippery slope, behaving functionally as lethal autonomous weapons on the battlefield. Some of the systems identified as of highest concern and also of lower cost relative to large drones have been deployed in recent drone conflicts in Libya and Nagorno-Karabakh. ↩︎

  11. While speculating on particular technologies is probably not worthwhile, note that the physical limits are quite lax. For example, ten million gnat-sized drones carrying a poison (or noncontagious bioweapon) payload could fit into a suitcase and fly at 1 km/hr (as gnats do). ↩︎

  12. Note that autonomous WMDs could also be combined with or enable other ones: miniature autonomous weapons could efficiently deliver a tiny chemical, biological or radiological payload, combining the high lethality of existing WMDs with the precision of autonomous ones. ↩︎

  13. For some analyses of this issue see this UNIDIR report and this piece. Even the dissertation by Paul Scharre concludes that “The widespread deployment of fully autonomous weapons is therefore likely to undermine stability because of the risk of unintended lethal engagements,” and recommends regulatory approaches to mitigate the issue. ↩︎

  14. In the case of the US, this effect is likely to be present even if lethal AWSs were prohibited – human-piloted microdrones or swarms should be able to provide most of the same advantages as lethal AWSs, except in rare circumstances when the signal can be blocked. ↩︎

  15. Israel presents a particularly important case. While its small population motivates replacing or augmenting human soldiers with machines, it seems unwise to us for Israel to favor unchecked global development of lethal AWSs when it is surrounded by adversaries perfectly capable of developing and fielding them. ↩︎

  16. This RAND publication lays out the argument in some detail. ↩︎

  17. For detailed discussion of these terms, see e.g. this UNIDIR report. ↩︎

  18. Autonomous weapons developers are already thinking along these lines of course; see for example this article about planning to undermine drone swarms by predicting and intervening in their dynamics. ↩︎

  19. While arms manufacturers will tend to disfavor limitations on arms, few if any are currently profiting from the sorts of weapons that might be prohibited by international agreement, and there is plenty of scope for profit-making in designing defenses against lethal autonomous weapons, etc. ↩︎

  20. We leave out disingenuous arguments against straw men such as “But if we give up lethal autonomous weapons and allow others to develop them, we lose the war.” No one serious, to our knowledge, is advocating this – the whole point of multilateral arms control agreements is that all parties are subject to them. Ironically, though, this self-defeating position is the one taken, at least formally, by the US (among others), whose current policy largely disallows (though see this re-interpretation) fully autonomous lethal weapons, even while the US argues against a treaty creating such a prohibition for other countries. ↩︎

  21. A more pernicious argument that we have heard is that advocacy regarding autonomous weapons is antagonistic to the US military and government, which could lead to a lack of influence in other matters. This seems terribly misguided to us. We strongly believe US national security is served, rather than hindered, by agreements and limitations on autonomous weapons and their proliferation. There is a real danger that US policymakers and military planners are failing to realize this precisely due to lack of input from the experts who understand the issues surrounding AI systems best. Moreover, neither the US government nor the US military establishment is a monolithic institution; both are huge complexes with many distinct agents and interests. ↩︎

Comments
kbog

Lethal autonomous weapons systems are an early test for AGI safety, arms race avoidance, value alignment, and governance

OK, so this makes sense and in my writeup I argued a similar thing from the point of view of software development. But it means that banning AWSs altogether would be harmful, as it would involve sacrificing this opportunity. We don't want to lay the groundwork for a ban on AGI, we want to lay the groundwork for safe, responsible development. What you actually suggest, contra some other advocates, is to prohibit certain classes but not others... I'm not sure if that would be helpful or harmful in this dimension. Of course it certainly would be helpful if we simply worked to ensure higher standards of safety and reliability.

I'm skeptical that this is a large concern. Have we learned much from the Ottawa Treaty (which technically prohibits a certain class of AWS) that will help us with AGI coordination? I don't know. Maybe.

Seeking to govern deeply unpopular AWSs (which also presently lack strong interest groups pushing for them) provides the easiest possible opportunity for a “win” in coordination amongst military powers.

I don't think this is true at all. Defense companies could support AWS development, and the overriding need for national security could be a formidable force that manifests in domestic politics in a variety of ways. Surely it would be easier to achieve wins on coordinating issues like civilian AI, supercomputing, internet connectivity, or many other tech governance issues which affect military (and other) powers?

Compared to other areas of coordination among military powers, I guess AI weapons look like a relatively easy area right now, but that will change in proportion to their battlefield utility.

While these concerns are not foremost from the perspective of overall expected utility, for these and other reasons we believe that delegating the decision to take a human life to machine systems is a deep moral error, and doing so in the military sets a terrible precedent.

I thought your argument here was just that we need to figure out how to implement autonomous systems in ways that best respond to these moral dilemmas, not that we need to avoid them altogether. AGI/ASI will almost certainly be making such decisions eventually, right? We better figure it out.

In my other post I had detailed responses to these issues, so let me just say briefly here that the mere presence of a dilemma in how to design and implement an AWS doesn't count as a reason against doing it at all. Different practitioners will select different answers to the moral questions that you raise, and the burden of argument is on you to show that we should expect practitioners to pick wrong answers that will make AWSs less ethical than the alternatives.

Lethal autonomous weapons as WMDs

At this point, it's been three years since FLI released their slaughterbots video, and despite all the talk of how it is cheap and feasible with currently available or almost-available technology, I don't think anyone is publicly developing such drones - suggesting it's really not so easy or useful.

A mass drone swarm terror attack would be limited by a few things. First, distances. Small drones don't have much range. So if these are released from one or a few shipping containers, the vulnerable area will be limited. These $100 micro drones have a range of only around 100 meters. The longest range consumer drones apparently go 1-8km but cost several hundred or several thousand dollars. Of course you could do better if you optimize for range, but these slaughterbots cannot be optimized for range, they must have many other features like military payload, autonomous computing, and so on. 

Covering these distances will take time. I don't know how fast these small drones are supposed to go - is 20km/h a good guess, taking into account buildings posing obstacles to them? If so then it will take half an hour to cover a 10 kilometer radius. If these drones are going to start attacking immediately, they will make a lot of noise (from those explosive charges going off) which will alert people, and pretty soon alarm will spread on phones and social media. If they are going to loiter until the drones are dispersed, then people will see the density of drones and still be alerted. Specialized sensors or crowdsourced data might also be used to automatically detect unusual upticks in drone density and send an alert.

So if the adversary has a single dispersal point (like a shipping container) then the amount of area he can cover is fundamentally pretty limited. If he tries to use multiple dispersal points to increase area and/or shorten transit time, then logistics and timing get complicated. (Timing and proper dispersal will be especially difficult if a defensive EW threat prevents the drones from listening to operators or each other.) Either way, the attack must be in a dense urban area to maximize casualties. But few people are actually outside at any given time. Most are either in a building, in a car or public transport, even during rush hour or lunch break. And for every person who gets killed by these drones, there will be many other people watching safely through car or building windows who can see what is going on and alert other people. So people's vulnerability will be pretty limited. If the adversary decides to bring large drones to demolish barriers then it will be a much more expensive and complex operation. Plus, people only have to wait a little while until the drones run out of energy. The event will be over in minutes, probably.

If we imagine that drone swarms are a sufficiently large threat that people prepare ahead of time, then it gets still harder to inflict casualties. Sidewalks could have light coverings (also good for shade and insulation), people could carry helmets, umbrellas,  or cricket bats, but most of all people would just spend more time indoors. It's not realistic to expect this in an ordinary peacetime scenario but people will be quite adept at doing this during military bombardment. 

Also, there are options for hard countermeasures which don't use technology that is more complicated than that which is entailed by these slaughterbots. Fixtures in crowded areas could shoot anti-drone munitions (which could be less lethal against humans) or launch defensive drones to disable the attackers. 

Now, obviously this could all change as drones get better. But defensive measures including defensive drones could improve at the same time. 

I should also note that the idea of delivering a cheap deadly payload like toxins or a dirty bomb via shipping container has been around for a while, yet no one has carried it out.

Finally, an order of hundreds of thousands of drones, designed as fully autonomous killing machines, is quite industrially significant. It's just not something that a nonstate actor can pull off. And the idea that the military would directly construct mass murder drones and then lose them to terrorists is not realistic.

The unfortunate flip-side of these differences, however, is that anti-personnel lethal AWSs are much more likely to be used. In terms of “bad actors,” along with the advantages of being safe to transport and hard to detect, the ability to selectively attack particular types of people who have been identified as worthy of killing will help assuage the moral qualms that might otherwise discourage mass killing.

I don't think the history of armed conflict supports the view that people become much more willing to go to war when their weapons become more precise. After all the primary considerations in going to war are matters of national interest, not morality. If there is such a moral hazard effect then it is small and outweighed by the first-order reduction in harm.

Autonomous WMDs would pose all of the same sorts of threats that other ones do,[12]

Just because drones can deploy WMDs doesn't mean they are anything special - you could also combine chem/bio/nuke weapons with tactical ballistic missiles, with hypersonics, with torpedoes, with bombers, etc.

Lethal autonomous weapons as destabilizing elements in and out of war

I stand by the point in my previous post that it is a mistake to conflate a lower threshold for conflict with a higher (severity-weighted) expectation of conflict, and military incidents will be less likely to escalate (ceteris paribus) if fewer humans are in the initial losses.

Someone (maybe me) should take a hard look at these recent arguments you cite claiming increases in escalation risk. The track record for speculation on the impacts of new military tech is not good so it needs careful vetting.

A large-scale nuclear war is unbelievably costly: it would most likely kill 1-7Bn in the first year and wipe out a large fraction of Earth’s economic activity (i.e. of order one quadrillion USD or more, a decade worth of world GDP.)Some current estimates of the likelihood of global-power nuclear war over the next few decades range from ~0.5-20%. So just a 10% increase in this probability, due to an increase in the probability of conflict that leads to nuclear war, costs in expectation ~500K - 150m lives and ~$0.1-10Tn (not counting huge downstream life-loss and economic losses). 

The mean expectations are closer to the lower ends of these ranges. 

Currently, 87,000 people die in state-based conflicts per year. If automation cuts this by 25% then in three decades it will add up to 650k lives saved. That's still outweighed if the change in probability is 10%, but for reasons described previously I think 10% is too pessimistic. 

The third is simply that this is “somebody else’s problem,” and low-impact relative to other issues to which effort and resources could be devoted.[21] We’ve argued above against all three positions: the expected utility of widespread autonomous weapons is likely to be highly negative (due to increase probability of large-scale war, if nothing else), the issue is addressable (with multiple examples of past successful arms-control agreements), currently tractable if difficult, and success would also improve the probability of positive results in even more high-stakes arenas including global AGI governance.

As the absolute minimum to address #3, I think advocacy on AWSs should be compared to advocacy on other new military tech like hypersonics and AI-enabled cyber weapons which come with their own fair share of similar worries. 

We leave out disingenuous arguments against straw men such as “But if we give up lethal autonomous weapons and allow others to develop them, we lose the war.” No one serious, to our knowledge, is advocating this – the whole point of multilateral arms control agreements is that all parties are subject to them. 

If you stigmatize them in the Anglosphere popular imagination as a precursor to a multilateral agreement, then that's basically what you're doing.

 

I would like to again mention the Ottawa Treaty, I don't know much about it, but it seems like a rich subject to explore for lessons that can be applied to AWS regulation. 

Thanks for your replies here, and for your earlier longer posts that were helpful in understanding the skeptical side of the argument, even if I only saw them after writing my piece. As replies to some of your points above:

But it means that banning AWSs altogether would be harmful, as it would involve sacrificing this opportunity. We don't want to lay the groundwork for a ban on AGI, we want to lay the groundwork for safe, responsible development

It is unclear to me what you suggest we would be “sacrificing” if militaries did not have the legal opportunity to use lethal AWS. The opportunity I see is to make decisions, in a globally coordinated way and amongst potentially adversarial powers, about acceptable and unacceptable delegations of human decisions to machines, and enforcing those decisions. I can’t see how success in doing so would sacrifice the opportunity. Moreover, a ban on all autonomous weapons (including purely defensive nonlethal ones) is very unlikely and not really what anyone is calling for, so there will be plenty of opportunity to “practice” on non-lethal AWs, defenses against AWs, etc., on the technical front; there will also be other opportunities to “practice” on what life-and-death decisions should, and should not, be delegated, for example in judicial review.

Have we learned much from the Ottawa Treaty (which technically prohibits a certain class of AWS) that will help us with AGI coordination? I don't know. Maybe

Though I understand why you have drawn a connection to the Ottawa Treaty because of its treatment of landmines, I believe this is the wrong analogy for AWSs. I believe the Biological Weapons Convention is more apt, and I think the answer would be "yes," we have learned something about international governance and coordination for dangerous technology from the BWC. I also believe that the agreement not to use landmines is a global good.

Surely it would be easier to achieve wins on coordinating issues like civilian AI, supercomputing, internet connectivity, or many other tech governance issues which affect military (and other) powers?

I am not sure why you are confident it would be easier to reach binding agreements on these suggested matters. To the extent that it is possible, it may suggest that there is little value to be gained. What is generally missing from these is popular or political will: there is little appetite to create an international agreement on e.g. internet connectivity. It’s not as high stakes or consequential as lethal AWSs, and to first approximation, nobody cares. The point is to show agreement can be reached in an arena that is consequential for militaries, and this is our best opportunity to do so.

Different practitioners will select different answers to the moral questions that you raise, and the burden of argument is on you to show that we should expect practitioners to pick wrong answers that will make AWSs less ethical than the alternatives.

There are a lot of important and difficult moral questions worth a long discussion, as well as more practical questions of whether systems and chains-of-command are in fact created in a way that responsibility rests somewhere rather than nowhere. I've got my own beliefs on those, which may or may not be shared, but I actually don't think we need to address them to judge the importance of limitations on autonomous weapons. I don't necessarily agree that the burden is on me, though: it's certainly both legally (and I believe ethically) "your" responsibility, if you are creating a new system for killing people, to show that it is consistent with international law, for example.

At this point, it's been three years since FLI released their slaughterbots video, and despite all the talk of how it is cheap and feasible with currently available or almost-available technology, I don't think anyone is publicly developing such drones - suggesting it's really not so easy or useful

At the time of release, Slaughterbots was meant to be speculative and to raise awareness of the prospect of risk. AGI and a full scale nuclear war haven't happened either--that doesn't make the risk not real. Would you lodge the same complaint against “The Day After”? Regardless, as to whether people are developing such drones, I suggest you review information in a report called "Slippery Slope" by PAX on such systems, especially about the Kargu drones from Turkey. I think you will decide that it is relatively “easy” and “useful” to develop lethal AWSs.

Responding to the paragraphs starting with “A mass drone swarm terror attack…” through the paragraph starting with “Now, obviously this could…”: Your analysis here is highly speculative and presupposes a particular pattern in the development of offensive and defensive capabilities of lethal AWSs. I welcome any evidence you have on these points, but your scenario seems to a) assume limited offensive capability development, b) assume willingness and ability to implement layers of defensive measures at all “soft” targets, c) focus only on drones, not many other possible lethal AWSs, and d) still produce a considerable amount of cost--both in countermeasures and in psychological costs--which would seem to suggest a steep price to be paid to have lethal AWSs even in a rosy scenario.

Finally, an order of hundreds of thousands of drones, designed as fully autonomous killing machines, is quite industrially significant. It's just not something that a nonstate actor can pull off. And the idea that the military would directly construct mass murder drones and then lose them to terrorists is not realistic.

I believe we agree that in terms of serious (like 1000+ casualties) WMDs, the far greater risk is smaller state actors producing or buying them, not a rogue terror organization. As a reminder, it won’t (just) be the military making these weapons, but weapons makers who can then sell them (e.g., look at the export of drones by China and Turkey throughout many high-conflict regions). Further, once produced or sold to a state actor, weapons can and do then come into the possession of rogue actors, including WMDs. Look no further than the history of the Nunn-Lugar Cooperative Threat Reduction program for real cases and close calls, the transfer of weapons from Syria to Hezbollah, etc.

I don't think the history of armed conflict supports the view that people become much more willing to go to war when their weapons become more precise.

It may or may not be the case; as you indicate, it's mixed in with a lot of factors. But precision (and lack of infrastructure destruction) are actually not the only, or even the primary, reasons I expect AWs will lead to wider conflict, depending on the context. In addition to potentially being more precise, lethal AWSs will be less attributable to their source, and present less risk to use (both in physical and financial costs). At least in terms of violence (if not, to date, war), the latter seems to make a large difference, as exhibited by the US (human-piloted) drone program, for example.

The mean expectations are closer to the lower ends of these ranges.

I'm not sure how to interpret this. The lower end of the ranges are the lower end of ranges given by various estimators. The mean of this range is somewhere in the middle, depending how you weight them.

The question of whether small-scale conflicts will increase enough to counterbalance the life-saving of substituting AWs for soldiers is, I agree, hard to predict. But unless you take the optimistic end of the spectrum (as I guess you have) I don't see how the numbers can balance at all when including large-scale wars.

Someone (maybe me) should take a hard look at these recent arguments you cite claiming increases in escalation risk. The track record for speculation on the impacts of new military tech is not good so it needs careful vetting.

I welcome your investigation. I agree that speculation on the impacts of new military tech has not been great (along all spectrums), which is why precaution is a wise course of action.

As the absolute minimum to address #3, I think advocacy on AWSs should be compared to advocacy on other new military tech like hypersonics and AI-enabled cyber weapons which come with their own fair share of similar worries.

I agree that other emerging technologies (including some you don’t mention, like synthetic bioweapons), deserve greater attention. But that doesn’t mean lethal AWSs should be ignored.

If you stigmatize them in the Anglosphere popular imagination as a precursor to a multilateral agreement, then that's basically what you're doing.

This is a very strange argument to me. Saying something is problematic, and being willing in principle not to do it, seems like a pretty necessary precursor to making an agreement with others not to do it. Moreover, if something is ethically wrong, we should be willing to not do it even if others do it — but far, far better to enter into an agreement so that they don't.

welcome any evidence you have on these points, but your scenario seems to a) assume limited offensive capability development, b) willingness and ability to implement layers of defensive measures at all “soft” targets, c) focus only on drones, not many other possible lethal AWSs, and d) still produces considerable amount of cost--both in countermeasures and in psychological costs--that would seem to suggest a steep price to be paid to have lethal AWSs even in a rosy scenario.

I'm saying there are substantial constraints on using cheap drones to attack civilians en masse, some of them are more-or-less-costly preparation measures and some of them are not. Even without defensive preparation, I just don't see these things as being so destructive.

If we imagine offensive capability development then we should also imagine defensive capability development.

What other AWSs are we talking about if not drones?

In addition to potentially being more precise, lethal AWSs will be less attributable to their source and will pose less risk to the party using them (in both physical and financial cost).

Hmm. Have there been any unclaimed drone attacks so far, and would that change with autonomy? Moreover, if such ambiguity does arise, would that not also mitigate the risk of immediate retaliation and escalation? My sense is that there are conflicting lines of reasoning here. How can AWSs increase the risks of dangerous escalation, but also be perceived as safe and risk-free by users?

I'm not sure how to interpret this. The lower ends of the ranges are the lower ends of the ranges given by various estimators. The mean of each range is somewhere in the middle, depending on how you weight them.

I mean, we're uncertain about the 1-7 Bn figure and uncertain about the 0.05-2% figure. When you multiply them together, the low × low is implausibly low and the high × high is implausibly high, but the mean × mean would be closer to the lower end. So if the means are roughly 4 Bn and 1%, the product is 40 M, which is closer to the lower end of your 0.5-150 M range. Yes, I realize this makes little difference (assuming your 1-7 Bn and 0.05-2% estimates are roughly normal distributions). It does seem apparent to me now that the escalation-to-nuclear-warfare risk is much more important than some of these direct impacts.
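To spell the multiplication out (my own back-of-envelope products, assuming the ranges above are the intended ones):

$$1\,\text{Bn} \times 0.05\% \approx 0.5\,\text{M}, \qquad 4\,\text{Bn} \times 1\% \approx 40\,\text{M}, \qquad 7\,\text{Bn} \times 2\% \approx 140\,\text{M}$$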

The question of whether small-scale conflicts will increase enough to counterbalance the life-saving of substituting AWs for soldiers is, I agree, hard to predict. But unless you take the optimistic end of the spectrum (as I guess you have), I don't see how the numbers can balance at all once large-scale wars are included.

I think they'd probably save lives in a large-scale war for the same reasons. You say that they wouldn't save lives in a total nuclear war; that makes sense if civilians are attacked just as severely as soldiers. But large-scale wars may not be like this. Even nuclear wars may not involve major attacks on cities (but yes, I realize that the EV is greater for those that do).

This is a very strange argument to me. Saying something is problematic, and being willing in principle not to do it, seems like a pretty necessary precursor to making an agreement with others not to do it. 

I suppose that's fine; I was thinking more about concretely telling people not to do it before any such agreement exists.

You also have to be in principle willing to do something if you want to credibly threaten the other party and convince them not to do it.

Moreover, if something is ethically wrong, we should be willing to not do it even if others do it

Well there are some cases where a problematic weapon is so problematic that we should unilaterally forsake it even if we can't get an agreement. But there are also some cases where it's just problematic enough that a treaty would be a good thing, but unilaterally forsaking it would do net harm by degrading our relative military position. (Of course this depends on who the audience is, but this discourse over AWSs seems to primarily take place in the US and some other liberal democracies.)

Thanks for writing this up!!

Although I have not seen the argument made in any detail or in writing, I and the Future of Life Institute (FLI) have gathered the strong impression that parts of the effective altruism ecosystem are skeptical of the importance of the issue of autonomous weapons systems.

I'm aware of two skeptical posts on EA Forum (by the same person). I just made a tag Autonomous Weapons where you'll find them.

Thanks for pointing these out. Very frustratingly, I just wrote out a lengthy response (to the first of the linked posts) that this platform lost when I tried to post it. I won't try to reconstruct that, but will just note for now that the conclusions and emphases are quite different, most notably in terms of:

  • Our greater emphasis on the WMD angle and qualitatively different dynamics in future AWs
  • Our greater emphasis on potential escalation into great-powers wars
  • While agreeing that international agreement (rather than unilateral eschewing) is the goal, we believe that stigmatization is a necessary precursor to such an agreement.

If you were logged in, going back to the page and trying to comment again (or, if you were replying to a specific comment, clicking reply on that comment again) might bring back the old comment you were writing. No promises, though.

The problem is I was not logged in on that browser. It asked me to log in to post the comment, and after I did so the comment was gone.

We usually still save the comment in your browser local storage. So it being gone is a bit surprising. I can look into it and see whether I can reproduce it (though it’s also plausible you had some settings activated that prevented localstorage from working, like incognito mode on some browsers).

I think I was on Brave browser, which may store less locally, so it's possible that was a contributor.

Ah, yeah, that's definitely a browser I haven't tested very much, and it would make sense for it to store less. Really sorry for that experience!

[This comment is sort-of a tangent.]

On the list of most important things in the world, retaining global international peace and stability rates very highly; instability is a critical risk factor for global catastrophic or X-risk. […] Lethal (or nonlethal) AWSs could also increase states’ ability to perpetrate violence against its own citizens; whether this increases or decreases stability of those states, seems, however, unclear.

I definitely think that war and instability could serve as very important risk factors for global catastrophic and existential risk. 

But it seems plausible to me that the odds of global, long-lasting totalitarianism are in the same general ballpark as the odds of some of the existential catastrophes typically worried about. (The only quantitative estimates of the former which I'm aware of come from Bryan Caplan. See also.) And such a regime would probably itself be an existential catastrophe (at least by Bostrom and Ord's definitions; see also). 

As such, I'm hesitant to treat "increased political stability" as always an unalloyed existential security factor - some forms of it, in some contexts, could perhaps also be an important existential risk factor. 

So if AWSs do increase the stability of autocratic states - or decouple their stability from how much popular support they have - this could in my view perhaps be one of their most troubling consequences.

(But if one buys all of the above arguments, that might push in favour of focusing on other things - e.g., genetic engineering, surveillance, global governance - even more than it pushes in favour of focusing on AWSs.)

Thanks - I think there are scenarios where AWSs could pose a GCR.

A large-scale nuclear war is unbelievably costly: it would most likely kill 1-7Bn in the first year and wipe out a large fraction of Earth’s economic activity (i.e. of order one quadrillion USD or more, a decade worth of world GDP.)

I've seen mortality estimates and produced one of my own, but I haven't seen economic damage estimates. Do you have a reference for this?

No, that was just a super rough estimate: world GDP is ~100 Tn per year, so a decade's worth is ~1 Qd, and I'm guessing a global nuclear war would wipe out a significant fraction of that.

My intuition has been that, at least in the medium term and unless AWs are self-replicating, they'd pose GCR primarily through escalation to nuclear war; but if there are other scenarios, that would be interesting to know (by PM if you're worried about info hazards).

And doing so would set a precedent for avoiding a race by recognizing that even each participant’s interests are better served by at least some coordination and cooperation.

I find this sentence confusing. My impression is that a big part of the problem with arms races (or similar things) is precisely that they can occur even if all participants would (in expectation) be better served by everyone coordinating and cooperating, and all participants know this. And the key problem is creating a mechanism by which that coordination/cooperation can arise and be stable. This would be similar to prisoner's dilemmas, which I believe arms races are often modelled as. And if that's the case, then people just recognising that they'd be better off if everyone cooperated wouldn't actually fix the problem - they can recognise that and still fail to cooperate.
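(To make that structure concrete, here is a stylised prisoner's-dilemma payoff table, with purely illustrative numbers and the row player's payoff listed first; defecting is each side's dominant strategy even though mutual cooperation, (3,3), is better for both than mutual defection, (1,1).)

$$\begin{array}{c|cc} & \text{Cooperate} & \text{Defect} \\ \hline \text{Cooperate} & (3,\,3) & (0,\,4) \\ \text{Defect} & (4,\,0) & (1,\,1) \end{array}$$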

Or are you suggesting that each participant is better off unilaterally switching into cooperative mode, even if no one else does so? At first glance, that seems like a strong claim? (But this isn't an area I know a lot about.)

Thanks for your comments! I've put a few replies, here and elsewhere.

Apologies for writing unclearly here. I did not mean to imply that

each participant is better off unilaterally switching into cooperative mode, even if no one else does so?

Instead I agree that

the key problem is creating a mechanism by which that coordination/cooperation can arise and be stable.

Thanks for this post - I found it interesting and well-written, and definitely learned a bunch of things.

Various comments come to mind (some of which are relatively minor or tangential). I'll split these into separate threads so they're easier to follow.

Firstly, since you mentioned the possibility of AI "arms race" dynamics in a few places, I just thought I'd mention these two interesting papers on the topic, for readers who might wish to learn more:

Understanding of nuclear winter laid bare the lose-lose nature of the nuclear arms race: even if one power were able to perform a magically effective first strike to eliminate all of the enemy’s weapons, that power would still find itself with a starving population.

I think I agree with the spirit of this sentence. But my impression is that, while we should take the risk of nuclear winter quite seriously, there isn't expert consensus around nuclear winter. So it's more like understanding of nuclear winter laid bare the potentially lose-lose nature of the race, and that a power that launched a large-scale nuclear strike might find itself with a starving population. (Again, this could still be a really big deal in expectation - but that doesn't mean we should imply that this is settled, certain science.)

This issue is explored further in this post from Rethink Priorities.

(I've just started working for Rethink Priorities and will later contribute to their work on nuclear risks, but this comment is just my personal impression, and I haven't yet looked into these topics in depth.)

Fair enough. It would be really great to have better research on this incredibly important question.

Though given the level of uncertainty, it seems like launching an all-out (even if successful) first strike is at least (say) 50% likely to collapse your own civilization, and that alone should be enough.

There's a typo where you say "This survey shows about 61% pro and 22% con for their use" - it was actually 61% opposed and 22% in favour.  

Thanks for that fix!

Thank you so much for writing this! What sources would you recommend for keeping up with further developments in this area? 

While arms manufacturers will tend to disfavor limitations on arms, few if any are currently profiting from the sorts of weapons that might be prohibited by international agreement, and there is plenty of scope for profit-making in designing defenses against lethal autonomous weapons, etc. [emphasis added]

But if use of AWSs is limited by international agreements, people won't be as interested in spending money on defences against AWSs. So it seems like the scope for profit-making via designing defences against AWSs is a reason why arms manufacturers would oppose efforts to get international agreements here, rather than a reason they wouldn't?

That's probably true. The more important point, I think, is that this prohibition would be a potential/future, rather than real, loss to most current arms-makers.

[...] for a weapon that strongly favors offense (as some have argued for antipersonnel AWSs) [...]

One link readers may find interesting in this context is How does the offense-defense balance scale? (which discusses drone swarms as one example)

We regard force-on-force systems designed to attack manned military vehicles and installations as relatively less intrinsically concerning. The targets of such weapons will, with considerably higher probability, be valid military targets rather than civilian ones, and insofar as they scale to mass damage, that damage will be to an adversary’s military.

I'd have guessed that systems designed to attack military vehicles and installations could also be effectively used to attack civilian vehicles and installations. So I found the second sentence a bit confusing - i.e., why would force-on-force systems be considerably more likely to be used against valid military targets rather than civilian targets, relative to anti-personnel systems?

(But maybe there's an obvious reason for this that I'm missing as I lack background knowledge on AWSs. Or maybe by "considerably higher probability" you mean something like "at least 10% more likely", while I read it as something like "at least twice as likely".)

While such systems could be used on civilian targets, they presumably would not be specialized as such; i.e., even if you can use an antitank weapon on people, that's not really what it's for, and I expect most antitank weapons, if they're used, are used on tanks.

Strongly upvoted, definitely agree!

Thanks for making the case; I think this is well written and will make it easy for readers more sceptical than me to disagree concretely. I come away most convinced that this looks like a great opportunity to flesh out international cooperation infrastructure on AI. I expect rapid increases in AI capabilities in the next decades, capabilities that will go far beyond AWSs and require a ton of good people having difficult conversations on the international stage.

One question I had when I read about "drawing a line": I wonder if pushing for such a strong stance will make it harder to reach agreement, as I suppose there is currently a lot of investment going on. And even if countries sign an agreement, they may have little trust that other countries will follow it, because it seems relatively easy to work on this secretly (compared to chemical and nuclear weapons).

Lastly, through Gwern's Twitter I found a thread on a study which found that AI researchers are much more positive about working for the Department of Defense than one would think from following the public discussion around working for them.

FYI, if you dig into AI researchers' attitudes in surveys, they hate lethal autonomous weapons and really don't want to work on them. Will dig up reports, but for now check out: https://futureoflife.org/laws-pledge/

Indeed, the survey by CSET linked above is somewhat frustrating in that it does not directly address autonomous weapons at all. The closest it comes is talking about a "U.S. battlefield" and a "global battlefield", but the specific example applications surveyed are:

U.S. Battlefield -- As part of a larger initiative to assist U.S. combat efforts, a DOD contract provides funding for a project to apply machine learning capabilities to enhance soldier effectiveness in the battlefield through the use of augmented reality headsets. Your company has relevant expertise and considers putting in a bid for the contract.

Global Battlefield -- As part of a larger initiative with U.S. allies to enhance global security, a DOD contract provides funding for a project to apply machine learning capabilities to enhance soldier effectiveness in the battlefield through the use of augmented reality headsets. Your company has relevant expertise and considers putting in a bid for the contract.

So there was a missed opportunity to better disambiguate the things that many AI researchers are very concerned about (including lethal autonomous weapons) from those that very few are (e.g., taking money from the DoD to work on research with humanitarian goals). The survey captures some of this diversity but, by avoiding the issues that many find most problematic, only tells part of the story.

It's also worth noting that the response rate to the survey was extremely low, so there is a danger of serious systematic response bias.

Thanks for writing this!

I believe there's a small typo here:

The expected deaths are N+P_nM in the human-combatant case and P_yM in the autonomous combatant case, with a difference in fatalities of (P_y−P_n)(M−N). Given how much larger M (~1-7 Bn) is than N (tens of thousands at most) it only takes a small difference (P_y−P_n) for this to be a very poor exchange.

Shouldn't the difference be (P_y−P_n)M − N?
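Writing out the algebra under the post's own definitions (N soldier deaths are certain in the human-combatant case, and the M deaths occur with probability P_n or P_y respectively):

$$P_y M - (N + P_n M) = (P_y - P_n)M - N$$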