
In this post I comprehensively review the risks and upsides of lethal autonomous weapons (LAWs). I incorporate and expand upon the ideas in this previous post of mine and the comments, plus other recent debates and publications.

My principal conclusions are:

1. LAWs are more likely to be a good development than a bad one, though there is quite a bit of uncertainty and one could justify being neutral on the matter. It is not justified to expend effort against the development of lethal autonomous weapons, as the expected benefits of doing so do not outweigh the costs.

2. If someone still opposes lethal autonomous weapons, they should focus on directly motivating steps to restrict their development with an international treaty, rather than fomenting general hostility to LAWs in Western culture.

3. The concerns over AI weapons should pivot away from accidents and moral dilemmas, towards the question of who would control them in a domestic power struggle. This issue is both more important and more neglected.

Background: as far as I can tell, there has been no serious analysis judging whether the introduction of LAWs would be a good development or not. Despite this lack of foundation, a few members in or around the EA community have made efforts to stop the new technology from being created, most notably the Future of Life Institute. So we should take a careful look at this issue and see whether these efforts ought to be scaled up, or if they are harmful or merely a waste of time.

This article is laid out as a systematic classification of potential impacts. I’m not framing it as a direct response to any specific literature because the existing arguments about AI weapons are pretty scattered and unstructured.

Responsibility: can you hold someone responsible for a death caused by an LAW?

Opponents of LAWs frequently repeat the worry that they prevent us from holding people responsible for bad actions. But the idea of “holding someone responsible” is vague language and there are different reasons to hold people responsible for bad events. We should break this down into more specific concerns so that we can talk about it coherently.

Sense of justice to victims, family and compatriots

When crimes are committed, it’s considered beneficial for affected parties to see that the perpetrators are brought to justice. When a state punishes one of its own people for a war crime, it’s a credible signal that it disavows the action, providing reassurance to those who are suspicious or afraid of the state. Unpunished military crimes can become festering wounds of sociopolitical grievance. (See, for example, the murder of Gurgen Margaryan.)

But if an LAW takes a destructive action, there are many people who could be punished for it:

· The operational commander, if she authorized the use of LAWs somewhere where it wasn’t worth the risks.

· The tactical commander or the Range Safety Officer, if the LAW was not employed properly or if appropriate precautions were not taken to protect vulnerable people.

· The maintenance personnel, if they were negligent in preventing or detecting a deficiency with the weapon.

· The procurement team, if they were negligent in verifying that the new system met its specified requirements.

· The weapons manufacturer, if they failed to build a system that met its specified/advertised requirements.

· The procurement team, if they issued a flawed set of requirements for the new system.

If everyone does their job well and a deadly accident still happens, the affected parties may not feel much grievance at all, since it was simply a tragedy, and they may not demand justice in the first place. Of course, given the uncertainty and disinformation of the real world, they may falsely believe that someone deserves to be punished; in that case someone can essentially be scapegoated, just as happens with human-caused accidents when someone in the bureaucracy takes the blame. And finally, the AI itself might be punished, if that provides some sense of closure; it seems ridiculous now, but with more sophisticated LAWs, that could change.

Overall, when it comes to the need to provide a sense of justice to aggrieved parties after war crimes and accidents, the replacement of humans with LAWs leaves a variety of options on the table. In addition, it might help prevent the sense of grievance from turning up in the first place.

Institutional incentives against failures

We hold people responsible for crimes because this prevents crimes from becoming more common. If LAWs are developed and deployed but no one in this process faces penalties for bad behavior, there would be no serious restraint against widespread, repeated failures.

Looking at recent AI development, we can see that incentives against AI failure really don’t require the identification of a specific guilty person. For instance, there has been significant backlash against Northpointe’s COMPAS algorithm after a ProPublica analysis argued that it produced racially biased results. Sometimes these incentives don’t require failures to occur at all; Google halted its involvement with Project Maven after employees opposed the basic idea of Google working on military AI tech. And Axon rejected the use of facial recognition on its body cam footage after its ethics panel decided that such technology wouldn’t be suitably beneficial. There is a pervasive culture of fear about crimes, accidents, and disparate impacts perpetrated by AI, such that anyone working on this technology will know that their business success depends in part on their ability to avoid such perceptions.

That being said, this kind of populist moral pressure is fickle and unreliable, as governments and companies are sometimes willing to overlook it. We should have proper institutional mechanisms rather than relying on social pressure. And ideally, such mechanisms could dampen the flames of social pressure, so that the industry is policed in a manner which is more fair, more predictable, and less burdensome.

But as noted above, if an LAW commits a bad action, there are many actors who might be punished for it, depending on where the negligence lay: the operational commander, the tactical commander or range safety officer, the maintenance personnel, the procurement team, or the weapons manufacturer.

The idea that automated weapons prevent people from being punished for failures is easily rejected by anyone with military experience because we are well accustomed to the fact that, even if Private Joe is the one who pulled the trigger on a friendly or a civilian, the fault often lies with an officer or his first-line noncommissioned officer who gets disciplined accordingly. Replacing Private Joe with a robot doesn’t change that. The military has an institutional concern for safety and proactively assigns responsibility for risk mitigation.

That being said, there are some problems with ‘automation bias’ where people are prone to offload too much responsibility to machines (Scharre 2018), thus limiting their ability to properly respond to institutional incentives for proactive safety. However this could be ameliorated with proper familiarization as these autonomous systems are more thoroughly used and understood. It’s not clear if this kind of bias will persist across cultures and in the long run. It’s also not clear if it is greater than humans’ tendency to offload too much responsibility to other humans.

Are the existing laws and policies adequate?

Even if accountability for LAWs is theoretically acceptable, the existing policies and laws for things like war crimes and fratricide might have loopholes or other deficiencies when it comes to bad actions perpetrated by LAWs. I don’t know if they actually do. But if so, the laws and guidelines should and will be updated appropriately as force compositions change. There may be some deficiencies in the meantime, but remember that banning LAWs would take time and legal work to achieve in the first place. In the time that it takes to achieve a national or international LAW ban, laws and policies could be suitably updated to accommodate LAW failures anyway. And actual LAW development and deployment will take time as well.

Conclusion

The need to hold someone responsible for failures generally does not pose a major reason against the development and deployment of LAWs. There is some potential for problems where operators offload too much responsibility to the machine because they misunderstand its capabilities and limitations, but it’s not apparent that this gives much overall reason to oppose the technology.

Existing laws and policies on war may be insufficient to accommodate LAWs, but they can be expected to improve in the time that it would take for a LAW ban, or for actual LAW development and deployment, to occur.

Do LAWs create a moral hazard in favor of conflict?

An obvious benefit of LAWs is that they keep soldiers out of harm’s way. But some people have worried that removing soldiers from conflict zones removes a strong political incentive against warmaking, which means more war and more casualties from collateral damage. This argument would also apply against technologies like body armor and the replacement of conscripted armies with smaller professional fighting forces, as these developments also reduce battlefield deaths, but curiously people don’t condemn those developments. Thus, the use of this argument is suspiciously arbitrary. And it does seem wrong to say that we should all use conscripted armies as cannon fodder, or ban body armor, because more grievous battlefield casualties will ensure that politicians don’t start too many wars. I would sooner assume that the net effect of military safety innovations is to reduce overall deaths, despite small potential increases in the frequency of conflict. International relations literature generally doesn’t focus on military deaths as being the sole or primary obstacle to military conflict; in reality states follow patterns of broader incentives like state security, regime preservation, international law and norms, elite opinion, popular opinion and so on, which are in turn only partially shaped by battlefield casualties. You could eliminate combatant deaths entirely, and there would still be a variety of reasons for states to not go to war with each other. For one thing, they would definitely seek to avoid the risk of civilian casualties in their own countries.

Still, it’s not obvious that this logic is wrong. So let’s take a more careful look. A key part of this is the civilian casualty ratio: how many victims of war are civilians, as opposed to soldiers? Eckhardt (1989) took a comprehensive look at war across centuries and found that the ratio is about 50%. A New York Times article claimed that a 2001 Red Cross study identified a civilian casualty ratio of 10:1 (so 91% of casualties are civilians) for the later 20th century, but I cannot find the study anywhere. Since we are specifically looking at modern, technological warfare, I decided to look at recent war statistics on my own.
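To make the arithmetic behind the figures below explicit: the civilian casualty ratio is simply civilian deaths as a share of total proximate (civilian plus military) deaths. A minimal sketch of the conversion, using the 10:1 figure above and the Falklands numbers given further below:

```python
def civilian_casualty_ratio(civilian_deaths: float, military_deaths: float) -> float:
    """Civilian share of total proximate deaths from a conflict."""
    return civilian_deaths / (civilian_deaths + military_deaths)

# A 10:1 civilian-to-military ratio corresponds to ~91% civilian casualties.
print(f"{civilian_casualty_ratio(10, 1):.0%}")    # 91%

# Falklands War figures (see below): 3 civilian vs. roughly 900 military deaths.
print(f"{civilian_casualty_ratio(3, 900):.1%}")   # 0.3%
```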

Conventional, interstate conflict

The deadliest recent interstate war was the 1980-1988 Iran-Iraq War. Estimates vary widely, but the one source which describes both military and civilian casualties lists 900,000 military dead and 100,000 civilian dead. Recent census evidence suggests the military death tolls might actually have been far lower, possibly just a few hundred thousand. One could also include the Anfal Genocide for another 50,000-100,000 civilian deaths. So the civilian casualty ratio for the Iran-Iraq War was between 10% and 40%.

In the 1982 Falklands War, 3 civilians were killed compared to 900 military personnel, for a civilian casualty ratio of 0.3%.

Casualty figures for the 1982 Lebanon War are deeply disputed and uncertain. Total deaths probably amount to 15,000-20,000. A study by an independent newspaper in Beirut, supplemented by Lebanese officials, found that approximately half of the local dead were civilians; meanwhile an official at a relief organization claimed that 80% of the dead were civilians (Race and Class). Adding the 657 Israeli military deaths and the 460-3,500 civilian massacre victims could change this figure a little bit. Yasser Arafat and the Israeli government have both made more widely differing claims, which should probably be considered unreliable. Overall, the civilian casualty ratio for the 1982 Lebanon War was probably between 50% and 80%.

A 2002 Medact report summarizes a couple sources on Iraqi casualties in the 1991 Gulf War, but Kuwaiti and coalition deaths should be included as well. Then the Gulf War caused 105,000-125,000 military deaths and 8,500-26,000 proximate civilian deaths, for a civilian casualty ratio between 8% and 17%. The number of civilian casualties can be considered much higher if we include indirect deaths from things like the worsened public health situation in Iraq, but we should not use this methodology without doing it consistently across the board (I will return to this point in a moment).

The Bosnian War killed 57,700 soldiers and 38,200 civilians, for a civilian casualty ratio of 40%.

Azeri civilian casualties from the Nagorno-Karabakh War are unknown; just looking at the Armenian side, there were 6,000 military and 1,264 civilian deaths, for a civilian casualty ratio of 17%.

The Kosovo War killed 2,500-5,000 soldiers and 13,548 civilians, for a civilian casualty ratio between 73% and 84%.

The 2003 invasion of Iraq killed approximately 6,000 combatants and 4,000 civilians according to a Project on Defense Alternatives study, for a civilian casualty ratio of 40%.

The 2008 Russo-Georgian War killed 350 combatants and 400 civilians, for a civilian casualty ratio of 53%.

The War in Donbass, while technically not an interstate conflict, is still a very recent case of conventional warfare between organized advanced combatants. There have been 10,000 military deaths and 3,300 civilian deaths, for a civilian casualty ratio of 25%.

The international military intervention against ISIL was for the most part fought against a conventionally organized proto-state, even though ISIL wasn’t officially a state. There were 140,000 combatant deaths and 54,000 civilian deaths in Iraq and Syria, for a civilian casualty ratio of 28%.

The 2020 Karabakh War killed 10,000-20,000 soldiers and 150 civilians, for a civilian casualty ratio of around 1%. This is in spite of the fact that both sides repeatedly targeted cities, even with cluster bombs.

There have also been a few wars where I can’t find any information indicating that there was a significant number of civilian casualties, like the Toyota War and the Kargil War, though I haven’t done a deep dive through sources.

Overall, it looks like civilians usually constitute a minority of proximate deaths in modern interstate warfare, though it can vary greatly. However, civilian casualties could be much higher if we account for all indirect effects. For instance, the Gulf War led to a variety of negative health outcomes in Iraq, meaning that total civilian deaths could add up to several hundred thousand (Medact). When considering the long-run effects of things like refugee displacement and institutional shock and decay, which often have disproportionate impact on children and the poor, it may well be the case that civilians bear the majority of burdens from modern conventional warfare.

One issue with this line of reasoning is that it must also be applied to alternative practices besides warfare. For instance, if disputes are answered with economic sanctions rather than warfare, then similar indirect effects may worsen or end many civilian lives, while leaving military personnel largely unscathed. Frozen conflicts can lead to impoverished regions with inferior political status and systematic under-provisioning of institutions and public goods. If disputes are resolved by legal processes or dialogue, then there probably aren’t such negative indirect effects, but this is often unrealistic for serious interstate disputes on the anarchic international stage (at least not unless backed up by a credible willingness to use military force). Moreover, veteran mortality and trauma as an indirect result of combat experience may also be significant.

So let’s say that our expectation for the average civilian casualty ratio in the future is 50%. Then, if militaries became completely automated, the frequency of war would have to more than double in order for total deaths to increase. On its face, this seems highly unlikely. If the average civilian casualty ratio were 75%, then war would only have to become more than one-third more frequent after full automation for total deaths to increase, but this still seems unlikely to me. It is difficult to see how nations will become much more willing to go to war when, no matter how well automated their militaries are, they will still face high numbers of civilian casualties.
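A quick sanity check on that arithmetic (a minimal sketch; the 50% and 75% ratios are just the illustrative assumptions above):

```python
def breakeven_war_frequency_multiplier(civilian_ratio: float) -> float:
    """How much more frequent war must become, once full automation removes
    all combatant deaths, before total deaths start to increase.

    Normalize deaths per war to 1 before automation; afterwards only the
    civilian share remains, so the breakeven multiplier is its reciprocal.
    """
    return 1.0 / civilian_ratio

for ratio in (0.50, 0.75):
    mult = breakeven_war_frequency_multiplier(ratio)
    print(f"civilian ratio {ratio:.0%}: war must become more than {mult:.2f}x "
          f"as frequent (+{mult - 1:.0%}) for total deaths to rise")
# civilian ratio 50%: war must become more than 2.00x as frequent (+100%) for total deaths to rise
# civilian ratio 75%: war must become more than 1.33x as frequent (+33%) for total deaths to rise
```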

NATO participation in overseas conflicts

The fact that people use the moral hazard argument against AI weapons but haven’t used it against other developments such as body armor may be partially caused by political circumstance. The typical frame of reference for Western anti-war attitudes is not near-peer conventional conflict, but recent wars (usually counterinsurgencies) where America and Western European states have suffered relatively few casualties compared to very large numbers of enemies and civilians. For instance, in the Iraq War, coalition deaths totaled 4,800 whereas Iraqi deaths totaled 35,000 (combatants) and 100,000 (civilians). And Western liberal states are famously averse to suffering military casualties, at least when voters do not perceive the war effort as necessary for their own security. So in this context, the ‘moral hazard’ argument makes more sense.

However, in this specific context, it’s not clear that we should desire less warmaking willpower in the first place! Note that the worries over casualties were not a significant obstacle to the rapid and operationally successful invasions of Kuwait, Iraq and Afghanistan. Rather, such aversion to casualties primarily serves as an obstacle to protracted counterinsurgency efforts like the follow-on campaigns in Iraq and Afghanistan. And these stability efforts are good at least half the time. Consider that America’s withdrawal from Iraq in 2011 partially contributed to the growth of ISIL and the 2014-2017 Iraqi Civil War (NPR). The resurgence of the Taliban in Afghanistan was partially caused by the lack of international forces to bolster Afghanistan’s government in the wake of the invasion (Jones 2008). In 2006, 2007, and 2009, the majority of Afghans supported the presence of US and NATO forces in Afghanistan, with an overwhelming majority opposing Taliban rule (BBC). Military operations can be very cost effective compared to domestic programs: Operation Inherent Resolve killed ISIL fighters for less than $200,000 each, per US government statistics (Fox News); this is low compared to the cost of neutralizing a violent criminal or saving a few lives in the US. A variety of research indicates that peacekeeping is a valuable activity.

While some people have argued that a more pacifist route would promote stronger collective norms against warfare in the long run, this argument cuts both ways: stability operations can promote stronger norms against radical Islamist ideology, unlawful rebellion, ethnic cleansing, violations of UN and NATO resolutions, and so on.

In any case, America is slowly de-escalating its involvement in the Middle East in order to focus more on posturing for great power competition in the Indo-Pacific. So this concern will probably not be as important in the future as it seems now.

Conclusion

For conventional conflicts, reducing soldier casualties probably won’t increase the frequency of warfare very much, so LAW use will reduce total deaths (assuming that they function similarly to humans, which we will investigate below). For overseas counterinsurgency and counterterrorism operations such as those performed by NATO countries in the Middle East, the availability of LAWs could make military operations more frequent, but this could just as easily be a good thing rather than a bad one (again, assuming that the LAWs have the same effects on the battlefield as humans).

Will LAWs worsen the conduct of warfare?

Accidents

AI systems can make mistakes and cause accidents, but this is a problem for human soldiers as well. Whether accident rates will rise, fall or remain the same depends on how effective AI is when the military decides to adopt it for a given battlefield task. Rather than guessing at how satisfactory or unsatisfactory current AI systems seem to us, we should start with the fact that AI will only be adopted by militaries if it has a sufficient level of battlefield competence. For instance, it must be able to discriminate friendly uniforms from enemy uniforms so that it doesn’t commit fratricide. If a machine can do that as well as a human can, then it could similarly distinguish enemy uniforms from civilian clothes. Since the military won’t adopt a system that’s too crude to distinguish friendlies from enemies, it won’t adopt one that can’t distinguish enemies from civilians. In addition, the military already has incentives to minimize collateral damage; rules of engagement tend to be adopted by the military for military purposes rather than being forced by politicians. Preventing harm to civilians is commonly included as an important goal in Western military operations. Less professional militaries and autocratic countries tend to care less about collateral damage, but in those militaries, it’s all the more important that their undisciplined soldiers be replaced by more controllable LAWs which will be less prone to abuses and abject carelessness.

You could worry that LAWs will have a specific combination of capabilities which renders them better for military purposes while being worse all-things-considered. For instance, maybe a machine will be better than humans at navigating and winning on the battlefield, but at the cost of worse accidents towards civilians. A specific case of this is the prevalent idea that AI doesn’t have enough “common sense” to behave well at tasks outside its narrow purview. But it’s not clear if this argument works because common sense is certainly important from a warfighting point of view; conversely, it may not be one of the top factors that prevents soldiers from causing accidents. Maybe AI’s amenability to updates, its computational accuracy, or its great memory will make it very good at avoiding accidents, with its lack of common sense being the limiting factor preventing it from having battlefield utility.

Even if AI does lead to an increase in accidents, this could easily be outweighed by the number of soldiers saved by being automated off the battlefield. Consider the introduction of Phalanx CIWS. This automated weapon system has killed two people in accidents. Meanwhile, at least four American CIWS-equipped ships have suffered fatal incidents: the USS Stark (struck by an Iraqi missile in 1987), the USS Cole (damaged by a suicide bomb in 2000), the USS Fitzgerald (collided with a merchant ship in 2017), and the USS John S. McCain (collided with a merchant ship in 2017). There were 71 deaths from these incidents out of approximately a thousand total crewmen. Replacing the ships’ nine Phalanx systems with, say, eighteen (less effective) manual autocannon turrets would have added thirty-six gun crew, who would suffer 2.5 statistical deaths given the accident mortality rate on these four ships. This does not even include possible deaths from minor incidents, or from incidents in foreign navies which also use Phalanx. Thus, it appears that the introduction of Phalanx is more likely to have decreased accident deaths.
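The arithmetic behind that comparison, as a minimal sketch using the figures above (the thirty-six extra gun crew is the counterfactual assumed in the text):

```python
# Figures from the four CIWS-equipped ships discussed above.
phalanx_accident_deaths = 2        # people killed in Phalanx accidents
deaths_in_ship_incidents = 71      # Stark, Cole, Fitzgerald, John S. McCain
total_crew_on_those_ships = 1_000  # approximate combined crew
extra_gun_crew = 36                # hypothetical crew for manual turrets

incident_mortality_rate = deaths_in_ship_incidents / total_crew_on_those_ships
expected_extra_deaths = extra_gun_crew * incident_mortality_rate

print(f"incident mortality rate: {incident_mortality_rate:.1%}")       # 7.1%
print(f"expected extra gun-crew deaths: {expected_extra_deaths:.2f}")  # 2.56
print(f"actual Phalanx accident deaths: {phalanx_accident_deaths}")
# Roughly 2.5 statistical deaths avoided versus 2 caused: a net reduction.
```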

Overall, it’s not clear if accidental casualties will increase or decrease with automation of the battlefield. In the very long run however, machines can be expected to surpass humans in many dimensions, in which case they will robustly reduce the risk of accidents. If we delay development and reinforce norms and laws against LAWs now, it may be more difficult to develop and deploy such clearly beneficial systems in the longer-run future.

Ethics

It’s not immediately clear whether LAWs would be more or less ethical than human soldiers. Unlike soldiers, they would not be at risk of independently and deliberately committing atrocities. Their ethical views could be decided more firmly by the government based on the public interest, as opposed to humans who lack ethical transparency and do not change their minds easily. They could be built free of vices like anger, libido, egotism, and fatigue which can lead to unethical behavior.

One downside is that their behavior could be more brittle in unusual situations where they have to make an ethical choice beyond the scope of regular laws and procedures, but these are outlier scenarios which are minor in comparison to other issues (Scharre 2018), and often it’s not clear what the right action is in those cases anyway. If the right action in a situation will be unclear, then we shouldn’t be worried about what action will be taken. Moral outcomes matter, not the mere process of moral deliberation. If there is a moral dilemma where people intractably disagree about the right thing to do, then settling the matter with even a coin flip would be just as good as randomly picking a person to use their personal judgment.

LAWs increase the psychological distance from killing (Scharre 2018), potentially reducing the level of restraint against dubious killing. However, it’s a false comparison to measure the psychological distance of an infantryman against the psychological distance of a commander employing LAWs. The infantryman will be replaced by the LAW, which has no psychology at all. Rather, we should compare the psychological distance of a commander using infantrymen against the psychological distance of a commander using LAWs. In this case, there will be a similar level of psychological distance. In any case, increased psychological distance from killing can be beneficial for the mental health of military personnel.

Some object that it is difficult or impossible to reach a philosophical consensus on the ethical issues faced by AI systems. However, this carries no weight as it is equally impossible to reach a philosophical consensus on how humans should behave either. And in practice, legal/political compromises, perhaps in the form of moral uncertainty, can be implemented despite the lack of philosophical consensus.

AI systems could be biased in their ethical evaluations, but this will generally be less significant than bias among humans.

Overall, it’s not really clear whether LAWs will behave more or less ethically than humans, though I would expect them to be a little more ethical because of the centralized regulation and government standards. But in the long run, AI behavior can become more ethically reliable than human behavior on the battlefield. If we delay development and reinforce norms and laws against LAWs now, it may be more difficult to develop and deploy such clearly beneficial systems in the longer-run future.

Escalation

There is a worry that autonomous weapons will make tense military situations between non-belligerent nations less stable and more escalatory, prompting new outbreaks of war (Scharre 2018). It’s not clear if accidental first shots will become more frequent: LAW behavior could be more tightly controlled to avoid mistakes and incidents. However, they do have faster response times and more brittleness in chaotic environments, which could enable quicker escalation of violent situations, analogous to a “flash crash” on Wall Street.

However, the flip side of this is that having fewer humans present in these kinds of situations implies that outbreaks of violence will have less political sting and therefore more chance of ending with a peaceful solution. A country can always be satisfactorily compensated for lost machinery through financial concessions; the same cannot be said for lost soldiers. Flash firefights often won’t lead to war in the same sense that flash crashes often don’t lead to recessions.

A related benefit of rapid reactions is that they make it more risky for the enemy to launch a surprise strike. When a state can’t count on human lethargy to inhibit enemy reactions, it’s going to think twice before initiating a war.

Overall, it seems that escalation problems will get worse, but these countervailing considerations reduce the magnitude of the worry.

Hacking

One could argue that the existence of LAWs makes it possible for hackers such as an unfriendly advanced AI agent to take charge of them and use them for bad ends. However, in the long run a very advanced AI system would have many tools at its disposal for capturing global resources, such as social manipulation, hacking, nanotechnology, biotechnology, building its own robots, and things which are beyond current human knowledge. A superintelligent agent would probably not be limited by human precautions; making the world as a whole less vulnerable to ASI is not a commonly suggested strategy for AI safety since we assume that once it gets onto the internet then there's not really anything that can be done to stop it. Plus, it's wrong to assume that an AI system with battlefield capabilities, which is just as good at general reasoning as the humans it replaced, would be vulnerable to a simple hack or takeover. If a machine can perform complex inference regarding military rules, its duties on the battlefield, and the actions it can take, then it's likely to have plenty of resistance and skepticism about dubious commands.

The more relevant issue is the near term. In this case, autonomous technologies might not be any less secure than many other technologies which we currently rely on. A fighter jet has electronics, as does a power plant. Lots of things can theoretically be hacked, and hacking an LAW to cause some damage isn't necessarily any more destructive than hacking infrastructure or a manned vehicle. Replace the GPS coordinates in a JDAM bomb package and you've already figured out how to use our existing equipment to deliberately cause many civilian casualties. Remote-controlled robots are probably easier to hack than LAWs, because they rely on another channel sending them orders. Things like this don't happen often, however. Also, the military has perfectly good incentives on its own to avoid using systems which are vulnerable to hacking. By far the largest risk with hacking is enemy actors hacking weapons for their own purposes – something which the military will be adequately worried about avoiding.

Trust

Lyall and Wilson (2009), examining both a comprehensive two-century data set and a close case study of two U.S. Army divisions in Iraq, found that mechanization worsens the prospects for effective counterinsurgency. Putting troops behind wheels and armor alienates the populace and increases opposition to the military involvement. And I think this can be a case of perverse incentives rather than just a military error because, as noted above, liberal democracies are loss-averse in protracted counterinsurgencies. It takes a lot of political willpower to suffer an additional twenty American troop deaths even if it means that the campaign will be wrapped up sooner with one hundred fewer Iraqi deaths.

I presume that if LAWs are used for counterinsurgency operations then they are very likely to display this same effect. They could be overused to save soldier lives in the short term with worse results for the locals and for the long term.

However, running LAWs in COIN operations in close proximity to civilians puts them in one of the most complex and difficult environments for an AI to navigate safely and effectively. I don’t think it makes military sense to do this until AI is very competent. In addition, robotic vehicles could easily be used in counterinsurgencies if LAWs are unavailable, and they will cause similarly negative perceptions. (Locals may not notice the difference. I vaguely recall a story of US infantry in the invasion of Afghanistan who, with all their body armor and Oakleys, were actually thought by the isolated villagers to be some kind of robotic or otherworldly visitors. I don’t remember the exact details but you get the point.) So while I would definitely urge extra caution about using LAWs for counterinsurgency, I consider this only a small reason against the development of LAWs in general.

Conclusion

The weight of evidence does not show that LAWs will display significantly more accidents or unethical behavior; in fact, they are more likely to be safer because of centralized standards and control.

Hacking vulnerability will be increased, but this is not an externality to conventional military decision making.

The main risk is that LAWs could cause conflicts to start and escalate more quickly. However, the automation of militaries would help prevent such an incident from becoming a casus belli, and could provide a more predictable deterrent against initiating conflicts.

LAWs could worsen our counterinsurgency efforts, but this is a relatively small issue since they are inherently unsuited for the purpose, and the alternative might be equally problematic robots anyway.

How will LAW development affect general AI development?

Arms races

You might say that LAWs will prompt an international arms race in AI. Arms races don’t necessarily increase the risk of war; in fact they can decrease it (Intriligator and Brito 1984). We should perhaps hold a weak overall presumption to avoid arms races. It’s more apparent that an AI arms race could increase the risk of a catastrophe with misaligned AGI (Armstrong et al 2016) and would be expensive. But faster AI development will help us avoid other kinds of risks unrelated to AI, and it will expedite humanity's progress and expansion.

Moreover, no military is currently at the cutting edge of AI or machine learning (as far as we can tell). The top research is done in academia and the tech industry, at least in the West. Finally, if there is in fact a security dilemma regarding AI weaponry, then activism to stop it is unlikely to be fruitful. The literature on the efficacy of arms control in international relations is rather mixed; it seems to work only as long as the weapons are not actually vital for national security.

Secrecy

AI development for military purposes would likely be carried out in secret, making it more difficult for states to coordinate and share information about its development. However, that doesn’t prevent such coordination and sharing from occurring with non-military AI. Moreover, Armstrong et al (2016) showed that uncertainty about AI capabilities actually increases safety in an arms race to AGI.

Safety research

A particular benefit of militaries investing in AI research is that their systems are frequently safety-critical and tasked with taking human life, and therefore they are built to higher-than-usual standards of verification and safety. Building and deploying AI systems in tense situations where many ethics panels and international watchdogs are voicing public fears is a great way to improve the safety of AI technology. It could lead to collaboration, testing and lobbying for safety and ethics standards that can be applied to many types of AI systems elsewhere in society.

Conclusion

LAW development could lead to an arms race, possibly with a lot of uncertainty about other countries’ capabilities. This would be costly and would probably increase the risk of misaligned AGI. However, it might lead to faster development of beneficial AI technology, not only in terms of basic competence but also in terms of safety and reliability.

How will LAWs change domestic conflict?

Crime and terrorism

Some fear that LAWs could be cheaply used to kill people without leaving much evidence behind. However, autonomous targeting is clearly worse for an intentional homicide; it’s cheaper, easier and more reliable if you pilot the drone yourself. A drone with a firearm was built and widely publicized in 2015 but, as far as I know, there have been no drone murders. Autonomy could be more useful in the future if drone crime becomes such a threat as to prompt people to install jamming devices in their homes, but then it will be very difficult for the drone to kill someone inside or near their house (which can also be guarded with simple improvements like sensors and translucent windows).

People similarly worry that assassinations will be made easier by drones. Autonomy could already be useful here because top political leaders are sometimes protected with jamming devices. However, when a politician is traveling in vehicles, speaking indoors or behind bullet-resistant glass, and protected by security teams, it becomes very difficult for a drone to harm them. Small simple drones with toxins or explosives can be blocked by security guards. One drone assassination attempt was made in 2018; it failed (though most assassination attempts do anyway). In the worst case, politicians can avoid public events entirely, only communicating from more secure locations to cameras and tighter audiences; this would be unfortunate but not a major problem for government functioning.

The cheapness of drones makes them a potential tool for mass killing. Autonomous targeting would be more useful if someone is using large numbers of drones. However, it’s not clear how terrorists would get their hands on military targeting software – especially because it’s likely to be bound up with military-specific electronic hardware. We don’t see tools like the Army Battle Command System being used by Syrian insurgents; we don’t see pirated use of foreign military software in general. The same thing is likely to apply to military AI software. And the idea that the military would directly construct mass murder drones and then lose them to terrorists is nonsense. Moreover, civilian technology could easily be repurposed for the same ends. So, while drone terrorism is a real risk, banning LAWs from the military is unlikely to do anything to prevent it. You might be able to slightly slow down some of the general progress in small autonomous robots, perhaps by banning more technology besides LAWs, but that would inhibit the development of many beneficial technologies as well.

In addition, there are quite a few countermeasures which could substantially mitigate the impact of drone terrorism: both simple passive measures, and active defenses from equally sophisticated autonomous systems.

Meanwhile, LAWs and their associated technologies could be useful for stopping crime and terrorism.

Suppressing rebellions

LAWs (or less-than-lethal AI weapons using similar technology) could be used for internal policing to stifle rebellions. Their new military capability likely won’t change much, as rebels could obtain their own AI, and rebels have usually taken asymmetric approaches with political and tactical exploits to circumvent their dearth of heavy organized combat power. And, per Lyall and Wilson (2009), using mechanized forces against an insurgency provokes more backlash and increases the chance that the counterinsurgency will fail; by similar logic, LAWs probably wouldn’t be productive in reinforcing most fragile states unless they were significantly more capable than humans.

The relevant difference is that the AI could be made with blind obedience to the state or to the leader, so it could be commanded to suppress rebellions in cases where human policemen and soldiers would stand down. In a democracy like the US, the people might demand that LAWs be built with appropriate safeguards to respect the Constitution and reject unlawful orders no matter what the President demands, or perhaps with a rule against being used domestically at all. But in autocratic countries, the state might be able to deploy LAWs without such restrictions.

In 2018, the Sudanese Revolution and the Armenian Revolution were both aided by military personnel breaking away from the government. And in 2019, Bolivian president Evo Morales was made to step down by the police and military. Therefore, this is a real issue.

LAWs might have another effect: making effective revolutions more common by forcing people to stick to peaceful methods, which are, per Chenoweth (2011), more successful. But this is debatable, because it assumes that protest and rebellion movements currently behave irrationally by choosing violence when nonviolence would serve them better. Also, peaceful rebellions in general might become less successful if they no longer pose a credible threat of violence.

LAWs might make revolutions easier by reducing the size of the military. Military personnel who are trained on weapons systems like artillery and warships can still be used in a secondary capacity as security when the use of their powerful weapons would be inappropriate. Autonomous weapons do not possess this flexibility. You cannot take a C-RAM air defense system and tell it to defend the gates of the presidential palace against a crowd of demonstrators, but you can do this with the gun crew of some manual system. The convergence of militaries towards more and more complex force-on-force capabilities could lead to them becoming less useful for maintaining state stability.

Finally, as noted in a previous section, the use of LAWs to bolster state stability against the populace might provoke more alienation and backlash. If autocrats make errant over-use of LAWs to protect their regimes, as many states have done with regular mechanized forces, they could inadvertently weaken themselves. Otherwise, smart states will seek to avoid being forced to deploy LAWs against their own people.

So there is only weak overall reason to believe that LAWs will make revolutions more difficult. Also, it’s not clear if making rebellions harder is a bad thing. Many antigovernment movements just make things worse. For instance, corrupt Iraqi and Afghan security forces with Islamist sympathies have done major damage to their countries during the recent conflicts.

On the other hand, if rebellion becomes more difficult, then AI weapons could also change state behavior: the state might act in a more autocratic manner since it feels more secure. I wouldn’t think that this is a major problem in a liberal democracy, but there are differing opinions on that – and not every country is a liberal democracy, nor are they going to change anytime soon (Mounk and Foa 2018). In an extreme case, drone power might lead to heavy social stratification and autocracy.

But we should remember that many other things may change in this equation. Social media has weakened state stability in recent years, plausibly to the detriment of global well-being. Future technologies may increase popular power even further.

In summary, there are weak reasons to believe that LAWs will increase state stability, pacify political insurrectionists, and make state behavior more autocratic. It’s a complex issue and it is not clear if increased state stability will be good or bad. However, there is a noteworthy tail risk of serious autocracy.

Domestic democide

Democide is the intentional killing of unarmed people by government agents acting in their authoritative capacity and pursuant to government policy or high command. LAWs could make this issue worse by executing government orders without much question, in a context where humans might refuse to obey the order. On the flip side of the coin, LAWs could faithfully follow the law, and would not be vulnerable to the excesses of vengeful human personnel. LAWs would not have a desire to commit rape, which has been a major component of the suffering inflicted by democidal actors.

I don’t have any special level of familiarity with the histories of the worst democides, such as the Great Leap Forward, the Great Purge, the Nazi Holocaust, and the brutalities of the Sino-Japanese War. But from what I’ve learned, the agents of evil were neither restrained by much government oversight nor likely to object much to their assignments, which suggests that replacing them with LAWs wouldn’t change much. But there were many small acts of opportunistic brutality that weren’t necessary for achieving government objectives. States like Stalinist USSR wouldn’t have built LAWs with special safeguards against committing terrible actions, but I do think that the LAWs wouldn’t have as much drive for gratuitous brutality as was present among some of the human soldiers and police. A democidal state could program LAWs to want to cause gratuitous suffering to victims, but it would be more politically advantageous (at least on the international stage) to avoid doing such a thing. Conversely, if it wanted to perform gratuitous repression, it could always rely on human personnel to do it.

Smaller-scale, more complex instances of democide have more often been characterized by groups acting wrongly on the basis of their racial or political prejudices, orthogonal or sometimes contradictory to the guidance of the state. This pattern occurs frequently in ethnic clashes in places like the Middle East and the Caucasus. And when characterizing the risks which motivate the actions of antifa in the United States, Gillis (2017) writes “The danger isn’t that the KKK persuades a hundred million people to join it and then wins elections and institutes fascist rule. That’s a strawman built on incredibly naive political notions. The danger is that the fascist fringe spreads terror, pushes the overton window to make hyper-nationalism and racism acceptable in public, and gradually detaches the actual power of the state (the police and their guns) from the more reserved liberal legal apparatus supposedly constraining them.” In this sort of context, replacing human policing with machines which are more directly amenable to public oversight would be a positive development.

So overall, I would expect states with LAWs to be a bit less democidal.

Conclusion

LAWs’ impact on rebellions and state stability is complex, and there is a significant risk that they will make things worse by increasing autocracy. On the other hand, there is also a significant chance that they will make things better by reducing state fragility. This question partly, but not entirely, hinges on whether it’s a good thing for armed rebellion to become more difficult. I don’t think either outcome is more likely than the other, but the possible risks seem more severe in magnitude than the possible benefits.

Replacing human soldiers with LAWs will probably reduce the risks of democide.

For the time being, I would support a rule where autonomous weapons must refuse to target the country’s own people no matter what; this should be pretty feasible given that the mere idea of using facial recognition in American policing is very controversial.

Indirect effects of campaigning against killer robots

It’s one thing to discuss the theory of what LAW policy should be, but in practice we should also look at the social realities of activism. Since people such as those working at FLI have taken a very public-facing approach in provoking fear of AI weapons (example), we should view all facets of the general public hostility to LAWs as potential consequences of such efforts.

Stymying the development of nonlethal technology

There is some irony here. The same community of AI researchers and commentators that tried to discourage frank discussion of AGI risk, condemning it as alarmist and appealing to the benefits of technological progress merely because of some unintended secondary reactions to the theory of AGI risk, has been quick to condemn military AI development in a far more explicitly alarmist manner and to make direct moves to restrict technological development. This fearmongering has created costs that spill over into other technologies which are more desirable and defensible than LAWs.

Project Maven provided targeting data for counterterrorist and counterinsurgent strikes which were managed by full human oversight, but Google quit the project after mass employee protest. Eleven employees actually resigned in protest, an action which can generally harm the company and its various projects. Google also killed its technology ethics board after internal outrage at its composition, which was primarily due to a gender politics controversy but partially exacerbated by the inclusion of drone company CEO Dyan Gibbens; the ethics board would not have had much direct power but could have still played an important role in fostering positive conversations and trust for the industry in a time of political polarization. 57 scientists called for a boycott of a South Korean university because it built a research center for military AI development, but the university rejected the notion that its work would involve LAWs.

Disproportionately worsening the strategic position of the US and allies

The reality of this type of activism falls far short of the ideal where countries across the world are discouraged from building and adopting LAWs. Rather, this activism is mostly concentrated in Western liberal democracies. The problem is that these are precisely the states for which increased military power is important. Protecting Eastern Europe and Georgia from Russian military interventions is (or would have been) a good goal. Suppressing Islamist insurgencies in the Middle East is a good goal. Protecting Taiwan and other countries in the Pacific Rim from Chinese expansionism is a good goal.

And the West does not have a preponderance of military power. We are capable of winning a war against Russia, but the potential costs are high enough to allow Russia to be quite provocative without major repercussion. More importantly, China might prevail today in a military conflict with the US over an issue like the status of Taiwan. The consequences could include destruction from the war itself, heightened tension among others in the region, loss of local freedoms and democracy (consider Hong Kong), deliberate repression of dissident identities (consider the Uyghurs and Falun Gong), reduced prosperity due to Chinese state ventures supplanting the private sector (Boeing et al 2015, Fang et al 2015) or whatever other economic concessions China foists upon its neighbors, loss of strategic resources and geopolitical expansion which would give China more power to alter world affairs. Chinese citizens and elites are hawkish and xenophobic (Chen Weiss 2019, Zhang 2019). Conflict may be inevitable due to Thucydides’ Trap or Chinese demands for global censorship. While some other Indo-Pacific countries are also guilty of bad domestic policies (like India and Indonesia), they are democracies which have more openness and prospects for cooperation and improvement over time. Even if US military spending is not worth the money, it’s still the case that American strength is preferable (holding all else equal) to American weakness.

The majority of foreign policy thinkers believe (though not unanimously) that it’s good for the world order if the US maintains commitments to defend allies. The mixed attitudes of Silicon Valley and academia towards military AI development interfere with this goal.

If you only make careful political moves towards an international AI moratorium then you aren’t perpetuating the problem, but if you spread popular fear and opposition to AI weapons (especially among coders and computer science academia) then it is a different story. In fact, you might even be worsening the prospects for international arms limitations because China and Russia could become more confident in their ability to match or outpace Western military AI development when they can see that our internal fractures are limiting our technological capacities. In that case we would have little direct leverage to get them to comply with an AI weapons ban.

Safety and ethics standards

If a ban is attempted in vain, the campaign may nevertheless lead to more rigorous standards of ethics and safety in military AI, as it sends a strong signal about the dangers of AI weapons. Additionally, the attempt to ban LAWs could inspire greater motivation for safety and ethics in other AI applications.

However, if we compare anti-robot campaigning against the more reasonable alternative – directly advocating for good standards of ethics and safety – then it will have less of an impact. Additionally, the idea of a ban may distract people from accepting the coming technology and doing the necessary legal and technical work to ensure ethics and safety.

Conclusion

The current phenomenon of liberal Westerners condemning LAWs is bad for the AI industry and bad for the interests of America and most foreign countries. It might indirectly lead to more ethics and safety in military AI.

Aggregation and conclusion

Here are all the individual considerations regarding AI use:

[summary diagram of individual considerations]

Here are the major categories:

[summary diagram of major categories]

Here are my principal conclusions:

1. LAWs are more likely to be a good development than a bad one, though there is quite a bit of uncertainty and one could justify being neutral on the matter. It is not justified to expend effort against the development of lethal autonomous weapons, as the expected benefits of doing so do not outweigh the costs.

2. If someone still opposes lethal autonomous weapons, they should focus on directly motivating steps to restrict their development with an international treaty, rather than fomenting general hostility to LAWs in Western culture.

3. The concerns over AI weapons should pivot away from accidents and moral dilemmas, towards the question of who would control them in a domestic power struggle. This issue is both more important and more neglected.

Addendum: is AI killing intrinsically better or worse than manual killing?

There is some philosophical debate on whether it’s intrinsically bad for someone to be killed via robot rather than by a human. One can surmise immediately that arguments against LAWs on this basis are based on dubious ideologies which marginalize people’s actual preferences and well-being in favor of various nonexistent abstract principles. Such views are not compatible with honest consequentialism which prioritizes people’s real feelings and desires. Law professor Ken Anderson succinctly trashed such arguments by hypothesizing a military official who says… “Listen, you didn’t have to be killed here had we followed IHL [international humanitarian law] and used the autonomous weapon as being the better one in terms of reducing battlefield harm. You wouldn’t have died. But that would have offended your human dignity… and that’s why you’re dead. Hope you like your human dignity.”

Also, in contrast to a philosopher's abstract understanding of killing in war, soldiers do not kill after some kind of pure process of ethical deliberation which demonstrates that they are acting morally. Soldiers learn to fight as a mechanical procedure, with the motivation of protection and success on the battlefield, and their ethical standard is to follow orders as long as those orders are lawful. Infantry soldiers often don't target individual enemies; rather, they lay down suppressive fire upon enemy positions and use weapons with a large area of effect, such as machine guns and grenades. They don't think about each kill in ethical terms; instead they mainly rely on their Rules of Engagement, which are essentially an algorithm that determines when you can or can't use deadly force upon another human. Furthermore, military operations involve the use of large systems where it is difficult to identify a single person who bears responsibility for a kinetic effect. In artillery bombardments, for instance, an officer in the field will order his artillery observer to make a request for support, or request it himself, based on an observation of enemy positions which may be informed by prior intelligence analysis done by others. The requested coordinates are checked by a fire direction center for avoidance of collateral damage and fratricide, and if approved then the angle for firing is relayed to the gun line. The gun crews carry out the request. Permissions and procedures for this process are laid out beforehand. At no point does one person sit down and carry out philosophical deliberation on whether the killing is moral; it is just a series of people doing their individual jobs, making sure that a bunch of things are being done correctly. The system as a whole looks just as grand and impersonal as LAWs do. This further undermines philosophical arguments against LAWs.

Comments

This is an extremely well-structured and quite comprehensive take on the controversy, and if a person fresh to the issue were to read just one article on the topic, this would be one of the best candidates. It is also the first LAWs piece I have encountered in months that expands on the previous discussions rather than just rehashing them. This is ready to be published in a scientific journal at the minimal cost of adding some references and editing the text to fit journal preferences, and I heartily encourage you to submit it ASAP.

Regarding the substance – the section on responsibility is spot on, and the discussion of the sense of justice is the best and most tacit treatment of the topic I’ve seen. Same for most of your treatment of the moral hazard in favor of conflict. However, as you explicitly rely on the premise that Western interventions are on average good, this argument is a non-starter in the academic, political and media circles that most prominently call for a ban. I have long suspected that the real motivation for the relative ferocity of the opposition is the mistaken diagnosis that the US military and its allies are the most disruptive and destructive force on the planet, a force with neo-colonial motivations and goals. The criminal incompetence and callous conduct of many such campaigns lends plausibility to that view, and therefore I usually strengthen this particular argument by pointing to a unique opportunity for remodeling the Western armed forces to better fit the human rights paradigm – an opportunity offered by the automation of frontline combat. It is not just about reducing direct civilian casualties by increasing precision, lessening force protection or preventing atrocities – it is about re-orienting the human military professionals from delivery of force to effective conflict resolution, letting them focus on the soft jobs and strict supervision as LAWs do the lifting, marching and shooting.

(On the margins – comparing the cost of killing an ISIS fighter versus domestic interventions and calling it “cost-effective” appears heartless even within the community of committed utilitarians, let alone outside it. Not to mention that measuring the effectiveness of a military intervention by body count is highly misleading.)

You are more than right to shift focus to the issue of LAWs-enabled authoritarian abuses and societal control over the military and government. However, you seem to significantly underestimate the price of authoritarians (and terrorists) getting their hands on such technology. The much higher incidence of abuse would not be offset by the greater stability of otherwise decent governments, partially because decent-but-weak governments will not be able to afford such technology in sufficient quantity, and partially because such governments are violently rebelled against at a much lower rate than parasitic or tyrannical regimes. Jihadism is currently the only ideology that foments significant unrest not connected with abuse or abject poverty, and even jihadism usually rides the coattails of other grievances, as in Syria, Iraq or Libya. There is no doubt in my mind that stopping even a portion of the world's autocrats from acquiring LAWs is a goal worthy of very significant investments – I just do not think it can ever be achieved by declaring unilateral robot disarmament. In fact, one of the most potent arguments for developing well-behaved, hacking-proof LAWs is their unique ability to stop the rogue LAWs that will inevitably be developed by such rogue actors.

The illustration of the algorithmic nature of the human-based modern military machine is again the most succinct yet accurate I have encountered in the literature.

In conclusion, while I generally agree with 95% of the points you are making and applaud your focus on the domestic control issue and the stress you put on the generally beneficial character of the Western military influence, I believe you do not go far enough in examining the detrimental potential of LAWs in the wrong hands, and therefore the cost of the West abandoning this technology. I believe that both the benefits of handling the new Revolution in Military Affairs well and the costs of mishandling it are larger than you appear to estimate.

Great job, and again - make sure to publish this!

Thank you for the comment. I will make some changes based on your comment and see about getting it published.

However, as you explicitly rely on the premise that Western interventions are on average good, this argument is a non-starter in the academic, political and media circles that most prominently call for a ban.

I said that stability operations (which do not include the initial invasions of Iraq/Afghanistan and the strikes on Libya) are good at least half the time. I think this is shared by the majority of think tankers, politicians, and academics concerned with foreign policy, though of course the most prominent anti-LAW activists will probably be a different story.

it is about re-orienting the human military professionals from delivery of force to effective conflict resolution, letting them focus on the soft jobs and strict supervision as LAWs do the lifting, marching and shooting.

I don't think this is a serious benefit. Reducing the manpower requirements for firepower and maneuvering does not increase the competence and number of people in other sectors. People in non-force and non-military roles require different qualifications; they cannot simply be recruited from among regular soldiers. This is like saying that a car company which automates its production line will now be able to produce more fuel-efficient vehicles.

(On the margins – comparing the cost of killing an ISIS fighter versus domestic interventions and calling it “cost-effective” appears heartless even within the community of committed utilitarians, let alone outside it.

Not sure exactly what your argument is, but desiring the death of active ISIS combatants is a quite popular and common-sense view. Especially among most non-utilitarians, who frequently disregard the welfare of immoral people, whereas utilitarians are more likely to be sympathetic to the personal interests of even ISIS combatants.

Not to mention that measuring the effectiveness of a military intervention by body count is highly misleading.)

Just because it's rough doesn't mean it's misleading. Of course there is complexity and uncertainty to the matter. The accurate comparison is the typical cost of averting a death through domestic interventions vs the cost of averting a foreign death from insurgency or terrorism. Getting there would require some assumptions about the average harm posed by an average combatant and similar issues, on which I don't have any data. But based on the figures I provided, it seems like an intuitively persuasive comparison. I can, however, add a few more lines explicating the uncertainty.

The much higher incidence of abuse would not be offset by the greater stability of otherwise decent governments, partially because decent-but-weak governments will not be able to afford such technology in sufficient quantity, and partially because such governments are violently rebelled against at a much lower rate than parasitic or tyrannical regimes.

Hmm. I can think of lots of examples of flawed democracies facing recent internal unrest. E.g.: Afghanistan, Iraq, Somalia, Nigeria, India, the Philippines, Thailand, Ukraine, and Georgia.

I think your classification is odd here, contrasting decent-but-weak governments with tyrannical ones. What about bad-but-weak governments? Won't they similarly be unable to afford advanced LAW technology?

This would benefit from a clearer formalization of state types, which I will think about incorporating.

Jihadism is currently the only ideology that foments significant unrest not connected with abuse or abject poverty,

I don't think so. See: populism, socialism, ethno-separatism (though your definition of 'abuse' may be doing a lot of work here). But I'm not sure what your point is here.

However, you have made me realize that liberal states are less likely to use LAWs for internal policing, for political reasons, so the increase in state stability does indeed appear likely to disproportionately accrue to bad regimes.

Thanks kbog for this detailed exposition of your views. I found this claim of yours quite remarkable: "as far as I can tell, there has been no serious analysis judging whether the introduction of LAWs would be a good development or not". It sounds as if you're dismissing the entire body of writing by e.g. Stuart Russell as not being serious analysis, and I also don't see it prominently cited or addressed in your piece even though you describe it as a "review". I think you'd find it quite interesting to have a conversation with him.

Sorry, I worded that poorly - my point was the lack of comprehensive weighing of pros and cons, as opposed to analyzing just 1 or 2 particular problems (e.g. swarm terrorism risk).

Thank you for this contribution and for all the work that this article required. 

I will try to go in order through the arguments that I find most interesting to discuss further.

A) Responsibility: 

1. What is the evidence that human societies are capable of holding generals and technology producers responsible for deaths caused by a given technology?

If I think about the poor record the International Criminal Court has of bringing war criminals to justice, and the fact that the use of cluster bombs in Laos or Agent Orange in Vietnam did not lead to major trials, I am skeptical about whether anyone would be held accountable for crimes committed by LAWs.

2. What evidence do we have that international lawmaking follows suit when a lethal technology is developed, as the writer assumes it will?

 From Wikipedia on Arms Control: "According to a 2020 study in the American Political Science Review, arms control is rare because successful arms control agreements involve a difficult trade-off between transparency and security. For arms control agreements to be effective, there needs to be a way to thoroughly verify that a state is following the agreement, such as through intrusive inspections. However, states are often reluctant to submit to such inspections when they have reasons to fear that the inspectors will use the inspections to gather information about the capabilities of the state, which could be used in a future conflict."

B) Moral Hazard:

The article does a good job of looking at the evidence available on the ratio of military to civilian casualties in war.

However, in order for the comparison to make more sense I would argue that the different examples should be weighted according to the number of victims. 

Intuitively to me, the case for LAWs increasing the chance of overseas conflicts such as the Iraq invasion is a very relevant one, because of the magnitude of civilian deaths. 

From what the text says I do not see why the conclusion is that banning LAWs would have a neutral effect on the likelihood of overseas wars, given that the text admits it is an actual concern.

I think the considerations about counterinsurgency operations being positive for the population are at the very least biased towards favoring Western intervention.

One could say that if a state like Iraq had not been destabilized in the first place, it would not have become a breeding ground for the expansion of groups such as ISIL.

Twenty years and 38,000 civilian deaths after the beginning of the Afghanistan War, the Taliban still controls half of the country.

C) Domestic conflict 

I think this section makes lots of assumptions, and this stresses the high level of uncertainty we have about this topic. I will just quote some of the passages that are open to debate:

"Their new military capability likely won’t change much, as rebels could obtain their own AI and rebels have usually taken asymmetric approaches with political and tactical exploits to circumvent their dearth of heavy organized combat power." 

"If autocrats make errant over-use of LAWs to protect their regimes, as many states have done with regular mechanized forces, they could inadvertently weaken themselves. Otherwise, smart states will seek to avoid being forced to deploy LAWs against their own people."

"Future technologies may increase popular power even further." -> the rise of social media and conspirationism does not necessarily constitute an increase in popular power.

D) Effects of campaigning against killer robots

The considerations about China and the world order in this section seem simplistic and rely on many assumptions. 

Hi Tommaso,

If I think about the poor record the International Criminal Court has of bringing war criminals to justice, and the fact that the use of cluster bombs in Laos or Agent Orange in Vietnam did not lead to major trials, I am skeptical about whether anyone would be held accountable for crimes committed by LAWs.

But the issue here is whether responsibility and accountability are handled worse with LAWs as compared with normal killing. You need a reason to be more skeptical about crimes committed by LAWs than you are about crimes not committed by LAWs. That there is so little accountability for crimes committed without LAWs even suggests that we have nothing to lose.

What evidence do we have that international lawmaking follows suit when a lethal technology is developed, as the writer assumes it will?

I don't think I make such an assumption? Please remind me (it's been a while since I wrote the essay); you may be thinking of a part where I assume that countries will figure out safety and accountability for their own purposes. They will figure out how to hold people accountable for bad robot weapons just as they hold people accountable for bad equipment and bad human soldiers, without reference to international laws.

However, in order for the comparison to make more sense I would argue that the different examples should be weighted according to the number of victims. 

I would agree if we had a greater sample of large wars; otherwise the figure gets dominated by the Iran-Iraq War, which is doubly worrying because of the wide range of estimates for that conflict. You could exclude it and do a weighted average of the other wars. Either way, it seems like civilians are still just a significant minority of victims on average.
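
To illustrate the statistical point with purely hypothetical placeholder numbers (these are not the essay's figures, just an assumption for exposition), a casualty-weighted average is dominated by whichever conflict has the most deaths, so one very large war with uncertain estimates can swing the result:

```python
# Purely hypothetical placeholder numbers, not the essay's actual figures.
# Each tuple is (total deaths, civilian share of deaths) for an imagined conflict.
wars = [
    (1_000_000, 0.45),  # one very large war with uncertain estimates
    (50_000, 0.20),
    (80_000, 0.30),
    (30_000, 0.25),
]

unweighted = sum(share for _, share in wars) / len(wars)
weighted = sum(deaths * share for deaths, share in wars) / sum(deaths for deaths, _ in wars)

print(f"Unweighted mean civilian share: {unweighted:.2f}")  # 0.30
print(f"Casualty-weighted mean:         {weighted:.2f}")    # ~0.42, dominated by the largest war
```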

Intuitively to me, the case for LAWs increasing the chance of overseas conflicts such as the Iraq invasion is a very relevant one, because of the magnitude of civilian deaths.

Yes, this would be similar to what I say about the 1991 Gulf War - the conventional war was relatively small but had large indirect costs, mostly borne by civilians. Then, "One issue with this line of reasoning is that it must also be applied to alternative practices besides warfare..." For Iraq in particular, while the 2003 invasion certainly did destabilize it, I also think it's a mistake to think that things would have been decent otherwise (imagine Iraq turning out like Syria in the Arab Spring; Saddam had already committed democide once, and he could have done it again had Iraqis acted on their grievances against his regime).

From what the text says I do not see why the conclusion is that banning LAWs would have a neutral effect on the likelihood of overseas wars, given that the text admits it is an actual concern.

My 'conclusion' paragraph states it accurately, with the clarification of 'conventional conflicts' versus 'overseas counterinsurgency and counterterrorism'.

I think the considerations about counterinsurgency operations being positive for the population are at the very least biased towards favoring Western intervention.

Well, the critic of AI weapons needs to show that such interventions are negative for the population. My position in this essay was that it's unclear whether they are good or bad. Yes, I didn't give comprehensive arguments in this essay. But since then I've written about these wars in my policy platform where you can see me seriously argue my views, and there I take a more positive stance (my views have shifted a bit in the last year or so). 

The considerations about China and the world order in this section seem simplistic and rely on many assumptions. 

Once more, I got you covered! See my more recent essay here about the pros and cons (predominantly cons) of Chinese international power. (Yes, it's high time that I rewrote and updated this article.)

Excellent analysis, thank you! The issue definitely needs a more nuanced discussion. The increasing automation of weaponry (and other technology) won't be stopped globally and pervasively, so we should endeavor to shape how it is developed and applied in a more positive direction.

While I'm broadly uncertain about the overall effects of LAWs within the categories you've identified, and it seems plausible that LAWs are more likely to be good given those particular consequences, one major consideration for me against LAWs is that they would plausibly differentially benefit small misaligned groups such as terrorists. This is the main point of the Slaughterbots video. I don't know how big this effect is, especially since I don't know how much terrorism there is or how competent terrorists are; I'm just claiming that it is plausibly big enough to make a ban on LAWs desirable.

See "crime and terrorism" section.

Ah, somehow I missed that, thanks!