Postdoc at the Polish Academy of Sciences, working in military ethics, mostly but not exclusively on the ethics of autonomous weapons. Reducetarian.
I suppose this would depend on the specifics of the society one lives in - is the social safety net rarely used and not under strain, because everybody is generally prosperous and it exists just in case? Or is it already strained and failing to catch some people? At least in the latter case, EAs making risky choices and ending up putting avoidable pressure on the social safety net would come at a direct cost to underprivileged individuals, kinda like the flower children relying on free neighborhood clinics in the '60s ended up hurting local communities' access to basic healthcare.
I'm also skeptical by default of any EA exceptionalism - "I donate a lot, so I am allowed to offload risks onto society" could easily become "I donate a lot, so I never tip/settle small debts/generally freeride in minor ways whenever I can". For me the strength of the basic EA pitch has always been its universalizability and its full compatibility with an otherwise respectable, responsible, others-friendly lifestyle.
Just a minor point - if I am willing to rely on family/friends as my financial safety net, then I should also be ready to reciprocate to an equal degree. Relying on each other for financial safety does not obviate the need for savings; on the contrary, it necessitates them. Not saving and risking having to rely on a government safety net, while defensible, is not wholly unproblematic - surely this is not a universalizable strategy.
This is an extremely well-structured and quite comprehensive take on the controversy and if a person fresh to the issue were to read just one article on the topic, this would be one of the best candidates. It is also the first LAWS piece I encountered in months that expanded on the previous discussions rather than just rehashing them. This is ready to be published in a scientific journal at a minimal cost of adding some references and editing the text to fit their preferences, and I heartily encourage you to submit it asap.
Regarding the substance – the section on responsibility is spot on, and the discussion of the sense of justice is the best and most succinct treatment of the topic I’ve seen. The same goes for most of your treatment of the moral hazard in favor of conflict. However, as you explicitly rely on the premise that Western interventions are on average good, this argument is a non-starter in the academic, political and media circles that most prominently call for a ban. I have long suspected that the real motivation for the relative ferocity of the opposition is the mistaken diagnosis that the US military and its allies are the most disruptive and destructive force on the planet, a force with neo-colonial motivations and goals. The criminal incompetence and callous conduct of many such campaigns lends plausibility to that view, and therefore I usually strengthen this particular argument by pointing to a unique opportunity for remodeling the Western armed forces to better fit the human rights paradigm – an opportunity offered by the automation of frontline combat. It is not just about reducing direct civilian casualties by increasing precision, easing force-protection pressures or preventing atrocities – it is about re-orienting human military professionals from the delivery of force to effective conflict resolution, letting them focus on the soft jobs and strict supervision while LAWs do the lifting, marching and shooting.
(On the margins – comparing the cost of killing an ISIS fighter with the cost of domestic interventions and calling it “cost-effective” appears heartless even within the community of committed utilitarians, let alone outside it. Not to mention that measuring the effectiveness of a military intervention by body count is highly misleading.)
You are more than right to shift focus to the issue of LAWs-enabled authoritarian abuses and societal control over the military and government. However, you seem to significantly underestimate the price of authoritarians (and terrorists) getting their hands on such technology. The much higher incidence of abuse would not be offset by the greater stability of otherwise decent governments, partially because decent-but-weak governments will not be able to afford such technology in sufficient quantity, and partially because such governments are violently rebelled against at a much lower rate than parasitic or tyrannical regimes. Jihadism is currently the only ideology that foments significant unrest not connected with abuse or abject poverty, and even jihadism usually rides the coattails of other grievances, as in Syria, Iraq or Libya. There is no doubt in my mind that stopping even a portion of the world's autocrats from acquiring LAWs is a goal worthy of very significant investment – I just do not think it can ever be achieved by declaring unilateral robot disarmament. In fact, one of the most potent arguments for developing well-behaved, hacking-proof LAWs is their unique ability to stop the rogue LAWs that such rogue actors will inevitably develop.
The illustration of the algorithmic nature of the human-based modern military machine is again the most succinct yet accurate I have encountered in the literature.
In conclusion, while I generally agree with 95% of the points you are making and applaud your focus on the domestic control issue and the stress you put on the generally beneficial character of Western military influence, I believe you do not go far enough in examining the detrimental potential of LAWs in the wrong hands, and therefore the cost of the West renouncing this technology. I believe that both the benefits of handling the new Revolution in Military Affairs well and the costs of mishandling it are larger than you appear to estimate.
Great job, and again - make sure to publish this!
I see your point about the 'risk' language. I think the matter depends on whether, and to what extent, you find EA contributions to be a matter of universal duty. The more you view them this way, the less speaking about 'risk' here makes sense. This, however, is not a given - even if I myself feel morally bound to contribute to a certain extent, I may not believe that others around me have such duties (my duty may derive from the promises I made, my attachment to the consistency of my views, etc.). And in that latter case, obliging them to help me because I sacrificed my savings for the good cause seems not OK.
Pooling of risk as you envisage it would require forming some kind of non-profit insurance entity, which would still need to be well organized and chartered and would come with some operating costs. It may be worth contemplating for the community as a venture, especially as its charter could include a mechanism for the money to be automatically donated when certain risks do not materialize for the contributors.