All arguments I have seen for the EA philosophy (in particular some of its 'nastier' consequences) have to date derived from broadly utilitarian principles. Can anyone point me to alternatives broadly under the wing of deontology, virtue ethics, or otherwise? I identify as utilitarian, so this is more out of curiosity than anything else.
[epistemic status: very insecure, but I've been thinking about it for a while; there's probably a more persuasive argument out there]
I think you can easily extrapolate from a Kantian imperfect duty to help others to EA (though I understand people seldom have the patience to engage with this point in Kantian philosophy). Also, I remember seeing a recent paper that used normative uncertainty to argue, quite successfully, that a deontological conception of moral obligation, given uncertainty, would end up in some sort of maximization. Other philosophers (Shelly Kagan, Derek Parfit) have persuasively argued that plausible versions of the most accepted moral philosophies tend to collapse into each other.
It'd be wonderful if someone could provide an argument reducing consequentialism, deontology and virtue ethics into each other. People could stop arguing like "you can only accept that if you're an x-utilitarian..." and focus on how to effectively realize moral value (which is a hard enough subject).
My own personal and sketchy take here would be something like:
To consistently live with virtue in society, I must follow moral duties defined by social norms that are fair, stable and efficient – that, in some way, strive for general happiness (otherwise, society will change or collapse).
To maximize general happiness, I need to recognize that I am a limited rational agent, and devise a life plan that includes acquiring virtuous habits, and cooperating with others through rules and principles that define moral obligations for reasonable individuals.
To act taking Reason in me as an end in itself and according to the moral law, I need to live in society, and to recognize my own limitations and my dependence on other rational beings, thus adopting habits that prevent vice and allow me to be recognized as a virtuous cooperator. To do this consistently, at least in scenarios of factual and normative uncertainty, implies acting in a way that can be described as restrictedly optimizing a cardinal social welfare function.
Maybe a deontological version would consist not merely of doing enough to avoid violating moral law, but of using evidence to minimize, as far as possible, the risk of violating any such duties. For example, the Center for Effective Deontology might research contracts people commonly sign (like cell phone or insurance contracts) and provide advice on how to avoid accidentally violating them, so as to reduce promise-breaking.