For me, basically every other question around effective altruism is less interesting than this basic one of moral obligation. It’s fun to debate whether some people/institutions should gain or lose status, and I participate in those debates myself, but they seem less important than these basic questions of how we should live and what our ethics should be.
Prompted by this quote from Scott Alexander's recent Effective Altruism As A Tower Of Assumptions, I'm linking a couple of my old LessWrong posts that speak to "these basic questions". They were written and posted before or shortly after EA became a movement, so perhaps many in EA have never read them or heard of these arguments. (I have not seen these arguments reinvented/rediscovered by others, or effectively countered/refuted by anyone, but I'm largely ignorant of the vast academic philosophy literature, in which the same issues may have been discussed.)
The first post, Shut Up and Divide?, was written in response to Eliezer Yudkowsky's slogan "shut up and multiply", but I think it also works as a counter to Peter Singer's Drowning Child argument, which many see as foundational to EA. (For example, Scott wrote in the linked post, "To me, the core of effective altruism is the Drowning Child scenario.")
The second post, Is the potential astronomical waste in our universe too small to care about?, describes a consideration through which someone who starts out with relatively high credence in utilitarianism (or utilitarian-ish values) may nevertheless find it unwise to devote many resources to utilitarian(-like) pursuits in the universe we find ourselves in.
To be clear, I continue to have a lot of moral uncertainty and do not consider these to be knockdown arguments against EA or against caring about astronomical waste. There are probably counterarguments to them that I'm not aware of (either in the existing literature or in platonic argument space), and we are probably still ignorant of many other relevant considerations. (For one such consideration, see my Beyond Astronomical Waste.) I'm drawing attention to them because many EAs may place too much trust in the foundations of EA, in part because they're not aware of these arguments.
“Shut Up and Divide” boils down to “actually, maybe you shouldn’t care about individual strangers, because that’s more logically consistent (unless you multiply, in which case it’s equally consistent)”. But caring is a higher and more human virtue than being consistent, especially since the two options here are: be consistent and care about individual strangers, or merely be consistent. You only get symmetry if adopting ‘you may now ethically ignore the suffering of strangers’ as a moral principle counts as a win for the divide side. An argument for that principle is what would really shake the foundations of EA.
So actually we have three choices: divide, multiply, or be scope insensitive. In an ideal world populated by good and rational people, they’d probably still care relatively more about their families, but no one would be indifferent to the suffering of faraway strangers. Loving and empathizing with strangers is widely agreed to be a vital and beautiful part of what makes us human, despite our imperfections. The cognitive bias of scope insensitivity may be fundamentally human in some sense, but it’s not really part of what makes us human. Nobody calls scope-sensitive people sociopaths. Nobody’s personal idea of utopia elevates the principle of scope insensitivity to the level of ‘love others’.
Likewise, very few would prefer or imagine this idealized world as filled with ‘divide’ people rather than ‘multiply’ people, because most people’s imagined inhabitants of utopia fit the ‘multiply’ profile much more closely. So I think that “Shut Up and Divide” only challenges the Drowning Child argument insofar as you have very strange ethical intuitions, not shared by many. To really attack this foundation, you’d have to argue for why these common intuitions about good and bad are wrong, not just that they lead to inconsistencies when held by normal humans (as every set of ethical principles does).
You're right, I misrepresented your point here. This doesn't affect the broader idea that the apparent symmetry only exists if you have strange ethical intuitions, which are left undefended.