I don't intend to convince you to leave EA, and I don't expect you to convince me to stay. But typical insider "steel-manned" arguments against EA lack imagination about other people's perspectives: for example, they assume that the audience is utilitarian. Outsider anti-EA arguments are often mean-spirited or misrepresent EA (though I think EAs still under-value these perspectives). So I provide a unique perspective: a former "insider" who had a change of heart about the principles of EA.
Like many EAs, I'm a moral anti-realist. This is why I find it frustrating that EAs act as if utilitarianism were self-evident, the natural conclusion any rational person would reach. (I used to be guilty of this.) My view is that morality is largely the product of the whims of history, culture, and psychology. Any attempt to systematize such complex belief systems will necessarily lead to unwanted conclusions. Given anti-realism, I don't know what compels me to "bite bullets" and accept these conclusions. Moral particularism is closest to my current beliefs.
Some specific issues with EA ethics:
- Absurd expected value calculations/Pascal's mugging
- Hypothetically causing harm to individuals for the good of the group. Some utilitarians come up with ways around this (e.g. arguing that the reputational cost of doing harm would outweigh the benefits). But that kind of justification raises the possibility that in some cases the costs won't outweigh the benefits, and we'll be compelled to harm individuals.
- Undervaluing violence. Many EAs glibly act as if a death from civil war or genocide is no different from a death from malaria. Yet this contradicts deeply held intuitions about the costs of violence. For example, many people would agree that a parent breaking a child's arm through abuse is far worse than a child breaking her arm by falling out of a tree. You could frame this as a moral claim that violence holds a special horror, or as an empirical claim that violence causes psychological trauma and other harms that a utilitarian framework must account for. The unique costs of violence are also apparent in the extreme actions people take to avoid it: mass migrations are most strongly associated with war, less so with economic downturns, and far less so with disease outbreaks. These revealed priorities don't line up with how bad EAs think each of these problems is.
Once I rejected utilitarianism, much of the rest of EA fell apart for me:
- Valuing existential risk and high-risk, high-reward careers relies on expected value calculations
- Prioritizing animals (particularly invertebrates) relied, for me, on total-view utilitarianism. I value animals (particularly non-mammals) very little compared to humans and find the evidence for animal charities very weak, so the only convincing argument for prioritizing farmed animals was their sheer numbers. (I still endorse veganism; I just don't donate to animal charities.)
- GiveWell's recommendations are overly focused on disease-associated mortality and short-term economic indicators, from my perspective. They fail to address violence and exploitation, which are major causes of poverty in the developing world. (Incidentally, I also think that they undervalue how much reproductive freedom benefits women.)
The remaining principles of EA, such as donating significant amounts of one's money and ensuring that a charity is effective in achieving its goals, weren't unique enough to convince me to stay in the community.
I really like this response -- thanks, Eric. I'd say the way I think about maximizing expected value is that it's the natural thing you'll end up doing if you're trying to produce a particular outcome, especially a large-scale one that doesn't hinge much on your own mental state and local environment.
Thinking in 'maximizing-ish ways' can be useful at times in lots of contexts, but it's especially likely to be helpful (or necessary) when you're trying to move the world's state in a big way; not so much when you're trying to raise a family or follow the rules of etiquette, and possibly even less so when the goal you're pursuing is something like 'have fun and unwind this afternoon watching a movie'. There my mindset is a much more dominant consideration than it is in large-scale moral dilemmas, so the costs of thinking like a maximizer are likelier to matter.
In real life, I'm not a perfect altruist or a perfect egoist; I have a mix of hundreds of different goals like the ones above. But without being a strictly maximizing agent in all walks of life, I can still recognize that (all else being equal) I'd rather spend $1000 to protect two people from suffering from violence (or malaria, or what-have-you) than spend $1000 to protect just one person from violence. And without knowing the right way to reason about weird, extreme Pascalian situations, I can still recognize that I'd rather spend $1000 to protect those two people than spend $1000 to protect three people with 50% probability (and protect no one the other 50% of the time).
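To spell out the arithmetic behind that last preference, here's a minimal sketch in Python (the function and numbers are just an illustration of the example above, nothing load-bearing):

```python
# Toy expected-value comparison from the example above.

def expected_people_protected(people_protected: int, probability: float) -> float:
    """Expected number of people protected by a $1000 donation."""
    return people_protected * probability

option_a = expected_people_protected(2, 1.0)  # protect 2 people for certain
option_b = expected_people_protected(3, 0.5)  # protect 3 people half the time

print(option_a)  # 2.0
print(option_b)  # 1.5 -- lower, so the stated preference matches the EV ranking
```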
Acting on preferences like those will mean that I exhibit the outward behaviors of an EV maximizer in how I choose between charitable opportunities, even if I'm not an EV maximizer in other parts of my life. (Much like I'll act like a well-functioning calculator when I'm achieving the goal of getting a high score on a math quiz, even though I don't act calculator-like when I pursue other goals.)
For more background on what I mean by 'any policy of caring a lot about strangers will tend to recommend behavior reminiscent of expected value maximization, the more so the more steadfast and strong the caring is', see e.g. 'Coherent decisions imply a utility function' and The "Intuitions" Behind "Utilitarianism".