I don't intend to convince you to leave EA, and I don't expect you to convince me to stay. But typical insider "steel-manned" arguments against EA lack imagination about other people's perspectives: for example, they assume the audience is utilitarian. Outsider anti-EA arguments are often mean-spirited or misrepresent EA (though I think EAs still undervalue these perspectives). So I offer a different perspective: that of a former insider who had a change of heart about the principles of EA.
Like many EAs, I'm a moral anti-realist. This is why I find it frustrating that EAs act as if utilitarianism is self-evident and would be the natural conclusion of any rational person. (I used to be guilty of this.) My view is that morality is largely the product of the whims of history, culture, and psychology. Any attempt to systematize such complex belief systems will necessarily lead to unwanted conclusions. Given anti-realism, I don't know what compels me to "bite bullets" and accept these conclusions. Moral particularism is closest to my current beliefs.
Some specific issues with EA ethics:
- Absurd expected value calculations/Pascal's mugging
- Hypothetically causing harm to individuals for the good of the group. Some utilitarians come up with ways around this (e.g. the reputation cost would outweigh the benefits). But this raises the possibility that in some cases the costs won't outweigh the benefits, and we'll be compelled to do harm to individuals.
- Undervaluing violence. Many EAs glibly act as if a death from civil war or genocide is no different from a death from malaria. Yet this contradicts deeply held intuitions about the costs of violence. For example, many people would agree that a parent breaking a child's arm through abuse is far worse than a child breaking her arm by falling out of a tree. You could frame this as a moral claim that violence holds a special horror, or as an empirical claim that violence causes psychological trauma and other harms that a utilitarian framework must account for. The unique costs of violence are also apparent in the extreme actions people take to avoid it: large migrations of people are most strongly associated with war; economic downturns drive migration to a lesser degree, and disease outbreaks to a far lesser degree. This revealed prioritization doesn't line up with how bad EAs consider these problems to be.
Once I rejected utilitarianism, much of the rest of EA fell apart for me:
- Valuing existential risk and pursuing high-risk, high-reward careers both rely on expected-value calculations
- For me, prioritizing animals (particularly invertebrates) relied on total-view utilitarianism. I value animals (particularly non-mammals) very little compared to humans and find the evidence for animal charities very weak, so the only convincing argument for prioritizing farmed animals was their sheer numbers. (I still endorse veganism; I just don't donate to animal charities.)
- GiveWell's recommendations are, from my perspective, overly focused on disease-associated mortality and short-term economic indicators. They fail to address violence and exploitation, which are major causes of poverty in the developing world. (Incidentally, I also think they undervalue how much reproductive freedom benefits women.)
The remaining principles of EA, such as donating significant amounts of one's money and ensuring that a charity is effective in achieving its goals, weren't unique enough to convince me to stay in the community.
Lila has probably read those. I think Singer's book said something to the effect that it isn't meant for anyone who wouldn't pull out the drowning child. MacAskill's book is more of a how-to; such a meta question would feel out of place there, but I'm not sure; it's been a while since I read it.
Texts that appeal to moral obligation (an appeal I share) especially signal that readers need to find an objective flaw in them before they can reject them. That, I'm afraid, leads people to attack EA for all sorts of made-up or not-actually-evil reasons, which can result in toxoplasma-style controversy and opposition. If they could just feel free to ignore us without attacking us first, we could avoid that.
A lot of your objections take the form of likely-sounding counternarratives to my narratives. They don't make me feel like my narratives are less likely than yours, and I increasingly feel this discussion won't go anywhere unless someone with solid knowledge of history or organizational culture jumps in, with historical precedents and empirical studies to cite.
That's a good way to approach the question! We shouldn't only count those who join the movement for a while and then part ways with it, but also those who hear about it and ignore it, publish a nonconstructive critique of it, tell friends why EA is bad, etc. With small rhetorical tweaks of the kind I'm proposing, we can probably increase the number who ignore it solely at the expense of the number who would've smeared it, not at the expense of the number who would've joined. Once we exhaust our options for such tweaks, the problem becomes as hairy as you put it.
I haven't really dared to take a stab at how such an improvement should be worded. I'd rather base it on a bit of survey data from people who feel that EA values are immoral. The positive appeals could stay the same but be joined by something to the effect that if readers think they can't come to terms with values X and Y, EA may not be for them. They'll probably have known that already (and the differences may be too subtle to have helped Lila), but saying it communicates that they can ignore EA without first finding fault with it or attacking it.
Oh dear, yeah! We should both be writing our little five-hour research summaries on possible cause areas rather than starting yet another marketing discussion. I know someone at CEA who’d get cross with me if he saw me doing this again. xD
It's quite possible that I'm overly sensitive to being attacked (by outside critics) and should just ignore it and carry on doing my EA things, but I don't think I overestimate this threat to the extent that further investment of our time in this discussion would be proportionate.
Sure. But Lila complained about small things that are far from universal to effective altruism. The vast majority of people who differ in their opinions on the points described in the OP do not leave EA. As I mentioned in my top-level comment, Lila is simply confused about many of the foundational philosophical issues that she thinks pose an obstacle to her being in effective altruism. Some people will always fall through the cracks, and in this case one of them decided to write about it. Don't over-update based on an example.