I don't intend to convince you to leave EA, and I don't expect you to convince me to stay. But typical insider "steel-manned" arguments against EA lack imagination about other people's perspectives: for example, they assume that the audience is utilitarian. Outsider anti-EA arguments are often mean-spirited or misrepresent EA (though I think EAs still under-value these perspectives). So I provide a unique perspective: a former "insider" who had a change of heart about the principles of EA.
Like many EAs, I'm a moral anti-realist. This is why I find it frustrating that EAs act as if utilitarianism is self-evident and would be the natural conclusion of any rational person. (I used to be guilty of this.) My view is that morality is largely the product of the whims of history, culture, and psychology. Any attempt to systematize such complex belief systems will necessarily lead to unwanted conclusions. Given anti-realism, I don't know what compels me to "bite bullets" and accept these conclusions. Moral particularism is closest to my current beliefs.
Some specific issues with EA ethics:
- Absurd expected value calculations/Pascal's mugging
- Hypothetically causing harm to individuals for the good of the group. Some utilitarians come up with ways around this (e.g. the reputation cost would outweigh the benefits). But this raises the possibility that in some cases the costs won't outweigh the benefits, and we'll be compelled to do harm to individuals.
- Under-valuing violence. Many EAs glibly act as if a death from civil war or genocide is no different from a death from malaria. Yet this contradicts deeply held intuitions about the costs of violence. For example, many people would agree that a parent breaking a child's arm through abuse is far worse than a child breaking her arm by falling out of a tree. You could frame this as a moral claim that violence holds a special horror, or as an empirical claim that violence causes psychological trauma and other harms, which must be accounted for in a utilitarian framework. The unique costs of violence are also apparent in the extreme actions people take to avoid it: large-scale migrations are most strongly associated with war, with economic downturns driving migration to a lesser degree and disease outbreaks to a far lesser degree. This implicit prioritization doesn't line up with how bad EAs judge these problems to be.
Once I rejected utilitarianism, much of the rest of EA fell apart for me:
- Valuing existential risk and high-risk, high-reward careers both rely on expected value calculations
- For me, prioritizing animals (particularly invertebrates) relied on total-view utilitarianism. I value animals (particularly non-mammals) very little compared to humans and find the evidence for animal charities very weak, so the only convincing argument for prioritizing farmed animals was their large numbers. (I still endorse veganism, I just don't donate to animal charities.)
- GiveWell's recommendations are overly focused on disease-associated mortality and short-term economic indicators, from my perspective. They fail to address violence and exploitation, which are major causes of poverty in the developing world. (Incidentally, I also think that they undervalue how much reproductive freedom benefits women.)
The remaining principles of EA, such as donating significant amounts of one's money and ensuring that a charity is effective in achieving its goals, weren't unique enough to convince me to stay in the community.
Thank you, Lila, for your openness in explaining your reasons for leaving EA. It's good to hear legitimate reasons why someone might leave the community, and it's certainly better than the outsider anti-EA arguments that too often misrepresent EA. I hope that other insiders who leave the movement will also be kind enough to share their reasoning, as you have here.
While I recognize that Lila does not want to participate in a debate, I nevertheless would like to contribute an alternate perspective for the benefit of other readers.
Like Lila, I am a moral anti-realist. Yet while she has left the movement largely for this reason, I still identify strongly with the EA movement.
This is because I do not feel that utilitarianism is required to prop up as many of EA's ideas as Lila thinks it is. For example, non-consequentialist moral realists can still use expected value to try to maximize the good they do, without thinking that the maximization itself is the ultimate source of that good. Presumably if you think lying is bad, then refraining from lying twice is better than refraining from lying just once.
I agree with Lila that many EAs are too glib in treating deaths from violence as no worse than deaths from non-violent causes. But to the extent that this is true, we can simply weight them differently. For example, Lila rightly points out that "violence causes psychological trauma and other harms, which must be accounted for in a utilitarian framework". EAs should definitely take these extra considerations about violence into account.
But the main difference between Lila and me here is that when she sees EAs not taking things like this into consideration, she takes it as an argument against EA, against utilitarianism, against expected value, whereas I take it as an improperly constructed expected value estimate that doesn't take all of the facts into account. For me, this is not an argument against EA, nor even an argument against expected value; it's an argument for being careful to take into account as many considerations as possible when constructing expected value estimates.
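To make that point concrete, here is a deliberately toy sketch (not from either comment; the numbers and the `harm_per_death` parameter are invented purely for illustration) of how an expected value estimate can flip once extra considerations about violence are added as inputs, while the framework itself stays the same.

```python
# Purely illustrative: all numbers are made up, and "harm_per_death" is a
# hypothetical weight standing in for trauma, displacement, and other
# downstream harms of violence. Nothing here reflects a real estimate.

def expected_harm(expected_deaths: float, harm_per_death: float = 1.0) -> float:
    """Toy expected-value estimate of a problem's total harm."""
    return expected_deaths * harm_per_death

# Naive estimate: every death counts the same.
malaria_naive = expected_harm(expected_deaths=1000)
violence_naive = expected_harm(expected_deaths=800)

# Revised estimate: violent deaths carry extra disvalue.
violence_weighted = expected_harm(expected_deaths=800, harm_per_death=1.5)

# With the naive weights, malaria looks worse (1000 vs 800); once the extra
# considerations are included, violence comes out ahead (1200). The inputs
# changed, not the expected-value framework.
print(malaria_naive, violence_naive, violence_weighted)  # 1000.0 800.0 1200.0
```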
As a moral anti-realist, I have to figure out how to act not by discovering rules of morality, but by deciding on what should be valued. If I wanted, I suppose I could just choose to go with whatever felt intuitively correct, but evolution is messy, and I trust a system of logic and consistency more than any intuitions that evolution has forced upon me. While I still use my intuitions because they make me feel good, when my intuitions clash with expected value estimates, I feel much more comfortable going with the EV estimates. I do not agree with everything individual EAs say, but I largely agree with the basic ideas behind EA arguments.
There are all sorts of moral anti-realists. Almost by definition, it's difficult to predict what any given moral anti-realist would value. I endorse moral anti-realism, and I just want to emphasize that EAs can become moral anti-realists without leaving the EA movement.
I really like this response -- thanks, Eric. I'd say the way I think about maximizing expected value is that it's the natural thing you'll end up doing if you're trying to produce a particular outcome, especially a large-scale one that doesn't hinge much on your own mental state and local environment.
Thinking in 'maximizing-ish ways' can be useful at times in lots of contexts, but it's especially likely to be helpful (or necessary) when you're trying to move the world's state in a big way; not so much when you're trying to raise a family or follow the rule...