I don't intend to convince you to leave EA, and I don't expect you to convince me to stay. But typical insider "steel-manned" arguments against EA lack imagination about other people's perspectives: for example, they assume that the audience is utilitarian. Outsider anti-EA arguments are often mean-spirited or misrepresent EA (though I think EAs still undervalue these perspectives). So I offer a perspective that is rarer: that of a former "insider" who had a change of heart about the principles of EA.
Like many EAs, I'm a moral anti-realist. This is why I find it frustrating that EAs act as if utilitarianism is self-evident and would be the natural conclusion of any rational person. (I used to be guilty of this.) My view is that morality is largely the product of the whims of history, culture, and psychology. Any attempt to systematize such complex belief systems will necessarily lead to unwanted conclusions. Given anti-realism, I don't know what compels me to "bite bullets" and accept these conclusions. Moral particularism is closest to my current beliefs.
Some specific issues with EA ethics:
- Absurd expected value calculations/Pascal's mugging
- Hypothetically causing harm to individuals for the good of the group. Some utilitarians have ways around this (e.g. the reputational cost of the harm would outweigh its benefits). But that reply only raises the possibility that in some cases the costs won't outweigh the benefits, and the theory will then compel us to harm individuals.
- Undervaluing violence. Many EAs glibly act as if a death from civil war or genocide is no different from a death from malaria. Yet this contradicts deeply held intuitions about the costs of violence. For example, many people would agree that a parent breaking a child's arm through abuse is far worse than a child breaking her arm by falling out of a tree. You could frame this as a moral claim that violence holds a special horror, or as an empirical claim that violence causes psychological trauma and other harms, which must be accounted for in a utilitarian framework. The unique costs of violence are also apparent in the extreme actions people take to avoid it. Large migrations of people are most strongly associated with war; economic downturns increase migration to a lesser degree, and disease outbreaks to a far lesser degree. These revealed priorities don't line up with how EAs rank these problems.
Once I rejected utilitarianism, much of the rest of EA fell apart for me:
- Valuing existential risk and high-risk, high-reward careers relies on expected value calculations
- Prioritizing animals (particularly invertebrates) relied on total-view utilitarianism (for me). I value animals (particularly non-mammals) very little compared to humans and find the evidence for animal charities very weak, so the only convincing argument for prioritizing farmed animals was their large numbers. (I still endorse veganism, I just don't donate to animal charities.)
- GiveWell's recommendations are overly focused on disease-associated mortality and short-term economic indicators, from my perspective. They fail to address violence and exploitation, which are major causes of poverty in the developing world. (Incidentally, I also think that they undervalue how much reproductive freedom benefits women.)
The remaining principles of EA, such as donating significant amounts of one's money and ensuring that a charity is effective in achieving its goals, weren't unique enough to convince me to stay in the community.
You can be a moral realist and be very skeptical of anyone who is confident in a moral system, and you can be an anti-realist and be very confident in a moral system. The metaethical question of realism can bear on the normative question of which moral theory to accept, but it doesn't directly tell us how confident to be.
Anti-realists reject the claim that any moral propositions are true, so they don't think there is a fact of the matter about what we morally ought to do. But this doesn't mean they believe that anyone's moral opinion is equally valid. The anti-realist can believe that our moral talk is not grounded in facts while also believing that we should follow a particular moral system.
Finally, it seems to me that utilitarians in EA have arguments for their view that are at least as well grounded as those of people with any other moral view in any other community, with the exception of actual moral philosophers. I, for instance, don't think utilitarianism is self-evident. But I think that debunking arguments against moral intuitions are very good; that subjective normativity is the closest pointer to objective normativity that we have; that this implies we ought to give equal respect to the subjective normativity experienced by others; that the von Neumann-Morgenstern axioms of decision-making are valuable for a moral theory and point us towards maximizing expected value; and that there is no good reason to be risk averse. I think a lot of utilitarians in EA would say something vaguely like this, and the fact that they don't do so explicitly is no proof that they lack justification or respect for opposing views.
Empirically, yes, this happens to be the case, but realists don't disagree with that. (Science is also the product of the whims of history, culture, and psychology.) They disagree over whether these products of history, culture, and psychology can be justified as true or not.
Plenty of realists have held this view. And you could be an anti-realist who believes in systematizing complex belief systems as well - it's not clear to me why you can't be both.
So I'm just not sure that your reasoning for your normative views is valid, because you're talking as if they follow from your metaethical assumptions when they really don't (at least not without some further details or assumptions).
Note that many people in EA take moral uncertainty seriously, something which is rarely done anywhere else.
Pascal's Mugging is a thought experiment involving an exceptionally low-probability, arbitrarily high-stakes event. It poses a noteworthy counterargument to the standard framework of universally maximizing expected value, and is therefore philosophically interesting. That is, after all, why Nick Bostrom and Eliezer Yudkowsky, two famous effective altruists, developed the idea. But to say that it poses an argument against other expected value calculations is a bit of a non sequitur - there is no clear reason we can't say that in Pascal's Mugging we shouldn't maximize expected value, but in existential risk, where the situation is not so improbable and counterintuitive, we should. The whole point of Pascal's Mugging is to show that some cases of maximizing expected value are obviously problematic, but I don't see what this says about all the cases where maximizing expected value is not obviously problematic. If there were a single parsimonious decision theory that was intuitive and worked well in all cases, including Pascal's Mugging, you might abandon maximizing expected value in favor of it, but there is no such theory.
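To make the contrast concrete, here is a toy numerical sketch (in Python, with entirely made-up probabilities and payoffs, not anyone's actual estimates): naive expected value maximization endorses the mugger's wager even though its probability is pulled out of thin air, whereas an ordinary intervention's expected value rests on grounded estimates - which is exactly the distinction that lets you reject the former without rejecting the latter.

```python
# Toy illustration of Pascal's Mugging vs. ordinary expected-value reasoning.
# All numbers are made up for illustration only.

def expected_value(prob: float, payoff: float) -> float:
    """Expected payoff of a gamble that pays `payoff` with probability `prob`."""
    return prob * payoff

# Ordinary case: a well-evidenced intervention with a bounded payoff
# (say, roughly 1 life saved per 500 bednets -- illustrative figure).
bednet = expected_value(prob=0.002, payoff=1.0)

# Pascal's Mugging: a mugger promises 10^20 happy lives for $5, and you
# assign some tiny-but-nonzero credence to the promise being kept.
mugging = expected_value(prob=1e-15, payoff=1e20)

print(f"bednet EV:  {bednet}")    # 0.002
print(f"mugging EV: {mugging}")   # 100000.0 -- dwarfs the bednet on naive EV

# Naive EV maximization says "pay the mugger". One can treat this degenerate
# case (unbounded payoff, probability not grounded in any evidence) differently
# from cases where the probabilities come from actual evidence.
```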
There are actual reasons people like the framework of maximizing expected value, such as the fact that it isn't vulnerable to Dutch books and doesn't lead to intransitive preferences. In Pascal's Mugging, maybe we can accept losing these properties, because it's such a problematic case. But in other scenarios we will want to preserve them.
It's also worth noting that many of those working on existential risk don't rely on formal mathematical calculations at all, or believe that their cause is very high in probability anyway, as people at MIRI for instance have made clear.
But you are wrong about that. Valuing animal interests comparably to humans' is not a uniquely utilitarian principle. Numerous non-utilitarian arguments for it have been advanced by philosophers such as Regan, Norcross, and Korsgaard, and they have been very well received in philosophy. In fact, they are so well received that there is hardly a credible position that involves rejecting all of them.
You might think that lots of animals just don't experience suffering, but so many EAs agree with this that I'm a little puzzled as to what the problem is. Sure, there are far more people who take invertebrate suffering seriously in a group of EAs than in a group of other people. But there are so many who don't think invertebrates are sentient that, to be quite honest, this looks less like "I'm surrounded by people I disagree with" and more like "I'm not comfortable in the presence of people I disagree with."
Also, just to be clear, although you never stated it explicitly: the idea that we should make serious sacrifices for others according to a framework of maximizing expected value does not imply utilitarianism. Choosing to maximize expected value is a question of decision theory on which many moral theories don't take a clear side, while the obligation to make significant sacrifices for the developing world has been advanced by non-utilitarian arguments from Cohen, Singer, Pogge, and others. These arguments, too, are considered compelling enough that there is hardly a credible position that involves rejecting all of them.
Maybe I should have said "I'd prefer if you didn't try to convince me to stay". Moral philosophy isn't a huge interest of mine anymore, and I don't really feel like justifying myself on this. I am giving an account of something that happened to me, not making an argument for what you should believe. I was very careful to say "in my view" for non-trivial claims. I explicitly said "Prioritizing animals (particularly invertebrates) relied on total-view utilitarianism (for me)." So I'm not interested in hearing why prioritizing animals does not necessarily rely on total-view utilitarianism.