I don't intend to convince you to leave EA, and I don't expect you to convince me to stay. But typical insider "steel-manned" arguments against EA lack imagination about other people's perspectives: for example, they assume that the audience is utilitarian. Outsider anti-EA arguments are often mean-spirited or misrepresent EA (though I think EAs still under-value these perspectives). So I provide a unique perspective: a former "insider" who had a change of heart about the principles of EA.
Like many EAs, I'm a moral anti-realist. This is why I find it frustrating that EAs act as if utilitarianism is self-evident and would be the natural conclusion of any rational person. (I used to be guilty of this.) My view is that morality is largely the product of the whims of history, culture, and psychology. Any attempt to systematize such complex belief systems will necessarily lead to unwanted conclusions. Given anti-realism, I don't know what compels me to "bite bullets" and accept these conclusions. Moral particularism is closest to my current beliefs.
Some specific issues with EA ethics:
- Absurd expected value calculations/Pascal's mugging
- Hypothetically causing harm to individuals for the good of the group. Some utilitarians come up with ways around this (e.g. the reputation cost would outweigh the benefits). But this raises the possibility that in some cases the costs won't outweigh the benefits, and we'll be compelled to do harm to individuals.
- Under-valuing violence. Many EAs glibly act as if a death from civil war or genocide is no different from a death from malaria. Yet this contradicts deeply held intuitions about the costs of violence. For example, many people would agree that a parent breaking a child's arm through abuse is far worse than a child breaking her arm by falling out of a tree. You could frame this as a moral claim that violence holds a special horror, or as an empirical claim that violence causes psychological trauma and other harms, which must be accounted for in a utilitarian framework. The unique costs of violence are also apparent through people's extreme actions to avoid violence. Large migrations of people are most associated with war. Economic downturns cause increases in migration to a lesser degree, and disease outbreaks to a far lesser degree. This prioritization doesn't line up with how bad EAs think these problems are.
Once I rejected utilitarianism, much of the rest of EA fell apart for me:
- Valuing existential risk and pursuing high-risk, high-reward careers both rely on expected value calculations
- Prioritizing animals (particularly invertebrates) relied, for me, on total-view utilitarianism. I value animals (particularly non-mammals) very little compared to humans and find the evidence for animal charities very weak, so the only convincing argument for prioritizing farmed animals was their large numbers. (I still endorse veganism; I just don't donate to animal charities.)
- GiveWell's recommendations are overly focused on disease-associated mortality and short-term economic indicators, from my perspective. They fail to address violence and exploitation, which are major causes of poverty in the developing world. (Incidentally, I also think that they undervalue how much reproductive freedom benefits women.)
The remaining principles of EA, such as donating significant amounts of one's money and ensuring that a charity is effective in achieving its goals, weren't unique enough to convince me to stay in the community.
I wanted to pose a question (that I found plausible), and now you’ve understood what I was asking, so my work here is pretty much done.
But I can also, for a moment longer, stay in my role and argue for the other side, because I think there are a few more good arguments to be made.
It’s true that I hadn’t considered the “online charisma” of the situation, but I don’t feel like Option B is what I’d like to argue for. Neither is Option A.
Option A looks really great until we consider the cost side of things. Several people with a comprehensive knowledge of economics, history, and politics investing hours of their time (per person leaving) on explaining things that must seem like complete basics to these experts? They could be using that time to push their own boundaries of knowledge or write a textbook or plan political activism or conduct prioritization research. And they will. Few people will have the patience to explain the same basics more than, say, five or ten times.
They’ll write FAQs, but then find that people are not satisfied when they pour out their most heartfelt irritation with the group only to be linked an FAQ entry that fits their case only so-so.
It’s really just the basic Eternal September Effect that I’m describing, part of what Durkheim described as anomie.
Option B doesn’t have much to do with anything. I’m hoping to lower the churn rate by helping people predict from the outset whether they’ll want to stick with EA long term. Whatever tone we’ll favor for forum discussions is orthogonal to that.
That’s also why a movement with a high churn rate like that would be doomed to having discussions only on a very shallow and, for many, tedious level.
Also what Fluttershy said. If you imagine me as some sort of ideologue with fixed or even just strong opinions, then I can assure you that neither is the case. My automatic reaction to your objections is, “Oh, I must’ve been wrong!” then “Well, good thing I didn’t state my opinion strongly. That’d be embarrassing,” and only after some deliberation will I remember that I had already considered many of these objections and gradually update back in the direction of my previous hypothesis. My opinions are unusually fluid.
Other social movements end up like feminism, with entrenched opposition and toxoplasma. Successful social movements don’t just happen by default, without anyone worrying about these sorts of dynamics, or at least I don’t think they do. That doesn’t mean that my stab at a partial solution goes in the correct direction, but it currently seems to me like an improvement.
Let’s exclude the last example or it’ll get recursive. How would you realize that? I’ve been a lurker in a very authoritarian forum for a while. They had some rules and the core users trusted the authorities to interpret them justly. Someone got banned every other week or so, but they were also somewhat secretive, never advertised the forum to more than one specific person at a time and only when they knew the person well enough to tell that they’d be a good fit for the forum. The core users all loved the forum as a place where they could safely express themselves.
I would’ve probably done great there, but the authoritarian thing scared me on a System 1 level. The latter (about careful advertisement) is roughly what I’m proposing here. (And if it turns out that we need more authoritarianism, then I’ll accept that too.)
The lying problem thing is a case in point. She didn’t identify with the movement, just picked out some quotes, invented a story around them, and later took most of it back. Why would she even write something about a community she doesn’t feel part of? If most of her friends had been into badminton and she hadn’t liked it, she wouldn’t have caused a stir in the badminton community by accusing it of having a lying or cheating problem or some such. She would’ve tried it for a few hours and then largely ignored it, not needing to make up any excuse for disliking it.
It’s in the nature of moral intuitions that we think everyone should share ours, and maybe there’ll come a time when we have the power to change values in all of society and have the knowledge to know in what direction to change them and by how much, but we’re only starting in that direction now. We can still easily wink out again if we don’t play nice with other moral systems or don’t try to be ignored by them.
What’s the formal definition of “compromise”? My intuitive one included Pareto improvements.
I counted this post as a loud, public bang.
I don’t think so, or at least not when it’s put into less extreme terms. I’d love to get input on this from an expert in social movements or organizational culture at companies.
Consultancy firms are known for their high churn rates, but that seems like an exception to me. Otherwise, high onboarding costs (which we definitely have in EA), a gradual lowering of standards, minimization of communication overhead, and surely many other factors drive a lot of companies toward hiring with high precision and low recall rather than the other way around, and then investing heavily in retaining the good employees they have. (Someone at Google, for example, said, “The number one thing was to have an incredibly high bar for talent and never compromise.” They don’t want to get lots of people in, get them up to speed, hope they’ll contribute something, and lose most of them again after a year. They would rather grow more slowly than get diluted like that.)
We probably can’t interview and reject people who are interested in EA, so the closest thing we can do is to help them decide as well as possible whether it’s really what they want to become part of long-term.
I don’t think this sort of thing, from Google or from EAs, would come off as pathetic.
But again, this is the sort of thing where I would love to ask an expert like Laszlo Bock for advice rather than trying to piece together some consistent narrative from a couple of books and interviews. I’m really a big fan of just asking experts.