I don't intend to convince you to leave EA, and I don't expect you to convince me to stay. But typical insider "steel-manned" arguments against EA lack imagination about other people's perspectives: for example, they assume that the audience is utilitarian. Outsider anti-EA arguments are often mean-spirited or misrepresent EA (though I think EAs still under-value these perspectives). So I provide a unique perspective: a former "insider" who had a change of heart about the principles of EA.
Like many EAs, I'm a moral anti-realist. This is why I find it frustrating that EAs act as if utilitarianism is self-evident and would be the natural conclusion of any rational person. (I used to be guilty of this.) My view is that morality is largely the product of the whims of history, culture, and psychology. Any attempt to systematize such complex belief systems will necessarily lead to unwanted conclusions. Given anti-realism, I don't know what compels me to "bite bullets" and accept these conclusions. Moral particularism is closest to my current beliefs.
Some specific issues with EA ethics:
- Absurd expected value calculations/Pascal's mugging
- Hypothetically causing harm to individuals for the good of the group. Some utilitarians come up with ways around this (e.g. the reputation cost would outweigh the benefits). But this raises the possibility that in some cases the costs won't outweigh the benefits, and we'll be compelled to do harm to individuals.
- Under-valuing violence. Many EAs glibly act as if a death from civil war or genocide is no different from a death from malaria. Yet this contradicts deeply held intuitions about the costs of violence. For example, many people would agree that a parent breaking a child's arm through abuse is far worse than a child breaking her arm by falling out of a tree. You could frame this as a moral claim that violence holds a special horror, or as an empirical claim that violence causes psychological trauma and other harms, which must be accountedted for in a utilitarian framework. The unique costs of violence are also apparent in the extreme actions people take to avoid it. Mass migrations are most strongly associated with war; economic downturns increase migration to a lesser degree, and disease outbreaks to a far lesser degree still. These revealed priorities don't line up with how EAs rank these problems.
Once I rejected utilitarianism, much of the rest of EA fell apart for me:
- Valuing existential risk and pursuing high-risk, high-reward careers both rely on expected value calculations
- Prioritizing animals (particularly invertebrates) relied on total-view utilitarianism (for me). I value animals (particularly non-mammals) very little compared to humans and find the evidence for animal charities very weak, so the only convincing argument for prioritizing farmed animals was their large numbers. (I still endorse veganism, I just don't donate to animal charities.)
- GiveWell's recommendations are overly focused on disease-associated mortality and short-term economic indicators, from my perspective. They fail to address violence and exploitation, which are major causes of poverty in the developing world. (Incidentally, I also think that they undervalue how much reproductive freedom benefits women.)
The remaining principles of EA, such as donating significant amounts of one's money and ensuring that a charity is effective in achieving its goals, weren't unique enough to convince me to stay in the community.
What I wrote in response to the OP took me maybe half an hour. If you want to save time, you can easily make quicker, smaller points, especially if you're a subject matter expert. The issue at stake is more about the type of attitude and response than the length. What you're worried about here applies equally to all forms of online discourse, unless you want people to ignore posts altogether.
The purpose is not to satisfy the person writing the OP. That person has already made up their mind, as we've observed in this thread. The purpose is to make observers and forum members realize that we know what we are talking about.
Okay, so what kinds of things are you thinking of? I'm kind of lost here. The Wikipedia article on EA, the books by MacAskill and Singer, and the EA Handbook all give a pretty good overview of what we do and stand for. You said that the one-sentence descriptions of EA aren't good enough, but no one-sentence description could be, and no one joins a social movement based on one anyway.
The addition of new members does not prevent old members from having high-quality discussions. It only increases the number of new-person discussions, which seems perfectly fine to me.
I'm not. But the methodology you're using here is suspect and prone to bias.
Or they end up successful and achieve major progress.
If you want to prevent opposition and toxoplasma, narrowing who is invited in accomplishes very little. The smaller your ideological circle, the finer the factions become.
No other social movement has done things like this, i.e. tried to save time and effort for outsiders interested in joining by deflecting their interest, at the expense of its own short-term goals. And no other social movement has had this level of obsessive theorizing about movement dynamics.
By calling out such behavior when I see it.
That sounds like a great way to ensure intellectual homogeneity as well as slow growth. The side of this that I ignored in my post above is that it's simply wrong to think that restricting your outward messaging won't produce false negatives among potential additions to the movement. So how many good donors and leaders would you be willing to lose for the ability to keep one insufficiently like-minded person from joining? Since most EAs don't leave, at least not in any bad way, the answer is going to be >1.
She's been with the rationalist community since its early days as a member of MetaMed, so maybe that has something to do with it.
Movements mostly get criticized by people at the opposite end of the spectrum who are completely uninvolved. Every political faction gets its worst criticism from ideological opponents. Rationalists and EAs get most of their criticism from ideological opponents. I just don't see much of this hypothesized twilight-zone criticism coming from nearly-aligned people, and when it does come it tends to be interesting and worth listening to. You only think of it as unduly significant because you are more exposed to it; you have no idea of the extent and audience of the much more negative pieces written by people outside the EA social circle.
I am not talking about not playing nice with other value systems. This is about whether to make conscious attempts to homogenize our community with a single value system and to prevent people with other value systems from making the supposed mistake of exploring our community. It's not cooperation, it's sacrificial, and it's not about moral systems, it's about people and their apparently precious time.
Stipulate any definition; the point will be the same: you should not be worried about EAs making too many moral trades, because they're going to be Pareto improvements.
Then you should be much less worried about loud public bangs and much more worried about getting people interested in effective altruism.
Companies incur enormous costs in training new talent, and opportunity costs if that talent needs to be replaced. Our onboarding costs are very low in comparison. Companies can also hire only a limited amount of talent, while a social movement can grow very quickly, so it makes sense for companies to be selective in ways that social movements shouldn't be. If a company could hire people for free, it would be much less selective. Finally, the example you selected (Google) is unusually selective even compared to other companies.
Lila has probably read those. I think Singer's book contained something to the effect that it probably isn't meant for anyone who wouldn't pull the drowning child out of the pond. MacAskill's book is more of a how-to; such a meta question would feel out of place there, but I'm not sure. It's been a while since I read it.
Texts that appeal to moral obligation (which I share) especially signal that the reader needs to find an...