Many of the core claims made by effective altruists are radical trivialisms. They imply that we ought to behave differently, but the premises on which they rest are quite elementary and obvious.
Consider the core idea: one should strive for effectiveness when doing good. You should aim to do more good rather than less when giving to charity and choosing a career. When people criticize EA, they will often say something like “of course, the true idea of effective altruism is trivial enough—no one disagrees that one should strive to do more good rather than less. But behind the scenes, EA really is [contentious bundle of philosophical assumptions].”
Oh really? Everyone believes that you should strive to do more good rather than less in career choice and charitable donations? Maybe everyone believes it in the sense that if you ask them “good—do you favor more or less?” they will say “more.” But virtually no one attempts to consciously optimize for a maximally high-impact career. Almost no one judiciously investigates the effectiveness of different charities, and almost no one is cause-neutral from the outset—interested simply in doing good as well as possible, whatever form that takes, even if it means funding unsexy charities like the Schistosomiasis Control Initiative.
I will believe your claim that the idea that we should give to maximally effective charities is trivial only if you’ve spent at least two seconds actually doing it—rather than donating to your local socialist magazine without a thought for effectiveness, or to whatever random charity sounded cool when a friend told you about it. But in a sense, it is trivial. It’s trivial in that, when you think about it, it seems obvious. Of course we should strive to do more good instead of less—of course there are strong moral reasons to take the best career you can rather than whichever one you feel like. If you can save more lives or fewer, obviously you should save more.
A lot of the core EA claims are this way—they are trivial when you think about them, but almost no one acts in accordance with them. When an obvious principle demands a behavioral change that you do not carry out, you are failing to live up to your principles. When sound arguments demand non-standard behavior, most people won’t follow the arguments.
Take one notable example: the imperative toward cost-effectiveness in global health. Nearly everyone has the intuition that if you’re choosing between saving lives and doing other things that do less good, you should save the lives. If you could either pull a child from a burning building or give soup to nine people, you should pull the child from the building.
Similarly, if you could save five lives or two, you should save five. If you combine that with the surprising fact that most ordinary people can save lives routinely, and yet instead spend their money doing less effective stuff, then it seems obvious that people should start giving to effective global charities instead of the random charities that tickle their fancy. It’s intuitively pretty clear that saving lives matters more than giving to the charities you like.
Longtermism is another good example of this dynamic. The core idea of Longtermism is very basic. The one-sentence argument for Longtermism is: the future could contain enormous numbers of people, meaning that, in expectation, almost everyone affected by our actions resides in the far future. Thus, given that people in the far future don’t matter much less than present people, we should aim to help them.
That sounds really obvious. If future people matter and almost everyone our actions affect is a future person, then we should look carefully into doing things that are good for future people. For every present person, there could be quadrillions of future people—intuitively, it seems like a single individual matters less than their quadrillion descendants. But in practice, almost no one pays much attention to how their actions affect the far future! Before the Longtermists, how many people spent any time thinking about how their actions would influence the very far future? Future thinking, before Longtermism, meant thinking about the next 200 years, not the next 2 billion.
It’s not that non-Longtermists explicitly think that people stop mattering 200 years from now (and if those people do matter, then nearly all that matters is what will happen after 200 years). It’s that they simply don’t give the matter much thought. The idea that we should change our behavior in major ways based on how it will affect far-future people is weird—but weird doesn’t mean wrong!
And sure, with Longtermism there are more complexities than with the core EA idea. To decide whether Longtermism is right, you have to settle slightly thorny questions, like whether we can do anything to reliably improve the far future. But it would be suspicious if, despite 99.9999% of the morally important impacts of our actions falling on the future, there were no way this should affect our behavior.
Someone once joked that effective altruism is the paramilitary wing of analytic philosophy (note to people reading this named Emile Torres: please do not take this out of context and make it sound like EAs endorse violence). What he meant was that philosophers spend time thinking about ideas and uncover lots of non-obvious things to be done. Effective altruism is about putting those weird ideas into practice when they are neglected because people don’t think about them, not because there are serious objections to them.
Animal welfare is another good example. Even if you’re not some tree-hugging hippie animal lover, you probably have the intuition that animals being in pain is a bad thing. If someone were torturing dogs or pigs in their basement, that would be really bad! If you can prevent lots of animals from being in pain at very small cost, then you should do that. As a matter of fact, you can—a single dollar can prevent more than 10,000 shrimp from experiencing intense pain at death, or can spare chickens around 10 years of cage confinement.
Thinking that this is a valuable thing to spend money on doesn’t require any eccentric views about animals. If there were 10 chickens who’d be trapped in a cage for a year unless you spent a dollar freeing them, that would seem like a good use of a dollar. If there were 10,000 shrimp who’d slowly suffocate unless you spent a dollar to stun them, that would also seem like a good use of a dollar. And if you value an hour of your life at $20, the chicken case is equivalent to spending 18 seconds freeing a chicken from a cage that would otherwise have trapped it for a year.
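To spell out that back-of-envelope figure, using only the numbers above (one dollar, a $20-per-hour value on your time, and 10 chickens):

$$\frac{\$1}{\$20/\text{hour}} = 3\ \text{minutes} = 180\ \text{seconds}, \qquad \frac{180\ \text{seconds}}{10\ \text{chickens}} = 18\ \text{seconds per chicken}.$$

Eighteen seconds of your time per chicken-year of cage confinement averted.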
When you combine that commonsensical normative premise with the surprising fact about our world that a dollar really can prevent chickens from spending years in a cage, that implies we should change our behavior. But this radical behavioral change comes about because of a surprising fact about our world, not a counterintuitive moral premise. The moral premise that it’s good to spend a dollar so that sentient beings aren’t trapped in a cage for years is super obvious!
When EA activities look weird, that’s typically downstream of surprising facts about the world rather than counterintuitive normative claims. It is a surprising fact about our world that you can help so many people and animals so cheaply. It’s similarly surprising that, in expectation, almost all of the welfare in human history lies in the distant future. I really like the way Richard Y Chappell put this point when responding to a commenter who claimed it was counterintuitive to think we should fund humane pesticides:
> Common intuition agrees: suffering is in itself bad to some extent, and hence worth reducing if you could do so without undue cost or unintended side-effects. (Who would deny this?)
>
> Common intuition agrees: scale matters. Two individuals suffering is twice as bad as just one, all else equal.
>
> Crazy empirical premise: we cause (possibly unnecessary?) suffering to quadrillions of possibly-sentient beings.
>
> Person who refuses to make logical inferences: “It’s *absurd* to think that the suffering of quadrillions of tiny beings could constitute a moral catastrophe! What are you, some kind of utilitarian?”
To prove EA rests on counterintuitive moral claims, you can’t just point to EA activities that sound weird. Weird doesn’t equal ethically counterintuitive. Norman Borlaug saved hundreds of millions of people by optimizing wheat production. That’s slightly odd behavior. But it wasn’t morally counterintuitive in any way—it was downstream of the surprising fact about our world that you can save hundreds of millions of people by optimizing wheat production. When the world is surprising, so too should our behavior be. I also like the way Ajeya Cotra put it:
> Many dudes come into EA because they’re into moneyball and then realize they can moneyball the good — but I was a little girl who liked to take care of babies, and I realized there were way too many babies who needed me, and I had to get smart about it if I wanted to save them all.
The core moral principle that one should do more good rather than less is intuitive. Often EAs recommend surprising behavior because in our world, the best ways to do good are slightly strange. But if you wouldn’t condemn Norman Borlaug for working on optimizing wheat production, you shouldn’t criticize EAs for helping shrimp or giving out vitamin A supplements. Sometimes doing good ends up being a bit weird.
