
TL;DR: We fail to win converts for three avoidable reasons: we're unpleasant to argue with (especially when we're right and we know it), we're overzealous about cause prioritization at the expense of people already committed to other paths, and we bring up weird-sounding ideas like shrimp welfare before newcomers have the framework to understand them. All three are unforced errors.

Perhaps I am preaching to the choir here, but the precepts of EA are fairly obvious. If you have enough material abundance to flourish (which you almost certainly do), then you should give your extra money and labor to people (morally relevant beings) who don't have enough. Moreover, it is better to help more people than fewer. And we know, with an extremely high degree of certainty, that some actions help more people than others. Ergo, you should do those actions rather than the less effective ones. It is just a series of extremely intuitive and/or factually irreproachable claims.

I have often wondered why I sometimes meet SO MUCH resistance to these ideas in the wider world. At parties, dinners, talking to my friends, coworkers, classmates, parents' friends, or strangers, someone will mention "David believes in effective altruism!" and I am immediately pounced on. I have noticed, in the ensuing debates, a worrying pattern: the more someone has been previously exposed to other EAs and EA ideas, the MORE hostile they are to the philosophy. If someone's first comment is "what is Effective Altruism?" I know the conversation will probably go well. People who have never heard of EA are often just as flummoxed as I am when they realize their friends don't agree. But if someone opens with "oh, so you think ChatGPT is going to become an immortal malevolent dictator?" or "oh, so you give all your money to stop shrimp farms?" then I know I am very unlikely to change any minds.

When I ask other EAs why this might be the case, the most common explanation is that people are just uncomfortable with the implications of EA for their own lives. If you take the idea seriously, then the average doctor / lawyer / consultant's life immediately appears fairly immoral. The people at these parties (who disproportionately belong to the three aforementioned professions) are thus highly motivated to avoid believing it. In the words of Upton Sinclair: "It is difficult to get a man to understand something, when his salary depends on his not understanding it."

I think this analysis is often basically correct. But I would add three more fairly basic points. First, EAs are usually pretty unpleasant to argue with, for a variety of reasons you can probably already guess, though some are not obvious until someone points them out to you. Second, relentless cause prioritization (which is very important, and which I am not opposed to!) can sometimes go too far, and can alienate people passionate about something that's not at the tippy top of the Super Important list. Third, some of EA's less intuitive positions are at too great an inferential distance from the wider public, and they just make us look like a cuckoo-bananas cult if people come across them before they have the philosophical framework to understand them.

Let's take these seriatim:

EAs are unpleasant to argue with

This should come as no surprise. The proportion of impatient, blunt, contrarian edgelords with gifted child syndrome and a strong dose of Moral Righteousness in EA (myself included) is approximately two standard deviations above average. This is not a novel observation. It has been said before. Many times. I have met many people who insist on arguing against EA for half an hour before revealing that they really just thought the EAs at their undergraduate university were all churlish boors, and that's why they don't like the idea. I am sometimes tempted to point out that this is hardly an epistemically responsible way to form opinions, but then I realize they would probably think I was being a churlish boor myself if I said that. So I bite my lip, apologize on behalf of the Platonic Concept of Altruism, and try to prove that not all EAs are like that.

Besides the larger, ineffable currents of culture that make EAs disproportionately possess a certain constellation of character flaws, I think there is another reason we are unpleasant to argue with: we are right. And we know it.

Many ethical questions, especially ones that are fundamentally axiological, are genuinely debatable. They are not really a question of fact. It's very easy for me to have a nuanced discussion with someone who believes that X Very Important Thing is a priority over incommensurate Y Very Important Thing, when the difference between us is essentially an irresolvable gestalt about what constitutes True Human Flourishing. Reasonable people can disagree.

The problem with many of the naive objections to EA is that they are not matters of opinion. A favorite objection of the uninitiated is "but how can you KNOW that doing X is better than doing Y? I don't think it's possible to know, so you can choose arbitrarily between X and Y." It is not a matter of opinion, for example, that giving $50,000 to train a single seeing eye dog is less effective at combating blindness than curing literally 1,000 people of cataracts. To question the studies that support this as a matter of fact is to question the entire edifice of science.
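
To make the comparison concrete, here is the back-of-the-envelope arithmetic implied by those figures (the roughly $50-per-surgery cost is my illustrative assumption, backed out from the numbers above rather than any particular program's price list):

$$\underbrace{\$50{,}000}_{\text{one guide dog}} \;\div\; \underbrace{\$50}_{\text{per cataract surgery (assumed)}} \;\approx\; 1{,}000 \text{ surgeries}$$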

The issue with these EA objectors is that they are under the misapprehension that they are sharing an opinion. They (correctly) believe that there is often no "objective" foundation in a discussion of ethics. But they do not recognize that their claim ("how can you KNOW that doing X is better?") has strayed out of ethics and into epistemology, where there ARE objectively correct answers. We are no longer discussing opinions, we are discussing facts, but our interlocutor hasn't realized it.

The trouble is that the objector can TELL, because people are good at sensing this kind of thing, that the EA talking to them simply believes they are wrong. From the objector's perspective, the EA is not respecting their valid, debatable opinion. Instead of engaging with them as an equal, the EA is condescendingly explaining to them why they're wrong.

I wish I could say I had discovered an answer to this issue. I have met with some limited success by eliminating the more obvious blunders from my strategy (e.g., avoiding phrases like "that's not correct" like the plague). I know intellectually that once people feel like you've listened to them, heard them, and respected them, they are much more likely to change their minds. In practice, it can be hard to do this the 50th time you've heard someone call strategic allocation questions a "false dichotomy." If I ever figure out a strategy that I find works, I'll write another post about it.

For now, I think it's important to recognize this Opinion / Fact issue as a major failure mode when convincing a new person of EA. If you notice that you believe the discussion is about facts, but your interlocutor thinks it's about opinions, tread very carefully.

Fanatical cause prioritization can be counterproductive

I was at EAGxAmsterdam recently when I talked to two young EAs from a university group. They were both training to become medical doctors. As I'm sure most of you know, medical doctors are a famous example of low counterfactual impact: if you weren't doing the work, someone functionally identical would be, and the bottleneck for new doctors has nothing to do with how many people apply to medical school. If you want the world to be different because you lived in it, becoming a doctor is not the best strategy. This is true.

However, these students were already in their second year of medical school. If you were going to tell them not to become doctors, the ideal time would have been three years ago. Yet they described to me how they were considering leaving their university EA group because every time they asked for advice, someone told them to drop out of school and study something else.

Cause prioritization is important. But it needs to come with a moderating dose of realism. It is EXCEEDINGLY unlikely that you are going to convince someone in their 2nd year of medical school to drop out and start a career in AI alignment or whatever. You need to work with the constraints of the person in front of you. Tell them how they can most effectively leverage their medical degree to make a difference. THAT is the true EA way: making the largest difference given the constraints of the real world.

There is a similar issue with the community's current obsession with existential risk. Even granting, arguendo, that existential risk is truly the ONLY thing EAs should be focused on in a perfectly optimized, cause-prioritized world, if the only thing you ever talk about is existential risk, then everyone working on other problems will find absolutely no guidance or succor in the EA community. That means everyone from cancer researchers to biodiversity conservationists to people working on global poverty will have less opportunity to be effective.

Take biodiversity conservation, my own field. Some (not all! but definitely some) of the conservationists I know are convinced they would simply never be happy if they weren't working with animals. I truly believe functionally nothing could ever change their minds. I think most EAs, even those who think existential risk is by far the most important cause area, would prefer these Guaranteed Conservationists be doing their work effectively, as opposed to wasting huge amounts of money on scope-insensitive projects.

Unfortunately, that is what's currently happening in large parts of conservation. I sat through a talk at Oxford recently where someone talked about their research using 100,000 GBP to dig ditches in a field in an extremely nature-poor area of England, all to create better drainage conditions for a particular kind of non-endangered wildflower. That amount of money could have bought a hundred acres of highly-endangered forest in West Africa outright, and hired a team of guards to protect it for years. There is SO MUCH low-hanging fruit in conservation for an EA revolution. But when I point conservationists towards EA orgs, even those who would never, ever, ever change professions, they are often told to quit their jobs and go get a degree in AI safety, or else find a way to study zoonotic diseases and work on biosecurity.

Instead of being converted to EA, these conservationists are turned off from the entire project, and never have the opportunity to learn why that 100,000 GBP would be better spent in West Africa.

Cause prioritization is important, but EAs need to do a better job of realizing that it is not practical to only offer 4-5 options for aspiring effective altruists. We need to figure out a way to welcome the conservationists, medical doctors, etc. The alternative is those fields never becoming enlightened in the Ways of EA, which I think we can all agree is a bad outcome, against our interests and the interests of the world.

Don't forget about inferential distance!

I was at a dinner last month when someone mentioned my affiliation with Effective Altruism. I was in the midst of explaining the basic premise ("a drowning child or a pair of shoes?") when someone brought up shrimp welfare. "Effective Altruists care more about shrimp than anything else," was the accusation. The person I was explaining EA to looked at me, amused and a little worried. "Is this true?"

How to explain? If you answer with something that sounds technical and evasive like, say, a tripartite definition of "priority" ("important, neglected, tractable!"), you're going to lose many people. And someone who hasn't been primed on why animal welfare is important, what exactly makes something a morally relevant being, scope insensitivity, the physiology of shrimp, the nature of consciousness, etc., is going to be very unlikely to take seriously the argument that the suffering of a zillion krill is a higher priority than, say, the homeless population of Cambridge.

The problem is that there are too many steps between what the potential convert already takes for granted and some of EA's more inaccessible initiatives, like shrimp welfare. There is simply no way you can take someone from 0 to 100 in the course of a single dinner.

The first and most obvious failure mode that arises here is EAs bringing up things like shrimp welfare or paperclip maximizers to the uninitiated entirely unprompted. This seems to me like a massive unforced error. Sometimes, I think certain EAs like to revel in the apparent strangeness of their beliefs. They enjoy the fact that they know they have strong arguments, but their interlocutors would struggle to comprehend them. It makes them feel smart and knowledgeable by comparison. Other times, I think the EAs are genuinely passionate about what they've chosen to dedicate their lives / careers / next 5 years to, and forget all the steps you need to take before you can understand why what they're doing isn't as absolutely batshit as it sounds.

The second failure mode is when someone is exposed to the stranger corners of EA before they're ready to understand them, whether by the media or a hostile actor. This is the position I usually find myself in, trying to start with the basics and ending up having to defend increasingly esoteric-sounding philosophical positions to avoid lying or saying things I disagree with. It has cost me many a potential ally.

The best strategy that I have thus far found is a "not all EAs" type of approach. E.g., "Not all EAs agree about the shrimp thing," and then trying to steer the conversation as quickly as possible back towards safer, intuitive examples, like malaria nets, lead poisoning, factory farms, biosecurity, etc.

The second failure mode is hard to avoid. You can't control what someone reads about EA before you meet them, obviously, and if someone is really, really pissed about FTX and is now biased against EA forever, there's really not much you can do about it. But the FIRST failure mode is extremely easy to avoid. If a position sounds ridiculous to someone who doesn't already possess a large set of unconventional axioms, DON'T BRING IT UP! Focus on getting someone to understand why they should give to charity first, why they should give to effective charities second, and why that effective charity might be the Shrimp Welfare Project 13,566th, if that's what you believe.

Who cares?

To end with the obvious, it is extremely important to convince other people of EA. Growing the community is probably one of the most effective things you can do. Even if you just plant the seed in someone's mind, that action might have the highest ROI of Net Good of anything you do in your life. The potential upside is enormous. So it is extremely important that every time you shoot your shot to create a new EA, you avoid all the obvious failure modes. Don't be a churlish boor; be realistic about people's passions and don't alienate them by insisting everything they do is less important than your preferred intervention; and don't bring up esoteric philosophical stuff before people are ready, even if you're right and it's important!
