If you don't have good evidence one thing is better than another, don't pretend you do

by Robert_Wiblin, 21st Dec 2015



One of the most common criticisms of people involved in EA, from people who are not, is that we come across as arrogant and overconfident about our ideas even when we know very few concrete facts about the alternatives.

I think that is a fair criticism.

One of the most common ways this manifests is that we act as if we are confident that the projects we support are better than alternatives that are radically different.

How does the Against Malaria Foundation compare to lobbying for more effective US aid, developing better curricula for US students, basic cancer research, let alone publishing papers about global catastrophic risks?

I don't know with much confidence, and I've been hearing people try to answer these questions for years. A lot depends on specific details I can't currently keep up with.

As an example, if someone says they know about, and are giving to, one of the other causes on that list, some humility is called for. Here are some reasonable things you could say:

  • I suspect lobbying for legislation in Congress isn't likely enough to work at the moment, but I'm happy to be convinced otherwise.
  • I suspect research into catastrophic risks is about as likely to have a positive as negative effect, so have not been convinced to support it yet.
  • I don't know anything about US education policy, so wouldn't feel comfortable giving to that currently. I also suspect focussing on very poor countries offers better leverage, as there are a lot of players in the US education space.
  • I'd rather give to an intervention I understand well, and GiveWell's recommendations allow me to do that. Maybe you should read GiveWell's research and see how persuasive you find it compared to what you already know?
In addition to being more polite, this is also likely to lead you to learn more, and is more intellectually honest than blustering through pretending to have answers we don't yet have.

I'm really glad you're writing about this. I think this is an important criticism of the way the EA movement and a lot of individuals within it (myself very much included) often come off. I think I'd suggest a different focus for what to say in these situations (although it's compatible with many of your suggestions).

In particular, most of these suggestions seem fairly focused on stating/advocating for your own position while accurately expressing your uncertainty. Instead, when you don't have good evidence that your view is correct, I think the most important thing to do is to focus on asking questions. I think this is the most important bit of humility. It's also likely to lead to more learning. And, if you really don't know much about their alternative (or have good evidence that yours is better) you're not likely to convince anybody anyway.

I think a common mistake I make is to express humility by continuing to advocate for my position while making my uncertainty more explicit.(1) People often don't read this as humility, though, because I'm not acting as if I believe they might have evidence that they're right and I don't sound curious about their alternative and about whether I'm wrong.

When I'm emotionally invested in a topic (particularly value-laden issues like EA) I often struggle to remember to be in learning-mode instead of persuasion-mode -- even when I'm genuinely curious about what the other person has to say. FWIW, one strategy I've personally used in this situation is to try to mentally keep track of the amount of time I'm spending explaining my position versus listening to theirs. If I don't have good evidence about their position, hearing their take is usually more interesting (even if my system 1 sometimes forgets this).

A caveat

One failure mode with my approach, though, is that there can be a fine line between trying to learn about someone's position (which comes off as humble) and interrogating them (which does not). When I'm trying to make sure I'm not interrogating someone, the question I usually ask myself is:

  • "Did I ask this question because I think they will have a good answer or because I think they will not have a good answer?"

(1) I had to learn the hard way that, at least for me, this doesn't actually come off as less confident. Instead it comes off as more confident AND better calibrated. Which is an improvement, but doesn't lend itself to coming off as humble or to making others feel comfortable expressing disagreement.

Strongly agree!

In addition to learning and humility, even if you just want to persuade someone of something, it's best to start off by understanding their current position.

HowieL - I cannot agree more with this - we need to figure out how to make humility more of a core tenet, even at the cost of "personal efficiency" in communication.

I recall an instance where I saw a new visitor to an EA meetup being questioned on their choices before being asked to explain the thought process behind those choices. I think in being hyper-rational, we have to acknowledge and appreciate other modes of thinking and decision-making, as well as how they might come about.

I'll try to pen a few more of these thoughts in the forum in the coming week.

Thanks for posting this. It is very easy to come off as arrogant when promoting Effective Altruism, but this article helps by providing specific examples of what you can reasonably say.

I realize this is a higher-level discussion, but I am curious: by research into catastrophic risks do you mean AI specifically? Because I would be disheartened if you suspected that research on asteroid deflection, probabilities of high-energy physics catastrophes, how to prevent global totalitarianism, how to prevent nuclear conflict, how to reduce nuclear stockpiles, how to ramp up conventional or alternative food supplies in a catastrophe, how to make global cooperation in a catastrophe more likely, prioritization within GCR, etc. is about as likely to have a positive as a negative effect.

I don't believe any of those things, but it's most plausible with AI and war prevention.

I agree with this kind of humility wholeheartedly. Although I think part of the problem is inseparable from what has to be called the righteous belief of most effective altruists that they are not propounding one way of doing good, but the single best way - the one at which any rational reflection must arrive. Of course, they might disagree about which particular intervention has the greatest impact, but that disagreement occurs within the agreed framework of effective altruism.