A Note on Framing Criticisms of Effective Altruism

by zdgroff · 24th Jul 2015 · 7 comments


Cross-posted on zachgroff.com

This has been said, and better, by Rob Wiblin and others, but I'm restating it here to draw attention to its implications. Many criticisms of effective altruism (even the excellent ones in this Boston Review forum) go like this:

1) X is important (or Y is unimportant).
2) Effective altruism ignores X (or gives undue weight to Y).
3) Therefore, effective altruism is flawed.

If the goal is to criticize the fundamental idea of EA, then I think these criticisms miss the mark. If (1) is really true, then all that implies is that EA should focus on X. If the goal is to criticize the EA community, then these criticisms are more on point, though there's a question of whether a community is defined by what its members do or by the fundamental ideas that unite them.

But similar points have been made by others, so I'd like to focus on two reasons why the framing of these criticisms matters.

1) If they consciously adopted the role of internal critics to EA, those in favor of institutional reform, for instance, could be part of a serious debate within EA over whether focusing on policy advocacy is effective. This would not just be a more precise statement of the criticisms - it could improve EA more than an external criticism. More importantly, it could fill a gap in research on the effectiveness of policy advocacy, something that would be edifying not just for EAs but for anyone who cares about evidence.

2) Framing the debate in this way could lead those who favor a systemic approach to realize their significant ability to impact the world. People who call for focusing on institutions often, it seems, take it as an excuse not to do much to address global problems. If it turns out that we can do more by focusing more directly on institutions than we can by donating to charity, this should call for those who care about policy to follow the EA movement in making a serious personal commitment to change the world.
Comments

I really like this. What I would like even more is for critics to suggest a concrete organisation that they think we should donate to instead of the current top charities.

Or concrete steps that we should take. I think one thing a lot of these critics believe is that we shouldn't be donating - we should be doing more direct work. We need to hear what exactly that work is.

I don't get that criticism. I can always donate to help you do direct work. I don't see any way to criticize donating per se other than through non-consequentialist reasoning.

Edit: Unless they're criticizing the ratio of direct work to donations.

Unless they're criticizing the ratio of direct work to donations.

I think that's usually what it is. But there's usually no explicit argument for it.

And often (with many honorable exceptions) there's no explanation of why the critics aren't doing that work themselves...

Thank you for posting this! I think these are really great counterarguments, as well as a succinct description of many criticisms of EA. As we rapidly gain press, we are also gaining critiques, and almost all of the ones I've seen follow exactly this rationale.

What I keep waiting for someone to say, but haven't seen quite yet, is the response 'That's OK. You don't have to work on X to still identify as an Effective Altruist.' For example, I know quite a few people in EA who care deeply about existential risk but aren't particularly moved by the global poor. I know people who have said they like the 'effective' more than the 'altruist' because they really, really like optimizing things. I myself am not motivated by AI risk at all - I simply don't find it interesting or engaging, and I'm not entirely convinced it's a good way to spend my energy - but I still have great respect for those who do, and I still strongly identify as an EA.

I wonder if this desire for all-or-nothing acceptance of base principles may arise because many people within EA strive to wipe out cognitive dissonance, which my argument sort of feels like. However, I worry that in our avoidance of cognitive dissonance we fall into the trap of dualistic thinking. I found myself wandering back to Effective Altruism is a Question, whose last paragraph is the most pertinent:

I can imagine a hypothetical future in which I don’t agree with the set of people that identify with the 'EA movement'. But I can’t imagine a future where I’m not trying to figure out how to answer the question 'How can I do the most good?'

In other words, we as a community should be more open to the idea that not everyone has to buy into every idea or tenet within EA. We do all have to agree that we are trying to do the most good. Indeed, it is the continual debate about how to go about doing the most good that ultimately makes us most effective.

That's why I love your response, which I would summarize as 'It is good that EA is flawed, because we have things to strive for - come help us make it better!'

The biggest gap I see is that people claim "X is more important" as a general cause, but they don't demonstrate that a given amount of dollars, labor, or activism will go further for X than for other causes. Even if something is important, that alone isn't sufficient to show it's the most important cause to work on.