In reaction to a recent article that straw-manned effective altruism, seemingly without intent, I decided to write an article on some common misconceptions about effective altruism, and Tom Ash started two corresponding wiki pages, one on “Common objections to effective altruism” and one on “Common objections to earning to give.” I’ve copied this article into the first. If you have anything to add or correct, you’re invited to contribute it there, so that well-meaning journalists can more easily avoid such errors.

Effective altruism has seen much welcome criticism that has helped it refine its strategies for determining how to reach its goal of doing the most good—but it has also seen some criticism that is fallacious. In the following, we want to correct some of the misconceptions that we’ve become aware of so far.

If Everyone Did That (“Kantian fallacy”)

Misconception

  1. “If we all followed such a ridiculous approach” as effective altruism, then all worthwhile causes outside “global health and nutrition” would cease to receive funding.1

Short Answers

  1. Top effective charities have limited room for more funding. At some point they’ll have absorbed so much money that additional donations will do much less good, so other charities will become top charities.

  2. If just 0.03% of the yearly donations in the US alone were shifted to the top effective charities we know of, those charities would have no room for more funding left (see the back-of-envelope sketch after this list).

  3. Once that point is reached, doing the most good will get slightly more expensive or slightly more risky, and all the previously less effective or less proven interventions will successively receive their funding.

  4. But yes, “funding for the arts” would have to wait until deadly diseases and extreme poverty are sufficiently under control.
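
To make the 0.03% figure concrete, here is a minimal back-of-envelope sketch in Python. Both inputs are illustrative assumptions rather than figures sourced from this article: roughly $335 billion in yearly US charitable giving (about the Giving USA estimate for 2013) and on the order of $100 million in unfilled room for more funding across the known top charities.

```python
# Back-of-envelope check of the 0.03% claim.
# Both figures are illustrative assumptions, not sourced from the article.
us_annual_donations = 335e9    # ~Giving USA estimate for 2013, in USD
room_for_more_funding = 100e6  # assumed unfilled room across top charities, USD

share = room_for_more_funding / us_annual_donations
print(f"{share:.2%} of yearly US donations")  # -> 0.03% of yearly US donations
```

On these assumptions, shifting about three hundredths of a percent of US giving would already saturate the known top charities, which is the point of the answer above.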

Long Answers

In evaluating interventions, charity prioritization typically relies on the criteria of tractability, scalability, and neglectedness. The last two of these turn charity prioritization into an anti-inductive system, where some arguments along the lines of the categorical imperative become inapplicable: You recommend an underfunded intervention, then people follow your recommendation and donate to it, then it reaches its limits in scale, and finally you have to withdraw your recommendation as it is no longer underfunded.

Imagine you are organizing a large two-day effective altruism convention. There are several hotels close to the venue, one of which is very well known and soon fully booked. Panicked attendees keep asking you what they should do, so you call the other hotels in the vicinity. It turns out there is one that is even closer to the venue with scores of rooms left. So you send out a tweet and recommend that people check in there. They do, and promptly that hotel is also fully booked, and you have to do another round of telephone calls and update your recommendation. But it is fallacious to argue that your first recommendation, or the very act of making it, was wrong to begin with just because if everyone follows it, it’s no longer valid. That’s the nature of the beast.

Because the “if everyone did that” argument is so common, let’s give it a name: “Kantian fallacy.” The “buy low, sell high” rule of the stock market is not wrong just because if everyone bought low, the price would not be low, and if everyone sold high, the price would not be high. Advising against the overused typeface Papyrus is not wrong just because if no one used it, it would no longer be overused. Surprising someone with a creative present is not wrong just because if everyone gave the same present, it would not be surprising.

The last analogy is less fitting than the previous ones, because we don’t actually want good interventions to be underfunded. When all the governments and foundations see how great deworming is and allocate so much funding to it that hardly a worm survives, then any recommendation for more donations for deworming has to be withdrawn in favor of more neglected interventions—but that’s a reason to party!

So what would recommendations look like when malaria, schistosomiasis, and poverty are eradicated or eliminated to the extent that other interventions become superior? Or what would happen when other bottlenecks thwart further effective donations in these areas?

That future is already sending its sunny rays into the past: foundations like Good Ventures and the Gates Foundation have much more funding available than the currently known top charities could absorb. What happens is that doing good either becomes more expensive (when you trade off cost-effectiveness for scalability) or more risky (when you trade off certainty for other qualities). The latter is the more interesting and encouraging scenario. More on that in the next section.

Exclusive Focus on Interventions That Are Easy to Study

Misconception

  1. “I mentioned bed nets because that is mostly what Effective Altruism amounts to: It measures what is immediately and visibly measurable, a problem known as the Streetlight Effect.”2

  2. “GiveWell has a particular fixation with global health and nutrition charities. It at least implicitly recommends that one should support charities only in those cause areas.”1

Short Answers

  1. Empirically, more effective altruists are eager to invest in highly speculative, high risk–high return interventions than prefer the safe investment of proven and repeatable medical interventions.

  2. This eagerness to accept financial and personal risks to effect superior positive impact has led to a flourishing culture of metacharities, prioritization research, medical research, political research and advocacy, research on global catastrophic risk, and much more within the movement.

  3. GiveWell in particular has long been eager to expand to less easy-to-study interventions, so that it has been investigating (under the name Open Philanthropy Project) a wide range of global catastrophic risks, political causes, and opportunities for research funding since before effective altruism had a name.

Long Answers

Where the last answer required coining a new name for a fancy fallacy, the fallacy here already has a name: “straw man.”

As a pioneer in the field of charity prioritization, GiveWell had a herculean task ahead of it and very limited resources in terms of money, time, and research analysts. (This was years before effective altruism had consolidated into a movement.) Since funneling more donations to above-average charities is already better than the status quo, the team quickly learned that as a starting point they had to focus narrowly on cause areas marked by extreme suffering, interventions with solid track records of cost-effective and scalable implementation, and charities with the transparency and commitment to self-evaluation that would enable GiveWell to assess them. As it happens, these combinations were mostly found in the cause areas of disease and poverty. This decision was one of necessity at the time, but soon after, GiveWell managed to scale up its operations significantly, so that these restrictions no longer applied.

Some of the best and most cost-effective giving opportunities may well lie in areas or involve interventions that are harder to study. Hence GiveWell has been investigating these under the brand of the Open Philanthropy Project (initially known as “GiveWell Labs”) since 2011. (That’s still before effective altruism had a name or called itself a movement.) Much scientific research promises great cost-effectiveness; so do some interventions to avert global catastrophic risks and to solve political problems. Doing the most good may well mean investing somewhere in one of these areas—where exactly, Open Phil has set out to find out.

In 2012 the effective altruism movement got its name and consequently consolidated its efforts at doing the most good. Nowhere in the agenda of the movement did it say that effective interventions needed to be easy to study or quantify. In fact, opinions and preferences on what “the most good” means in practice vary (yet some are shared by almost everyone). According to a 2014 survey, about 71% of the EAs in the survey sample were interested in supporting poverty-related causes, but almost 76% were interested in metacharity including rationality and prioritization research, which are not easy to quantify. About one fourth to one third were interested in each of antispeciesism, environmentalism, global catastrophic risk, political causes, and (nonexistential) far future concerns, most of which are hard to study. There is by no means an unwarranted bias toward interventions that are easy to study; if anything, there’s a surprising tendency toward speculative, high risk–high return interventions.
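
The percentages above can be recomputed from absolute counts, as a minimal sketch. The sample size of 813 and the metacharity count of 616 come from the comment thread below; the poverty count of roughly 577 is an assumption back-derived from the reported 71%.

```python
# Recomputing the survey shares from absolute counts.
# sample_size and metacharity come from the comment thread below;
# poverty is an assumed count back-derived from the reported ~71%.
sample_size = 813
metacharity = 616  # supported metacharity, rationality, or prioritization research
poverty = 577      # assumed; implied by the reported ~71%

print(f"poverty:     {poverty / sample_size:.1%}")      # -> 71.0%
print(f"metacharity: {metacharity / sample_size:.1%}")  # -> 75.8%
```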

Finally, GiveWell not only doesn’t implicitly recommend “that one should support charities only in [the cause areas of global health and nutrition]” but (1) recommends a charity outside these areas, (2) writes on every charity review that did not lead to a recommendation that the “write-up should not be taken as a ‘negative rating’ of the charity” (emphasis in original), and (3) gives reasons why a philanthropist may legitimately choose not to donate to its recommended charities right on its Top Charities page.

Consequentialism and Utilitarianism

Misconception

  1. Effective altruism depends on a utilitarian or consequentialist morality as implied in statements like, “not that [reducing corruption in the police force] could meet [effective altruism’s] utilitarian criteria for review.”2

Short Answers

  1. What’s so bad about wanting to maximize happiness and minimize suffering in the world?

  2. But there are also effective altruists with other moral systems, and effective altruism seems to follow from them as well.

Long Answers

Admittedly most effective altruists are utilitarians or consequentialists. If you want to maximize happiness and minimize suffering in the world (or maximize informed preference satisfaction), then it’s clear how effective altruism follows directly.

But how about deontology? Take Rawls (figures courtesy of UN, WHO, and World Bank):

  1. More than one in ten people don’t have access to safe drinking water.

  2. More than one in ten people suffer extreme hunger.

  3. More than one in nine people live in slums.

  4. Almost half the world’s population are at risk of malaria.

  5. Almost half the world’s population lives on less than the buying power of $2.50 per day.

  6. You have limited resources.

Imagine you’re in the original position behind the veil of ignorance and have to allocate these limited resources. Surely you’d make an admirable effective altruist.

This is beside the point, but a less corrupt police force will provide greater safety to the population and enjoy greater trust in return. The rich will no longer have recourse to bribing the police, so that poorer people are in a better position to trade and negotiate with them. The positive marginal impact on the happiness of the poor is likely to be greater than the marginal negative impact on the happiness of the rich. So there’s one of countless utilitarian cases for fighting corruption.

Top-Down and Elitist

Misconception

  1. “The defective altruism distribution plan ‘requires a level of expertise that few individuals have.’ Thus, over time, we would require a very centralized and top-down approach to marshal and manage social investment and charitable giving decisions in a manner acceptable to the proponents of this approach.”1

Short Answers

  1. That this is the case is a central grievance with the charity market, one that effective altruism tries to remedy and without which the movement might not even be necessary.

Long Answers

It is one of the unfortunate truisms of the human condition that no market is perfect, but the charity market is particularly and abysmally imperfect. If someone wants to buy a solid state drive, they might check, among other things, the price per gigabyte. $.96 per gigabyte? Rather expensive. $.38 per gigabyte? Wow, what a bargain! When people want to invest in a company, they check the company’s earnings over the past years, compare them to the stock price, and decide whether it’s a bargain or usury. Or if you have a headache, do you buy a homeopathic remedy that does nothing for $20 or Aspirin for $5?

I wasn’t there when it happened, but I imagine when the first effective altruists wanted to donate, they called charities and were like “Hi, I like what you do and want to invest in your program. Could you give me your latest impact figures?” I imagine the responses ranged from “Our what?” through “You’re the first to ever ask for that” to “We have no idea.”

When the charities that run the programs don’t even know if they do anything good or anything at all in proportion to their cost, then how are donors supposed to find out? They would have to draw on the research of experts in the field and, to some extent, would have to become experts themselves.

Prioritization organizations want to change that. They dangle a pot of money promised to the charities that make the best case for being highly effective. That way they incentivize transparency, self-evaluation, and optimization. Eventually, we hope, this will foster a charity market that makes it much easier for everyone to recognize the charities with the most “bang for the buck.”


  1. Ken Berger and Robert M. Penna, “The Elitist Philanthropy of So-Called Effective Altruism,” 2013, accessed 2015-03-23, http://www.ssireview.org/blog/entry/the_elitist_philanthropy_of_so_called_effective_altruism. 

  2. Pascal-Emmanuel Gobry, “Can Effective Altruism Really Change the World?,” 2015, accessed 2015-03-23, http://theweek.com/articles/542955/effective-altruism-really-change-world. 

Comments

Admittedly most effective altruists are utilitarians or consequentialists.

I actually doubt this. I suspect that a lot of effective altruists: i) have no moral philosophy / don't think about moral philosophy, ii) use different moral philosophies at different times / take a case-based approach, or iii) don't know what these moral philosophies mean or haven't thought about their implications. A lot of the above would've been in the 2/3rds that picked consequentialism in the survey (which was asking too narrow a question with too few alternative answers) just because they've heard of it before.

How do you define EA in this case? If you include e.g. all 17k TLYCS pledgers, then I'd probably agree with your statement, but if we take people in the EA-fb-group (minus the spam accounts or accidentally added ones) or people who self-report as EA then it seems more likely to me that Telofy is right.

What if anything might be a better question and/or set of answers?

I've been thinking it'd be best to have the question ask what philosophy if any people lean to, and drop the parenthetical clause from the answer option "Consequentialist/utilitarian (or think that this covers the most important considerations)". It might also be good to encourage people more strongly to say something like 'don't know' or 'not familiar with the philosophies' if appropriate.

I don't think that a great deal of improvement can be made to the questions to make them such that people not already familiar with the theories could meaningfully answer them, so I would recommend keeping them limited to the explicit theories, and viewing them as targeting only people who do have a familiarity with the theories and can identify as leaning towards one or another, while encouraging people who aren't familiar with the theories to just select the "not sure" or "not familiar" option. (You could even make the question "Which of these moral theories do you lean towards?" conditional on them answering yes to "Are you familiar with any/all of these moral theories?")

I'm sceptical of the possibility of framing questions that can be meaningfully answered by people who don't already have explicit commitments vis-a-vis the moral theories. I've found that it's very difficult to propose questions about people's moral commitments that they will understand and actually have a view on. I've been involved in, I think, 5 surveys aimed at testing people's moral commitments so far and have found that however seemingly plain you make the language, people almost invariably understand it in deviant ways (because the theoretical questions are so divorced from how most people think about moral decision-making, unless they have already been inducted into this way of dividing up moral thought theoretically). Even questions like: two people disagree about whether such and such is right or wrong, must one of them be mistaken (the question Goodwin and Darley use) elicit completely different understandings to the one intended (e.g. people answer as though you asked whether the people were justified in making a mistake, not whether they were mistaken or not). Actual mistakes aside, I think that asking people whether "consequences" or "rules" guide their decision-making (or whatever) will likely be largely meaningless to people who haven't already thought through these theories explicitly. I think most people recognise both rules and consequences as considerations in certain circumstances, but won't have any stance on whether either one of these are criteria of rightness, which is what's at stake in these questions. When I asked people about their moral views in in-depth interviews, invariably, respondents happily shifted between seemingly incompatible positions sentence by sentence, so I think a simple tick-box survey question will fail to capture this kind of variability, inconsistency and indeterminacy in people's actual moral positions (except in those who are already explicitly committed to some theory anyway).

I recognise this isn't a comprehensive case thus far, so I'm happy to elaborate further if people have objections.

One approach would be '1. do you have a moral philosophy?' '2. how do you describe your moral philosophy? _'

An alternative would be to use plain language, e.g. "What guides your moral decisions?" (the consequences of my actions / the rules I'm following / the person I want to be) with the ability to check multiple boxes.

The results would still be subject to some doubt because people would be more likely to say that consequences are important to their decisions when they know they're being asked about effective altruism, but that part is less avoidable.

"What guides your moral decisions? (the consequences of my actions/the rules i'm following" wouldn't distinguish between people with consequentialist or non-consequentialist intuitions, if they weren't familiar with philosophy.

If people said their moral decisions came from wanting to make as many people as possible happier, then that would reveal a pretty consequentialist intuition.

The complication is that the distinctive aspect of consequentialism is that it makes this the only motive or consideration, and it's hard to discover what the general public think about this, as they're not used to breaking morality down into all its component factors to find an exhaustive list of their motives or considerations.

I'm not sure where the figure of 78% supporting GCR came from. In the survey report, the authors say: "far future, x-risk, and AI as one cluster...comes in third with 441". Is it possible you added two groups (e.g. AI and other x-risk) together? The groups weren't independent, so they can't be summed up without double counting.

The groups weren't independent.

Yes, they were very non-independent; IIRC one was close to being a subset of the other. I'll leave it to anyone who wants to dig up the precise numbers.

Also, I'd urge changing this sentence:

"according to a 2014 survey, about 71% of effective altruists were interested in supporting poverty-related causes, but almost 78% were interested in global catastrophic risk reduction (including AI risk)"

I believe the survey analysis team said they'd try discouraging people from making these sorts of claims. I'd suggest instead saying "about 71% in the survey sample", or even better (and more usefully for your purposes) giving the absolute numbers, with a comment that EA is a small movement so these constitute a significant fraction.

Impressively thorough-looking piece though, I'm looking forward to reading it! I think it's extremely useful to develop this sort of FAQ which people can link to.

Thanks Bernadette and Tom! I’ve corrected it and made it say “survey sample.” I think the percentages are still easier to read than the absolute numbers, but no strong preference.

I think Bernadette is correct. This should be fixed.

Also, immediately afterward it says:

About half were interested in metacharity work, prioritization research, and rationality education…

But the figure in the results and analysis pdf says:

If we redefine metacharity to also include rationality and cause prioritization, it takes the top slot (with 616 people advocating for at least one of the three).

This is not "about half"; it's 616/813 ≈ 75%. But maybe I'm misinterpreting where these statistics are coming from?

(edit: It's "about half" if you use 616/1146, but the 1146 figure includes more than just EAs. Maybe this was the error?)

I was referring to the numbers from the table (in the old version) and I didn’t add them up, so not half in total but half each. Due to Bernadette’s and Tom’s corrections the whole paragraph has changed a bit anyway, but I’ve also added an “each of” in the hope that it’ll make that clear. And 813 is the total I used.

Thanks for writing this, I think this is all useful stuff.

Another answer to the Exclusive Focus on Interventions That are Easy to Study critique is to grant that EAs may on the whole be falling victim to the streetlight effect, but that there is nothing in principle about EA that means that one should prefer highly measurable interventions and ignore speculative ones. So even if Pascal-Emmanuel Gobry sincerely believes that effective altruists, thus far, have in fact ignored interventions that we should expect to be very high impact (tackling corruption), this could in fact easily be accepted as highly effective by the EA metric and he could and should make an argument to that effect, even if the argument is highly speculative.

One beneficial thing about this response is that it accepts that EAs may tend towards the streetlight effect in some cases (and I think it's plausible that we sometimes do, which is compatible with us sometimes having the opposite tendency: being too credulous about speculative causes). Another is that we don't need to convince our critics that we don't fall victim to the streetlight effect for this response to work- we shift the onus back onto them to make the case for why their preferred scheme (Catholic work to reduce corruption, for example) should be expected to be so effective. Of course there are downsides to being too concessive to our opponents.

"according to a 2014 survey, about 71% of effective altruists were interested in supporting poverty-related causes, but almost 78% were interested in global catastrophic risk reduction (including AI risk)"

To be fair, we should note that we can't say on the strength of our survey that XX% of effective altruists at large (outside of our sample) were interested in poverty, AI etc. For all we know there could be twice as many AI risk EAs that we didn't survey. But I think that the finding that more than 300 EAs expressed support for AI, x-risk and environmentalism and almost as many for politics does succeed in supporting your point that EAs are very open to speculative causes and don't simply ignore anything that isn't easily measurable health interventions.

It seems likely that some EAs have fallen prey to the streetlight effect, but I don’t see anything of the sort in the general EA population; if anything, I see a slight bias in the opposite direction, though that may be my own risk aversion. What might look like the streetlight effect is that unproven high risk–high return interventions are countless, and only very few of them will be cost-effective at all, and an even tinier portion will have superior cost-effectiveness, so that when it comes to donating as opposed to research, (1) EAs don’t know which of countless high risk–high return interventions to support and so “make do” with streetlight ones, and (2) the capital they do invest in their favorite high risk–high return interventions is spread thinly in the statistics because there is little consensus about them. But that only holds in regard to donating, not prioritization research or EA startup ideas.

I put “make do” in quotation marks because I actually put a lot of hope into the flow-through effects of deworming, bed nets, cash transfers, etc. for empowering the poor.

Thanks! I’ll add a note about the LW oversampling.

Update: There are a lot of people with a quantitative background in the movement (well, people like me), so they’ll probably have more fun studying interventions that are more cleanly quantifiable. But I think GiveWell, ACE, et al. do a good job of warning against exaggerated faith in individual cost-effectiveness estimates and probably don’t fall prey to it themselves.