
Leif Wenar recently published a critique of effective altruism that seems to be getting a lot of hype. I don’t know why. There were a few different arguments in the piece, none of which were remotely convincing. Yet more strangely, he doesn’t object much to EA as a whole—he just points to random downsides of EA and is snarky. If I accepted every claim in his piece, I’d come away with the belief that some EA charities are bad in a bunch of random ways, but believe nothing that imperils my core belief in the goodness of the effective altruism movement or, indeed, in the charities that Wenar critiques.

I’m not going to quote Wenar’s entire article, as it’s quite long and mostly irrelevant. It contains, at various points, bizarre evidence-free speculation about the motivations of effective altruists. He writes, for instance, “Ord, it seemed, wanted to be the hero—the hero by being smart—just as I had. Behind his glazed eyes, the hero is thinking, “They’re trying to stop me.””

I’m sure this is rooted in Ord’s poor relationship with his mother!

At another point, he mistakes MacAskill’s statement that there’s been a lot of aid in poor countries and that things have gotten better for the claim that aid is responsible for the entirety of the improvement. These strange status games about credit and reward and heroism demonstrate a surprising moral shallowness, caring more about whether people take credit for doing things than what is done. He says, for instance, after quoting MacAskill saying it’s possible to save a life for a few thousand dollars:

But let’s picture that person you’ve supposedly rescued from death in MacAskill’s account—say it’s a young Malawian boy. Do you really deserve all the credit for “saving his life”? Didn’t the people who first developed the bed nets also “make a difference” in preventing his malaria?

Well, as a philosopher, Wenar should know that two things can both cause something else. If there’s a 9-judge panel evaluating an issue, and one side wins on a 5-4, each judge caused the victory, in the relevant, counterfactual sense—had they not acted, the victory wouldn’t have occurred. MacAskill wasn't talking about apportioning blame or brownie points—just describing one’s opportunity to do enormous amounts of good. Would Wenar object to the claim that it would be important to vote if you knew your candidate would be better and that your vote would change the election, on the grounds that you don’t deserve all the credit for it—other voters get some too?

Wenar’s piece also repeats the old objection that Sam Bankman-Fried used EA principles to commit fraud, so EA must be bad, ignoring, of course, the myriad responses that have been given to this objection. Alex Strasser has addressed this at length, as have I (albeit at less length than Strasser). Pointing out that people have committed fraud in the name of EA is no more an objection to EA than it would be an objection to some charity to note that it happened to receive funds from Al Capone. Obviously one should not commit fraud, and should take common-sense norms seriously, as EA leaders have implored repeatedly for years.

The article takes random stabs at specific claims that have been made by EAs. Yet strangely, despite the obvious cherry-picking, where Wenar is attempting to target the most errant claims ever made by EAs, every one of his objections to those random out-of-context quotes ends up being wrong. For instance, he claims that MacAskill’s source for the claim that by “giving $3,000 to a lobbying group called Clean Air Task Force (CATF),” “you can reduce carbon emissions by a massive 3,000 metric tons per year,” is “one of Ord’s research assistants—a recent PhD with no obvious experience in climate, energy, or policy—who wrote a report on climate charities.” Apparently writing a nearly 500-page report on existential risks from climate change, in close collaboration with climate change researchers, and a 174-page report about climate charities doesn’t give one any “obvious experience in climate, energy, or policy.”

The article contains almost every objection anyone has given to EA, each with its own associated hyperlink, each misleadingly phrased. Most of them are just hyperlinks to pieces about downsides of some type of aid, claiming that EAs have never considered those downsides when often they’ve considered them quite explicitly. It exhibits a thin veneer of deep wisdom, making claims like “aid was much more complex than “pills improve lives.”” Well, pills either do or don’t improve lives, and if they do, that seems good and worth knowing about! Now, maybe other things improve lives more, in which case we should do those things instead, but then you’re comparing costs and benefits—doing, pretty much, what EAs do with respect to aid.

At other points, Wenar obviously misunderstands what EAs are claiming. For instance, he quotes MacAskill saying “I want to be clear on what [“altruism”] means. As I use the term, altruism simply means improving the lives of others,” before saying:

No competent philosopher could have written that sentence. Their flesh would have melted off and the bones dissolved before their fingers hit the keyboard. What “altruism” really means, of course, is acting on a selfless concern for the well-being of others—the why and the how are part of the concept. But for MacAskill, a totally selfish person could be an “altruist” if they improve others’ lives without meaning to. Even Sweeney Todd could be an altruist by MacAskill’s definition, as he improves the lives of the many Londoners who love his meat pies, made from the Londoners he’s killed.

No competent reader or philosopher could have written that paragraph. If one reads the surrounding context, it’s obvious that MacAskill is not intending to do a conceptual analysis of the word altruism—he’s describing the way he uses it when he talks about effective altruism. MacAskill says:

As the phrase suggests, effective altruism has two parts, and I want to be clear on what each part means. As I use the term, altruism simply means improving the lives of others. Many people believe that altruism should denote sacrifice, but if you can do good while maintaining a comfortable life for yourself, that’s a bonus, and I’m very happy to call that altruism. The second part is effectiveness, by which I mean doing the most good with whatever resources you have.

Here, MacAskill is clearly not trying to define exactly what the term means in general—a famously difficult task for any word. He’s just explaining what effective altruism is about: doing good well. That’s what he’s advising people to do. One could figure this out by, for example, looking at the title of MacAskill’s book—Doing Good Better—or reading the surrounding context.

A lot of the article is like this—Wenar getting confused about some point and then claiming that the person who made it is an idiot or a liar or a fraud.

Much of the rest of the article, however, consists of just listing random downsides of some aid charities, claiming falsely that these downsides aren’t taken into account by effective altruists. I’m reminded of Scott Alexander’s piece steelmanning hitting oneself on the head with a baseball bat for eight hours:

“It’s a great way to increase your pain tolerance so that the little things in life don’t bother you as much.”

“It builds character!”

“Every hour you’re hitting yourself on the head with a bat is an hour you’re not out on the street, doing drugs and committing crime.”

“It increases the demand for bats, which stimulates the lumber industry, which means we’ll have surplus lumber available in case of a disaster.”

“It improves strength and hand-eye coordination.”

“It may not literally drive out demons, but it’s a powerful social reminder of our shared commitment for demons to be driven out.”

“It’s one of the few things that everyone, rich or poor, black or white, man or woman, all do together, which means it crosses boundaries and builds a shared identity.”

“It binds us to our forefathers, who hit their own heads with bats eight hours a day.”

“If we stopped forcing everyone to do it, better-informed rich people would probably be the first to abandon the practice. And then they would have fewer concussions than poor people, which would promote inequality.”

“It creates jobs for bat-makers, bat-sellers, and the overseers who watch us to make sure we bang for a full eight hours.”

“Sometimes people collapse of exhaustion after only six hours, and that’s the first sign that they have a serious disease, and then they’re able to get diagnosed and treated. If we didn’t make them bang bats into their heads for eight hours, it would take much longer to catch their condition.”

“Chesterton’s fence!”

Finding random downsides to things is easy. What distinguishes serious people raising serious critiques—you know, the people who work day in and day out weighing up the costs and benefits of aid, writing detailed reports that Wenar lies about—from unserious hacks is that they actually look in detail at comparisons of the costs and benefits, rather than going on Google Scholar, finding a few hyperlinks about downsides to certain aid programs, and declaring errant the serious researchers who spend their time analyzing these things. Wenar says, for instance:

In a subsection of GiveWell’s analysis of the charity, you’ll find reports of armed men attacking locations where the vaccination money is kept—including one report of a bandit who killed two people and kidnapped two children while looking for the charity’s money. You might think that GiveWell would immediately insist on independent investigations into how often those kinds of incidents happen. Yet even the deaths it already knows about appear nowhere in its calculations on the effects of the charity.

But we only have reports of this happening once. This is a bit like responding to a single bank robbery by declaring that, before supporting banks, one should do a detailed statistical investigation into whether banks’ costs outweigh their benefits. This is not serious—it’s just throwing up uncertainty so that those who don’t want to give can have the veneer of plausible deniability.

Wenar lists a lot of random downsides to aid. It’s true that there’s disagreement about the net effect of aid. But the well-targeted aid done by EA organizations generates virtually no controversy among serious scholars. As Karnofsky notes, “We believe that the most prominent people known as “aid critics” do not give significant arguments against the sorts of activities our top charities focus on.”

Take, for instance, his claim that “Studies find that when charities hire health workers away from their government jobs, this can increase infant mortality.” Of course, the evidence that GiveWell relies on comes from high-quality randomized controlled trials. It’s easy to point to random downsides to something—the question is whether the upsides outweigh them. Which we know they do, based on the randomized controlled trials gathered by GiveWell, looking at a wide variety of aggregate outcomes. The study Wenar cites is totally general—it just notes that sometimes aid programs hire workers who could provide other services, and that might be bad.

And these downsides aren’t enough to undermine the generally positive effect of aid. As Tarp and Mekasha write, in a detailed meta-analysis of the impact of aid on economic growth:

The new and updated results show that the earlier reported positive evidence of aid’s impact is robust to the inclusion of more recent studies and this holds for different time horizons as well. The authenticity of the observed effect is also confirmed by results from funnel plots, regression-based tests, and a cumulative meta-analysis for publication bias.

Now, growth isn’t everything, but it’s a decent indicator of how well things are going. And as one of my professors noted, when one compares the harms of aid to the benefits of, for instance, smallpox eradication, the harms are nearly undetectable. There is debate about whether aid at the margin does more harm than good, but its total effect is clearly positive. As MacAskill notes:

Indeed, even those regarded as aid sceptics are very positive about global health. Here’s a quote from Angus Deaton, from the same book that Temkin relies so heavily on:

Health campaigns, known as “vertical health programs,” have been effective in saving millions of lives. Other vertical initiatives include the successful campaign to eliminate smallpox throughout the world; the campaign against river blindness jointly mounted by the World Bank, the Carter Center, WHO, and Merck; and the ongoing— but as yet incomplete— attempt to eliminate polio (Deaton 2013 p.104-5).

Wenar elsewhere says “aid coming into a poor country can increase deadly attacks by armed insurgents.” The study behind this is hilariously unconvincing—it describes a few attacks in the Philippines that occurred because “insurgents try to sabotage the program because its success would weaken their support in the population.” In other words, insurgents in the Philippines occasionally targeted aid programs because the programs were so good that they feared they’d weaken their base of popular support. So that’s why it’s bad to give out antimalarial bednets that demonstrably save lives.

Wenar elsewhere says “GiveWell has said nothing even as more and more scientific studies have been published on the possible harms of bed nets used for fishing.” But GiveWell has looked into this and concluded the claims are unconvincing. The reason they’re not concerned is that it’s not a huge problem. As Piper writes, in an article titled “Bednets are one of our best tools against malaria — but myths about their misuse threaten to obscure that”:

But here’s the thing: The math on bednet effectiveness takes such uses into account. Studies that groups like GiveWell rely upon are conducted by distributing malaria nets and then measuring the resulting fall in mortality rates, so those mortality figures don’t assume perfect use.

Additionally, malaria distribution organizations like the Against Malaria Foundation survey households to make sure nets are still being used. They don’t just ask people whether the nets are in use — people might lie — but go in and check. They’ve found that 80 percent to 90 percent of nets are used as intended, hanging over beds, half a year after first deployment. This isn’t surprising, as people are highly motivated not to die of malaria and won’t put nets to secondary uses lightly.

Bednets would work even better if no one was ever desperate enough to use them for fishing, but no estimates of their effectiveness assume such perfect use. Our figures for the effectiveness of bednets all reflect their effectiveness under real-world conditions.

There’s not much evidence that unapproved uses are doing harm

What about harm to fisheries from people fishing with nets? Researchers have only recently started looking into this. No one has measured detrimental effects yet, though they could emerge later.

The insecticide in anti-malarial bednets also does not have negative effects on humans, because the dosages involved are so low. It’s unclear whether there are any harmful effects from fishing with nets. (And, it’s worth noting, there is one oft-forgotten positive effect from the use of bednets for fishing: People are fed.)

Dylan Matthews adds, in an article debunking a similar claim made by Marc Andreessen:

That mosquito nets are dangerous to people would be news to basically any public health professional who’s ever studied them. A systematic review by the Cochrane Collaboration, probably the most respected reviewer of evidence on medical issues, found that across five different randomized studies, insecticide-treated nets reduce child mortality from all causes by 17 percent, and save 5.6 lives for every 1,000 children protected by nets. That implies that the 282 million nets distributed in 2022 alone saved about 1.58 million lives. In one year.

Bednets and fishing nets

Andreessen’s objection is rooted in something that’s been true of bednets for decades: sometimes, people use them as fishing nets instead.

This has occasionally popped up as an objection to bednet programs, notably in a 2015 New York Times article. One related argument is that the diversion of nets toward fishing means they’re not as effective an anti-malaria program as they initially appear.

That’s simply a misunderstanding of how the research on bednets works. The scientists who study these programs, and the charities that operate them, are well aware that some share of people who get the nets don’t use them for their intended purpose.

The Against Malaria Foundation, for instance, a charity that funds net distribution in poor countries, conducts extensive “post-distribution monitoring,” sending surveyors into villages that get the nets and having them count up the nets they find hanging in people’s houses, compared to the number previously distributed. When conducted six to 11 months after distribution, they find that about 68 percent of nets are hanging up as they’re supposed to; the percent gradually falls over the years, and by the third year the nets have lost much of their effectiveness.

So does this mean that bednets are only 68 percent as effective as previously estimated? No. Studies of bednet programs do not assume full takeup, because that would be a dumb thing to assume. Instead, they evaluate programs where some villages or households randomly get free bednets, and compare outcomes (like mortality or malaria cases) between the treated people who got the nets and untreated people who didn’t.

For instance, take a 2003 paper evaluating a randomized trial of net distribution in Kenya (this was one of the papers included in the Cochrane review). The researchers’ own surveys show that about 66 percent of nets were used as intended. The researchers did not exclude the one-third of households not using the nets from the study. Instead, they simply compared death rates and other metrics in the villages randomized to receive nets to those metrics in villages randomized to not get them. That comparison already bakes in the fact that a third of households who received the nets weren’t using them.

So estimates like “bednets reduce child mortality by 17 percent” are already assuming that not everybody is using the nets as intended. This just isn’t a problem for the impact estimates.

But is it a problem for fisheries? Andreessen cites one recent article to make this case. It’s not clear to me he actually read it.

The authors start by acknowledging that bednets have saved millions of lives, and even that the use of nets for fishing makes sense for many people. It’s a free way to get food you need to survive in regions often reliant on subsistence farming. Moreover, the authors note that “The worldwide collapse of tropical inland freshwater fisheries is well documented and occurred before the scale-up of ITNs.” At worst, you can accuse nets of making an existing problem worse.

The bigger question the authors raise is that insecticides are toxic. That’s, of course, the point: They’re meant to kill mosquitoes. The question, then, is whether they are toxic to fish or humans when used for fishing. The authors’ conclusion is maybe, but we have no research indicating one way or another. “To our knowledge there is currently a complete lack of data to assess the potential risks associated with pyrethroid insecticide leaching from ITNs,” the authors conclude. They are not sure if the amount leaching from nets is enough to be toxic to fish; they’re not fully sure that the insecticide leaches into the water at all, though they suspect it does. Even less clear is how these insecticides might affect humans who then eat fish that might be exposed to them.

I could keep going through the piece, claim by claim, refuting the false claims about GiveWell’s having no data supporting deworming, for instance, though GiveWell has already done that. But Wenar’s piece isn’t really about that—he doesn’t care to defend, in any detail, any of the specific harms. They’re not what his argument is about—they’re just things he plucked after five minutes on Google Scholar. His broad point is just that there are downsides that EA hasn’t considered—a claim that’s much easier to support if you ignore both the way EA studies are built to take downsides into account and the many examples of EAs considering those downsides explicitly.

Everything has downsides. The world is about tradeoffs. For every speculative second-order downside to bednets, there are speculative second-order upsides from hundreds fewer children dying daily. Wenar’s piece is a recipe for complacency, for us throwing up our hands and saying “the world is complicated, nothing to see here.” He seems to think we should have an explicit bias against aid, writing:

Call the first the “dearest test.” When you have some big call to make, sit down with a person very dear to you—a parent, partner, child, or friend—and look them in the eyes. Say that you’re making a decision that will affect the lives of many people, to the point that some strangers might be hurt. Say that you believe that the lives of these strangers are just as valuable as anyone else’s. Then tell your dearest, “I believe in my decisions, enough that I’d still make them even if one of the people who could be hurt was you.”

Perhaps Wenar should have applied the “dearest test” before writing the article. He should have looked into the eyes of his loved ones, imagined them among the extra people who might die as a result of people being dissuaded from giving to effective charities, and said “I believe in my decisions, enough that I’d still make them even if one of the people who could be hurt was you.”

I agree you should apply this test—but only if you’ll also be willing to look the person in the eye when you don’t act, and say “I believe in my decision not to act, so that if you were a starving child, or a child who might get malaria, I’d do nothing and watch you die.” If you’re going to make people feel extremely distraught about potential risks, they should feel equally distraught about lost benefits, about the kids who die because of Western apathy.

Making people imagine that the potential victims are their families would make them less likely to act. Most people wouldn’t donate if the beneficiaries were random strangers and the only people who could be harmed would be their close families. So Wenar’s approach is an excuse for complacency—for not acting, for regarding the possible speculative harms of aid to be far more salient than the demonstrable lives saved. As Richard Chappell says:

The overwhelming thrust of Wenar's article -- from the opening jab about asking EAs "how many people they’ve killed", to the conditional I bolded above -- seems to be to frame charitable giving as a morally risky endeavor, in contrast to the implicit safety of just doing nothing and letting people die.

I think that's a terrible frame. It's philosophically mistaken: letting people die from preventable causes is not a morally safe or innocent alternative (as is precisely the central lesson of Singer's famous article). And it seems practically dangerous to publicly promote this bad moral frame, as he is doing here. The most predictable consequence is to discourage people from doing "riskily good" things like giving to charity. Since he seems to grant that aid is overall good and admirable, it seems like by his own lights he should regard his own article as harmful. It's weird.

This is, I think, the entire point of Wenar’s article. He wants to make it so that every time you consider doing aid, you panic a little bit, even if it’s been vetted extensively, even if there have been a hundred randomized controlled trials showing how great the intervention is. He wants you not to act because of potential downsides, or at least to very seriously consider not acting, no matter how good the evidence is for an intervention’s effectiveness. That’s a terrible view. When children are dying and we have high-quality evidence that we can avert their deaths, pointing to random, speculative, second-order harms is not enough to justify inaction in the face of avertable suffering and high-quality data.

Acting may be risky, but not acting is much riskier. The mountain of child corpses—children who coughed till their throats were raw, who ran fevers of 105—is a moral emergency that demands action. Effective altruists are doing something about it, saving as many lives annually as would be saved by stopping AIDS, a 9/11 every year, all gun violence, and melanoma combined. Refusing to do anything because there are risks involved is just assenting to status quo bias, under which poor children die because no one cares enough to act. If you’re going to regard acting as morally risky, you should regard it as similarly risky to do nothing while children die by the millions.

Comments

A brief meta-comment on critics of EAs, and how to react to them:

We're so used to interacting with each other in good faith, rationally and empirically, constructively and sympathetically, according to high ethical and epistemic standards, that we EAs have real trouble remembering some crucial fact of life:

  • Some people, including many prominent academics, are bad actors, vicious ideologues, and/or Machiavellian activists who do not share our world-view, and never will
  • Many people engaged in the public sphere are playing games of persuasion, influence, and manipulation, rather than trying to understand or improve the world
  • EA is emotionally and ideologically threatening to many people and institutions, because insofar as they understand our logic of focusing on tractable, neglected, big-scope problems, they realize that they've wasted large chunks of their lives on intractable, overly popular, smaller-scope problems; and this makes them sad and embarrassed, which they resent
  • Most critics of EA will never be persuaded that EA is good and righteous. When we argue with such critics, we must remember that we are trying to attract and influence onlookers, not trying to change the critics' minds (which are typically unchangeable).

This seems to me to be a self-serving, Manichean, and psychologically implausible account of why people write criticisms of EA.

The Wenar criticism in particular seems laughably bad, such that I find bad-faith hypotheses like this fairly convincing. I do agree it's a seductive line of reasoning to follow in general, though, and that this can be dangerous.

"I have laboured carefully, not to mock, lament, or execrate human actions, but to understand them."

–Baruch Spinoza

Idk, I do just think that bad-faith actors exist, especially in the public sphere. It's a mistake to assume that all critics are in bad faith, but equally it's naive to assume that it's never bad faith.

Yarrow - I'm curious which bits of what I wrote you found 'psychologically implausible'?

It feels to me like black-and-white in-group/out-group thinking, where the out-group is evil, corrupt, deceptive, unintelligent, pathetic, etc. and the in-group is good, righteous, honest, intelligent, impressive, etc.

It actually isn’t my experience that people who identify as EAs interact "in good faith, rationally and empirically, constructively and sympathetically, according to high ethical and epistemic standards". EAs are, in my experience, quite human.

Thanks so much for this post, I appreciated the passionate yet clear reasoning. I find it rather hard to fathom why a renowned professor like this would make so many poor strawman arguments of the kind you have laid out here. My guess is there's something ideological or emotional behind these kinds of EA critiques.

JWS

My guess is there's something ideological or emotional behind these kind of EA critiques,

Something I've come across while looking into/responding to EA criticism over the last few months is that a lot of EA critics seem to absolutely hate EA[1], like with an absolutely burning zeal. And I'm not really sure why or what to do with it - feels like it's an underexplored question/phenomenon for sure.

 

  1. ^ Or at least, what they perceive EA/EAs to be.

This is a meta-level point, but I'd be very, very wary of giving any help to Hanania if he attempts (even sincerely) to position himself publicly as a friend of EA. A while ago he was outed as having moved, for years, in genuinely and unambiguously white supremacist political circles. And while I accept that repentance is possible, and he claims to have changed (and probably has become less bad), I do not trust someone who had to have this exposed rather than publicly owning up to and denouncing his (allegedly) past views of his own accord, especially since he referred to the exposure as somehow an unfair attempt by critics to discredit him.

Whilst he has abandoned violent fascism (he says), he also seems still quite racist. It's not very long since he last referred to some black people he didn't like on Twitter as "these animals" or something along those lines. (I can't recall what the black people in question had allegedly done. EDIT: not that I think anything they'd done could really make that ok, I just don't want someone to say I'm being unfair by saying "black people" and not "black people who had done X".) I don't think I am wildly out there in seeing him as still racist. Matt Yglesias, who no one could accuse of being mindlessly woke in all cases, reacted to Hanania's exposure as an (allegedly ex-) Nazi by saying that it was hardly news that Hanania is racist: https://twitter.com/mattyglesias/status/1687551506738786304?lang=en-GB

https://twitter.com/letsgomathias/status/1687543615692636160   (Just so people can get a sense of how very bad his views at least were, and could still be.) 

I'm actually finishing up an article on this exact topic! 

I'll explain more there, but I think the major reason is this: if Leif Wenar didn't hate EA, he wouldn't have bothered to write the article. You need a reason to do things, and hatred is one of the most motivating ones.

It's also not a new thing - The Elitist Philanthropy of So-Called Effective Altruism - from 2013.

I'm not sure you have to do anything with it. Generally, groups that suggest money/influence should be shifted from A to B will get a negative response from the people it may affect or people who disagree with that direction of change. I tend to find energy spent on ideological EA critics is less valuable than energy spent on good-faith critics/people who are just looking for resources to help them do more good.

"If I accepted every claim in his piece, I’d come away with the belief that some EA charities are bad in a bunch of random ways, but believe nothing that imperils my core belief in the goodness of the effective altruism movement or, indeed, in the charities that Wenar critiques."

I agree - but I think Wenar does a very good job of pointing out specific weaknesses. If he alternatively framed this piece as "how EA should improve" (which is how I mentally steelman every EA hit-piece that I read), it would be an excellent piece. Under his current framing of "EA bad", I think it is a very unsuccessful piece.

I think these are his very good and perceptive criticisms:

  1. Global health and development EA does not adequately account for side-effects, unintended consequences and perverse incentives caused by different interventions in its expected-value calculations, and does not adequately advertise these risks to potential donors. Weirdly, I don't think I've come across this criticism of EA before despite it seeming very obvious. I think this might be because people are polarised between "aid bad" and "aid good", leaving very few people saying "aid good overall but you should be transparent about downsides of interventions you are supporting".
  2. The use of quantitative impact estimates by EAs can mislead audiences into overestimating the quality of quantitative empirical evidence supporting these estimates.
  3. Expected-value calculations rooted in probabilities derived from belief (as opposed to probabilities derived from empirical evidence) are prone to motivated reasoning and self-serving biases.  

I've previously discussed weaknesses of expected-value calculations on the forum and have suggested some actionable tools to improve them.

I think GiveWell should definitely clarify what they think the most likely negative side-effects and risks of the programs they recommend are, and how severe they think those side-effects are.

Re 1, as Richard says: "Wenar scathingly criticized GiveWell—the most reliable and sophisticated charity evaluators around—for not sufficiently highlighting the rare downsides of their top charities on their front page. This is insane: like complaining that vaccine syringes don’t come with skull-and-crossbones stickers vividly representing each person who has previously died from complications. He is effectively complaining that GiveWell refrains from engaging in moral misdirection. It’s extraordinary, and really brings out why this concept matters."

Re 2: I just don't think this is true. EAs often note the uncertainty.

Re 3: This is true, but constantly talked about by EAs. Furthermore, I don't know what the alternative is supposed to be: just ignoring all non-quantifiable harms?

The use of quantitative impact estimates by EAs can mislead audiences into overestimating the quality of quantitative empirical evidence supporting these estimates.

In my experience, this is not a winnable battle. Regardless of how many times you repeat that your quantitative estimates are based on limited evidence / embed a lot of assumptions / have high margins of error / etc., people will say you're taking your estimates too seriously.

On #1, how would you define "adequately account" and "adequately advertise"? I wasn't convinced that Wenar's specific GiveWell examples rose to a level of materiality that would justify these conclusions.

Even agreeing that EA GHD should be held to a higher standard because its effectiveness claims are much more explicit and specific, I also think "industry standards" are relevant to this point. If a criticism is no more valid of EA GHD than the charitable sector as a whole, critics need to say that.

Executive summary: The author argues that Leif Wenar's critique of effective altruism is unconvincing, misrepresents key claims, and promotes a dangerous bias against taking action to help others due to potential risks.

Key points:

  1. Wenar's article contains bizarre speculation about EA motivations and misrepresents statements by EA leaders.
  2. The article lists random downsides of aid without seriously weighing costs and benefits. High-quality evidence shows the most effective aid does far more good than harm.
  3. Wenar ignores how studies of aid effectiveness already account for potential downsides and imperfect usage of interventions like bednets.
  4. The article promotes a moral framing that discourages people from giving to charity by focusing on risks of action over inaction. The author argues inaction in the face of preventable suffering is far riskier.
  5. Effective altruist charities save millions of lives per year. Speculative second-order harms are not enough to justify inaction given the scale of good being done.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
