
Leif Wenar thoughtfully critiqued EA in "Poverty is No Pond" (2011) & just wrote a critique in WIRED. He is a philosophy professor at Stanford & author of Blood Oil.

Edit: 

My initial thoughts (which are very raw & will likely change & I will accordingly regret having indelibly inscribed on the Internet): 

Initially, after a quick read-through, my take is he does a great job critiquing EA as a whole & showing the shortfalls are not isolated incidents. But none of the incidents were news to me. I think there's value in having these incidents/critiques (well) written up in a single article.

But, really, I'm interested in the follow-up piece / how to reform EA or else the alternative to EA / what’s next for the many talented young people who care, want to do good, & are drawn to EA. I'd love to hear y'all's thoughts on this.

Edit: Share your Qs for Leif here.

Edit: Archive link to article.

Edit (4.5.24): See also GiveWell's comment and On Leif Wenar's Absurdly Unconvincing Critique Of Effective Altruism.

I've updated toward thinking there's probably not much reason to read the article. 

My impression is that Leif has a strong understanding of EA and thoughtful critiques of it, both as a set of tools and a question (and of course specific actions / people). I feel there's a significant difference between the WIRED article and my conversations with him. In conversation, I think he has many thoughtful comments, which I'd hoped the WIRED article would capture. I shared the article out of this hope, though in reality it's heavy on snark and light on substance, plus (I agree with many of you) contains strawmanning and misrepresentations. I wish for his substantive thoughts to be shared and engaged with in the future. But, in the meantime, thank you to everyone who shared your responses below, and I'm sorry it was likely a frustrating and unfruitful read and use of time.

Thank you, M, for sharing this with me & encouraging me to connect.

Comments

I found it a bit hard to discern what constructive points he was trying to make amidst all the snark. But the following seemed like a key passage in the overall argument:

Making responsible choices, I came to realize, means accepting well-known risks of harm. Which absolutely does not mean that “aid doesn’t work.” There are many good people in aid working hard on the ground, often making tough calls as they weigh benefits and costs. Giving money to aid can be admirable too—doctors, after all, still prescribe drugs with known side effects. Yet what no one in aid should say, I came to think, is that all they’re doing is improving poor people’s lives.

... This expert tried to persuade Ord that aid was much more complex than “pills improve lives.” Over dinner I pressed Ord on these points—in fact I harangued him, out of frustration and from the shame I felt at my younger self. Early on in the conversation, he developed what I’ve come to think of as “the EA glaze.”... Ord, it seemed, wanted to be the hero—the hero by being smart—just as I had. Behind his glazed eyes, the hero is thinking, “They’re trying to stop me.”

Putting aside the implicit status games and weird psychological projection, I don't understand what practical point Wenar is trying to make here. If the aid is indeed net good, as he seems to grant, then "pills improve lives" seems like the most important insight not to lose sight of. And if someone starts "haranguing" you for affirming this important insight, it does seem like it could come across as trying to prevent that net good from happening. (I don't see any reason to personalize the concern, as about "stopping me" -- that just seems blatantly uncharitable.)

It sounds like Wenar just wants more public affirmations of causal complexity to precede any claim about our potential to do good? But it surely depends on context whether that's a good idea. Too much detail, especially extraneous detail that doesn't affect the bottom line recommendation, could easily prove distracting and cause people (like, seemingly, Wenar himself) to lose sight of the bottom line of what matters most here.

So that section just seemed kind of silly. There was a more reasonable point mixed in with the unreasonable in the next section:

GiveWell still doesn’t factor in many well-known negative effects of aid... Today GiveWell’s front page advertises only the number of lives it thinks it has saved. A more honest front page would also display the number of deaths it believes it has caused.

The initial complaint here seems fine: presumably GiveWell could (marginally) improve their cost-effectiveness models by trying to incorporate various risks or costs that it sounds like they currently don't consider. Mind you, if nobody else has any better estimates, then complaining that the best-grounded estimates in the world aren't yet perfect seems a bit precious. Then the closing suggestion that they prominently highlight expected deaths (from indirect causes like bandits killing people while trying to steal charity money) is just dopey. Ordinary readers would surely misread that as suggesting that the interventions were somehow directly killing people. Obviously the better-justified display is the net effect in lives saved. But we're not given any reason to expect that GiveWell's current estimates here are far off.

Q: Does Wenar endorse inaction?

Wenar's "most important [point] to make to EAs" (skipping over his weird projection about egotism) is that "If we decide to intervene in poor people's lives, we should do so responsibly—ideally by shifting our power to them and being accountable for our actions."

The overwhelming thrust of Wenar's article -- from the opening jab about asking EAs "how many people they’ve killed", to the conditional I bolded above -- seems to be to frame charitable giving as a morally risky endeavor, in contrast to the implicit safety of just doing nothing and letting people die.

I think that's a terrible frame. It's philosophically mistaken: letting people die from preventable causes is not a morally safe or innocent alternative (as is precisely the central lesson of Singer's famous article). And it seems practically dangerous to publicly promote this bad moral frame, as he is doing here. The most predictable consequence is to discourage people from doing "riskily good" things like giving to charity. Since he seems to grant that aid is overall good and admirable, it seems like by his own lights he should regard his own article as harmful. It's weird.

(If he just wants to advocate for more GiveDirectly-style anti-paternalistic interventions that "shift our power to them", that seems fine but obviously doesn't justify the other 95% of the article.)

I was disappointed GiveDirectly wasn't mentioned given that seems to be more what he would favour. The closing anecdote about the surfer-philosopher donating money to Bali seems like a proto-GiveDirectly approach but presumably a lot less efficient without the infrastructure to do it at scale.

I think his take on GiveDirectly is likely to be very similar—he would point to the fraud and note that neither they nor any of their evaluators took into account the harms caused by the beneficiaries of that fraud in their calculations. And I don’t think that that would be an unfair criticism (if delivered with a bit less snark).

1) I think it is pretty unclear how much harm was actually done here, other than the loss of money for the people who would otherwise have received it, who would also have gotten 0 money if GiveDirectly didn't exist. (That doesn't mean zero harm, since it's worse to think you'll get money and not receive it.) As far as I can tell from the link, the money was stolen by local GiveDirectly staff, not armed militias or governments that might have spent it on buying guns or improving their ability to extort more money from others. (There might even have been some indirect gain for locals in having the money reach the Congo at all, since it could easily have been spent locally. Harmful things also have secondary effects that don't necessarily have the same sign as the primary thing.) It's possible that if they'd given more details of how the fraud was carried out, more harms would be evident though. (Which is why I say "unclear" not "it seems like there wasn't that much".)

2) It seems like it was a tiny fraction of GD's giving that year, so the bad effects would have to be super-large in order for it to make much difference to the overall value of GD's work. (I guess one possible response is that where you find one bad unintended consequence there might be others.) 

I agree with both your points. I think the thrust of Leif’s argument, rather, is that no work was done to clarify the extent of those harms. They just say “we apologise to people counting on this” and quote statistics on how bad the militias in the area are.

On (2), I hope it was clear to anyone reading the article that Leif would like EAs to think in a negative-utilitarian way. I sincerely doubt he cares what proportion of the overall value of GiveDirectly’s work it was if a harm was done.

"Negative utilitarian" isn't the right term here. Negative utilitarianism is the view that you should minimize total suffering. It doesn't say your not allowed to cause some suffering in doing so, so long as you take the action that reduces suffering the most on net. The "benefits" of Give Directly's work are a mixture of suffering reduction and positive stuff, and the harms of the theft are also a mixture of suffering and positive benefits blocked. NU is the view that you should only care about suffering and not the positive benefits in assessing whether GD does more good than harm .It's not a view about not doing harm instrumentally. (And in fact, any sensible negative utilitarian will recognize that increasing positive happiness actually usually also decreases suffering for that person, since it helps prevent boredom etc.) 

Insofar as Wenar is claiming that you should never do anything that is even an indirect cause of harms committed by other people, even if it's a net benefit, I think that is just not at all convincing, for reasons both I and Richard Y Chappell have given elsewhere: it would paralyze all action by anyone ever, and it doesn't have the common sense support of "don't do evil things, even to achieve good outcomes". I suppose someone could argue the harm was direct here though, since it was GD's own staff who stole the money? 

On the other hand, if his claim is just that GD might be doing more harm than good, then the specifics of how much money was stolen vs. how much money GD gives out are relevant. If his claim is that GiveWell should incorporate harms more into the stuff they write up about the charities, again, the actual importance of the harms is relevant, since GiveWell can't write up everything, and should include/exclude stuff from write-ups on the basis of how important that stuff is. If his claim is just that GiveWell needs to take the caveats about indirect harms they already include in long detailed reports and display them prominently in summaries, again, the level of the harms seems important, because the summary should be giving the most important stuff.

Same, Oscar! I hope to ask him about this.

We want to share a few thoughts that might help clarify our approach.

Our research incorporates potential downsides or unintended consequences, such that our recommendations take into account factors like potential side effects. Most (if not all) of the information in Leif Wenar’s WIRED piece is something that anyone could find on our website. However, we believe some of what is published in the piece is misleading or inaccurate. Here is a lightly edited version of the material that GiveWell sent to WIRED in response to a request for comment on Leif Wenar's piece. 

JWS
I wasn't really impressed or persuaded by it to be honest; actually, the more I read it the worse it got. I agree with you that he'd make a very interesting 80k podcast guest if he and Rob got the chance to go at it hammer and tongs, but I don't think the current team are up for the adversarial conversation (even a well-mannered one) that it would probably turn out to be.

Poverty is No Pond is right to point out the vast complexity of understanding the potential impacts that aid interventions might have, but I think this empirical and moral cluelessness applies to all actions all the time, and he doesn't seem to apply it to his friend Aaron. There's a funny section in the linked piece where he excoriates GiveWell for their hedging language - yet in Poverty is No Pond he laments that more of those in the aid space aren't more humbly agnostic. Even if we are clueless about how the world works, we must still act in one way or another. He doesn't seem to give any evidence for his suggestions apart from epistemic nihilism and personal/lived experience.

I don't think pointing out that bednets can be used for fishing is a very strong argument against them, and I don't necessarily trust Wenar's assessment of the evidence here when compared to AMF, GiveWell, and others in the space. Just as he uses the "It’s difficult to get a man to understand something, said Upton Sinclair, when his salary depends on his not understanding it." line to criticise Ord, I can use it to criticise him here, and much of his view on the value of quantification and empirical research.

A lot of the piece is written in a very hostile and, honestly, childish manner to me. The parts about MacAskill are blazingly hostile, to the point that it comes off as much more hit-piecy than truth-seeking. Like the whole bit about 'altruism' just seems to underline how little Wenar really understands about EA. I don't think the SBF material is insightful or adds much to what's been reported on elsewhere.

By the time it got to longtermism it had started to look increasingly like ahistorical pure conjecture and vitriol, and I lost interest in reading beyond a quick skim. Maybe I've been harsh, but I think that it probably deserves to be treated harshly.

My understanding is he's not at all an advocate for epistemic nihilism (nor just basing decisions on anecdotes like those he shared). (Though the post leaves me a little epistemically depressed.) I think he (like me) thinks we can do better & in the post is arguing that EA is not managing to do better. And, my impression is he is genuinely trying to figure out how we can do better.

Can I ask what your connection to Leif is, are you in contact with him directly/indirectly in some way?

I did use the term 'epistemic nihilism' as a turn of phrase, but I don't think it's entirely unwarranted. I think the acid-test that Leif's applying to GiveWell, if applied to literally any other choice, would lead that way. He certainly doesn't provide any grounding for any of the alternatives.

As much as I'd also be keen for dialogue and improvement, the level of vitriol combined with flat-out mistakes/misrepresentations in the article[1] really doesn't make me see Leif as a good-faith interlocutor here.

  1. ^

    At least from my point-of-view. They're either caused by his anger or even wilful misrepresentation.

I'm just a student at Stanford (I help run the EA club) & a few weeks ago I emailed him asking to chat, which he kindly agreed to do. (It was basically a cold email after chatting with a friend at Stanford about Poverty is No Pond.) We had a good conversation & he came across as very kind & genuine & we agreed to talk again next week (after spring break & this piece was published).

"As much as I'd also be keen for dialogue and improvement, the level of vitriol combined with flat-out mistakes/misrepresentations in the article really doesn't make me see Leif as a good-faith interlocutor here."

This is really understandable, though my impression from talking with him is that he is actually thinking about all this in good-faith. I also found the piece unsatisfactory in that it didn't offer solutions, which is what I meant to allude to in saying "But, really, I'm interested in the follow-up piece..."

Thanks for sharing your thoughts, btw :)

JWS
I think it's really great you reached out to him, and I hope things are going well at Stanford and that you're enjoying spring break :) And I think if you're interested in pursuing his ideas, go and talk to him and don't necessarily feel like you have to 'represent EA' in any meaningful way.

I think Poverty is No Pond is a thoughtful piece of criticism, even if I disagree with some of the arguments/conclusions in it. But The Deaths of Effective Altruism is a much worse piece imo, and I don't know how to square its incredible hostility with the picture of a genuine and good-faith person you talked about. Some of it seems to come from a place of deep anger, and it makes simple mistakes or asks questions that could have been answered with some easy research or reflection.

I may raise some of these points more specifically in the 'Questions for Leif' post, but again I think you should ask your own questions rather than mine!

Reading through it, the vitriolic parts are mostly directed at MacAskill. The author seems to have an intense dislike for MacAskill specifically. He thinks MacAskill is a fraud/idiot and is angry at him being so popular and influential. Personally, I don't think this hatred is justified, but I have similar feelings about other popular EA figures, so I'm not sure I can judge that much.  

I think if you ignore everything directed at MacAskill, it comes off as harsh but not excessively hostile, and while I disagree with plenty of what's in there, it does not come across as bad faith to me. 

I cannot really speak to how good or honest Will's public-facing stuff about practical charity evaluation is, and I find WWOTF a bit shallow outside of the really good chapter on population ethics where Will actually has domain expertise. But the claim that Will is hilariously incompetent as a philosopher is, frankly, garbage. As is the argument for it that Will once defined altruism in a non-standard way. Will regularly publishes in leading academic philosophy journals. He became the UK equivalent of a tenured prof super young at one of the world's best universities. Also, frankly, many years ago I actually discussed technical philosophy with Will once or twice, and, like most Oxford graduate students in philosophy, he knows what he's doing.

I am still somewhat worried that Wenar has genuinely good criticism of GiveWell, but that part of the article was somewhat of a mark against its credibility to me even if all the other bad things it says about Will are true. (Note: I'm not conceding they are true.)

Could you explain which parts you thought were 'thoughtful' and 'great'?

Appreciate the question, Larks, & wish I'd noted this initially!

(Aside/caveat: I'm a bit pressed for time so not responding as fully as I'd like but I'll do my best to make time to expand in the coming days.)

"great"

he does a great job critiquing EA as a whole & showing the shortfalls are not isolated incidents.

I think a lot of criticisms of EA as applied highlight specific incidents of a miscalculation or a person who did something objectionable. But I think Leif made an effort to show these shortfalls to be a pattern, as opposed to one-off incidents. And, as a result, I'm currently trying to figure out if there is indeed a pattern of shortcomings, what those patterns are, and how to update or reform or what to do in light of them.

I'm tentatively leaning toward thinking there are some patterns, thanks to Leif and others, but I feel pretty clueless about the last bit (updates/reforms/actions). 

"thoughtful"

Leif Wenar thoughtfully critiqued EA in  "Poverty is No Pond" (2011)

Technically, "thoughtfully" was in reference to Poverty is No Pond. :) The above re pattern of shortcomings was the main reason I linked the piece. And, more importantly, I want to brainstorm with y'all (& Leif) how to update or reform or what to do in light of any patterns of shortcomings.

I do think the article's style & snark undercut Leif's underlying thoughtfulness. When I chatted with him (just once for an hour) a few weeks ago, he showed the utmost kindness, earnestness, & thoughtfulness, with no snark (though I was aware that this post would be tonally different).


Unrequested rhetorical analysis: All the snark does make me feel his primary rhetorical purpose was to discourage talented, thoughtful, well-intentioned young people from engaging with EA, as opposed to changing the minds of those already engaging with EA (& likely frequenting this forum). idk, maybe I'll come to endorse this aim in the future, but in the past I definitely haven't, as evidenced by the hundreds of hours I've spent community building.

So, to clarify, discouraging awesome people from engaging with EA was not my rhetorical purpose in this linkpost. Rather, it was to spark a discussion & brainstorm with y'all about:

  1. Do folks agree EA's shortfalls form a pattern & are not one-off incidents? (And, if so, what are those shortfalls?)
  2. How can we (as individuals or collectively) update or reform / what ought we do differently in light of them?

In general, I think it's important to separate EA as in the idea from EA as in "a specific group of people". You might hate billionaires, MacAskill and GiveWell, but the equal consideration of similar interests can still be an important concept.

Just because you never met them, it doesn't mean that people like GiveDirectly recipients are not "real, flesh-and-blood human", who experience joys and sorrows as much as you do, and have a family or friends just as much as you have.

Tucker Carlson, when writing a similar critique of effective altruism, even used "people" in scare quotes to indicate how sub-human he considers charity beneficiaries to be, just because they happened to be born in a different country and never met a rich person. Amy Schiller says that people you don't have a relationship with are just "abstract objects".

I see EA as going against that, acting on the belief that we are all real people, who don't matter less if we happen to be born in a low income country with no beaches.

As for your questions:

  1. Do folks agree EA's shortfalls form a pattern & are not one-off incidents? (And, if so, what are those shortfalls?)

Yeah, folks agree that EA has many shortfalls, to the point that people write about Criticism of Criticism of Criticism. Some people say that EA focuses too much on the data and ignores non-RCT sources of information and more ambitious change; other people say that it focuses too much on speculative interventions that are not backed by data, based on arbitrary "priors". Some say that it doesn't give enough to non-human animals; some say it shouldn't give anything to non-human animals.

Also, in general anything can call itself "EA", and some projects that have been associated with "EA" are going to be bad just on base rates.

2. How can we (as individuals or collectively) update or reform / what ought we do differently in light of them?

I'd guess it depends on your goals. I think donating more money is increasingly valuable if you think the existing donors are doing a bad job at it. (Especially if you have the income of a Stanford Professor)

Also, suggestions for individuals

Tucker Carlson, when writing a similar critique of effective altruism, even used "people" in scare quotes to indicate how sub-human he considers charity beneficiaries to be, just because they happened to be born in a different country and never met a rich person. Amy Schiller says that people you don't have a relationship with are just "abstract objects".

I think you completely misinterpreted these people as saying the opposite of what they were actually saying. Here's what Carlson said right before his comment:

Every time someone talks about "effective altruism" or helping people he's never met and never will meet, and the consequence of that help will never be recorded, and he doesn't even care what those consequences are -- that is the most dangerous person in the world.

And what Schiller said right after her comment:

There's this like flattening and objectification of people that comes with giving philosophies that really see people as only their vulnerability, only their need only their desperation, that you then the hero donor can save for just $1 a day...

Their criticism of EA is precisely that they think EAs can't see people far away as "real, flesh-and-blood human", just numbers in a spreadsheet. I think that sentiment is inaccurate and that following it with "and that's why donating money to people far away is problematic!" makes no sense, but we should at least try to represent the criticism accurately.

Their criticism of EA is precisely that they think EAs can't see people far away as "real, flesh-and-blood human", just numbers in a spreadsheet.

Yes, I'm accusing them of precisely the thing they are accusing EA of.

To me it's clearly not a coincidence that none of the three of them recommends that we stop using numbers or spreadsheets; instead they propose donating to "real" humans that you have a relationship with.

following it with "and that's why donating money to people far away is problematic!" makes no sense

I think it makes complete sense if they don't think these people are real people, or their responsibility.

Tucker dismisses charity to "people he's never met and never will meet", Schiller is more reasonable but says that it's really important to "have a relationship" with beneficiaries, Wenar brings as a positive example the surfer who donates to his friends.

If either of them endorsed donating to people in a low income country who you don't have a relationship with, I would be wrong.

Very meta observation: in the context of a linkpost with low net positive karma, the primary message conveyed by a downvote may be "I don't think posting this added any value to the Forum." The article's author is a Stanford prof, and Wired is not a small-potatoes publication. There seems to be value in people being aware of it and being given the option to read if they see fit. It appears to have enough substance that there's decent engagement in the comments. To the extent that one wishes to convey that the article itself is unconvincing, I would consider the disagree button over the downvote button.

Thanks for sharing this, Arden.

There's a lot of competition on the "frontpage" between linked articles and direct posts by forum participants. I can understand why people would think this article should not be displacing other things. I do not understand this fetishization of criticism of EA.

For comparison, a link (with some commentary) to an article by Peter Singer on businesses like Humanitix that put charities in the shareholder position, so that profits benefit charities, got 16 cumulative karma. I don't understand why every self-flagellating post has to be a top post.

Fair, but there was (and arguably still is) a disconnect here between the net karma and the number of comments (about 0.5 karma-per-comment (kpc) when I posted my comment), as well as between the net karma and the evidence that a number of users actually decided the Wenar article was worth reading (based on their engagement in the comments). I think it's likely there is a decent correlation between "should spend some time on the frontpage" / "should encourage people to linkpost this stuff" on the one hand and "this is worth commenting on" / "I read the linkposted article."

The post you referenced has 0 active comments (1 was deleted), so the kpc is NaN and there is no evidence either way about users deciding to read the article. Of course, there are a number of posts that I find over-karma'd and under-karma'd, but relatively few have the objective disconnects described above. In addition, there is little reason to think your post received (m)any downvotes at all -- its karma is 16 on 7 votes, as opposed to 33 on 27 for the current post (as of me writing this sentence). So the probability that its karma has been significantly affected by a disagree-ergo-downvote phenomenon seems pretty low. 

It's quite striking and disturbing to me that someone who appears to have some genuine expertise in the area (at least, has published in an OUP book) has such an intensely negative view of GiveWell. (Though his polemical tone overall makes me take this as a weaker signal that GiveWell actually is bad than I otherwise would.) I am not able to really evaluate GiveWell's work for myself, but I had formed a vague sense that it was quite careful and thorough (although they'd maybe held on to deworming as the evidence turned against it). (Though I don't think I ever thought there was much chance they had in fact found the very best charities rather than just very good ones.) Now I am worried that they are much less careful than I thought, and maybe I don't have that much reason to think my donations have been net good :(

I do think with this sort of thing the most productive response is to be laser-focused on the empirical details of the critique of specific interventions or practical evaluation method, and ignore issues of tone, broad philosophical criticisms of utilitarianism or whether we are being unfairly singled out when the critique applies to loads of other people/things too.

I think his fundamental objections are philosophical (i.e. he's more annoyed by the rhetoric of GiveWell et al. than by the inevitable limitations of their research). Most of the details he picks out are weaknesses GiveWell themselves highlighted, and others are general foreign aid critiques, some of which apply less to small orgs distributing nets and pills than to other types of aid programme and organization.

The wider idea that GiveWell and similar RCT-oriented analysis usually misses second order effects especially when undertaken by people with little experience of the developing world is valid but not novel: a more nuanced critique would note that many of these second order effects absent from GiveWell figures are positive and most of them are comparatively small. Criticising a charity evaluator for not estimating how many future deaths are likely from bandit attacks on charities' offices is more dramatic than questioning what proportion of nets would otherwise have been sold to families by local shops anyway, but it's not a more useful illustration of their methodological limitations.  

I actually broadly agree with his general argument that EA overestimates the importance of being smart and analytical and underestimates local/sector knowledge, but I'm not sure there's much in there that's actionable insight (and you could probably apply half the fully general aid criticisms he applies to EA to the surfer-helping-his-friends-in-Indonesia example which he actually likes too)

Yeah, I'm not really bothered by objections at the abstract level of "but you can never account for every side effect", since as you say there's no real reason to think they are net negative. (I get the feeling that he thinks "that's not the point, you evil, ends-justifies-the-means utilitarians; if you're doing harm you should stop, whether or not it leads to some greater good!" But I think that's confusing the view that you shouldn't deliberately do bad things for the greater good with the implausible claim that no one should ever do anything that was a necessary step in a bad thing happening. The latter would just recommend never doing anything significant, or probably even insignificant, ever.)

What does bother me is if:

  1. GiveWell is not properly accounting for or being honest about negative side effects that it actually does know about.

  2. GiveWell is overstating the evidence for the reliability/magnitude of the primary effects the interventions they recommend are designed to bring about.

I got the impression he was also endorsing 1 and 2, though the article doesn't exactly give a detailed defence of them.

I think his argument is mainly "aid is waaayyyy more unpredictable and difficult to measure than your neat little tables crediting yourselves with how efficient you are at saving lives suggest", with GiveWell ironically getting the biggest bashing because of how explicit they are about highlighting limitations in their small print. Virtually all the negative side effects and recommendation retractions he's highlighted come straight from their presentations of their evidence on their website. He's also insistent they need to balance the positives of lifesaving against harm from nets being redeployed for fishing, but ironically the only people I've seen agree with him on that point are EAs.

It's less an argument they're not properly accounting for stuff and more that the summaries with the donation button below sound a lot more confident about impact than the summaries with the details for people that actually want to read them. I'm reminded of Holden's outspoken criticism of big NGOs simplifying their message to "do x for $y per month" being "donor illusion" back in the day... 

I guess I feel "what are they supposed to do, not put their bottom-line best estimate in the summary?". Maybe he'd be satisfied if all the summaries said "our best guess is probably off by quite a lot, but sadly this is unavoidable, we still think your donations will on average do more good if you listen to us than if you try to find the best choice yourself"?

If you haven't read it already, you might find "Poverty is No Pond" (2011) interesting. He discusses his critiques of EA's approach to global development & GiveWell in more detail.

This just seemed to be a list of false claims about things GiveWell forgot to consider, a series of ridiculous claims about philosophy, and no attempt to compare the benefits to the costs. Yes, lots of EA charities have various small downsides, most of which are taken into account, but those are undetectable compared to the hundreds of thousands of lives saved. He suggests empowering local people, which is a good applause line, but it's vague. Most local people are not in a position to do high-quality comparisons between different interventions.

Relevant to the discussion is a recently released book by Dirk-Jan Koch, who was Chief Science Officer in the Dutch Foreign Ministry (which houses their development efforts). The book explores the second-order effects of aid and their implications for effective development assistance: Foreign Aid And Its Unintended Consequences.

In some ways, the arguments of needing to focus more on second-order effects are similar to the famous 'growth and the case against randomista development' forum post.

The west didn't become wealthy through marginal health interventions, why should we expect this for Sierra Leone or Bangladesh?

Second-order effects are important and should be given as much consideration as the first-order effects. But arguing that second-order effects are more difficult to predict, and that we therefore shouldn't do anything, falls prey to the Copenhagen Interpretation of Ethics.

"This bravado carries over into the blunt advice that MacAskill gives throughout the book. For instance, are you concerned about the environment? Recycling or changing your diet should not be your priority, he says; you can be “radically more impactful.” By giving $3,000 to a lobbying group called Clean Air Task Force (CATF), MacAskill declares, you can reduce carbon emissions by a massive 3,000 metric tons per year. That sounds great.

Friends, here’s where those numbers come from. MacAskill cites one of Ord’s research assistants—a recent PhD with no obvious experience in climate, energy, or policy—who wrote a report on climate charities. The assistant chose the controversial “carbon capture and storage” technology as his top climate intervention and found that CATF had lobbied for it. The research assistant asked folks at CATF, some of their funders, and some unnamed sources how successful they thought CATF’s best lobbying campaigns had been. He combined these interviews with lots of “best guesses” and “back of the envelope” calculations, using a method he was “not completely confident” in, to come up with dollar figures for emissions reductions. That’s it.

Strong hyping of precise numbers based on weak evidence and lots of hedging and fudging. EAs appoint themselves experts on everything based on a sophomore’s understanding of rationality and the world. And the way they test their reasoning—debating other EAs via blog posts and chatboards—often makes it worse. Here, the basic laws of sociology kick in. With so little feedback from outside, the views that prevail in-group are typically the views that are stated the most confidently by the EA with higher status. EAs rediscovered groupthink."

Yeah, MacAskill and EA deserve the roasting on this one. FP's report from 6 years ago was a terrible basis for the $1/ton figure. MacAskill should never have used it in WWOTF. The REDD+ section proved to be wildly inaccurate; it underestimates cost by at least 10x, 200x if you account for permanence. The nuclear power advocacy BOTEC was even worse. And FP and GG still reference it!

jackva

I agree with you that the 2018 report should not have been used as primary evidence for CATF cost-effectiveness in WWOTF (and, IIRC, I advised against it and recommended an argument based more on landscaping considerations, with leverage from advocacy and induced technological change). But this comment is quite misleading with regards to FP's work, as we have discussed before:

  1. I am not quite sure what is meant by "referencing it", but this comment from 2022, in response to one of your earlier claims, already discusses that we (FP) have not been using that estimate for anything since at least 2020. This was also discussed in other places before 2022.
  2. As discussed in my comment on the Rethink report you cite, correcting the mistakes in the REDD+ analysis was one of the first things I did when joining FP in 2019, and we stopped recommending REDD+ based interventions in 2020. Indeed, I have been publicly arguing against treating REDD+ as cost-effective ever since, and the thrust of my comment on the RP report is that they were still too optimistic.

(For those in the comments, you can track prior versions of these conversations in EA Anywhere's cause-climate-change channel).

  1. Last time I checked, GG still linked to FP's CATF BOTEC on nuclear advocacy. Yes, I understand FP no longer uses that estimate. In fact, FP no longer publishes any of its BOTECs publicly. However, that hasn't stopped you from continuing to assert that FP hits around $1/ton cost-effectiveness, heavily implying CATF is one such org, and its nuclear work the likely example of it. The BOTEC remains in FP's control, and it has yet to include a disclaimer. Please stop saying you can hit $1/ton based on highly speculative EV calcs with numbers pulled out of thin air. It is not credible and is embarrassing to those of us who work on climate in EA.

  2. I never intended to assert that FP still endorses REDD+, merely to point out that the 2018 FP analysis of REDD+ (along with CCS and nuclear advocacy) was a terrible basis for Will to use in WWOTF for the $1/ton figure. While FP no longer endorses REDD+, FP's recent reports contain all the same process errors that Leif points out about the 2018 report: lack of experience, over-reliance on orgs they fund, best guesses, speculation.

There is a lot in this article that I disagree with. However, I think the following quote is very true[1] and we should take the issue seriously, particularly professional community builders like myself. 

At their best, EAs are well-meaning people who aspire to rigorous analysis. But EA doesn’t always bring out their best. 

  1. ^

    Although I think it's probably true of just about every movement. The real question is whether EA is relatively bad at bringing out the best in well-meaning people. I don't think this is the case currently, but we shouldn't rest on our laurels. 

I really like the framing in Does EA bring out the best in me? (and symmetrically, "When I'm interacting with the effective altruism community, do I help bring out the best in others?")

But as written in this article, it doesn't seem to mean anything more than "I don't like EA". For anything to always bring out people's best, it would need to be unrealistically consistent (as you mention in the footnote).

This entire thing is just another manifestation of academic dysfunction 

(philosophy professors using their skills and experience to think up justifications for their pre-existing lifestyle, instead of the epistemic pursuit that justified the emergence of professors in the first place).

It started with academia's reaction to Peter Singer's Famine, Affluence, and Morality essay in 1972, and hasn't changed much since. The status quo had already hardened, and the culture became so territorial that whenever someone has a big idea, everyone with power (having already optimized for social status) has an allergic reaction to the memetic spread rather than engaging with the epistemics behind the idea itself.

I can see why this piece's examples and tone will rankle folks here. But speaking for myself, I think its core contention is directionally correct: EA's leading orgs' and thinkers' predictions and numeric estimates have an "all fur coat and no knickers" problem -- putative precision but weak foundations. My entry to GiveWell's Change Our Mind contest made basically the same point (albeit more politely).

Another way to frame this critique is to say it's an instance of the Shirky principle: institutions will try to preserve the problem to which they are the solution. If GiveWell (or whoever) tried to clear up the ambiguous evidence underpinning its recommendations by funding more research (on the condition that the research would provide clear cost-benefit analyses in terms of lives saved per dollar), then what further purpose would the evaluator have once that estimate came back?

There are very reasonable counterpoints to this. I just think the critique is worth engaging with.

I looked at the eval for SMC, and it seems they relied largely on a Cochrane meta-analysis and then tried to correct down for a smaller effect in subsequent RCTs. If even relying on the allegedly gold-standard, famously intervention-skeptical Cochrane and then searching for published disconfirmation isn't reliable, how can anyone ever be reasonably confident anything works?

As I argue in the SMC piece, not just any RCT will suffice, and today we know a lot more about what good research looks like. IMO, we should (collectively) be revisiting things we think we know with modern research methods. So yes, I think we can know things. But we are talking about hundreds of millions of dollars. Our evidentiary standards should be high.

Related: Kevin Munger on temporal validity: https://journals.sagepub.com/doi/10.1177/20531680231187271

I guess my pessimism is partly "if the gold standard of 2012 was total garbage, even from an organisation (Cochrane) that has zero qualms about saying there's not much evidence for popular interventions, why should I trust that our 2024 idea of what good research looks like isn't also wildly wrong?" I wasn't criticising you, by the way; it's good you're holding GiveWell to account! I was just expressing stress/upset about the idea that we're all wasting our time or making fools of ourselves.

Some research evaluations last over time! But Munger's 'temporal validity' argument really stuck with me: the social world changes over time, so things that work in one place and time could fail in another for reasons that have nothing to do with rigor, but changing context.

In general, null results should be our default expectation in behavioral research: https://www.bu.edu/bulawreview/files/2023/12/STEVENSON.pdf

However, per https://eiko-fried.com/antidotes-to-cynicism-creep/#6_Antidotes_to_cynicism_creep

More broadly, for me personally, the way forward is to incentivize, champion, and promote better and more robust scientific work. I find this motivating and encouraging, and an efficient antidote against cynicism creep. I find it intellectually rewarding because it is an effort that spans many areas including teaching science, doing science, and communicating science. And I find it socially rewarding because it is a teamwork effort embedded in a large group of (largely early career) scientists trying to improve our fields and build a more robust, cumulative science.

I mean, I guess that is sort of encouraging, if you personally are a scientist, since it suggests you can do good work yourself. But it doesn't offer me much sense that I, who am not a scientist, will ever in fact be able to trust very much outside established theory in the hard sciences, unless you think better methodology is going to be used nearly always by the big reputable orgs and journals. (I mean I already mostly didn't have trust, but I kind of hoped GiveWell were relying on the minority of actually solid stuff.)

Obviously, 'don't trust anything' could just be the right conclusion, and people should say it if it's true! But it's hard not to get disheartened about giving, if the message is "don't trust any research before c. 2015, or also a lot of it afterward, even from the most apparently reliable and skeptical sources; and also, even good research produced now often has little external validity, so probably don't trust that the good current stuff tells you much about what will happen going forward, either".


I am very surprised to read that GiveWell doesn't at all try to factor in deaths caused by the charities when calculating lives saved. I don't agree that you need a separate number for lives lost as for lives saved, but I had always implicitly assumed that 'lives saved' was a net calculation.

The rest of the post is moderately misleading though (e.g. saying that Holden didn't start working at Open Phil, and the EA-aligned OpenAI board members didn't take their positions, until after FTXFF had launched).

The "deaths caused" example picked was pretty tendentious. I don't think it's reasonable to consider an attack at a facility by a violent criminal in a region with high baseline violent crime "deaths caused by the charity" or to extrapolate that into the assumption that two more people will be shot dead for every $100,000 donated. (For the record, if you did factor that into their spreadsheet estimate, it would mean saving a life via that program now cost $4776 rather than $4559)

I would expect the lives saved from the vaccines to be netted out against deaths from extremely rare vaccine side effects (and the same with analysis of riskier medical interventions), but I suspect the net size of that effect is 0 to several significant figures and already factored into the source data.

I don’t think you incorporate the number at face value, but plausibly you do factor it in in some capacity, given the level of detail GiveWell goes into for other factors.

I think if there's no credible reason to assign responsibility to the intervention, there's no need to include it in the model. I think assigning the charity responsibility for the consequences of a crime they were the victim of is just not (by default) a reasonable thing to do.

It is included in the detailed write-up (the article even links to it). But without any reason to believe this level of crime is atypical for the context or specifically motivated by e.g. anger against the charity, I don't think anything else needs to be made of it.

I don't agree that you need a separate number for lives lost as for lives saved, but I had always implicitly assumed that 'lives saved' was a net calculation.

Interesting! I think the question of whether 1 QALY saved (in expectation) is canceled out by the loss of 1 QALY (in expectation) is a complicated question. I tend to think there's an asymmetry between how good well-being is & how bad suffering is, though my views on this have oscillated a lot over the years. I'd like GiveWell to keep the tallies separate because I'd prefer to make the moral judgement depending on my current take on this asymmetry, rather than have them default to saying it's 1:1.
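
To make that preference concrete, a minimal sketch, where k is a made-up asymmetry weight and not anything GiveWell publishes:

```latex
% Sketch: separate tallies let each reader pick their own exchange rate.
% Q_saved and Q_lost are QALYs gained and lost (in expectation); k is a
% hypothetical asymmetry weight, not a GiveWell parameter.
\[
  V \;=\; Q_{\text{saved}} \;-\; k \cdot Q_{\text{lost}},
  \qquad k = 1 \text{ (symmetric default)}, \quad k > 1 \text{ (suffering-weighted)}
\]
```

With the tallies published separately, anyone can plug in their own k; a single net number bakes in k = 1.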

I tend to think there's an asymmetry between how good well-being is & how bad suffering is

This isn't relevant if you think GiveWell charities mostly act to prevent suffering. I think this is certainly true for the health stuff, and arguably still plausible for the economic stuff.

This is an important point. People often confuse harm/benefit asymmetries with doing/allowing asymmetries. Wenar's criticism seems to rest on the latter, not the former. Note that if all indirect harms are counted within the constraint against causing harm, almost all actions would be prohibited. (And on any plausible restriction, e.g. to "direct harms", it would no longer be true that charities do harm. Wenar's concerns involve very indirect effects. I think it's very unlikely that there's any consistent and plausible way to count these as having disproportionate moral weight. To avoid paralysis, such unintended indirect effects just need to be weighed in aggregate, balancing harms done against harms prevented.)

I don’t think it can be separated neatly. If the person who has died as a result of the charity’s existence is a recipient of a disease reduction intervention, then they may well have died from the disease instead if not for the intervention.

I share your view that the criticism of seeming precision in EA is directionally correct, though attacking the cost-effectiveness of anti-malaria interventions sounds like homing in on the least controversial predictions and strongest evidence base!

I'm less convinced the Shirky principle applies here. I don't think clearing up ambiguous evidence for SMC would leave GiveWell or any other research org short of purpose, I think it would leave them in a position where they'd be able to get on with evaluating other causes, possibly with more foundation money headed their way to do so. For malaria specifically I also don't think it's possible to eliminate the uncertainty even with absurd research budgets, because background malaria prevalence and seasonal patterns vary so much by region and time (and are themselves endogenous with respect to prevention strategies used) so comparisons between areas require plugging assumptions into a model, and there will always be some areas where it has more or less effect.

Share your questions for Leif here

Many good points:

  • Use of expected value when error bars are enormously wide is stupid and deceptive

  • EA has too many eggs in the one basket that is GiveWell's research work

  • GiveWell under-emphasises the risks of their interventions and overstates the certainty of their benefits

  • EA is full of young aspiring heroes who think they're the main character in a story about saving the world

  • Longtermism has no feedback mechanism and so is entirely speculative, not evidence-based

  • Mob think is real (this forum still gives people with more karma more votes for some reason)

But then:

  • His only suggestions for a better way to reallocate power/wealth/opportunity from rich to poor are: 1. acknowledging that it's complex and 2. consulting with local communities (neither are new ideas, both are often already done)

  • Ignores the very established, non-EA-affiliated body of development economists using RCTs; Duflo and Banerjee won the Nobel memorial economics prize for this, and Dean Karlan, who started Innovations for Poverty Action, now runs USAID. EA might be cringe but these people aren't.

just fyi Dean Karlan doesn't run USAID, he's Chief Economist. Samantha Power is the (chief) Administrator of USAID.

Well Leif Wenar seems to have written a hatchet job that's deliberately misleading about EA values, priorities, and culture. 

The usual anti-EA ideologues are celebrating about Wired magazine taking such a negative view of EA.

For example, leader of the 'effective accelerationist' movement 'Beff Jezos' (aka Guillaume Verdon) wrote this post on X, linking to the Wenar piece, saying simply 'It's over. We won'. Which is presumably a reference to EA people working on AI safety being a bunch of Luddite 'decels' who want to stop the glorious progress towards ASI replacing all of humanity, and this Wenar piece permanently discrediting all attempts to slowing AI or advocating for AI safety.

So, apart from nitpicking everything that Wenar gets wrong, we should pay attention to the broader cultural context, in which he's seen as a pro-AI e/acc hero for dissing all attempts at promoting AI safety and responsible longtermism.

I'm not sure where the best place to share this is, but I just received a message from GD that made me think of Wenar's piece: John Cena warns us against giving cash with conditions | GiveDirectly (by Tyler Hall)
Ricky Stanicky is a comedy about three buddies who cover for their immature behavior by inventing a fictitious friend ‘Ricky’ as an alibi. [...]

When their families get suspicious, they hire a no-name actor (played by John Cena) to bring ‘Ricky’ to life, but an incredulous in-law grills Ricky about a specific Kenyan cash transfer charity he’d supposedly worked for. Luckily, actor Ricky did his homework on the evidence.

So I just replied to GD asking:
Did John Cena authorize you to say things like “Be like John Cena and give directly”? Or is this legally irrelevant?

D’you notice that you’re using a fraudster as an example?
Even if one accepts that what Cena’s character (Stanicky-Rod) says is true, he’s misleading other people; so the second thing that should come to mind when one reads your message is “so what makes me confident that GD is not lying to me, too?”

At least add some lines to assure your donors (maybe you see them more as customers?) that they are not being similarly fooled.

I'm possibly biased, but I do see that as an instance of an EA-adjacent collaborator failing to put himself in the donor's shoes. But I guess it might be an effective ad, so it's all for the best?
