
A few notes to start: 

  • This is my first Forum post.
  • I received an early copy of the book from a colleague. Though I agree with some of what is included in it, this post does not serve as a holistic endorsement of the book or the ideas therein.
  • This post is written in a personal capacity. The views expressed do not represent those of Effective Altruism DC (EA DC) or any other organization with which I am affiliated.
  • My thanks to Manuel Del Río Rodríguez for his post from 17th January 2023, before the book was released: "Book Critique of Effective Altruism".

 

Why I Read The Good It Promises, the Harm It Does: Critical Essays on Effective Altruism:

  1. Criticism—such as this—is a gift. I care deeply about addressing the issues that are discussed in this volume, and I believe the contributors' perspectives are valuable for me when I am reflecting on how I think about and work on them (especially relative to other communities), as well as how our work is perceived. When I can engage with thoughtful criticism with openness and deep reflection, I learn, grow as a person, and make better decisions. I appreciate the time that the contributors put into this book and that they also care about making the world as good a place as it can be. 
  2. I have read and enjoyed past works by many of the authors included in the volume, not least Adams, Crary, Gruen, and Srinivasan.
  3. I have been engaging with [what later became] EA since 2011 (having first learned about it by reading The Life You Can Save in 2009), but I did not begin full-time community building until 2022. I began this work because I hold (and still do) that EA ideas, funding, and work have had positive impacts and will continue to have positive impacts across numerous axes. I also hold (and still do) that unnecessary harm has occurred along the way and that we need to do better, particularly in professionalizing affiliated organizations and making the community more diverse, equitable, and inclusive. I believe this book can help us do good better.

[Added at 19:00 on 8 February] Many of the critiques found in the book do not reflect how most engaged EAs interpret the ideas or the community; rather than indicting the author(s) for this, try to empathize with how they may have come to this conclusion and follow their arguments from there.

I encourage you to read the book and to share your perspectives (or a summary) with your local group, EA Anywhere, and/or on this Forum post (or in a post of your own).

 

The remainder of this post includes the book summary, chapter titles, and reviews from the publisher, Oxford University Press. 

 

Book Summary:

The Good It Promises, the Harm It Does is the first edited volume to critically engage with Effective Altruism (EA). It brings together writers from diverse activist and scholarly backgrounds to explore a variety of unique grassroots movements and community organizing efforts. By drawing attention to these responses and to particular cases of human and animal harms, this book represents a powerful call to attend to different voices and projects and to elevate activist traditions that EA lacks the resources to assess and threatens to squelch. The contributors reveal the weakness inherent within the ready-made, top-down solutions that EA offers in response to many global problems, and offer in their place substantial descriptions of more meaningful and just social engagement.

 

Table of Contents:

Foreword - Amia Srinivasan

Acknowledgments

About the Contributors

Introduction - Carol J. Adams, Alice Crary, and Lori Gruen

  1. "How Effective Altruism Fails Community-Based Activism" - Brenda Sanders
  2. "Effective Altruism's Unsuspecting Twenty-First Century Colonialism" - Simone de Lima
  3. "Anti-Blackness and the Effective Altruist" - Christopher Sebastian
  4. "Animal Advocacy's Stockholm Syndrome" - Andrew deCoriolis, Aaron S. Gross, Joseph Tuminello, Steve J. Gross, and Jennifer Channin
  5. "Who Counts? Effective Altruism and the Problem of Numbers in the History of American Wildlife Conservation" - Michael D. Wise
  6. "Diversifying Effective Altruism's Long Shots in Animal Advocacy: An Invitation to Prioritize Black Vegans, Higher Education, and Religious Communities" - Matthew C. Halteman
  7. "A Christian Critique of the Effective Altruism Approach to Animal Philanthropy" - David L. Clough
  8. "Queer Eye on the EA Guys" - pattrice jones
  9. "A Feminist Ethics of Care Critique of Effective Altruism" - Carol J. Adams
  10. "The Empty Promises of Cultured Meat" - Elan Abrell
  11. "How "Alternative Proteins" Create a Private Solution to a Public Problem" - Michele Simon
  12. "The Power of Love to Transform Animal Lives: The Deciption of Animal Quantification" - Krista Hiddema
  13. "Our Partners, The Animals: Reflections from a Farmed Animal Sanctuary" - Kathy Stevens
  14. "The Wisdom Gained from Animals who Self-Liberate" - Rachel McCrystal
  15. "Effective Altruism and the Reified Mind - John Sanbonmatsu
  16. "Against "Effective Altruism"" - Alice Crary
  17. "The Change We Need" - Lori Gruen
  18. Coda—"Future-Oriented Effective Altruism: What's Wrong with Longtermism?" - Carol J. Adams, Alice Crary, and Lori Gruen

Index

 

Reviews:

"The story of Effective Altruism is told here not by its proponents, but by those engaged in liberation struggles and justice movements that operate outside of Effective Altruism's terms. There is every possibility that Effective Altruists will ignore what these voices have to say. That would be a deep shame, and what's more, a betrayal of a real commitment to bring about a better world." -- Amia Srinivasan, Chichele Professor of Social and Political Theory at All Souls College, Oxford


"Effective Altruism has made big moral promises that are often undermined by its unwillingness to listen attentively to the voices of its detractors, especially those from marginalized communities. In this vital, stimulating volume, we hear from some of the most important of these voices on some of the most important criticisms of Effective Altruism, including its racism, colonialism, and technocratic rationalism. This book is essential, inviting reading for both Effective Altruists and their critics." -- Kate Manne, Associate Professor at the Sage School of Philosophy, Cornell University


"What could possibly go wrong when a largely white and male alliance of academics, business and nonprofit arrivistes, and obscenely rich donors reduce complex situations to numbers and plug those numbers into equations that claim to offer moral and strategic clarity about how we should live in a suffering world? In this book, dissenting activists and academics speak passionately and plainly about what has gone wrong--and provide an armamentarium for those keen to free action and imagination from the alliance's outsized grip on the work of liberation." -- Timothy Pachirat, author of Every Twelve Seconds: Industrialized Slaughter and the Politics of Sight

Comments:

Disclosure: I work at an animal advocacy organisation funded by ACE and EA Funds.

I finished reading this book. It's almost entirely on animal advocacy. I think the book would benefit quite a lot if the authors focused on narrow, specific claims and provided all the evidence needed to make really strong cases for those claims. Instead, many authors mention many issues without going much deeper than pre-existing debates on the topic. I can't say I have seen much new material, but I already work in animal advocacy, so I read about this topic all the time. Maybe it's good to collect existing criticisms into a book format.

I think the strongest criticism in the book, and one that gets repeated quite a lot, is the problem of measurability bias in animal advocacy. I keep thinking about this too, and I hope we find better ways to prioritise interventions in animal advocacy. Here's MacAskill talking about measurability bias some time ago:

"here’s one thing that I feel gets neglected: The value of concrete, short-run wins and symbolic actions. I think a lot about Henry Spira, the animal rights activist that Peter Singer wrote about in Ethics into Action. He led the first successful campaign to limit the use of animals in medical testing, and he was able to have that first win by focusing on science experiments at New York’s American Museum of Natural History, which involved mutilating cats in order to test their sexual performance after the amputation. From a narrow EA perspective, the campaign didn’t make any sense: the benefit was something like a dozen cats. But, at least as Singer describes it, it was the first real win in the animal liberation movement, and thereby created a massive amount of momentum for the movement.

I worry that in current EA culture people feel like every activity has to be justified on the basis of marginal cost-effectiveness, and that the fact that an action would constitute some definite and symbolic, even if very small, step towards progress — and be the sort of thing that could provide fuel for a further movement — isn't ‘allowable’ as a reason for engaging in an activity. Whereas in activism in general these sorts of knock-on effects would often be regarded as the whole point of particular campaigns, and that actually seems to me (now) like a pretty reasonable position (even if particular instances of that position might often be misguided)."

Yet I think the authors in this book jump too quickly from "You can't measure all the impacts" to "Support my favourite thing".

I think animal advocates have been trying these symbolic gestures for years and it's not clear how much they're now helping farmed animals, and it doesn't seem like much (but counterfactuals are tricky). There's still a lot of this kind of work going on, because non-EA advocates are willing to support it.

Furthermore, EAA has been supporting some of these, like the Nonhuman Rights Project and Sentience Politics, just not betting huge on them. NhRP was previously an ACE standout charity for several years, although now I think they basically only get Movement Grants from ACE, which are smaller. There's work to try to get octopus farming banned before it grows. I think we still support bans on fur farming, foie gras, etc. when good opportunities arise. And corporate welfare campaigns also have symbolic value and build momentum, and the fact that they're so often successful and even often against pretty big targets probably helps with the momentum.

And we have taken big bets on plant-based substitutes and cultured meat for years, with little apparent impact for animals so far.

At first glance, I was worried that a lot of it would be low-quality criticisms that attack strawmen of EA, and the other comments in this thread basically confirm this.

It's astounding how often critics of EA get basic things about EA wrong. The two most salient ones that I see are:

  • "EA is founded on utilitarianism" - no, it's not. EA is loosely based on utilitarianism, but as Erich writes, EA is compatible with a broader range of ethical frameworks, particularly beneficentric ones.
    • Corollary: if you morally value other things besides welfare (e.g. biodiversity), come up with some way to trade off those moral goods against each other and compare interventions using your all-things-considered metric.
  • "EA only cares about things that can be easily measured" - again, no. It's great when there's empirical studies of cost-effectiveness, but we all recognize that's not always possible. In general, the EA movement has become more open to acting on greater uncertainty. What ultimately matters is estimating impact, not measuring it. Open Phil had back-of-the-envelope calculations for a vast set of cause areas in 2014. EAs have put pages and pages of effort into trying to estimate the impact of things that are hard to measure, like economic growth and biosecurity interventions.

To be fair, my responses to these criticisms above still assume that you can quantify good done. But in principle, you can compare the impact of interventions without necessarily quantifying them (e.g. using ordinal social welfare functions); it's just a lot easier to make up numbers and use them.
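Since this comment mentions both trading off moral goods with an all-things-considered metric and comparing interventions ordinally, here is a minimal sketch of the difference. All intervention names, criteria, scores, and weights below are invented for illustration:

```python
# Hypothetical scores for two moral goods; all numbers are illustrative only.
interventions = {
    "corporate_campaign": {"welfare": 3, "biodiversity": 1},
    "habitat_restoration": {"welfare": 1, "biodiversity": 3},
    "sanctuary": {"welfare": 2, "biodiversity": 1},
}

def cardinal_score(scores, weights):
    """All-things-considered metric: collapses goods via explicit trade-off weights."""
    return sum(weights[k] * v for k, v in scores.items())

def ordinally_dominates(a, b):
    """True if a is at least as good as b on every criterion and strictly
    better on at least one -- no magnitudes or weights assumed."""
    return all(a[k] >= b[k] for k in a) and any(a[k] > b[k] for k in a)

weights = {"welfare": 1.0, "biodiversity": 0.5}  # a value judgment, not data
for name, scores in interventions.items():
    print(name, cardinal_score(scores, weights))
# corporate_campaign 3.5, habitat_restoration 2.5, sanctuary 2.5

a = interventions["corporate_campaign"]
print(ordinally_dominates(a, interventions["habitat_restoration"]))  # False: the goods trade off
print(ordinally_dominates(a, interventions["sanctuary"]))            # True: better on all counts
```

The cardinal metric ranks all three options, but only because the weights encode a value judgment; the ordinal check never invents magnitudes, at the cost of staying silent whenever the goods genuinely trade off against each other.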

On the one hand, I really wish people who want to criticize EA would actually do their fucking homework and listen to what EAs themselves have said about the things they're criticizing. On the other hand, if people consistently mistakenly believe that EA is only utilitarianism and only cares about measurable outcomes, then maybe we need to adjust our own messaging to avoid this.

So if you're wondering why EAs are reluctant to engage with "deep criticisms" of EA principles, maybe it's because a lot of them miss the mark.

I think it's wrong to think of it as criticism in the way EA thinks about criticism, i.e. "tell me X thing I'm doing is wrong so I can fix it"; rather, it highlights a set of existing fundamental disagreements. I think the book is targeted at an imagined left-wing young person who the authors think would be "tricked" into EA because they misread certain claims that EA puts forward. It's a form of memeplex competition. Moreover, I do think some of the empirical details about the effect ACE has on the wider community can inform a lot of EAs about coordinating with wider ecosystems in their cause areas and about common communication failure modes.

Well, no. I gather that the goal of these criticisms is to "disprove EA" or "argue that EA is wrong". To the extent that they attack strawmen of EA instead of representing it for what it is and arguing against that, they've failed to achieve that goal.

FWIW, I read your comment as agreeing with zchuang's. They say that the book aims to convince its target audience by highlighting fundamental differences, and you say it aims to disprove EA. Highlighting (and I guess more specifically, sometimes at least, arguing against) fundamental principles of EA seems like it's in the "disproving EA" bucket to me.

(I agree with both of these perspectives, but I only read a single essay, which zchuang called one of the weird ones, so take that with a grain of salt.)


I would really like to read a summary of this book. The reviews posted here (edit: in the original post) do not actually give much insight as to the contents. I'm hoping someone will post a detailed summary on the forum (and, as EAs love self-criticism, fully expect someone will!).

Thanks very much, Kyle, for starting this thread. I put the penultimate draft of my essay up at PhilPapers.org, for anyone who might be interested in reading Chapter 6, "Diversifying EA Long Shots: An Invitation to Prioritize Black Vegans, Higher Education, and Religious Communities." 

Abstract, keywords, and free download are available here: https://philpapers.org/rec/HALQEA-2 . Please note that the pagination mostly tracks with the published version, but there are limits to my rinky-dink MS-Word formatting efforts to make the penultimate draft (which is slightly different in places) read as much like the published version as possible. :)

Writing this essay was a fun learning experience for me (though I'm sure you'll see I'm still on the learning curve) and I especially appreciated the always careful, always kind critical feedback I received from members of the EA community including JD Bauman, Caleb Parikh, Dominic Roser, and Zak Weston. 

Feedback welcome here, but I confess to being old, not very lively online, and more of a reader and a ruminator than a forum commenter. Much gratitude for the opportunity to post!

Update: There was a printing error in the original pdf I posted, which led to pp. 82-83 appearing twice while pp. 84-85 were omitted. I have corrected the error. Apologies to anyone who may have downloaded the incomplete version.

I want to adhere to forum norms and maintain a high quality in my posts, but this is tempting me to throw all that out the window. Of course, I will read a summary if one is provided, but going over these chapter titles, this book could just as well be a caricature of wokeness. Prioritizing Black Vegans? Queer Eye on the EA Guys? The celebratory quote complaining about white males getting it all wrong? Not to mention chapter 11 sounds seriously reminiscent of degrowthers. “Sure, alternative proteins ended factory farming, but they didn’t overthrow capitalism.”

My priors on this having any value to the goal of doing the most good are incredibly low.

  1. The Black Vegans one is about different consumer price elasticities between racial groups along various axes.
  2. Queer Eye on the EA Guys is about different measures of animal suffering and coordination between EA and animal activists broadly.
  3. Chapter 11 I also expected to be about degrowthers, but it's about regulatory capture and the Jevons paradox.

Moreover, I think naming conventions for left-wing texts just have this effect. It depends on how the audience pattern-matches I guess. Also Queer Eye on the EA Guys is just a funny pun. It's an interesting read for me personally at least. I don't think it changed any of my opinions or actions.

So I'm tempted to read it because I like to engage with criticism that someone has spent a long time writing, but having read the article they wrote to preface it (https://blog.oup.com/2022/12/the-predictably-grievous-harms-of-effective-altruism/) I imagine I'm gonna hear the flaws and be like "no these are features, not bugs" .

From the above article:

To grasp how disastrously an apparently altruistic movement has run off course, consider that the value of organizations that provide healthy vegan food within their underserved communities are ignored as an area of funding because EA metrics can’t measure their “effectiveness.” 

Yes, seems reasonable not to fund stuff until you know effectiveness (though I've now read this example in the book, and they might be cost-effective since they seem to be well attended)

 Or how covering the costs of caring for survivors of industrial animal farming in sanctuaries is seen as a bad use of funds.

Yeah seems right

Or how funding an “effective” organization’s expansion into another country encourages colonialist interventions that impose elite institutional structures and sideline community groups whose local histories and situated knowledges are invaluable guides to meaningful action.

So while I think that it's possible to overquantify, yeah, I probably am skeptical that local histories are going to outcompete an effective intervention.

So I guess I predict I'm gonna think "a couple of useful examples, boy these people don't like us, yeah we didn't want to do that anyway, okay yeah no that's an actual fair criticism"

And if we don't respond, it will be all "you didn't read our book you don't like criticism"

And if I respond like that it will be "you haven't really engaged with it".

So yeah, unsure how to respond. I guess, do I think that if I read it and wrote a response it would be interestingly engaged with? No, not really - the book's tone is combative, as if it's doing the least possible work to engage but wants to say it tried. 

So yeah, I have little confidence that reading this book will start an actual discussion. Maybe I'll talk to the authors on twitter xxox

"...are ignored as an area of funding because EA metrics can’t measure their 'effectiveness.'"

Yes, seems reasonable not to fund stuff until you know effectiveness

The EA movement doesn't ignore interventions that can't be easily measured, though. As I stated in another comment, what matters is being able to estimate impact, not being able to measure it directly e.g. through RCTs.

"Or how funding an 'effective' organization’s expansion into another country encourages colonialist interventions that impose elite institutional structures and sideline community groups whose local histories and situated knowledges are invaluable guides to meaningful action."

So while I think that it's possible to overquantify, yeah, I probably am skeptical that local histories are going to outcompete an effective intervention.

I actually think that local knowledge, indigenous knowledge, etc. can be helpful for informing the design of interventions, but they're best used as an input to the scientific method, not a substitute for it.

Aside: I think these paragraphs are examples of opportunities for "EA judo": one can respond, as I did, that they're disagreeing with the reasoning methods that EA allegedly uses without disagreeing with the core principle that doing good effectively is important.

That seems like a reasonable assessment. I do think the authors would be willing to discuss their pieces, but I do not know how worthwhile it would be (though, admittedly, I think a public debate could make for an interesting event). 

I read "A Christian Critique of the Effective Altruism Approach to Animal Philanthropy" as a sampling. I picked it simply because it piqued my interest. I don't know whether it's representative of the book as a whole. Some thoughts ...

This essay is clearly not aimed at me, since it critiques EA from the point of view of Christian ethics, and while there are definitely Christian EAs, I personally find Christianity (and by extension, Christian ethics) highly implausible. I also find deontology and consequentialism sounder than virtue ethics. So it's no surprise that I find the author's worldview in the essay unconvincing, but the essay also presents arguments that don't really rely on Christianity being true, which I'll get to in a bit.

The essay proceeds roughly along these lines:

  • EA is founded on utilitarianism, and utilitarianism has issues.
  • In particular, there are problems when applying EA to animal advocacy.
  • The author's Christian ethical framework is superior to the EA framework when deciding where to donate money to help animals.

Now, as for what I think about it ...

  • First, on utilitarianism.
    • The essay states: "Effective Altruism is founded on utilitarianism, and utilitarianism achieves simplicity in its consideration of only one morally relevant aspect of a situation. [...] The heart of what's wrong with Effective Altruism is a fundamental defect of utilitarianism: there are important morally relevant features of any situation that are not reducible to evaluating the results of actions and are not measurable or susceptible to calculation. This makes it inevitable that features of a situation to which numbers can be assigned are exaggerated in significance, while others are neglected."
      • This is the key argument that the essay presents against EA: utilitarianism is wrong since it dismisses non-welfarist goods, and therefore EA is wrong since it's a subset of utilitarianism.
    • I take this to be an argument about the philosophy of EA, not about the way it's practiced. But IMO it's false to say that EA is founded on utilitarianism (assuming we take "founded" to mean "philosophically grounded in" rather than "established alongside"). I think the premises EA relies on are weaker than that; they're something more like beneficentrism: "The view that promoting the general welfare is deeply important, and should be amongst one's central life projects."
      • This ends up mattering, because it means that EA can be practiced perfectly well while accepting deontic constraints, or non-welfarist values. I reckon you just need to think it's good to promote the good (this works for many different, though not all, definitions of the good), and to actually put that into practice and do it effectively.
    • There's no point in re-litigating the soundness of utilitarianism here, but though I lean deontological, as mentioned I find consequentialism (and utilitarianism) more plausible than Christian and/or virtue ethics. Anyway, I think even if utilitarianism were wrong or bad, EA would still be good and right, on grounds similar to beneficentrism.
  • Second, on measuring and comparing.
    • The essay argues that, though EAs love quantifying and measuring things, and then comparing things in light of that, this is a false promise: "All [EA] is doing is taking one measurable feature of a situation and representing it as maximal effectiveness. A Christian ethical analysis of making decisions about spending money, or anything else, would always be concerned to bring due attention to all the ethical moving parts."
    • With animals in particular, it's extremely hard to compare different kinds of good, and we should take a pluralistic approach to doing so: "How do you decide between supporting an animal sanctuary offering the opportunity for previously farmed animals to live out the remainder of their lives in comfort, or a campaign to require additional environmental enrichment in broiler chicken sheds, or the promotion of plant-based diets? Each is likely to have beneficial impacts on animals, but they are of very different kinds. The animal sanctuary is offering current benefits to the particular group of animals it's looking after. If successful, the broiler chicken campaign is likely to affect many more animals, but with a smaller impact on each. If the promotion of plant-based diets is successful on a large scale, it could reduce the demand for broiler chickens together with other animal products, but it might be hard to demonstrate the long-term effects of a particular campaign."
    • For example, giving to the farm sanctuary provides a lot of good that isn't easily measurable: "People have the experience of coming to a farmed animal sanctuary and encountering animals that are not being used in production systems. They have an opportunity to recognize the particularities of the animals' lives, such as what it means for this kind of animal to flourish. This encounter might well be transformative in the person's understanding of their relationship with farmed animals." And a farm sanctuary may better allow humans to develop their virtue: "It would be hard to measure the effectiveness of that kind of education and character development in Effective Altruism terms."
      • As an aside, here's an issue I have with virtue ethics. I think it's perverse to think that doing something good for an animal (or human) is good because it allows one to develop one's virtue. Surely it's good to save animals from the horrific suffering they're subjected to in factory farms for the sake of the animals themselves, and the important thing here is what happens to them, what's good and bad for those whose suffering cries out that we do something?
      • So when I read: "If [...] you take the shortcut of just getting people to buy plant-based meat because it tastes good or costs less, as soon as either of those things change in a particular context and it becomes advantageous for people to behave in ways that result in bad treatment of animals, they have no reason to do otherwise." I can't help but think, Well, if I'm a pig in a factory farm, I probably don't give a fuck whether people stop eating meat because they prefer the taste of Impossible Pork or because they Saw The Light, I just want to get out of my shit-filled seven-by-two-feet gestation crate!
      • (Of course, if getting people to See The Light is the best way of getting fewer sows in gestation crates, I think EAs would happily endorse that strategy! That's just an empirical question. But it's quite a different thing to say that getting people to See The Light is better even though it leads to more pigs in gestation crates.)
    • Next, the author presents the systemic change argument against EA. In particular, the essay argues that EA's focus on measurements and data (1) causes EAs to be short-sighted, focusing on small, measurable wins at the expense of large, hard-to-measure wins, and (2) causes EAs to ignore or miss harder-to-measure second-order effects.
      • (The author does write that EAs could just do the better thing if there's a better thing to do. But this won't help, because EA's definition of "better" is lacking: it still dismisses (writes the author) all non-welfarist goods.)
      • I don't want to rehash that debate here as it's already been discussed at length elsewhere.
  • Third, the author presents an alternative to EA.
    • Don't get your hopes up, though. "The bad news is that there is no simple alternative Christian procedure for identifying the best options for giving."
    • Nonetheless, the author ventures three thoughts ...
      • First, you should trust your judgment: "Do not be tempted by claims of Effective Altruism or any other scheme to offer an objective rational basis for your decision. This is complicated stuff. It is much more complicated than any decision-making system can deal with. Your own commitments are likely to be a better initial basis for decision-making than any claimed objective system."
        • This seems basically like "trust your intuition / don't listen to others" to me, but I think people's intuition is often wrong and inconsistent, that listening to others allows you to form better views, and that if you care about achieving some goal (e.g. helping animals), you really should look at the evidence and use reason (though your intuitions are also evidence).
      • Second, remember that the most salient cause isn't necessarily the best: "It is easy to get the public to be concerned about big fluffy animals like pandas that they’ve seen in nature documentaries and who live far away. It is harder to get people interested in the farmed animals who live in warehouses not far away but hidden from view."
        • I, and I'd imagine all EAs, agree with this one! I also think it's in tension with the first suggestion: often people's commitments and personal judgments are closely connected with what they've been exposed to, because why wouldn't they be?
      • Third, don't ask for too much: "It is unhelpful to think that you are searching for the single most effective way your money can be used. Instead, you are looking for a good way to support a project that aligns with your priorities, is well-run, and looks like it has a good chance of achieving its goals."
        • I guess this may be true (though depressing) if it's true that we're clueless and can't compare causes. For reasons mentioned above, I think we can (and must) compare, but I get why the author ends up here given their other beliefs.

Going back to relying just on intuition and not listening to others would also seem pretty unvirtuous (unwise/imprudent) to me, but (without having read the chapter), I doubt the author would go that far, given his advice to look "for a good way to support a project that aligns with your priorities, is well-run, and looks like it has a good chance of achieving its goals". I would also guess he doesn't mean you should never question your priorities (or moral intuitions) or investigate where specific lines of moral reasoning lead.

I think he's mostly skeptical about relying primarily on one particular system, especially any simple one, because it would be likely to miss so much of what matters and so cause harm or miss out on doing better. But I think this is something that has been expressed before by EAs, including people at Open Phil, typically with respect to worldview diversification:

(E.g. the train to crazy town) https://80000hours.org/podcast/episodes/ajeya-cotra-worldview-diversification/

https://forum.effectivealtruism.org/posts/8wWYmHsnqPvQEnapu/getting-on-a-different-train-can-effective-altruism-avoid

https://forum.effectivealtruism.org/posts/T975ydo3mx8onH3iS/ea-is-about-maximization-and-maximization-is-perilous

"Alexander Berger: And I think part of the perspective is to say look, I just trust philosophy a little bit less. So the fact that something might not be philosophically rigorous…I’m just not ready to accept that as a devastating argument against it." https://80000hours.org/podcast/episodes/alexander-berger-improving-global-health-wellbeing-clear-direct-ways/

However, it seems EAs are willing to give much greater weight to philosophical arguments and the recommendations of specific systems than the author is.

On virtue ethics (although to be clear, I've read very little about virtue ethics, so may be way off), another way we might think about this is that the virtue of charity, say, is one of the ways you capture others mattering. You express and develop the virtue of charity to help others, precisely because other people and their struggles matter. It's good for you, too, but it's good for you because it's good for others, like how satisfying your other-regarding preferences is good for you. Getting others to develop the virtue of charity is also good for them, but it's good for them because it's good for those that stand to be helped.

The argument you make against virtue ethics is also similar to an argument I'd make against non-instrumental deontological constraints (and I've also read very little about deontology): such constraints seem like a preoccupation with keeping your own hands clean instead of doing what's better for moral patients. And helping others abide by these constraints, similar to developing others' virtues, seems bad if it leads to worse outcomes for others. But all of this is supposed to capture ways others matter.

And more generally, why would it be better (or even sometimes obligatory) to do something that's worse for others overall than an alternative?

Going back to relying just on intuition and not listening to others would also seem pretty unvirtuous (unwise/imprudent) to me, but (without having read the chapter), I doubt the author would go that far, given his advice to look "for a good way to support a project that aligns with your priorities, is well-run, and looks like it has a good chance of achieving its goals". I would also guess he doesn't mean you should never question your priorities (or moral intuitions) or investigate where specific lines of moral reasoning lead.

I think he's mostly skeptical about relying primarily on one particular system, especially any simple one, because it would be likely to miss so much of what matters and so cause harm or miss out on doing better.

Yeah that makes sense to me. My original reading was probably too uncharitable. Though when I read zchuang's observation further up

I think the book is targeted at an imagined left-wing young person who the authors think would be "tricked" into EA because they misread certain claims that EA puts forward. It's a form of memeplex competition.

I now feel like maybe the author isn't warning readers about the perils of focusing on any particular worldview, but specifically about worldviews like EA, which often take one measure and optimise it in practice (even if the philosophy permits a pluralistic view of value).

It does seem like their approach would have the effect of making people defer less, or bias them towards their original views and beliefs, though? Here's the full paragraph:

First, you have more reason to trust your judgments than you assume. What motivates you to give to make things better for animals? What kinds of mistreatment of animals are you most concerned about? Of the many kinds of activities benefitting animals, which are you most drawn to? Reflect on your priorities as a starting point. Do not be tempted by claims of Effective Altruism or any other scheme to offer an objective rational basis for your decision. This is complicated stuff. It is much more complicated than any decision-making system can deal with. Your own commitments are likely to be a better initial basis for decision-making than any claimed objective system.

And on this ...

On virtue ethics (although to be clear, I've read very little about virtue ethics, so may be way off), another way we might think about this is that the virtue of charity, say, is one of the ways you capture others mattering. You express and develop the virtue of charity to help others, precisely because other people and their struggles matter. It's good for you, too, but it's good for you because it's good for others, like how satisfying your other-regarding preferences is good for you. Getting others to develop the virtue of charity is also good for them, but it's good for them because it's good for those they'll help.

Yeah sure, though I don't think this really gets around the objection (at least not for me -- it's based on intuition, after all). Even if you build character in this way in order to help ppl/animals in the future, it's still the case that you're not helping the animals you're helping for their own sake, you're doing it for some other reason. Even if that other reason is to help other animals in the future, that still feels off to me.

The argument you make against virtue ethics is also similar to an argument I'd make against non-instrumental deontological constraints (and I've also read very little about deontology): such constraints seem like a preoccupation with keeping your own hands clean instead of doing what's better for moral patients. And helping others abide by these constraints, similar to developing others' virtues, seems bad if it leads to worse outcomes for others. But all of this is supposed to capture ways others matter.

I think this is a pretty solid objection, but I see two major differences between deontology and virtue ethics (disclaimer: I haven't read much about virtue ethics either so I could be strawmanning it) here:

  1. Deontological duties are actually rooted in what's good/bad for the targets of actions, whereas (in theory at least) the best way of building virtue could be totally disconnected from what's good for people/animals? (The nature of the virtue itself could not be disconnected, just the way you come by it.) E.g. maybe the best way of building moral character is to step into a character building simulator rather than going to an animal sanctuary? It feels like (and again I stress my lack of familiarity) a virtue ethicist comes up with what's virtuous by looking at the virtue-haver (and of course what happens to others can affect that, but what goes on inside the virtue-haver seems primary), whereas a deontologist comes up with duties by looking at what's good/bad for those affected (and what goes on inside them seems primary).
  2. Kantianism in particular has an injunction against using others as mere means, making it impossible to make moral decisions without considering those affected by the decision. (Though, yeah, I know there are trolley-like situations where you kind of privilege the first-order affected over the second-order affecteds.)

Edit: Also, with Kant, in particular, my impression is that he doesn't go, "I've done this abstract, general reasoning and came to the conclusion that lying is categorically wrong, so therefore you should never lie in any particular instance", but rather "in any particular instance, we should follow this general reasoning process (roughly, of identifying the maxim we're acting according to, and seeing if that maxim is acceptable), and as it happens, I note that the set of maxims that involve lying all seem unacceptable". Not sure if I'm communicating this clearly ...

I would expect that living your life in a character building simulator would itself be unvirtuous. You can't actually express most virtues in such a setting, because the stakes aren't real. Consistently avoiding situations where there are real stakes seems cowardly, imprudent, uncharitable, etc. Spending some time in such simulators could be good, though.

On Kantianism, would trying to persuade people to not harm animals or to help animals mean using those people as mere means? Or, as long as they aren't harmed, it's fine? Or, as long as you're not misleading them, you're helping them make more informed decisions, which respects and even promotes their agency (even if your goal is actually not this, but just helping animals, and you just avoid misleading in your advocacy). Could showing people factory farm or slaughterhouse footage be too emotionally manipulative, whether or not that footage is representative? Should we add the disclaimer to our advocacy that any individual abstaining from animal products almost certainly has no "direct" impact on animals through this? Should we be more upfront about the health risks of veganism (if done poorly, which seems easy to do)? And add various other disclaimers and objections to give a less biased/misleading picture of things?

Could it be required that we include these issues with all advocacy, to ensure no one is misled into going vegan or becoming an advocate in the first place?

I would expect that living your life in a character building simulator would itself be unvirtuous. You can't actually express most virtues in such a setting, because the stakes aren't real. Consistently avoiding situations where there are real stakes seems cowardly, imprudent, uncharitable, etc. Spending some time in such simulators could be good, though.

Yes, I imagined spending some time in a simulator. I guess I'm making the claim that, in some cases at least, virtue ethics may identify a right action but seemingly without giving a good (IMO) account of what's right or praiseworthy about it.

On Kantianism, ...

There are degrees of coercion, and I'm not sure whether to think of that as "there are two distinct categories of action, the coercive and the non-coercive, but we don't know exactly where to draw the line between them" or "coerciveness is a continuous property of actions; there can be more or less of it". (I mean by "coerciveness" here something like "taking someone's decision out of their own hands", and IMO taking it as important means prioritising, to some degree, respect for people's (and animals') right to make their own decisions over their well-being.)

So my answer to these questions is: It depends on the details, but I expect that I'd judge some things to be clearly coercive, others to be clearly fine, and to be unsure about some borderline cases. More specifically (just giving my quick impressions here):

On Kantianism, would trying to persuade people to not harm animals or to help animals mean using those people as mere means? Or, as long as they aren't harmed, it's fine? Or, as long as you're not misleading them, you're helping them make more informed decisions, which respects and even promotes their agency (even if your goal is actually not this, but just helping animals, and you just avoid misleading in your advocacy).

I think it depends on whether you also have the person's interests in mind. If you do it e.g. intending to help them make a more informed or reasoned decision, in accordance with their will, then that's fine. If you do it trying to make them act against their will (for example, by threatening or blackmailing them, or by lying or withholding information, such that they make a different decision than had they known the full picture), then that's using as a mere means. (A maxim always contains its ends, i.e. the agent's intention.)

Could showing people factory farm or slaughterhouse footage be too emotionally manipulative, whether or not that footage is representative?

Yeah, I think it could, but I also think it could importantly inform people of the realities of factory farms. Hard to say whether this is too coercive, it probably depends on the details again (what you show, in which context, how you frame it, etc.).

Should we add the disclaimer to our advocacy that any individual abstaining from animal products almost certainly has no "direct" impact on animals through this?

Time for a caveat: I'd never have the audacity to tell people (such as yourself) in the effective animal advocacy space what's best to do there, and anyway give some substantial weight to utilitarianism. So what precedes and follows this paragraph aren't recommendations or anything, nor is it my all-things-considered view, just what I think one Kantian view might entail.

By "direct impact", you mean you won't save any specific animal by e.g. going vegan, you're just likely preventing some future suffering -- something like that? Interesting, I'd guess not disclosing this is fine, due to a combination of (1) people probably don't really care that much about this distinction, and think preventing future suffering is ~just as good, (2) people are usually already aware of something like this (at least upon reflection), and (3) people might have lots of other motivations to do the thing anyway, e.g. not wanting to contribute to an intensively suffering-causing system, which make this difference irrelevant. But I'm definitely open to changing my mind here.

Should we be more upfront about the health risks of veganism (if done poorly, which seems easy to do)?

I hadn't thought about it, but it seems reasonable to me to guide people to health resources for vegans when presenting arguments in favour of veganism, given the potentially substantial negative effects of doing veganism without knowing how to do it well.

Btw, I'd be really curious to hear your take on all these questions.

What I have in mind for direct impact is causal inefficacy. Markets are very unlikely to respond to your purchase decisions, but we have this threshold argument that the expected value is good (maybe in line with elasticities), because in the unlikely event that they do respond, the impact is very large. But most people probably wouldn't find the EV argument compelling, given how unlikely the impact is in large markets.
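For readers unfamiliar with it, here is a toy version of that threshold argument. The batch size and elasticity below are invented for illustration, not empirical estimates:

```python
# Toy model of the threshold (expected value) argument for consumer impact.
# All numbers are illustrative assumptions.

batch_size = 25_000            # assume the supplier adjusts production in batches this big
p_tip = 1 / batch_size         # chance that one purchase tips an order over a threshold
impact_if_tipped = batch_size  # birds affected when a threshold is actually crossed

print(p_tip * impact_if_tipped)  # ~1.0 bird per purchase forgone, in expectation

# An elasticity below 1 shrinks the expected impact proportionally
# but doesn't eliminate it:
elasticity = 0.7  # assumed: a 1% drop in demand cuts production by 0.7%
print(p_tip * impact_if_tipped * elasticity)  # ~0.7
```

This makes the tension in the comment visible: the expected value per purchase is respectable, but the probability that any individual purchase matters at all is one in twenty-five thousand, which is why many people find the argument psychologically uncompelling.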

I think it's probably good to promote health resources to new vegans and reach them pretty early with these, but I'd worry that if we pair this information with all the advocacy we do, we could undermine ourselves. We could share links to resources, like Challenge22 (they have nutritionists and dieticians), VeganHealth and studies with our advocacy, and maybe even say being vegan can take some effort to do healthfully and for some people it doesn't really work or could be somewhat worse than other diets for them (but it's worth finding out for yourself, given how important this is), and that seems fine. But I wouldn't want to emphasize reasons not to go vegan or the challenges with being vegan when people are being exposed to reasons to go vegan, especially for the first time. EDIT: people are often looking for reasons not to go vegan, so many will overweight them, or use confirmation bias when assessing the evidence.

I guess the other side is that deception or misleading (even by omission) in this case could be like lying to the axe murderer, and any reasonable Kantian should endorse lying in that case, and in general should sometimes endorse instrumental harm to prevent someone from harming another, including the use of force, imprisonment, etc. as long as it's proportionate and no better alternatives are available to achieve the same goal. What the Health, Cowspiracy and some other documentaries might be better examples of deception (although the writers themselves may actually believe what they're pushing) and a lot of people have probably gone vegan because of them.

Misleading/deception could also be counterproductive, though, by giving others the impression that vegans are dishonest, or by having lots of people leave because they didn't get resources to manage their diets well, which could even give the overall impression that veganism is unhealthy.

I don't read academic criticism a lot, so what's the context of a book like this? Is it normal? What does it imply, if anything?

Disclaimer: I read it a while ago and this is a quick reproduction from memory. I also have a bad memory of some of the weirder chapters (the Christianity one, for instance). These also do not express my personal opinions, but rather steelmans and reframings of the book.

I'm from the continental tradition and read a lot of the memeplex (e.g. Donna Haraway, Marcuse, and Freire). I'll try to make this short summary more EA-legible:

1. The object-level part of its criticisms draws upon qualitative data from animal activists who take a higher risk of failure but more abolitionist approaches. The criticism is then that the marginal change pushed by EA makes abolition harder because of the following: (a) lack of coordination with, and respect for, the animal rights activists on the left, and specifically the history there; (b) how funding distorts the field, eats up talent, and competes against the left; (c) how they have to bend themselves to be epistemically scrutable to EA.

An EA steelman of a similar line of thinking is EAs who are strongly against working for OpenAI or DeepMind at all, because it safety-washes and pushes capabilities anyway. The criticism here is that the way EA views problems means EA will only go towards solutions that are piecemeal rather than transformative. A lot of Marxists felt similarly about welfare reform, in that it quelled the political will for "transformative" change to capitalism.

For instance, they would say a lot of companies are pursuing RLHF in AI safety not because it's the correct way to go but because it's the easiest, lowest-hanging fruit (even if it produces deceptive alignment).

2. Secondarily, there is a values-based criticism in the animal rights section that EA is too utilitarian, which leads to: (a) preferring charities that lessen animal suffering in narrow senses, and (b) when EA does take risks with animal welfare, doing so in a more technocratic way that is therefore prone to market hype around things like alternative proteins.

A toy example that might help: something like cage-free eggs would violate (a) because it makes the egg company better able to defuse criticism, and (b) because it shows a lack of imagination about ending egg farming overall and sets up a false counterfactual.

3. Thirdly, on global poverty it makes a few claims:

a. The motivation towards quantification is a selfish one, citing Herbert Marcuse's arguments on how neoliberalism has captured institutions. Specifically, the argument criticises Ajeya Cotra's 2017 talk about effective giving, framing it as serving a selfish internal psychological need for quantification and for finding comfort in that quantification.

b. The counterfactual for poverty, and the possible set of actions, are much larger than EA assumes, because EA doesn't consider the amount of collective action possible. The author sets out examples of consciousness-raising activism that at first glance looks "small" and "intractable" but sparks big upheavals (funnily, naming Greta Thunberg among Black social justice activists, which offended my sensibilities).

c. EA runs interference for rich people, providing them cover against potential political action (probably the weakest claim of the bunch).

I think a lot of the anti-quantification-type arguments that EAs thumb their noses at should be reframed, because they are not as weak as they seem, nor as uncommon in EA. For instance, SPARC and other sorts of community building efforts are successful because they introduce people to transformative ideas. E.g. it's not a specific activity done but the combination of community and vibes, broadly construed, that leads to really talented people doing good.

4. Longtermism doesn't get much of a mention because of publishing time. There's just a meta-criticism that the switchover from neartermism to longtermism reproduces the same pattern of thinking, along with a subtle intellectual inconsistency: EAs used to say activism and systemic change were too moonshot, but now they're doing longtermism.

I feel like a lot of the cruxes of how you receive these criticisms depend on what memeplex you buy into. I think if people are pattern-matching to Torres-type hit pieces, they're going to be pleasantly surprised. These are real dyed-in-the-wool leftists. It's not so much weird gotchas targeted at getting retweets from Twitter beefs and libs; it's written for leftist students, and it seems more targeted towards the animal activism side and, in parts, towards specific instances of clashes between left animal activists and EA.

An EA steelman of a similar line of thinking is EAs who are strongly against working for OpenAI or DeepMind at all, because it safety-washes and pushes capabilities anyway. The criticism here is that the way EA views problems means EA will only go towards solutions that are piecemeal rather than transformative. A lot of Marxists felt similarly about welfare reform, in that it quelled the political will for "transformative" change to capitalism.

For instance, they would say a lot of companies are pursuing RLHF in AI safety not because it's the correct way to go but because it's the easiest, lowest-hanging fruit (even if it produces deceptive alignment).

I want to address this point not to argue against the animal activist's point, but rather because it is a bad analogy for that point. The argument against working for safety teams at capabilities orgs, or on RLHF, is not that they reduce x-risk to an "acceptable" level, causing orgs to give up on further reductions, but rather that they don't reduce x-risk.

This is a fantastic comment. And if there's an EA who's able to interpret the continental/lefty/Frankfurt memeplex for a majority analytical/decoupling/mistake-theory audience, I think this could be a very high-impact thing to do on the Forum! Part of why EA is bad at dealing with criticism like this is (imo) that a lot of the time we don't really understand what the critics are saying, and as you point out: "I feel like a lot of the cruxes of how you receive these criticisms depend on what memeplex you buy into."

Definitely going to spend a lot of my weekend reading the articles and adding to the collaborative review that's going around.

One major thing you do bring up in this review is that it is a very lefty-oriented piece of criticism. To me, this just confirms my priors that EA needs to eventually recognise this is where its biggest pushback is going to come from, both in the intellectual space and in what ordinary people's opinions of EA will be informed by (especially the younger the age profile). While we might be able to 'EA-judo' away some of the criticisms, or turn others into purely empirical disagreements where we are ready to update, there are others where the movement might have to say more openly 'Cause X reduces suffering, decreases x-risk, and decreases the chance of a global revolution against capitalism, and that's ok'. (So, personal note, reviewing lefty critiques of EA has just shot up my list of things I want to post about on the Forum.)

Thanks for this excellent elucidation!

  • A majority of the pieces are not written in academic form, even though most include citations from academic sources. The most obviously academic pieces are 9 by Adams, 15 by Sanbonmatsu, and 16 by Crary. 
  • I would categorize the book as largely "normal". It pulls from a group of writers whose backgrounds and writing styles vary.
  • The highest-level takeaways (not my own views, except where "I/I'd" is included):
    • EA is missing relevant data due to its over-reliance on quantifiable data
    • Effective does not equal impactful
    • Lack of localized knowledge and interventions reduces sustainability, adoption (trust), and overall impact 
    • The lack of diversity, equity, and inclusion in the community produces worse outcomes and less impact. The same is said regarding considerations of [racial] justice.
    • EA neglects engagement with non-EA movements and actors; in addition to worse EA outcomes, it harms otherwise positive work. In short, EA undervalues solidarity. 
      • I'd liken this to something along the lines of "EA doesn't play nicely with the other kids in the sandbox". 
    • EA is too rigid and does not fare well in complex situations
    • EA lacks compassion/is cold, and though it is commonly argued this improves outcomes, it is more harmful than not
    • EA relies upon and reifies systems that may be causing disproportionate harm; it fails to consider that radical changes outside of its scope may be the most impactful
    • EA is an egotistical philosophy and community; it speaks and acts with certainty that it shouldn't

Oh, I've read the first two chapters. And what it implies is that they do not like EA's encroachment into the animal welfare space.

Yes, a lot of this first volume focuses on animal welfare. Though it is focused on animal welfare, I do think many of the takeaways I included might be echoed by critics in other cause areas.

If you are interested in contributing to a book review, please either send me a message on the Forum or an email.

I have some interest in this, although I'm unsure whether I'd have time to read the whole book — I'm open to collaborations. 

If you would like to contribute to one or several sections, that would also be helpful!

Upvote if you liked this essay: 

"How Effective Altruism Fails Community-Based Activism" - Brenda Sanders

Put comments on this essay here. 

There is a comment to downvote so I stay karma neutral.

Why not just ask people to use agreement votes instead of upvoting/downvoting?

Is there any substantial engagement with the problem of wild animal suffering in the essays in the book?

Yes, there is one essay that argues against wild animal welfare interventions and argues in favour of traditional wildlife conservationism.

The chapter by Michael D. Wise? 

I quickly skimmed it, and perhaps my reading here is uncharitable, but it did not actually seem to say anything substantial about the problem at all, merely offering (in themselves interesting) historical reflections on the problematic nature of conceiving the human/non-human animal relationship in terms of property or ownership, and general musings on the chasm that separates us from the lived experience of beings very different from us.

Yes, it's that one. I didn't find it very persuasive either. There isn't any other content on wild animal welfare in the book.

If there is anyone who ends up making a reading group discussion guide or a list of discussion prompts (whether it's comprehensive or not!), I'd love to check it out and add it to my collection of EA syllabi!
