
Alice Crary is University Distinguished Professor at the Graduate Faculty, The New School for Social Research in New York City. In June 2020, she gave a public talk at Oxford University on why one should not be an 'effective altruist'. The transcript is here: https://www.oxfordpublicphilosophy.com/blog/letter-to-a-young-philosopher-dont-become-an-effective-altruiststrong-strong .

I have summarized her talk below. If anything in my summary is unclear, I ask that readers take a look at the link to understand the argument in more detail before commenting.

I am myself a committed EA, but I found the reasoning in this talk perhaps more compelling than that of any other broad external critique of EA I have read.

Effective altruism

Effective altruism (EA) is a movement founded on a commitment to "do the most good", created by philosophers such as William MacAskill. It is misguided, and one can see this most clearly by combining an "institutional" critique with a "philosophical" critique, as below.

The "institutional critique"

EA seems to cause real damage by misprioritizing giving in important charitable cause areas such as animal welfare, anti-racist public health, and food access programs. This has been reported by people working within the charity sector in those areas.

In and of themselves, these reports are subjective opinions from people who, while they may be considered subject matter experts, are certainly not dispassionate or disinterested: they are already leaders of charitable organizations with particular approaches. But this is worth coming back to after understanding the philosophical critique.

The "philosophical critique"

EA focuses on "doing the most good", adopting a "god's eye moral epistemology" that ignores one's own standpoint. In other words, it evaluates an abstract, universe-level moral target without considering the particular moral obligations of the person acting. EA aims to do "the most good" as efficiently as possible, but by taking the "god's eye view" it misses the particularistic moral obligations that individuals have, which, by dint of being part of the good that can be done in the world, must themselves be part of "the most good". It attends to the demand on an individual for benevolence, but not to the other moral obligations that are present.

To add my own thoughts to this part: This might be compelling even if you are a consequentialist. In spite of consequentialism, it's difficult to deny at least some particularistic moral obligations that individuals have - the duty to care for one's own children; the duty to repay monetary or social debts owed; the duty to treat others with equity and fairness. If you concede that much, you might concede the need to grapple with particularistic moral obligations even if you are a consequentialist, and you might concede that EA's current approach of trying to understand "the most good" has not grappled with those obligations.

The "combined critique"

The philosophically myopic nature of EA can explain where it has gone wrong institutionally. If you consider the obligations on individuals and groups beyond simple benevolence and wellbeing, such as justice, fairness, and equity, you discover you need to grapple with social phenomena that require some "particular modes of affective response" to see clearly.

Alongside feminist and critical race theorists, you discover that to grapple properly with concerns of justice, you need to understand the nature of social structures and relations in our current world, which seem oppressive and unjust. By ignoring these concerns, EA has made not just a philosophical mistake but one that substantially misguides it about the particular moral demands of our time. This produces the substantial errors observed in the "institutional critique".

Comments

This has been previously discussed at some length here.

I spoke to someone today who was planning to write a critique of this paper, so I won't steal her thunder — but I still have a few thoughts on the paper/the points of the paper as paraphrased.

Epistemic status: Just rambling a bit because the post made me think.

In spite of consequentialism, it's difficult to deny at least some particularistic moral obligations that individuals have - the duty to care for one's own children; the duty to repay monetary or social debts owed; the duty to treat others with equity and fairness. 

This critique would have more teeth for me if it mapped onto anything I recognized from the actual EA community.

Many people don't have strong moral theories at all; they'd answer questions about morality if you asked them, but they don't go about their days wondering what the best thing to do is in a given situation. And yet, I think that most people in this category are basically "good people": they care about their children, repay their debts, treat other people decently in most cases, etc.

Most people in EA don't have their "maximize impact" mode on at all times. There are lots of parents in the community; they didn't decide not to have kids because it would let them donate more, and (as far as I know) they don't neglect their children now to have more time for work. That's because we can have more than one goal; it's entirely possible to endorse a moral theory but not attempt to maximize the extent to which you fulfill that theory in your every action.

If you asked basically anyone in EA whether parents have an obligation to care for their children, I think they'd say "yes". But EA isn't really focused on personal lives and relationships — for the most part, it aims to help people use spare resources that aren't already allotted for other obligations. You aren't obligated to pursue a particular career, so choosing one with EA in mind may not violate any obligations. You aren't obligated to support a particular charity... and so on.

I always like to refer back to Holden Karnofsky when I hear arguments of this type:

"In general, I try to behave as I would like others to behave: I try to perform very well on “standard” generosity and ethics, and overlay my more personal, debatable, potentially-biased agenda on top of that rather than in replacement of it. I wouldn’t steal money to give it to our top charities; I wouldn’t skip an important family event (even one that had little meaning for me) in order to save time for GiveWell work."

Right action also includes acting, when appropriate, in ways reflective of the broad virtue of justice, which aims at an end—giving people what they are owed—that can conflict with the end of benevolence. If we are responsive to circumstances, sometimes we will act with an eye to others’ well-being, and sometimes with an eye to other ends.

I'd be very interested in seeing what someone's approach to maximizing (justice times X) + (benevolence times Y) would look like. I see EA as the project of "trying to get as much good as possible under certain definitions of 'good'", and I could be convinced that justice is something that can be part of a reasonable definition (unlike, say, the glorification of a deity).

That said, if the argument here is that justice needs to be part of any moral theory that isn't "confused" or "empty", it seems like Crary is picking a fight with many different branches of philosophy that are perfectly capable of defending themselves (so, as a non-philosopher who doesn't speak this language well, I'll bow out on this point).

For EA to make space for these individuals, it would have to acknowledge that their moral and political beliefs pose threats to its guiding principles and that these principles themselves are contestable. To acknowledge this would be to concede that EA, as it is currently conceived, might need to be given up.

At this point, I realized I was really confused and perhaps misinterpreting Crary. EA's principles are certainly contestable, just as any set of moral principles is contestable — this is an area of debate as old as the concept of debate. Does Crary believe that the moral theories she favors don't exclude any views of values? Is she arguing that a valid moral theory will necessarily be a big-tent sort of thing that gives everyone a say? (Will MacAskill's work on moral uncertainty + putting some weight on a wide range of moral theories might align with this, though I haven't read it closely.)

Thanks for sharing! One thing I didn't notice in the summary: the talk seemed specifically focused on the impact of EA on the animal advocacy space (which I found mildly surprising and interesting, since these critiques pattern-match much more to global health/equity/justice concerns).

This article seems to basically boil down to "take a specific view of morality that the author endorses, which heavily emphasises virtue, justice, systemic change and individual obligations, and is importantly not consequentialist, yet also demanding enough to be hard to satisfice on".

Then, once you have adopted this alternate view, observe that it wildly changes your moral conclusions and opinions on how to act, and conflicts with much of what EA stands for.

You can quibble about "the article claims to be challenging the fundamental idea of EA, yet EA is compatible with any notion of the good and capable of doing this effectively". But I personally think that EA DOES have a bunch of common moral beliefs, e.g. the importance of consequentialism, impartial views of welfare, the importance of scope and numbers, and to some degree utilitarianism. And that EA beliefs are robust to people not sharing all of these views, and to pluralistic views like others in this thread have argued (e.g., put in the effort to be a basically decent person according to common-sense morality and then ruthlessly optimise for your notion of the good with your spare resources). But I think you also need to make some decisions about what you do and do not value, especially for a moral view that's demanding rather than just "be a basically decent person", and her view seems fairly demanding?

I'm a bit confused about EXACTLY what the view of morality described here is - it pattern-matches onto virtue ethics, plus views on the importance of justice and systemic change? But I definitely think it's quite different from any system that I subscribe to. And it doesn't feel like the article is really trying to convince me to take up this view; it just takes it as implicit. And it seems fine to note that most EAs have some specific moral beliefs, and that if you substantially disagree with those then you reach different conclusions? But it's hardly a knock-down critique of EA; it's just a point that tradeoffs are hard and you need to pick your values to make decisions.

The paragraph of the talk that felt most confusing/relevant:

This philosophical critique brings into question effective altruists’ very notion of doing the “most good.” As effective altruists use it, this phrase presupposes that the rightness of a social intervention is a function of its consequences and that the outcome involving the best consequences counts as doing most good. This idea has no place within an ethical stance that underlies the philosophical critique. Adopting this stance is a matter of seeing the real fabric of the world as endowed with values that reveal themselves only to a developed sensibility. To see the world this way is to leave room for an intuitively appealing conception of actions as right insofar as they exhibit just sensitivity to the worldly circumstances at hand. Accepting this appealing conception of action doesn’t commit one to denying that right actions frequently aim at ends. Here acting rightly includes acting in ways that are reflective of virtues such as benevolence, which aims at the well-being of others. With reference to the benevolent pursuit of others’ well-being, it certainly makes sense to talk about good states of affairs. But it is important, as Philippa Foot once put it, “that we have found this end within morality, forming part of it, not standing outside it as a good state of affairs by which moral action in general is to be judged” (Foot 1985, 205). Right action also includes acting, when appropriate, in ways reflective of the broad virtue of justice, which aims at an end—giving people what they are owed—that can conflict with the end of benevolence. If we are responsive to circumstances, sometimes we will act with an eye to others’ well-being, and sometimes with an eye to other ends. In a case in which it is not right to improve others’ well-being, it makes no sense to say that we produce a worse result. To say this would be to pervert our grasp of the matter by importing into it an alien conception of morality. If we keep our heads, we will say that the result we face is, in the only sense that is meaningful, the best one. There is here simply no room for EA-style talk of “most good.”

Thanks for your remarks. I'm looking forward to her full article being published, because I agree that, as it stands, she's been pretty vague. The full article might clear up some of the gaps here.

From what you and others have said, the most important gap seems to be "why we should not be consequentialists", which is a much bigger question than just EA! I think there is something compelling here; I might reconstruct her argument something like this:

  1. EAs want to do "the most good possible".
  2. Ensuring more systemic equality and justice is good.
  3. We can do things that ensure systemic equality and justice; doing this is good (this follows from 2), even if it's welfare-neutral.
  4. If you want to do "the most good" then you will need to do things that ensure systemic equality and justice, too (from 3).
  5. Therefore (from 1 and 4) it follows that EAs should care about more than just welfare.
  6. You can't quantify systemic equality and justice.
  7. Therefore (from 5 and 6) if EAs want to achieve their own goals they will need to move beyond quantifications.

Consequentialists will probably reply that (3) is wrong: if you improve justice and equality but this doesn't improve long-term well-being, it isn't actually good. I suppose I believe that, but I'm unsure about it.

I think what you've written is not an argument against consequentialism; it's an argument against trying to put numbers on things in order to rank the consequences?

Regardless, that wasn't how I interpreted her case. It doesn't feel like she cares about the total amount of systemic equality and justice in the world. She fundamentally cares about this from the perspective of the individual doing the act, rather than the state of the world, which seems importantly different. And to me, THIS part breaks consequentialism.

I am responding to the newer version of this critique, found [here](https://www.radicalphilosophy.com/article/against-effective-altruism).

Someone needs to steelman Crary's critique for me, because as it stands I find it very weak. The way I understand this article:

  1. The institutional critique - Basically makes two claims: a) EAs are searching for their keys only under the lamppost. This is a great warning for anyone doing quantitative research and evaluation; EAs are well aware of it and try to overcome the problem as much as possible. b) EA is addressing symptoms rather than underlying causes, i.e. distributing bed-nets instead of overthrowing corrupt governments. This is fair as far as it goes, but the move to tackling underlying causes does not necessarily require abandoning the quantitative methods EA champions, and it is not at all clear that we shouldn't attempt to alleviate symptoms as well as causes.

  2. The philosophical critique - Essentially amounts to pointing out that there are people critical of consequentialism and abstract conceptions of reason. More power to them, but that fact in itself does not defeat consequentialism, so insofar as EA relies on consequentialism, it can continue to do so. A deeper dive is required to understand the criticisms in question, but there is little reason for me to assume at this point that they will defeat, or even greatly weaken, consequentialist theories of ethics. Crary actually admits that these arguments fail to convince many in academic circles, but dismisses this because in her opinion it is "a function of ideological factors independent of [the arguments'] philosophical credentials".

  3. The composite critique - Adds nothing substantial except to pit EA against woke ideology. I don't believe these two movements are necessarily at odds, but there is a power struggle going on in academia right now, and it is clear which side Crary is on.

  4. EA's moral corruption - EA is corrupt because it supports global capitalism. I am guilty as charged on that count, even as I see capitalism's many, many flaws and the need to make some drastic changes. Still, just like democracy, it is the least bad of the available evils until we come up with something better. Working within this system to improve the lives of others and solve some pressing worldwide problems seems perfectly reasonable to me.

As an aside, I will mention that attacking "earning to give" without mentioning the concept of replaceability is attacking nothing at all. When doing good, try to be irreplaceable; when earning money on Wall Street, make sure you are completely replaceable. You might earn a little less, but you will minimize your harm.

Finally, it is telling that Crary does not once deal with longtermist ideas.
