Here is my entry for the EA criticism contest being held this month. Originally, this essay was linked on the EA Forum by someone else yesterday, but that post was eventually deleted (not sure why). So I'm reposting it myself, since the contest guidelines encourage engaging with the EA Forum.

A note: This is very much a strident critical piece about the utilitarian core of EA, so know that going in. I definitely do not expect the average reader here to agree with my points. My hope is that what's helpful or interesting about this essay is that you can see how a critic might think about the movement, especially how they can be put off by the inevitable "repugnancy" of utilitarianism, and, ideally, you might walk away more favorable to the idea that diluting the utilitarianism out of EA helps broaden the scope of the movement and makes it appeal to more people. So please "judge" the piece along those lines. I expect criticism in response, of course. (However, I won't be posting many replies, just because this is your space and I don't want to bogart it. I may reply occasionally if I think it'd be especially helpful/relevant.)

Here's an excerpt to get a sense of it:


How can the same moral reasoning be correct in one circumstance, and horrible in another? E.g., while the utilitarian outcome of the trolley problem is considered morally right by a lot of people (implying you should switch the tracks, although it’s worth noting even then that plenty disagree), the thought experiment of the surgeon slitting innocent throats in alleys is morally wrong to the vast majority of people, even though the two are based on the same logic. Note that this is exactly the same as our intuitions in the shallow pond example. Almost everyone agrees that you should rescue the child, but then, when this same utilitarian logic is applied to many more decisions instead of just that one, our intuitions shift to finding the inevitable human factory-farms repugnant. This is because utilitarian logic is locally correct, in some instances, particularly in low-complexity ceteris paribus set-ups, and such popular examples are what make the philosophy attractive and have spread it far and wide. But the moment the logic is extended to similar scenarios with slightly different premises, or the situation itself complexifies, or the scope of the thought experiment expands to encompass many more actions instead of just one, then suddenly you are right back at some repugnant conclusion. Such a flaw is why the idea of “utility monsters” (originally introduced by Robert Nozick) is so devastating for utilitarianism—they take us from our local circumstances to a very different world in which monsters derive more pleasure and joy from eating humans than humans suffer from being eaten, and most people would find the pro-monsters-eating-humans position repugnant.

To give a metaphor: Newtonian physics works really well as long as all you’re doing is approximating cannonballs and calculating weight loads and things like that. But it is not so good for understanding the movement of galaxies, or what happens inside a semiconductor. Newtonian physics is “true” if the situation is constrained and simple enough for it to apply; so too with utilitarianism. This is the etiology of the poison.


In regard to your blog: I write this quickly and bluntly, but I think I reflect the tone of your article, so I think that's okay. I'm sure we'd enjoy chatting over dinner, and I respect anyone who takes the time to write a blog.

Implicit claim: EAs are mostly utilitarians

IIRC about 70% of EAs are consequentialists, but I don't think most will respond to your claims as you do. I think your claims are largely straw men.

Claim: Utilitarians would murder 1 person to save 5

a) No they wouldn't. They would know that they would be arrested and unable to help the many others in easier and less risky ways. Given how cheap it is to save a life, why would you risk prison? 

b) No one in EA acts like this. No one in EA tells people to act like this. It's a straw man.

Claim: There is a slippery slope where you have to become a moral angel and give everything away.

I think EA has probably messaged badly on this in the past. But now I guess there is consensus that you choose how much of your resources you want to use effectively, and then you do so. Will MacAskill donates everything above £26k. That's way too much for me and nearly everyone else. Sure, there is a status hierarchy of self-denial (though there are other status hierarchies as well), but are we really going to criticise that compared to all the other status hierarchies in other movements? Many people live normal lives and use their jobs or single-digit percentages of their income to help people. There is room for concern here, but I think it's overblown.

Claim: "Is the joy of rich people hiking really worth the equivalent of all the lives that could be stuffed into that land if it were converted to high-yield automated hydroponic farms and sprawling apartment complexes?"

This is not an EA view, it's my personal one. If there were a billion Americans, the US would be less dense than the UK. It would be far less dense than England. There isn't a tradeoff here. But if there were, I'd still lose the views and let billions have the lives we have. Is living in Tokyo so bad? No. Fly and vacation somewhere people don't want to work so badly. If we argued it out, I think I could convince you that wanting to keep immigrants out to preserve landscapes which wouldn't be built on anyway is actually a monstrous view.

Claim: "This means that, at least in principle, to me effective altruism looks like a lot of good effort originating in a flawed philosophy. However, in practice rather than principle, effective altruists do a lot that I quite like and agree with and, in my opinion"

The upshot of the blog seems to be that EA is fine actually, which means I don't really understand why you framed it so negatively. It's like saying "you shouldn't date that girl because maybe you'll get her pregnant and it will ruin your lives". Sure, but how likely is that? If you think that EA largely gets these compromises correct, why are you writing an article describing it as poison? Maybe that is bad behaviour?

Claim: "would you be happier being a playwright than a stock broker? Who cares, stock brokers make way more money, go make a bunch of money to give to charity." 

Have you talked to any EAs this has happened to? I guess this has happened to 0-5% of EAs in the last 5 years. Feel free to tell me I'm wrong. I don't think it's common. So I think this is unfair. I have seen plenty of EAs be told to enjoy their passions. Sometimes we are called libertines.

Claim: "Who can say no to identifying cost-effective charities?"

Almost everyone. Hardly any of total charity giving goes to cost-effective charities. If you think this is obvious, I'd encourage you to write an article about it. Depending on how many subscribers you have, it might save a life in expectation.

Claim: "Whereas I’m of the radical opinion that the poison means something is wrong to begin with."

What is your alternative? Happy to read a blog about it.

And what about the poisons in other views? I haven't seen you say something like "now, I don't think EA is much worse than other views, but it's new, so I'm writing about it". There were many ways to make relative claims as well as absolute ones.

Claim: "Basically, just keep doing the cool shit you’ve been doing, which is relatively unjustifiable from any literal utilitarian standpoint, and keep ignoring all the obvious repugnancies taking your philosophy literally would get you into, but at the same time also keep giving your actions the epiphenomenal halo of utilitarian arbitrage, and people are going to keep joining the movement, and billionaires will keep donating, because frankly what you’re up to is just so much more interesting and fun and sci-fi than the boring stuff others are doing."

0) Yes you're right.

1) It's a fun essay to read, but you are doing a C- job at convincing me you aren't an EA.

2) It seems like you basically agree with EA recommendations in their entirety. It seems unfair that you've made up a bogey monster to criticise when you yourself acknowledge that in practice it's going really well.

3) I wish this essay weren't so negative in framing, and that the "EA is poison" soundbite weren't the part that will get quoted, when actually your conclusions are positive. I think that was bad behaviour from you, and I don't think you'd like me to have done it.

4) I would enjoy an essay with concrete ways you think EA goes too far. I think that would be valuable. But I want bad outcomes that you actually think are going to happen. 

5) If this is going to leave you with the opinion that EAs are all defensive, then tell me and I'll edit it. I wrote it in haste. But in my defence, I did read the entirety of your blog and respond to the points I thought were important.

6) I think you wrote this in good faith.

You appear to have missed the central point of the essay, which is strange because it's repeated over and over. I point out that utilitarians must either dilute or swallow the poison of repugnant conclusions - dilution of the repugnancy works, but the cost is making EA weaker and more vapid, of it becoming a toothless philosophy like "do good in the world." Instead of grappling with this criticism, you've transformed the thesis into a series of random unconnected claims, some of which don't even represent my views. Now, I won't address your defensive scolding of me personally, as I don't want to engage with something so childish here. I'd rather see some actual grappling with the thesis, or at least with the more interesting parts, like whether there are qualitative, in addition to quantitative, moral differences. But here are my responses to the set of mostly uninteresting claims you've ascribed to me instead of dealing with the thesis.

Implicit claim: EAs are mostly utilitarians

IIRC about 70% of EAs are consequentialists, but I don't think most will respond to your claims as you do. I think your claims are largely straw men.

Already addressed in the text: "I don’t think that there’s any argument effective altruism isn’t an outgrowth of utilitarianism—e.g., one of its most prominent members is Peter Singer, who kickstarted the movement in its early years with TED talks and books, and the leaders of the movement, like William MacAskill, readily refer back to Singer’s “Famine, Affluence, and Morality” article as their moment of coming to."

You'll have to explain to me how so many EA leaders readily reference utilitarian philosophy, or refer to utility calculations being the thing that makes EA special, or justify what counts as an effective intervention via utilitarian definitions, without anyone actually being utilitarian. People can call themselves whatever they want, and I understand people wanting to divorce themselves from the repugnancies of utilitarianism, but so much in EA draws on a utilitarian toolbox and all the origins are (often self-admittedly!) in utilitarian thought experiments.

Claim: Utilitarians would murder 1 person to save 5

a) No they wouldn't. They would know that they would be arrested and unable to help the many others in easier and less risky ways. Given how cheap it is to save a life, why would you risk prison? 

b) No one in EA acts like this. No one in EA tells people to act like this. It's a straw man.

(a) If you could get away with it, utilitarianism tells you it's moral to do; you're just saying "in no possible world could you get away with it," which is both a way-too-strong claim and also irrelevant, for the repugnancy is found in the fact that it is moral to do it and keep it a secret, if you can. As for (b), since harm is caused by inaction (at least according to many in EA), then diverting the charity money from, say, the USA, where it will go less far and only save 1 life, to a third-world country, where it will save 5, is exactly this. You saying that "no one says to do that" seems to fly in the face of. . . what everyone is saying to do.

Claim: There is a slippery slope where you have to become a moral angel and give everything away.

I don't say this anywhere I know of in the text.

Claim: "Is the joy of rich people hiking really worth the equivalent of all the lives that could be stuffed into that land if it were converted to high-yield automated hydroponic farms and sprawling apartment complexes?"

If there were a billion Americans, the US would be less dense than the UK. It would be far less dense than England. There isn't a tradeoff here.

You're missing the point of this part, which is that the utilitarian arbitrage has to necessarily keep going. You just, what, stop at a billion? Why? Because it sounds good to you, Nathan Young? The moral thing to do is keep going. That's my point about utilitarian arbitrage leading very naturally to the repugnant conclusion. So this seems to just not grok this part.

Claim: "Whereas I’m of the radical opinion that the poison means something is wrong to begin with."
What is your alternative? Happy to read a blog about it.

"Before you criticize effective altruism come up with something better than it" seems like a pretty high standard to me.

Claim: "Basically, just keep doing the cool shit you’ve been doing, which is relatively unjustifiable from any literal utilitarian standpoint, and keep ignoring all the obvious repugnancies taking your philosophy literally would get you into, but at the same time also keep giving your actions the epiphenomenal halo of utilitarian arbitrage, and people are going to keep joining the movement, and billionaires will keep donating, because frankly what you’re up to is just so much more interesting and fun and sci-fi than the boring stuff others are doing."

 It seems like you basically agree with EA recommendations in their entirety. It seems unfair that you've made up a bogey monster to criticise when you yourself acknowledge that in practice it's going really well.

I'm clear that some of the things EA is known for, like AI safety, are justifiable through other philosophies, and that I agree with some of them. You're right that the argument is focused on my in-principle disagreements, particularly that many will find the in-principle aspects repugnant, and my recommendation is to instead dilute them and use utilitarian calculations as a fig leaf. Again, a more complicated thesis that you're simply. . . not addressing in this breakdown of unconnected supposed claims.

If this is going to leave you with the opinion that EAs are all defensive, then tell me and I'll edit it. I wrote it in haste. But in my defence, I did read the entirety of your blog and respond to the points I thought were important.

Few of these points were very important to the argument, and the ones that are, like whether or not EA is an outgrowth of utilitarianism, seem pretty settled.

Erik,

You are right to criticise my tone. It wasn't constructive and I'm sorry. I'm glad I wrote criticisms, but I wish I had written them in a more gracious way.

I won't respond point by point, since, as you say, the points you are responding to aren't your main points anyway.

I don't think I understood your article initially.

Am I right that this was your main point?

  • EA is doing well, but only because it ignores the fundamental conclusions of utilitarianism?

If so, I have 3 main points:

  1. All moral systems have their own "repugnant conclusions"
    1. Never-lie deontology encourages you to tell an axe murderer your friend is hiding in the house
    2. Liberal tolerance can't even be intolerant of Nazis
    3. Maximum discounting means that Cleopatra should have had another biscuit even if it caused all of us to go extinct.
    4. I don't understand how utilitarianism is uniquely poisonous here. Do you write articles calling all these other worldviews poisonous? If not, why not? It's not like they aren't powerful. Your criticisms are either unfair or universal.
  2. EAs might be consequentialists, but that doesn't mean they have to bite the bullets you describe here:
    1. Only total utilitarians (if I'm getting that right) face the repugnant conclusion 
    2. Some kinds of consequentialists weigh illegal actions more heavily, so won't be caught out by your surgeon example
    3. If people amending utilitarianism to better fit their intuitions is somehow bad, then they are damned if they do and damned if they don't. Do you believe this?
  3. Again, and this is where you miss my main point, what EAs do in practice matters. You act as if no one in EA has seen the problems you raise and that we avoid them by mere accident. Maybe we have avoided the pitfalls you state because we saw them and chose to avoid them. You yourself acknowledge EA does a pretty good job. Maybe that's deliberate.

Moral systems are what we make them. If utilitarianism has unintuitive consequences, we can think about why that is and then modify it. I think the real answer here is that EA is made up of a wider group of consequentialists than you think and that EAs take their consequentialism a little less seriously than you fear they might. What else would you have people do?

  • You suggest that EAs will either drink the poison and behave badly
  • Or dilute the poison and fail to take their beliefs seriously

It seems they get your judgement regardless. How will you be satisfied by the actions of EAs here?

As a final point, I'm pretty happy to defend any of the things I said originally. If any of them are particularly important to you, I will. 

As always, I hope you are well.

Thanks Nathan, I'll try to keep my replies brief here and address the critical points of your questions.

Am I right that this was your main point?

EA is doing well, but only because it ignores the fundamental conclusions of utilitarianism?

I wouldn't phrase it like this. I think EA has been a positive force in the world so far, particularly in some of the weirder causes I care about (e.g., AI safety, stimulating the blogosphere, etc.). But I think it's often good practices chasing bad philosophy, and then my further suggestion is that the best thing to do is dilute that bad philosophy out of EA as much as possible (which I point out is already a trend I see happening now).

I don't understand how utilitarianism is uniquely poisonous here. Do you write articles calling all these other worldviews poisonous? If not, why not? It's not like they aren't powerful. Your criticisms are either unfair or universal.

This is why I make the metaphor to arbitrage (e.g., pointing out that arbitrage is how SBF made all his money, and using the term "utilitarian arbitrage"). Even if it were true that one can find repugnant conclusions from any notion of morality whatsoever (I’m not sure how one would prove this), there would still be greater and lesser degrees of repugnance, and there would also be differences in the ease with which it is arrived at. E.g., the original repugnant conclusion is basically the state of the world should utilitarianism be taken literally—a bunch of slums stuffed with lives barely worth living. This is because utilitarianism, as I tried to explain in the piece, is based on treating morality like a market and performing arbitrage. So you just keep going, performing the arbitrage. Other moral theories, which aren’t based on arbitrage but perhaps on rights or duties (just to throw out examples), don’t have this maximizing property, so they don’t lead so inexorably to repugnant conclusions. That is, just because you can identify cases of repugnancy doesn’t mean they are equivalent, as one philosophy might lead very naturally to repugnancies (as I think utilitarianism does), whereas another might require incredibly specific states of the world (e.g., an axe murderer in your house). Even if two philosophies fail in dealing with specific cases of serial killers, there's a really big difference in the one that encourages you to be the serial killer if you can get away with it.

Also, if one honestly believes that all moral theories end up with uncountable repugnancies, why not be a nihilist, or a pessimist, rather than an effective altruist?

Only total utilitarians (if I'm getting that right) face the repugnant conclusion

From the text it should be pretty clear I disagree with this, as I give multiple examples of repugnancy that are not Parfit's classic "the repugnant conclusion" - and I also say that adding in epicycles by expanding beyond what you're calling "total utilitarianism" often just shifts where the repugnancy is, or trades one for another.

Again, and this is where you miss my main point, what EAs do in practice matters. You act as if no one in EA has seen the problems you raise and that we avoid them by mere accident.

I'm unaware of saying that no one in EA is aware of these problems (indeed, one of my latter points implies that they absolutely are), nor that EA avoids them by mere accident. I said explicitly that it avoids them by diluting the philosophy with more and more epicycles to make it palatable. E.g., "Therefore, the effective altruist movement has to come up with extra tacked-on axioms that explain why becoming a cut-throat sociopathic business leader who is constantly screwing over his employees, making their lives miserable, subjecting them to health violations, yet donates a lot of his income to charity, is actually bad. To make the movement palatable, you need extra rules that go beyond cold utilitarianism. . ."

What else would you have people do? 

  • You suggest that EAs will either drink the poison and behave badly
  • Or dilute the poison and fail to take their beliefs seriously

The latter.

I've gone on too long here after saying in the initial post I'd try to keep my replies to a minimum. Feel free to reply, but this will be my last response.

Thanks for talking :)

To be honest, it came off this way to me as well. The majority of the piece feels like an essay on why you think utilitarianism sucks, and this post itself frames it as a criticism of EA’s “utilitarian core”. I sort of remember the point about EA just being ordinary do-gooding when you strip this away as feeling like a side note, though I can reread it when I get a chance in case I missed something.

To address the point though, I’m not sure it works either, and I feel like the rest of your piece undermines it. Lots of things EA focuses on, like animal welfare and AI safety, are weird, or at least weird combinations, and so are plenty of its ways of thinking and approaching questions. These are consistent with utilitarianism, but they aren’t specifically tied to it; indeed, you seem drawn to some of these yourself, and no one is going to accuse you of being a utilitarian after reading this. I have to imagine the idea that you do think something valuable and unique is left behind if you don’t just view EA as utilitarianism has to at least partly be behind your suggestion that we “dilute the poison” all the way out. If we already have “diluted the poison” out, I’m not sure what’s left to argue.

The point about how the founders of the movement have generally been utilitarians or utilitarian-sympathetic doesn’t strike me as enough to make your point either[1]. If you mean that the movement is utilitarian at its core in the sense that utilitarianism motivated many of its founders, this is a good point. If you mean that it has a utilitarian core in the sense that it is “poisoned” by the types of implications of utilitarianism you are worried about, this doesn’t seem enough to get you there. I also think it proves far too much to mention the influence of Famine, Affluence and Morality. Non-utilitarian liberals regularly cite On Liberty, non-utilitarian vegans regularly cite Animal Liberation. Good moral philosophers generally don’t justify their points from first principles, but rather with the minimum premises necessary to agree with them on whatever specific point they’re arguing. These senses just seem crucially different to me.


  1. I also think it’s overstated. Singer is certainly a utilitarian, but MacAskill overtly does not identify as one, even though he is sympathetic to the theory and, I think, has plurality credence in it relative to other similarly specific theories; Ord I believe is the same; Bostrom overtly does not identify with it; Parfit moved around a bunch in his career, but by the time of EA I believe he was either a prioritarian or “triple theorist” as he called it; Yudkowsky is a key example of yours, but from his other writing he seems like a pluralist consequentialist at most to me. It’s true that, as your piece points out, he defends pure aggregation, but so do tons of deontologists these days, because it turns out that when you get specific about your alternative, it becomes very hard not to be a pure aggregationist. ↩︎

Hi Erik,

I just wanted to leave a very quick comment (sorry I'm not able to engage more deeply).

I think yours is an interesting line of criticism, since it tries to get to the heart of what EA actually is.

My understanding of your criticism is that EA attempts to find an interesting middle ground between full utilitarianism and regular sensible do-gooding, whereas you claim there isn't one. In particular, we can impose limits on utilitarianism, but they're arbitrary and make EA contentless. Does this seem like a reasonable summary?

I think the best argument that an interesting middle ground exists is the fact that EAs in practice have come up with ways of doing good that aren't standard (e.g. only a couple of percent of US philanthropy is spent on evidence-backed global health at best, and << 1% on ending factory farming + AI safety + ending pandemics).

More theoretically, I see EA as being about something like "maximising global wellbeing while respecting other values". This is different from regular sensible do-gooding in being more impartial, more wellbeing-focused, and more focused on finding the very best ways to contribute (rather than the merely good). I think another way EA is different is in being more skeptical, open to weird ideas, and trying harder to take a Bayesian, science-aligned approach to finding better ways to help. (Cf. the key values of EA.)

However, it's also different from utilitarianism since you can practice these values without saying maximising hedonic utility is the only thing that matters, or a moral obligation.

(Another way to understand EA is the claim that we should pay more attention to consequences, given the current state of the world, but not that only consequences matter.)

You could respond that there's arbitrariness in how to adjudicate conflicts between maximising wellbeing and other values. I basically agree.

But I think all moral theories imply crazy things ("poison") if taken to extremes (e.g. not lying to the axe murderer as a deontologist; deep ecologists who think we should end humanity to preserve the environment; people who hold the person-affecting view in population ethics and say there's nothing bad about creating a being whose life is only suffering).

So imposing some level of arbitrary cut-offs on your moral views is unavoidable. The best we can do is think hard about the tradeoffs between different useful moral positions, and try to come up with an overall course of action that's non-terrible on the balance of them.

Hi Benjamin, thanks for your thoughts and replies. Some thoughts in return:

In particular, we can impose limits on utilitarianism, but they're arbitrary and make EA contentless. Does this seem like a reasonable summary?

I think it should literally be thought of as dilution, where you can dilute the philosophy more and more, and as you do so, EA becomes "contentless" in that it becomes closer to just "fund cool stuff no one else is really doing."

However, it's also different from utilitarianism since you can practice these values without saying maximising hedonic utility is the only thing that matters, or a moral obligation.

Even the high-level "key values" that you link to imply a lot of utilitarianism to me, e.g., moral obligations like "it’s important to consider many different ways to help and seek to find the best ones," some calculation of utility like "it’s vital to attempt to use numbers to roughly weigh how much different actions help," as well as a call to "impartial altruism" that's pretty much just saying to sum up people (what is one adding up in that case? I imagine something pretty close to hedonic utility).

But I think all moral theories imply crazy things ("poison") if taken to extremes

My abridged response buried within a different comment thread:  

Even if it were true that one can find repugnant conclusions from any notion of morality whatsoever (I’m not sure how one would prove this), there would still be greater and lesser degrees of repugnance, and there would also be differences in the ease with which it is arrived at. E.g., the original repugnant conclusion is basically the state of the world should utilitarianism be taken literally—a bunch of slums stuffed with lives barely worth living. This is because utilitarianism, as I tried to explain in the piece, is based on treating morality like a market and performing arbitrage. So you just keep going, performing the arbitrage. Other moral theories, which aren’t based on arbitrage but perhaps on rights or duties (just to throw out examples), don’t have this maximizing property, so they don’t lead so inexorably to repugnant conclusions. . . Also, if one honestly believes that all moral theories end up with uncountable repugnancies, why not be a nihilist, or a pessimist, rather than an effective altruist?

where you can dilute the philosophy more and more, and as you do so, EA becomes "contentless" in that it becomes closer to just "fund cool stuff no one else is really doing."

 

Makes sense. It just seems to me that the diluted version still implies interesting & important things.

Or from the other direction, I think it's possible to move in the direction of taking utilitarianism more seriously, without having to accept all of the most wacky implications.

 

So you just keep going, performing the arbitrage. Other moral theories, which aren’t based on arbitrage but perhaps on rights or duties (just to throw out examples), don’t have this maximizing property, so they don’t lead so inexorably to repugnant conclusions

I agree something like trying to maximise might be at the core of the issue (where utilitarianism is just one ethical theory that's into maximising).

However, I don't think it's easy to avoid by switching to rights or duties. Philosophers focused on rights still think that if you can save 10 lives with little cost to yourself, that's a good thing to do. And that if you can save 100 lives with the same cost, that's an even better thing to do. A theory that said all that matters ethically is not violating rights would be really weird.

Or another example is that all theories of population ethics seem to have unpleasant conclusions, even the non-totalising ones.

If one honestly believes that all moral theories end up with uncountable repugnancies, why not be a nihilist, or a pessimist, rather than an effective altruist?

I don't see why it implies nihilism. I think it shows that moral philosophy is hard, so we should moderate our views, and consider a variety of perspectives, rather than bet everything on a single theory like utilitarianism.

When sharing this article you tweeted:

"In the last week alone, the effective altruist movement has been on the cover of The New Yorker, The NYT, and Time Magazine. It has billions in funding, and wants to make the world a better place. The problem is that it's poisonous."

However you finished that same thread by tweeting:

"10/ Ultimately this is what the "longtermism" view is - merely a dilution of the utilitarianism in effective altruism to caring only about existential risk, which is something everyone can get on board with"

I dislike the implication that EA is poisonous, but fair enough. But seemingly you don't believe this either, hence you tweeted your final tweet. That seems clickbaity. 

Also, if you think that EA gets the balance right in practice, I don't think it's okay to say it's a poison. If the median EA does things as you'd want them done, then it seems like EA is antivenom, even if parts of the dose would be venomous on their own. This seems unreasonable.

What's more, would you call the need to tell an axe murderer where your friend is hiding a "poison" in virtue ethics? Or most people's neglect of those dying in the developing world in ways that could be cheaply prevented? This seems an unfair judgement of EA alone.

I think the framing (which, again, even you don't seem to believe) of EA being a poison is unreasonable, unfair, and clickbaity.

Just to note: the specific accusation of it being “unreasonable” and “clickbaity” relies entirely on there being a really strong difference in valence between the terms “poison” (my lay term for it) and “repugnancy” (the well-accepted academic term for it), and I just don’t think it’s the case that “this philosophy is poisonous” is an unreasonable stretch from “this philosophy is repugnant.” That may be a personal thing, but they seem within the same range of negative tone to me, and hence it also seems neither especially unreasonable nor clickbaity to lead with a more understandable analogy of the same valence and then explain it in the text. Clickbait would have been if I had titled it “Why do billionaires keep giving to a secretive poisonous philosophy?” not “Why I am not an effective altruist.”

Yeah, I think there is a clear difference. Do you write about the flaws in other moral systems using equivalently valent terms?

But even if there isn't a difference, you yourself don't believe that EA is a poison. You think it's got some poison in it. I dislike the framing of that original, much-shared, tweet.

EJT

A point that seems worth noting, from Puzzles for Everyone:

In an especially striking example of conflating utilitarianism with anything remotely approaching systematic thinking, popular substacker Erik Hoel recently characterized the Beckstead & Thomas paper on decision-theoretic paradoxes as addressing “how poorly utilitarianism does in extreme scenarios of low probability but high impact payoffs.” Compare this with the very first sentence of the paper’s abstract: “We show that every theory of the value of uncertain prospects must have one of three unpalatable properties.” Not utilitarianism. Every theory.

(Alas, when I tried to point this out in the comments section, after a brief back-and-forth in which Erik initially doubled down on the conflation, he abruptly decided to instead delete my comments explaining his mistake.)

No comment here really engages with the core argument in Erik Hoel's text: utilitarianism must be the basis of EA. How would you measure effectiveness if not through maximizing utility?

I identify as an EA, and have been drawn to utilitarianism my whole adult life, but always got stuck on the seemingly impossible problem of comparing suffering and pleasure. I guess some form of rule utilitarianism is where I ended up (e.g., don't kill unless to stop a killing), but it doesn't completely solve the problem.

Somewhat of an aside but I was wondering if some others could confirm or deny my observations:

From interacting with EA folks, my impression was that the community has quite a pluralistic attitude towards utilitarianism. That is, not everyone is a utilitarian, and those who are are often aware of the debates in philosophy, meaning that there's a variety of views in terms of whether people accept total utilitarianism vs. average utilitarianism, as well as act/rule/preference utilitarianism. Yet a lot of the criticisms seem to see the EA community as having a rigid consensus on kind of OG Bentham-esque act utilitarianism.

Thus it seems to me that either

  1. my impression is wrong, EA is committed to simple hedonistic act utilitarianism, or
  2. there's an opportunity for some messaging to better reflect that EA is more of a "big tent" in this regard.

Incidentally, I think this is a well-written and useful article. It voices concerns that many people have, and does so with an unusual amount of charity, so I think it would be a positive thing to try to read it in good faith.

I think you are right. 

I don't think most people care what kind of consequentialism EA is.

I think it's a shame that a rare article which covers the issue in good faith represents reality so poorly.

[anonymous]

Thanks, I enjoyed reading this. 

Is this a correct summary of your position:

  1. EA descends heavily from utilitarianism. E.g. Peter Singer's drowning child analogy is foundational, most people in EA appeal to utilitarianism to some degree, there's no real opposition to utilitarianism in EA, etc etc. 
  2. Utilitarianism has many, many problems, such as the incomparability of some moral actions/the lack of ethics being well-ordered. Attempts to make it into a theory of everything of morality end up either diluting it beyond recognition, including lots and lots of (too many) epicycles, including repugnancies that should lead us to reject utilitarianism, or some or all of these. 
  3. In practice, EAs do a lot of great, altruistic work that you want to see promoted. But little to none of this depends on utilitarianism. 

Is there anything you want to add or correct to this? 

Thanks! Your summaries are very helpful. Yes, I agree with the first two as a summary of my beliefs. However, for (3) I mostly agree with the first sentence, but disagree with the second. 

This is because in-principle objections to utilitarianism do have the potential to affect the altruistic work that EA does. Indeed, there’s a sense in which in-principle concerns impact all the in-practice ones. E.g., let’s say there are indeed qualitative moral differences, as (2) might imply. If so, then donating enough money to charity to save, in expectation, a life could very well not be qualitatively equivalent to jumping in to save a child from drowning in a pond. It might be merely quantitatively equivalent. That is, the latter might be a morally heroic act that it’s any adult’s duty to do, while the former is still admirable, but one has far less of a duty to do it. And if it’s true there’s a qualitative difference between saving a child from drowning and donating enough to charity to save a life in expectation, this calls into question whether the entire motto of EA, of maximizing the good, is accomplished by the sort of secular tithing that makes up the core of its in-practice operations. This is what's behind my suggestion (I probably should have made it explicit) to continue to shift EA away from purely utilitarian causes and to much broader ones, like promoting "longtermism" or even just cool projects that no one else is doing that have little to zero utilitarian value. I very much agree that this piece lacks any specifics of how to do that (I think I glibly suggest mining an asteroid) and could see a lack of specificity as a valid criticism of it, although I also think that the level of specificity of "move X dollars here" might be a somewhat high bar.

I think the comparison between calling yourself a Christian but not believing in the Divinity of Jesus or something is a worse analogy to being a non-Utilitarian EA than calling yourself a Republican but not believing in the divinity of Jesus. It’s true that utilitarianism is overrepresented among EAs including influential ones, and most of their favored causes are ones utilitarians like, but it is my impression that most EAs are not utilitarians and almost none of them think utilitarianism is just what EA is.

Given this, the post reads to me sort of like “I’m a pro-life free market loving Buddhist, but Christianity is wrong therefore I can’t be a Republican”.

This makes the rest of the post less compelling to me to be honest, debates about high level moral philosophy are interesting but unlikely to be settled over one blogpost (even just the debate over pure aggregation is extremely complicated, and you seem to take a very dismissive attitude towards it), and the connection to EA as a movement made in the post seems too dubious to justify it. The piece seems like a good explanation of why you aren’t a utilitarian, but I don’t take it that that was your motive.

The article seems to contradict itself in the end. In the beginning of the article, I thought you were saying you're not an EA because you're not a utilitarian (because utilitarianism is poison), and to be an EA is just to be a utilitarian in some form—and that even if EAs are utilitarians in a very diluted form, the philosophy they are diluting is still a poison, no matter how diluted, and so is unacceptable. So, I was expecting you to offer some alternative framework or way of thinking to build an altruistic movement on, like moral particularism or contractualism or something, but the solution I read you as giving is for EAs to keep doing exactly what you start off saying they shouldn’t do: for them to base their movement on utilitarianism in a diluted form. 

I hope this isn't uncharitable, but this is how your core argument actually comes across to me: “I’m not an EA because EAs ground their movement on a diluted form of utilitarianism, which is poisonous no matter how much you dilute it. What do I suggest instead? EAs should keep doing exactly what they’re doing—diluting utilitarianism until it’s no longer poisonous!” (Maybe my biggest objection to your article is that you missed the perfect opportunity for a homeopathy analogy here!) This seems contradictory to me, which makes it hard to figure out what to take from your article.  

To highlight what made me confused, I'll quote what I saw as the most contradictory-seeming passages. In the first four quotes here, you seem to say that diluting utilitarianism is futile because this can’t remove the poison:

“the origins of the effective altruist movement in utilitarianism means that as definitions get more specific it becomes clear that within lurks a poison, and the choice of all effective altruists is either to dilute that poison, and therefore dilute their philosophy, or swallow the poison whole.”

“This poison, which originates directly from utilitarianism (which then trickles down to effective altruism), is not a quirk, or a bug, but rather a feature of utilitarian philosophy, and can be found in even the smallest drop. And why I am not an effective altruist is that to deal with it one must dilute or swallow, swallow or dilute, always and forever.” … 

“But the larger problem for the effective altruist movement remains that diluting something doesn’t actually make it not poison. It’s still poison! That’s why you had to dilute it.” 

“I’m of the radical opinion that the poison means something is wrong to begin with.”

It seems like you’re saying we should neither drink the poison of pure utilitarianism nor try to fool ourselves into thinking it’s okay to drink it because we have diluted it. Yet here, at the end of the article, you sound like a Dr Bronner’s bottle commanding “dilute dilute dilute!”: 

“So here’s my official specific and constructive suggestion for the effective altruism movement: … keep diluting the poison. Dilute it all the way down, until everyone is basically just drinking water. You’re already on the right path. …


“I really do think that by continuing down the path of dilution, even by accelerating it, the movement will do a lot of practical good over the next couple decades as it draws in more and more people who find its moral principles easier and easier to swallow…  
 

“What I’m saying is that, in terms of flavor, a little utilitarianism goes a long ways. And my suggestion is that effective altruists should dilute, dilute, dilute—dilute until everyone everywhere can drink.”
 

If “everyone everywhere” is drinking diluted utilitarianism at the end, doesn’t that include you? Are you saying you’re not an EA because EAs haven’t diluted utilitarianism quite enough yet, but eventually you think they’ll get there? Doesn’t this contradict what you say earlier about poison being poison no matter how diluted? You seem to begin the essay in fundamental opposition to EA, and you conclude by explicitly endorsing what you claim is the EA status quo: “Basically, just keep doing the cool shit you’ve been doing”! 
 

I assume there are ways to re-phrase the last section of your article so you’re not coming across as contradicting yourself, but as it’s written now, your advice to EA seems identical to your critique of EA. 


What made you shy away from suggesting EA shift from utilitarianism to a different form of consequentialism or to a different moral (or even non-moral) framework entirely? It can't be that you think all the utilitarian EAs would have ignored you if you did that, because you say in the article that you know you’ll be ignored and lose this contest because your critique of EA is too fundamental and crushing for EAs to be able to face. Do you think diluted utilitarianism is the best basis for an altruistic movement? It certainly doesn't seem so in the beginning, but that's all I am able to get out of the ending. 

I think this is the standard reply to the repugnant conclusion, from Yudkowsky's Ends Don't Justify Means (Among Humans) (emphasis mine).

So here's a reply to that philosopher's scenario, which I have yet to hear any philosopher's victim give:

"You stipulate that the only possible way to save five innocent lives is to murder one innocent person, and this murder will definitely save the five lives, and that these facts are known to me with effective certainty.  But since I am running on corrupted hardware, I can't occupy the epistemic state you want me to imagine.  Therefore I reply that, in a society of Artificial Intelligences worthy of personhood and lacking any inbuilt tendency to be corrupted by power, it would be right for the AI to murder the one innocent person to save five, and moreover all its peers would agree.  However, I refuse to extend this reply to myself, because the epistemic state you ask me to imagine, can only exist among other kinds of people than human beings.

I.e., the repugnant conclusion tells us a lot more about how human cognition works than about how consequentialism fundamentally works.

It's certainly a reply, but if this argument were sound, it would apply to everything in human cognition. Seems like it's being applied selectively, such that all the repugnant conclusions are things we somehow cannot be sure of and therefore can't endorse, but all the non-repugnant conclusions of the philosophy are things we are sure about and can endorse. It's also a bit off, since in the first part he says the issue is definite knowledge, and the issue in the second part is that power corrupts. These are two separate replies, although both suffer from being selectively applied.

Hi Erik, thanks for sharing this, it made me think differently, which is much appreciated. 
I’d like to comment on one point you made: “One can see repugnant conclusions pop up in the everyday choices of the movement: would you be happier being a playwright than a stock broker? Who cares, stock brokers make way more money, go make a bunch of money to give to charity.”

I have a personal, subjective example which contradicts your criticism of this idea. I don’t know how prevalent this single experience will be in others, but I believe this idea is worth pushing back against. 

I was a musician, which was my “passion,” but I was extraordinarily unhappy whilst working as a musician. The poor financial prospects, the difficulty in making ends meet, and the requirement to piece together unpleasant part-time and freelance gigs in order to survive (and maybe fund the “fun” and more “artistically fulfilling” stuff once in a blue moon) resulted in my developing a repugnance towards music entirely. Most people who pursue their passions risk this experience, as the success rate of all artists is infinitesimally low. Or they often keep going, even though they don’t like it anymore, for any number of reasons.

Now, having changed careers towards research/academia (which obviously does not have a reputation for being lucrative, but compared with what I made as a musician looks like a goldmine), I finally made the GWWC pledge, and net happiness has increased for myself and others: I’m happier because I can afford to pay for rent and food, and on top of that can afford to donate as an EA. Other people are presumably happier because they benefit from those donations.

On top of the EA donations, I do tend to do what you mention at the end of the blog, especially as I have so many artist friends that I want to support. And that is extra (on top of EA donations). And again, it increases happiness for myself and others, as well as hitting at this diluted, longtermist view you give, by, for example, facilitating art making (which artists get to enjoy, and then consumers of the art) in addition to saving lives.

I’m not sure how useful this is or isn’t to you - clearly, we largely agree. I’m just adding a hint of perspective on this idealistic mention of “pursue your passion because it will make you happier,” which you describe as a move that contradicts EA philosophy. Because I believe the odds are that it actually won’t, and in many circumstances one might actually be happier as that stock broker.
