Here is my entry for the EA criticism contest being held this month. Originally, this essay was linked to the EA Forum by someone else yesterday, but that post was eventually deleted (not sure why). So I'm reposting it personally now, since the contest guidelines encourage engaging with the EA Forum.
A note: This is very much a strident critical piece about the utilitarian core of EA, so know that going in. I definitely do not expect the average reader here to agree with my points. My hope is that what's helpful or interesting about this essay is that it shows how a critic might think about the movement, especially how they can be put off by the inevitable "repugnancy" of utilitarianism, and, ideally, that you might walk away more favorable to the idea that diluting utilitarianism out of EA helps broaden the scope of the movement and makes it appeal to more people. So please "judge" the piece along those lines. I expect criticism in response, of course. (However, I won't be posting many replies, simply because this is your space and I don't want to bogart it. I may reply occasionally if I think it'd be especially helpful/relevant.)
Here's an excerpt to get a sense of it:
How can the same moral reasoning be correct in one circumstance, and horrible in another? E.g., while the utilitarian outcome of the trolley problem is considered morally right by a lot of people (implying you should switch the tracks, although it's worth noting that even then plenty disagree), the thought experiment of the surgeon slitting innocent throats in alleys is morally wrong to the vast majority of people, even though the two are based on the same logic. Note that this is exactly the same as our intuitions in the shallow pond example. Almost everyone agrees that you should rescue the child, but when that same utilitarian logic is applied to many more decisions instead of just that one, our intuitions shift to finding the inevitable human factory-farms repugnant. This is because utilitarian logic is locally correct in some instances, particularly in low-complexity ceteris paribus set-ups, and such popular examples are what make the philosophy attractive and have spread it far and wide. But the moment the logic is extended to similar scenarios with slightly different premises, or the situation itself complexifies, or the scope of the thought experiment expands to encompass many more actions instead of just one, you are suddenly right back at some repugnant conclusion. Such a flaw is why the idea of "utility monsters" (originally introduced by Robert Nozick) is so devastating for utilitarianism: they take us from our local circumstances to a very different world, one in which monsters derive more pleasure and joy from eating humans than humans suffer from being eaten, and most people would find the pro-monsters-eating-humans position repugnant.
To give a metaphor: Newtonian physics works really well as long as all you're doing is approximating cannonballs and calculating weight loads and things like that. But it is not so good for understanding the movement of galaxies, or what happens inside a semiconductor. Newtonian physics is "true" when the situation is constrained and simple enough for it to apply; so it is with utilitarianism. This is the etiology of the poison.
Thanks, Nathan. I'll try to keep my replies brief here and address the critical points of your questions.
I wouldn't phrase it like this. I think EA has been a positive force in the world so far, particularly in some of the weirder causes I care about (e.g., AI safety, stimulating the blogosphere, etc.). But I think it's often good practices chasing bad philosophy, and my further suggestion is that the best thing to do is dilute that bad philosophy out of EA as much as possible (which, as I point out, is already a trend I see happening now).
This is why I make the metaphor to arbitrage (e.g., pointing out that arbitrage is how SBF made all his money, and using the term "utilitarian arbitrage"). Even if it were true that one can find repugnant conclusions from any notion of morality whatsoever (I'm not sure how one would prove this), there would still be greater and lesser degrees of repugnance, and differences in how easily they are arrived at. E.g., the original repugnant conclusion is basically the state of the world should utilitarianism be taken literally: a bunch of slums stuffed with lives barely worth living. This is because utilitarianism, as I tried to explain in the piece, is based on treating morality like a market and performing arbitrage. So you just keep going, performing the arbitrage. Other moral theories, which are based not on arbitrage but on, say, rights or duties (just to throw out examples), don't have this maximizing property, so they don't lead so inexorably to repugnant conclusions. That is, just because you can identify cases of repugnancy doesn't mean they are equivalent: one philosophy might lead very naturally to repugnancies (as I think utilitarianism does), whereas another might require incredibly specific states of the world (e.g., an axe murderer in your house). Even if two philosophies both fail in dealing with specific cases of serial killers, there's a really big difference when one of them encourages you to be the serial killer if you can get away with it.
Also, if one honestly believes that all moral theories end up with uncountable repugnancies, why not be a nihilist, or a pessimist, rather than an effective altruist?
From the text it should be pretty clear that I disagree with this, as I give multiple examples of repugnancy that are not Parfit's classic "repugnant conclusion," and I also say that adding in epicycles by expanding beyond what you're calling "total utilitarianism" often just shifts where the repugnancy is, or trades one for another.
I'm not aware of saying that no one in EA is aware of these problems (indeed, one of my later points implies that they absolutely are), nor that EA avoids them by mere accident. I said explicitly that it avoids them by diluting the philosophy with more and more epicycles to make it palatable. E.g., "Therefore, the effective altruist movement has to come up with extra tacked-on axioms that explain why becoming a cut-throat sociopathic business leader who is constantly screwing over his employees, making their lives miserable, subjecting them to health violations, yet donates a lot of his income to charity, is actually bad. To make the movement palatable, you need extra rules that go beyond cold utilitarianism. . ."
The latter.
I've gone on too long here after saying in the initial post I'd try to keep my replies to a minimum. Feel free to reply, but this will be my last response.