Here is my entry for the EA criticism contest being held this month. Originally, someone else linked this essay to the EA Forum yesterday, but that post was eventually deleted (I'm not sure why). So I'm reposting it personally now, since the contest guidelines encourage engaging with the EA Forum.
A note: this is very much a strident critical piece about the utilitarian core of EA, so know that going in. I definitely do not expect the average reader here to agree with my points. My hope is that the essay is helpful or interesting because it shows how a critic might think about the movement, and especially how they can be put off by the inevitable "repugnancy" of utilitarianism. In the ideal case, you might walk away more favorable to the idea that diluting the utilitarianism out of EA helps broaden the scope of the movement and makes it appeal to more people. So please "judge" the piece along those lines. I expect criticism in response, of course. (However, I won't be posting many replies, just because this is your space and I don't want to bogart it. I may reply occasionally if I think it would be especially helpful or relevant.)
Here's an excerpt to get a sense of it:
How can the same moral reasoning be correct in one circumstance, and horrible in another? E.g., while the utilitarian outcome of the trolley problem is considered morally right by a lot of people (implying you should switch the tracks, although it’s worth noting even then that plenty disagree), the thought experiment of the surgeon slitting innocent throats in alleys is morally wrong to the vast majority of people, even though they are based on the same logic. Note that this is exactly the same as our intuitions in the shallow pond example. Almost everyone agrees that you should rescue the child but then, when this same utilitarian logic is instead applied to many more decisions instead of just that one, our intuitions shift to finding the inevitable human factory-farms repugnant. This is because utilitarian logic is locally correct, in some instances, particularly in low-complexity ceteris paribus set-ups, and such popular examples are what make the philosophy attractive and have spread it far and wide. But the moment the logic is extended to similar scenarios with slightly different premises, or the situation itself complexifies, or the scope of the thought experiment expands to encompass many more actions instead of just one, then suddenly you are right back at some repugnant conclusion. Such a flaw is why the idea of “utility monsters” (originally introduced by Robert Nozick) is so devastating for utilitarianism: they take us from our local circumstances to a very different world in which monsters derive more pleasure and joy from eating humans than humans suffer from being eaten, and most people would find the pro-monsters-eating-humans position repugnant.
To give a metaphor: Newtonian physics works really well as long as all you’re doing is approximating cannonballs and calculating weight loads and things like that. But it is not so good for understanding the movement of galaxies, or what happens inside a semiconductor. Newtonian physics is “true” if the situation is constrained and simple enough for it to apply; so too with utilitarianism. This is the etiology of the poison.
In regard to your blog: I've written this quickly and bluntly, but I think it reflects the tone of your article, so I think that's okay. I'm sure we'd enjoy chatting over dinner, and I respect anyone who takes the time to write a blog.
Implicit claim: EAs are mostly utilitarians
IIRC about 70% of EAs are consequentialists, but I don't think most would respond to your claims in the way you describe. I think your claims are largely straw men.
Claim: Utilitarians would murder 1 person to save 5
a) No they wouldn't. They would know that they would be arrested and unable to help the many others in easier and less risky ways. Given how cheap it is to save a life, why would you risk prison?
b) No one in EA acts like this. No one in EA tells people to act like this. It's a straw man.
Claim: There is a slippery slope where you have to become a moral angel and give everything away.
I think EA has probably messaged badly on this in the past. But now I guess there is a consensus that you choose how much of your resources you want to use effectively and then you do so. Will MacAskill donates everything above £26k. That's way too much for me and nearly everyone else. Sure, there is a status hierarchy of self-denial (though there are other status hierarchies as well), but are we really going to criticise that compared to all the other status hierarchies in other movements? Many people live normal lives and use their jobs or single-digit percentages of their income to help people. There is room for concern here, but I think it's overblown.
Claim: "Is the joy of rich people hiking really worth the equivalent of all the lives that could be stuffed into that land if it were converted to high-yield automated hydroponic farms and sprawling apartment complexes?"
This is not an EA view; it's my personal one. If there were a billion Americans, the US would be less dense than the UK. It would be far less dense than England. There isn't a tradeoff here. But if there were, I'd still lose the views and let billions have the lives we have. Is living in Tokyo so bad? No. Fly and vacation somewhere people don't want to work so badly. If we argued it out, I think I could convince you that wanting to keep immigrants out to preserve landscapes which wouldn't be built on anyway is actually a monstrous view.
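For what it's worth, here is the rough arithmetic behind that density claim. The population and land-area figures below are approximate and only illustrative:

```python
# Back-of-the-envelope density comparison; all figures are rough approximations.
US_LAND_AREA_KM2 = 9_150_000      # US land area, roughly (includes Alaska)
UK_AREA_KM2 = 242_000
ENGLAND_AREA_KM2 = 130_000
UK_POPULATION = 67_000_000
ENGLAND_POPULATION = 56_000_000

hypothetical_us_density = 1_000_000_000 / US_LAND_AREA_KM2   # ~110 people per km^2
uk_density = UK_POPULATION / UK_AREA_KM2                     # ~280 people per km^2
england_density = ENGLAND_POPULATION / ENGLAND_AREA_KM2      # ~430 people per km^2

print(f"US with 1 billion people: {hypothetical_us_density:.0f} people/km^2")
print(f"UK today:                 {uk_density:.0f} people/km^2")
print(f"England today:            {england_density:.0f} people/km^2")
```

Even at a billion people, the US would sit around 110 people per km², versus roughly 280 for the UK today and over 400 for England.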
Claim: "This means that, at least in principle, to me effective altruism looks like a lot of good effort originating in a flawed philosophy. However, in practice rather than principle, effective altruists do a lot that I quite like and agree with and, in my opinion"
The upshot of the blog seems to be that EA is fine, actually. Which means I don't really understand why you framed it so negatively. It's like saying, "You shouldn't date that girl, because maybe you'll get her pregnant and it will ruin your lives." Sure, but how likely is that? If you think that EA largely gets these compromises correct, why are you writing an article describing it as poison? Maybe that is bad behaviour?
Claim: "would you be happier being a playwright than a stock broker? Who cares, stock brokers make way more money, go make a bunch of money to give to charity."
Have you talked to any EAs this has happened to? I'd guess this has happened to 0-5% of EAs in the last 5 years. Feel free to tell me I'm wrong. I don't think it's common, so I think this is unfair. I have seen plenty of EAs be told to enjoy their passions. Sometimes we are called libertines.
Claim: "Who can say no to identifying cost-effective charities?"
Almost everyone. Hardly any of total charity donations go to cost-effective charities. If you think this is obvious, I'd encourage you to write an article about it. Depending on how many subscribers you have, it might save a life in expectation.
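To make "save a life in expectation" concrete, here is a quick sketch. It uses the commonly cited figure of roughly $5,000 to save a life via a top charity; the readership and conversion numbers are made up purely for illustration:

```python
# Illustrative expected-value sketch; every input here is an assumption.
COST_PER_LIFE_SAVED = 5_000      # rough, commonly cited figure for top charities (USD)

subscribers = 5_000              # hypothetical readership of the article
persuasion_rate = 0.005          # assume 0.5% of readers redirect a donation
donation_per_reader = 200        # assume $200 redirected per persuaded reader

expected_donations = subscribers * persuasion_rate * donation_per_reader
expected_lives_saved = expected_donations / COST_PER_LIFE_SAVED
print(f"Expected lives saved: {expected_lives_saved:.1f}")   # ~1.0 under these made-up inputs
```

Under those made-up inputs it comes out to about one life in expectation, which is all I mean by the sentence above.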
Claim: "Whereas I’m of the radical opinion that the poison means something is wrong to begin with."
What is your alternative? Happy to read a blog about it.
And what about the poisons in other views? I haven't seen you say something like, "Now, I don't think EA is much worse than other views, but it's new, so I'm writing about it." There were many ways to make relative claims as well as absolute ones.
Claim: "Basically, just keep doing the cool shit you’ve been doing, which is relatively unjustifiable from any literal utilitarian standpoint, and keep ignoring all the obvious repugnancies taking your philosophy literally would get you into, but at the same time also keep giving your actions the epiphenomenal halo of utilitarian arbitrage, and people are going to keep joining the movement, and billionaires will keep donating, because frankly what you’re up to is just so much more interesting and fun and sci-fi than the boring stuff others are doing."
0) Yes you're right.
1) It's a fun essay to read, but you are doing a C- job at convincing me you aren't an EA.
2) It seems like you basically agree with EA recommendations in their entirety. It seems unfair that you've made up a bogeyman to criticise when you yourself acknowledge that in practice it's going really well.
3) I wish this essay weren't so negative in framing, or that the "EA is poison" soundbite wouldn't get quoted, when your conclusions are actually positive. I think that was bad behaviour from you, and I don't think you'd like me to have done it.
4) I would enjoy an essay with concrete ways you think EA goes too far. I think that would be valuable. But I want bad outcomes that you actually think will happen.
5) If this is going to leave you with the opinion that EAs are all defensive, then tell me and I'll edit it. I wrote it in haste. But in my defence, I did read the entirety of your blog and respond to the points I thought were important.
6) I think you wrote this in good faith.
Thanks for talking :)