
Contra Vasco Grilo on "GiveWell may have made 1 billion dollars of harmful grants, and Ambitious Impact incubated 8 harmful organisations via increasing factory-farming?"

The post above explores how, under a hedonistic utilitarian moral framework, the meat-eater problem may make GiveWell grants or AIM charities net-negative. The post seems to argue that, on expected-value grounds, one should let children die of malaria because they could end up eating chicken, for example.

I find this argument morally repugnant and want to highlight why, using some of the words from a reply I posted there:

Let me quote William MacAskill's comments from "What We Owe the Future" and his reflections on FTX (https://forum.effectivealtruism.org/posts/WdeiPrwgqW2wHAxgT/a-personal-statement-on-ftx):

A clear-thinking EA should strongly oppose “ends justify the means” reasoning.

First, naive calculations that justify some harmful action because it has good consequences are, in practice, almost never correct.

Second, plausibly it is wrong to do harm even when doing so will bring about the best outcome.

Finally, let me say that the post itself seems to pit animal welfare against global poverty causes, which I find divisive and probably counterproductive.

I downvoted this post because it is not representative of the values I believe EA should strive for. Showing disagreement might have been sufficient, but if someone visits the forum for the first time and sees this post with many upvotes, their impression will be negative and they may not engage with the community. If a reporter reads the forum and comes across this, they will cover both EA and animal welfare negatively. And if someone considering taking the 10% pledge, or changing their career to support either animal welfare or global health, reads this, they will be less likely to do so.

I am sorry, but I will strongly oppose the "ends justify the means" argument put forward by this post.

Vasco has come to a certain conclusion on what the best action is, given a potential trade-off between the impact of global health initiatives and animal welfare.

I think it is reasonable to disagree, but I think it is bad for the norms of the forum, and unnecessarily combative, for us to describe moral views we disagree with as "morally repugnant". I think this is particularly unfair if we do not elaborate on why we either:

a) think this trade-off does not exist, or is very small, or

b) disagree with the conclusion drawn from it.

For example, global health advocates could similarly argue that EA pits direct cash transfers against interventions like anti-malaria bednets, which is divisive and counterproductive, and that EA forum posts doing this will create a negative impression of EA on reporters and potential 10% pledgers.

In my view, discussing difficult, morally uncomfortable trade-offs between prioritising different, important causes is a key role of the EA forum - whether within cause areas (should we let children die of cancer to prioritise tackling malaria / should we let cows be abused to prioritise reducing battery cage farming of hens), or across cause areas. We should discuss these questions openly rather than avoiding them to help us make better moral decisions. 

I think it would also be bad if we stopped discussing these questions openly for fear of criticism from reporters - this would bias EA towards preserving the world's moral status quo enforced by the media.

Also, traditionally, criticism of "ends justifies the means" reasoning tends to object to arguments that encourage us to actively break deontological rules (like laws) to pursue some aggregate increase in utility, rather than to arguments for prioritising one approach to improving utility over another (which causes harm by omission rather than active harm), e.g. prioritising animal welfare over global health, or vice versa. With a more expansive use of the term, critics could reject GiveWell-style charity comparison as "ends justifies the means" reasoning, on the grounds that it argues one should let some children die of tetanus to save other children from malaria.

I'd say that it's a (putative) instance of adversarial ethics rather than "ends justify the means" reasoning (in the usual sense of violating deontic constraints).

Sometimes that seems OK. Like, it seems reasonable to refrain from rescuing the large man in my status-quo-reversal of the Trolley Bridge case. (And to urge others to likewise refrain, for the sake of the five who would die if anyone acted to save the one.) So that makes me wonder if our disapproval of the present case reflects a kind of speciesism -- either our own, or the anticipated speciesism of a wider audience for whom this sort of reasoning would provide a PR problem?

OTOH, I think the meat-eater problem is misguided anyway, so another possibility is just that mistakenly urging against saving innocent people's lives is especially bad. I guess I do think the moral risk here is sufficient to be extra wary about how one expresses concerns like the meat-eater problem. Like Jason, I think it's much better to encourage AW offsets than to discourage GHD life-saving.

(Offsetting the potential downsides from helping others seems like a nice general solution to the problem of adversarial ethics, even if it isn't strictly optimal.)

You may be interested to read some of MacAskill's older writing on the subject: https://www.lesswrong.com/posts/FCiMtrsM8mcmBtfTR/?commentId=9abk4EJXMtj72pcQu

Just wanted to copy MacAskill's comment here so people don't have to click through: 

Though I was deeply troubled by the poor meat-eater problem for some time, I've come to the conclusion that it isn't that bad (for utilitarians - I think it's much worse for non-consequentialists, though I'm not sure).

The basic idea is as follows. If I save the life of someone in the developing world, almost all the benefit I produce is through compounding effects: I speed up technological progress by a tiny margin, giving us a little bit more time at the end of civilisation, when there are far more people. This benefit dwarfs the benefit to the individual whose life I've saved (as Bostrom argues in the first half of Astronomical Waste). Now, I also increase the amount of animal suffering, because the person whose life I've saved consumes meat, and I speed up development of the country, which means that the country starts factory farming sooner. However, we should expect (or, at least, I expect) factory farming to disappear within the next few centuries, as cheaper and tastier meat substitutes are developed. So the increase in animal suffering doesn't compound in the same way: whereas the benefits of saving a life continue until the human race (or its descendants) dies out, the harm of increasing meat consumption ends after only a few centuries (when we move beyond farming).

So let's say the benefit to the person from having their life saved is N. The magnitude of the harm from increasing factory farming might be a bit more than N: maybe -10N. But the benefit from speeding up technological progress is vastly greater than that: 1000N, or something. So it's still a good thing to save someone's life in the developing world. (Though of course, if you take the arguments about x-risk seriously, then alleviating global poverty is dwarfed by existential risk mitigation).
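To make the arithmetic in the quoted comment explicit, here is a minimal sketch using MacAskill's own placeholder figures (illustrative values from his comment, not actual estimates):

$$\underbrace{N}_{\text{life saved}} \;-\; \underbrace{10N}_{\text{factory farming}} \;+\; \underbrace{1000N}_{\text{compounding progress}} \;=\; 991N \;>\; 0$$

On these figures, the net effect stays positive unless the compounding benefit is discounted by more than two orders of magnitude, which is why the argument leans so heavily on the persistence of the long-run benefit.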

Thanks MHR!

This is informative; I strongly upvoted it. A few comments, though:

  1. I find it OK to entertain the question of the expected value of doing X or Y as a function of their consequences, whether for longtermism or animal welfare.

  2. I would find it very morally unappealing to refuse to save lives on the grounds of convicting people of actions they have not yet committed. E.g., if a child is drowning before you, it would be wrong, in my opinion, to let her drown because she might go on to cause animal suffering. A person can make her own decisions, and I would find it wrong to let her die because of what her statistical group does.

As I commented there: I don't think this is the kind of "ends justify the means" reasoning that MacAskill is objecting to. Vasco isn't arguing that we should break the law. He's just doing a fairly standard EA cause prioritization analysis. Arguing that people should not donate to global health doesn't even contradict common-sense morality because, as we see from the world around us, common-sense morality holds that it's perfectly permissible to let hundreds or thousands of children die of preventable diseases. Utilitarians and other consequentialists are the ones who hold "weird" views here, because we reject the act/omission distinction in the first place.

(For my part, I try to donate in such a way that I'm net-positive from the perspective of someone like Vasco as well as global health advocates.)

Hi @Jbentham,

Thanks for the answer. See https://forum.effectivealtruism.org/posts/K8GJWQDZ9xYBbypD4/pabloamc-s-quick-takes?commentId=XCtGWDyNANvHDMbPj for some of the points. Specifically, the problem I have with the post is not about cause prioritization or cost-effectiveness.

Arguing that people should not donate to global health doesn't even contradict common-sense morality because, as we see from the world around us, common-sense morality holds that it's perfectly permissible to let hundreds or thousands of children die of preventable diseases.

I think I disagree with this. Instead, I think most people find it hard to act on what they believe because of social norms. But I think it would be hard to find a significant percentage of people who believe that letting innocent children die because of what they could do is permissible.

Utilitarians and other consequentialists are the ones who hold "weird" views here, because we reject the act/omission distinction in the first place.

Probably you are somewhat right here, but I believe endorsing "letting innocent children die" is an even weirder opinion to hold.
