
The meat eater problem

I've heard people argue that global health and development work could actually be net negative in utility due to the consequent increase in meat consumption and factory farming. It feels like EAs who value animal suffering at all must provide very clear reasons why it's okay to support saving the lives of meat-eaters and developing the third world, given the "meat eater problem."

Even stepping away from utilitarianism, it seems more wrong to actively save someone who is very likely going to commit a moral atrocity (if you believe eating meat is a moral atrocity).

How do you deal with the meat-eater problem? I find the problem very compelling and I do not know myself how to deal with it.


23 Answers

Since the discussion on this thread, I've had the view that the meat-eater problem is dwarfed by the cause prioritisation problem, in the sense that if you give money to a global health and development charity, overwhelmingly the biggest harm to animals is that you didn't give that money to animal welfare charities: the actual negative effect of your donation is likely very small by comparison.

(There's obviously an act-omission difference here, but I don't personally find that an important difference.)

I'm worried that the chunk of EA concerned with effective near-term human charities is at risk of being net negative

6
Vasco Grilo🔸
Hi Sammy, I think describing donations to global health and development interventions as net negative because animal welfare interventions are more cost-effective is misleading:

* The default counterfactual for this sort of comparison is simply not donating anything, i.e. keeping more money for personal consumption.
* I believe cost-effective global health and development interventions, such as those of GiveWell's top charities, are more cost-effective than the marginal personal consumption of donors.

Just to be clear, I also see value in Ben's point. As I had answered below:
3
sammyboiz
Sorry, I meant compared to doing nothing. I'm mainly concerned specifically about the increased meat consumption and factory farming that result from GH&D.
2
Vasco Grilo🔸
Ah, sorry for misinterpreting too. I thought your "net negative" was connected to Ben's point.
2
Miquel Banchs-Piqué (prev. mikbp)
Whether something is net negative or positive, and by how much, just depends on the values of whoever makes the assessment, so I don't think such statements are useful. These EAs may be net negative given your values; probably much less so given their own. I don't think it is useful or helpful to speak in general terms about how positive or negative the (expected) value of something is. There is no universal way to value things.

It is a pretty uncomfortable problem, and not one that I have been able to reconcile very well. One way around it is steering people to support global health/development orgs that help people while not increasing meat consumption. An example is Female Family Empowerment Media, which improves openness and access to contraceptives in Nigeria. Another example is the Beans is How Coalition, which aims to double worldwide bean consumption to reduce hunger and increase sustainability.

Tangentially, this conversation illustrates how (if person-affecting views are false) the sign of Family Empowerment Media (FEM) is the opposite of AMF and other life-saving charities. FEM prevents human lives and AMF saves lives, and they have opposite downstream effects on human lived experience, farmed animal welfare, and so on.

Therefore, I would not suggest anyone split their donations between life-preventing charities like FEM and lifesaving charities like AMF, because their effects will offset each other. People who are sympathetic to FEM (as op... (read more)

The sign is only opposite through this particular generic population increase/decrease channel though.

AMF also has impacts on quality of life and maybe human capital I suspect.

FEM may have positive impacts on earnings, on women’s rights, and on the composition of who has children and when (ie towards people who are ready and willing to have them). 

5
Ariel Simnegar 🔸
I agree with that caveat! Though I suspect that the downstream effects of the population increase/decrease channel dominate, especially for animal welfare.

EDIT: Thanks Richard, slightly silly question in retrospect!

Thanks Constance, how is FEM (I think it's Family, not Female :D) better from this perspective than any other life-saving org, like AMF?

4
Richard Y Chappell🔸
I think the idea is to reduce the future population of meat-eaters by encouraging contraceptive use, so kind of the opposite (in terms of total population) of saving lives. (I have to say, the idea that we should positively prefer future people to not exist sounds pretty uncomfortable to me, and certainly less appealing than supporting people in making whatever reproductive decisions they personally prefer, which would include both contraceptive and fertility/child support.)
4
Ariel Simnegar 🔸
Your writings on this subject often emphasize an extremely high regard for the value of people making their own reproductive decisions, even when the stakes are (as in this case) a human's life and an enormous amount of farmed animal suffering. When would the other stakes be sufficiently large for you to endorse preventing someone from making their own reproductive decision?

For example, let's say Hitler's mother could have been forced to have an abortion, preventing Hitler's birth. Would you say that's a tradeoff worth making, with regret? Or let's say we know Alice's son Bob, were he to be born, will save 1 billion lives by preventing a nuclear war, and Alice currently intends to abort Bob. Would you say forcing Alice to carry Bob to term would be a tradeoff worth making, with regret about the forced birth?

The reason I ask is that my intuition is that while reproductive autonomy is very important, there are always ways to up the stakes such that it can be the right thing to compromise on that principle, with regrets. I feel like there's something I'm missing in my understanding of your view which has caused us historically to talk past each other.
4
Richard Y Chappell🔸
If you can stipulate (e.g. in a thought experiment) that the consequences of coercion are overall for the best, then I favor it in that case. I just have a very strong practical presumption (see: principled proceduralism) that liberal options tend to have higher expected value in real life, once all our uncertainty (and fallibility) is fully taken into account. Maybe also worth noting (per my other comment in this thread) that I'm optimistic about the long-term value of humanity and human innovation. So, putting autonomy considerations aside, if I could either encourage people to have more kids or fewer, I think more is better (despite the short-term costs to animal welfare).
4
NickLaing
Thanks, of course! That's actually quite obvious in retrospect; not sure how I missed it on first pass. There would also be a counter-argument that reducing family size is strongly associated with rapid development and, in turn, the mass deployment of factory farming. One American probably eats 10-30x the meat of the Nigerians FEM serves. It's tricky...

Beans is How Coalition

Cool org I've not heard of, thanks!!

You would still have to deal with the increase in factory farming and per capita meat consumption that comes with societal development.

This doesn't answer the core question, but in many places like Uganda, the people whose lives we are saving (people who live in the village) aren't eating any factory-farmed meat. The little meat they do eat is from animals reared locally, which I think have net positive lives.

Fai

I have recently done a bit of research on the intensification of animal agriculture in Africa. I have a few comments to make in response to yours.

I am very confident that people in poor countries like Uganda eat far fewer animal products than the global average. But I am not sure that they all don't eat factory farmed animal products. I have quite a high level of belief in your claim about the meat consumption patterns of the people in the areas in Uganda you work in. But I don't think we should generalise to: "all people in very poor countries don't eat factory farmed meat".

I think a very important fact we should recognize is that factory farming clearly exists and is booming and intensifying quickly in Africa, including Uganda, and even poorer countries such as Burundi and South Sudan. This means that the meat-eating problem (I'm convinced by JWS's comment that we should change the wording, even though I don't agree with everything said in that comment), if it is a problem at all, is going to get worse in Africa and other parts of the world with many people in extreme poverty.

A very important note needs to be introduced here: I think there is one species of farmed animals we sho... (read more)

Thanks Fai - I was just making a small comment, to point out that most rural Ugandans, and probably most Sub-Saharan Africans, eat either little or no factory-farmed meat.

To your comment "But I am not sure that they all don't eat factory farmed animal products." For sure, plenty of people here eat a LOT of barn/factory-farmed meat (mainly chickens, see below) - those in cities - but not the 70% of people who live in the village and barely ever buy "meat" at all.

I agree it's unclear whether free-range farm animals live net positive lives - I'm maybe 70% sure they do. There's no de-beaking in the village here - that starts happening when animal farming becomes commercial. For sure transport is often nasty, and slaughter is often especially horrible, but in my wee opinion that's not nearly enough to negate the rest of their lives doing what animals do without huge constraint. We can probably agree to disagree on the whole-life positivity vs. negativity thing. Yes, chicken lives are short, and many die early of disease (which can, but doesn't always, involve a lot of suffering). Most village chickens here though live between 4 and 9 months - it takes 6 months-ish before they reach sexual maturity.

I ... (read more)

4
Fai
Thank you for your detailed reply! I admire your courage to raise this issue in front of your colleagues/the locals there - I am not sure I would find the courage to do so.  I have some hope that there might at least be ways to reduce the % of factory farming there will be in poor countries in the world in the future. Some EAs are working on it and I am trying to see what I can help there too.

Nice point, Nick! Even assuming conditions are as bad as in high-income countries (pessimistic), I Fermi estimated that accounting for the meat-eater problem only decreases the cost-effectiveness of GiveWell's top charities by 8.72 %. On the other hand, I did not account for future increases in the consumption of animals throughout the lives of the people who are saved (optimistic), which usually follow economic growth. For reference, I also Fermi estimated the badness of the experiences of all farmed animals alive is 4.64 times the goodness of the experienc... (read more)
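(For readers following the arithmetic: a minimal sketch of how such a penalty would be applied, using only the 8.72 % figure from the comment above; the baseline of 1.0 is a normalised placeholder, not a real estimate.)

```python
# Minimal sketch: applying the Fermi-estimated meat-eater penalty to a
# cost-effectiveness figure. The 8.72% penalty is from the comment above;
# the baseline of 1.0 is a normalised placeholder, not a real estimate.
baseline_ce = 1.0            # GiveWell top charity cost-effectiveness (normalised)
meat_eater_penalty = 0.0872  # estimated reduction from the meat-eater problem

adjusted_ce = baseline_ce * (1 - meat_eater_penalty)
print(f"Adjusted cost-effectiveness: {adjusted_ce:.4f} of baseline")  # 0.9128
```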

9
NickLaing
Thanks Vasco, yeah I've read your great posts. I straight up disagree on the relative importance of farmed animal welfare to human welfare, but that's because I'm not a hedonistic utilitarian and have far lower moral weights for animals than RP.

Even if the current population isn't consuming much factory-farmed meat, if it's children's lives being saved, the amount they consume over the next half century or so may be substantial as the countries develop and adopt more industrialised food production. Also, saving lives today seems likely to increase population in future (I recall a GiveWell-commissioned study on this), so potentially leading to greater factory-farmed meat consumption.

I feel like this answer to the problem is easily forgotten by me, and probably a lot of similar-minded people who post here, because it's not a clever, principled philosophical solution. But on reflection, it sounds quite reasonable! 

I like this explanation a lot, from an 80000 Hours podcast:

Rob Wiblin: One really important consideration that plays into Open Phil’s decisions about how to allocate its funding — and also it really bears importantly on how the effective altruism community ought to allocate its efforts — is worldview diversification. Yeah, can you explain what that is and how that plays into this debate?

Alexander Berger: Yeah, the central idea of worldview diversification is that the internal logic of a lot of these causes might be really compelling and a little bit totalizing, and you might want to step back and say, “Okay, I’m not ready to go all in on that internal logic.” So one example would be just comparing farm animal welfare to human causes within the remit of global health and wellbeing. One perspective on farm animal welfare would say, “Okay, we’re going to get chickens out of cages. I’m not a speciesist and I think that a chicken-day suffering in the cage is somehow very similar to a human-day suffering in a cage, and I should care similarly about these things.”

Alexander Berger: I think another perspective would say, “I would trade an infinite number of chicken-days for any human experience. I don’t care at all.” If you just try to put probabilities on those views and multiply them together, you end up with this really chaotic process where you’re likely to either be 100% focused on chickens or 0% focused on chickens. Our view is that that seems misguided. It does seem like animals could suffer. It seems like there’s a lot at stake here morally, and that there’s a lot of cost-effective opportunities that we have to improve the world this way. But we don’t think that the correct answer is to either go 100% all in where we only work on farm animal welfare, or to say, “Well, I’m not ready to go all in, so I’m going to go to zero and not do anything on farm animal welfare.”

Alexander Berger: We’re able to work on multiple things, and the effective altruism community is able to work on multiple things. A lot of the idea of worldview diversification is to say, even though the internal logic of some of these causes might be so totalizing, so demanding, ask so much of you, that being able to preserve space to say, “I’m going to make some of that bet, but I’m not ready to make all of that bet,” can be a really important move at the portfolio level for people to make in their individual lives, but also for Open Phil to make as a big institution.

Rob Wiblin: Yeah. It feels so intuitively clear that when you’re to some degree picking these numbers out of a hat, you should never go 100% or 0% based on stuff that’s basically just guesswork. I guess, the challenge here seems to have been trying to make that philosophically rigorous, and it does seem like coming up with a truly philosophically grounded justification for that has proved quite hard. But nonetheless, we’ve decided to go with something that’s a bit more cluster thinking, a bit more embracing common sense and refusing to do something that obviously seems mad.

This is also how I think about the meat eater problem. I have a lot of uncertainty about the moral weight of animals, and I see funding/working on both animal welfare and global development as a compromise position that is good across all worldviews. (Your certainty in the meat eater problem can reduce how much you want to fund global development on the margin, but not eliminate it altogether.)

Thanks for sharing that, Karthik.

I have a lot of uncertainty about the moral weight of animals, and I see funding/working on both animal welfare and global development as a compromise position that is good across all worldviews.

Note that stopping support for global health and development is not on the table. We are discussing marginal donations/spending. Global spending on global health and development would still be much larger than that on animal welfare even after a small donor, or even Open Phil, switched to overwhelmingly supporting animal welfare.

Imagine you h... (read more)

9
Karthik Tadepalli
The amount of global spending on each cause is basically irrelevant if you think most of it is non-impactful. Imagine that John Q Warmglow donates $1 billion to global health, but he stipulates that that billion can only be spent on PlayPumps. Then global spending on GHD is up by $1 billion, but the actual marginal value of money to GHD is unchanged, because that $1 billion did not go to the best opportunities, the ones that would move down the marginal utility of money to the whole cause area. I understand you're aware of this, which is why your Fermi estimates focus on the marginal value of money to each cause by comparing the best areas within each cause. But the level of global spending on a cause contributes very little to the marginal value of money if most of that spending is low-impact.

I don't have a satisfying answer to what x is for me. I will say somewhere between 0.5 and 1.5, corresponding to the intuition that neither GHD nor FAW dominates the other. I would guess my cruxes with you come from two sources:

1. My median moral weight on chickens is much less than 0.33, ~2 OOMs less.[1] This is a difficult inferential gap to cross.
2. I think the quality of FAW cost-effectiveness estimates is vastly lower than that of GHD cost-effectiveness estimates, making the comparison apples-to-oranges. Saulius's estimates are a good start on a hard problem, but
   * There are a lot of made-up numbers based on intuition (e.g. their assumption of 24% compliance with pledges in the absence of follow-up pressure is wildly out of line with my intuitions).
   * There's likely steeply declining returns to effort, given that campaigns will initially target the lowest-hanging fruit, and eventually things will get much harder. Making a cost-effectiveness estimate based on early successful attempts is not representative of the value of future funding.

This is not a knock on people who are doing the best they can with limited data. I am just not comfortable taking these as unbia
6
Ben Millwood🔸
(I realise this was posted a month ago but) this sounds to me like it overstates how bad global health aid is? I think all GiveWell top charities are existing organisations and programs that GiveWell only advocates increasing spending to, so surely effective aid existed before GiveWell did. Moreover, I have a not-particularly-concrete impression that e.g. vaccine distribution is only not an EA cause because it was already fully funded (at least in the easy cases) by non-EAs, so that our top charities are very much "top remaining" and not "best ever". I have the impression that even if EA and OpenPhil collectively tomorrow decided to move all of our global health funding to animals, there would still be a lot of effective global development aid -- there would still be e.g. Gavi and the Bill and Melinda Gates Foundation (which sure, does ineffective things, but does effective things too) and many others. Wouldn't that still meet the need you identified in your original answer for a compromise position?
2
Vasco Grilo🔸
Thanks for the follow up! Just to clarify, I only care about the marginal cost-effectiveness. However, I feel like some intrinsically care about spending/neglectedness independently of how it relates to marginal cost-effectiveness. Note this also applies to animal welfare.

Thanks for explaining your views! Your moral weight is 1 % (= 10^-2) of mine[1], and I multiplied Saulius' mainline estimate of 41 chicken-years per $ by 0.2[2]. So, ignoring other disagreements, your marginal cost-effectiveness would have to be 1.32 % (= 0.2/(1.51*10^3*0.01)) of the non-marginal cost-effectiveness linked to Saulius' mainline estimate for corporate campaigns for chicken welfare to be as cost-effective as GiveWell's top charities. Does this sound right?

Open Phil did not share how they got to their adjustment factor of 1/5, and I do agree it would be great to have more rigorous estimates of the cost-effectiveness of animal welfare interventions, so I would say your intuition here is reasonable, although I guess you are downgrading Saulius' estimate too much. On the other hand, I find it difficult to understand how one can get to such a low moral weight. How many times as large would your moral weight become conditioning on (risk-neutral) expected total hedonistic utilitarianism?

Thanks for clarifying. Given i) 1 unit of welfare with certainty, and ii) 10 x units of welfare with 10 % chance (i.e. x units of welfare in expectation), what is the x which would make you value i) as much as ii) (for me, the answer would be 1)? Why not a higher/lower x? Are your answers to these questions compatible with your intuition that corporate campaigns for chicken welfare are 0.5 to 1.5 times as cost-effective as GiveWell's top charities? If it is hard to answer these questions, is there a risk that your risk aversion is not supported by seemingly self-evident assumptions[3], and is instead a way of formalising/rationalising your pre-formed intuitions about cause prioritisation?
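(A minimal sketch reproducing the break-even arithmetic in the comment above; all inputs are the comment's own numbers.)

```python
# Sketch of the break-even calculation. Inputs are taken from the comment:
# - 1.51e3: estimated cost-effectiveness of chicken welfare campaigns relative
#   to GiveWell's top charities (already includes the 0.2 adjustment below)
# - 0.01: Karthik's chicken moral weight relative to Vasco's
# - 0.2: the downward adjustment applied to Saulius' mainline estimate
vasco_ratio = 1.51e3
weight_ratio = 0.01
saulius_adjustment = 0.2

break_even = saulius_adjustment / (vasco_ratio * weight_ratio)
print(f"{break_even:.2%}")  # 1.32%: required marginal/non-marginal ratio for parity
```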
8
Karthik Tadepalli
I want to be clear that I see risk aversion as axiomatic. In my view, there is no "correct" level of risk aversion. Various attitudes to risk will involve biting various bullets (St Petersburg paradox on the one side, concluding that lives have diminishing value on the other side), but I view risk preferences as premises rather than conclusions that need to be justified. I don't actually think moral weights are premises. However, I think in practice our best guesses on moral weights are so uninformative that they don't admit any better strategy than hedging, given my risk attitudes. (That's the view expressed in the quote in my original comment.) This is not a bedrock belief. My views have shifted over time (in 2018 I would have scoffed at the idea of THL and AMF being even in the same welfare range), and will probably continue to shift. Yes, I am formalizing my intuitions about cause prioritization. In particular, I am formalizing my main cruxes with animal welfare - risk aversion and moral weights. (These aren't even cruxes with "we should fund AW", they are cruxes only with "AW dominates GHD". I do think we should reallocate funding from GHD to AW on the margin.) Is my risk aversion just a guise for my preference that GHD should get lots of money? I comfortably admit that my choice to personally work on GHD is a function of my background and skillset. I was a person from a developing country, and a development economist, before I was an EA. But risk aversion is a universal preference descriptively – it shouldn't be a high bar to believe that I'm actually just a risk averse person. At the end of the day, I hold the normie belief that good things are good. Children not dying of malaria is good. Chickens not living in cages is good. Philosophical gotchas and fragile calculations can supplement that belief but not replace it.
2
Vasco Grilo🔸
Thanks for clarifying. Are you saying that you are more likely than not to update towards animal welfare, or that you expect to update towards animal welfare? The former is fine. If the latter, it makes sense for you to update all the way now (one should not expect future beliefs to differ from past beliefs).

Nice to know. One could work in a certain area, but support moving marginal donations from that area to animal welfare[1], as you just illustrated. Thanks for being transparent about this! I think it would be good for more people like you, who do not think spending on animal welfare should increase a lot, to clarify what they believe is more cost-effective at the margin (as this is what matters in practice).

Right, but risk aversion with respect to resources makes sense because welfare increases sublinearly with resources. I assume people are less risk averse with respect to welfare. Even if people are significantly risk averse with respect to welfare, I do not think we should elevate this to being normative. People also discount the welfare of their future selves and foreigners. People in, and governments of, high-income countries could argue they are already doing something pretty close to optimal with respect to supporting people in extreme poverty given their descriptive preferences. This may be right, but I would say such preferences are misguided, and that they should be much more impartial with respect to nationality. I think the vast majority of people arguing that animal welfare should receive way more funding would agree with the above. I certainly do.

I just do not think the calculations are fragile to the extent that the current portfolio can be considered anywhere close to optimal. I Fermi estimated buying organic eggs is 2.11 times as cost-effective as donating to GiveWell's top charities[2], and I think that is far from the most cost-effective intervention in the space. which is what you suggest based on your guess that corporate campaigns for

Hi Sammy, I’ve also found the meat eater problem compelling/distressing. One idea that gives me peace is an “offsetting” argument - that the most effective animal welfare charities use resources remarkably well. I’ve seen estimates online that the average American eats 7,000 animals over the course of their life (98.5% of which are fish and chicken). My intuition is that the average recipient of a GiveWell-recommended program eats slightly fewer animals, though I could be wrong.
 

The Humane League claims to be able to spare a hen from a life of extreme confinement for just $2.63, and the Fish Welfare Initiative claims to be able to help a fish live in a lower stocking density, higher water quality environment for just $1. Altogether, this suggests that donating ~$11,000 to cost-effective farmed animal welfare organizations mitigates a substantial amount of the harm that a meat-eater might cause (consuming 4,500 fish and 2,400 chickens). So a portfolio of global health and animal welfare donations seems likely to do good across both worldviews.
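(To make the offset arithmetic explicit, here is a minimal sketch using only the figures quoted in this answer; none of the inputs are vetted numbers.)

```python
# Rough offset arithmetic from the answer above: pricing a lifetime of
# animal consumption at the quoted charity cost-effectiveness figures.
fish_helped_cost = 1.00   # Fish Welfare Initiative, $ per fish helped
hen_spared_cost = 2.63    # The Humane League, $ per hen spared

fish_eaten = 4500         # rough lifetime consumption figures from the answer
chickens_eaten = 2400

offset_cost = fish_eaten * fish_helped_cost + chickens_eaten * hen_spared_cost
print(f"Approximate offset cost: ${offset_cost:,.0f}")  # ~$10,812, i.e. ~$11,000
```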


A few other scattered thoughts:

  • Saving human lives doesn't just contribute to the problem of animal consumption; I hope it also accelerates the solutions to factory farming.
  • The extent to which averting a death of a child under 5 (which GiveWell programs primarily target) increases total global population is unclear to me. Families that lose a child may be more inclined to have another baby than families that don’t. My guess is that lifesaving GiveWell-style charities do increase population in the short term, but not by one full person per life saved.
  • I would recommend this blog on the idea of a “moral parliament”, which I think can be an interesting thought exercise for resolving tensions like this. Rethink Priorities also has a cool tool for this idea.
     

I know this may not be fully satisfying, and this isn’t a strong argument against using your resources to go all in on animal welfare (which I think is a great thing to do), but I hope it might be helpful.

So a portfolio of global health and animal welfare donations seems likely to do good across both worldviews.

Yes, sufficient donation to animal welfare can make it net positive. But it doesn't sound so good when one draws out the impact matrix:

  • Donate only to animal charities: +100
  • Donate only to human charities: -10
    • (not -100, because animal welfare is neglected, so proactive work on it does more good than creating more passive animal-eaters does bad)
  • Donate half to both: +45

    (edited to add: this holds if one thinks the meat-eater problem's conclusions are more likely true, such that these numbers could represent an average of one's probability distribution. It looks like there's disagreement in values about whether it's valuable in itself for an action to be {at least positive} per se; I write about this extensively below)

 

That has the same structure as this:

  • Donate only to EA charities: +100
  • Donate only to bad-thing-x: -10
    • Not naming a particular bad-thing, because it's unnecessary, but you can imagine.
  • Donate half to both: +45

If these feel different, a relevant factor may be how uplifting humans isn't the kind of thing that narratively should be bad. It's a central example of something many ... (read more)

This is unresponsive to (what I perceive as) the best version of Sam's argument, which is that a portfolio approach does more good given uncertainty about the moral weight of animals. Your impact matrix places all its weight on the view that animals have a high enough moral value that donating to humans is net negative.

If you have a lot of uncertainty and you are risk averse, then a portfolio approach is the way to go. If you believe that there is a near 100% chance that helping poor people is bad for the world, then sure, don't try the portfolio approach. But that's a weirdly high amount of certainty, and I think you should question the process that led you there.

4
Robi Rahman
No, this is totally wrong. Whatever your distribution of credences of different possible moral weights of animals, either the global health charity or the animal welfare charity will do more good than the other, and splitting your donations will do less good than donating all to the single better charity.
4
Karthik Tadepalli
This is why I said risk aversion matters - see this for a detailed explanation. Or see the back and forth with quila that inspired me to post it
3
Robi Rahman
Risk aversion doesn't change the best outcome from donating to a single charity to splitting your donation, once you account for the fact that many other people are already donating to both charities. Given that both orgs already have many other donors, the best action for you to take is to give all of your donations to just one of the options (unless you are a very large donor).
6
Karthik Tadepalli
Yes, my response is from the perspective of the EA movement rather than any individual
1
quila
If by weight you meant probability, then placing 100% of that in anything is not implied by a discrete matrix, which must use expected values (i.e. the average of {probability × impact conditional on probability}). One could mentally replace each number with a range for which the original number is the average. (It is the case that my comment premises a certain weighting, and humans should not update on implied premises, except in the case of beliefs about what may be good to investigate, to avoid outside-view cascades.)

I think beliefs about risk aversion are probably where the crux between us is. Uncertainty alone does not imply one should act in proportion to their probabilities.[1] I don't know what is meant by 'risk averse' in this context. More precisely, I claim risk aversion must either (i) follow instrumentally from one's values, or (ii) not be the most good option under one's own values.[2]

* Example of (i), where acting in a way that looks risk-averse is instrumental to fulfilling one's actual values: the Kelly criterion. In a simple positive-EV bet, like at 1:2 odds on a fair coinflip, if one continually bets all of their resources, the probability they eventually lose everything approaches 1, as all their gains are concentrated into an unlikely series of events, resulting in many possible worlds where they have nothing and one where they have a huge amount of resources. The average resources had across all possible worlds is highest in this case. Under my values, that set of outcomes is actually much worse than available alternatives (due to the diminishing value of additional resources in a single possible world). To avoid that, we can apply something called the Kelly criterion, or in general bet with sums that are substantially smaller than the full amount of currently-had resources. This lets us choose the distribution of resources over possible worlds that our values want to result from resource-positive-EV bets; we can accept a low
4
Karthik Tadepalli
I agree that uncertainty alone doesn't warrant separate treatment, and risk aversion is key.

(Before I get into the formal stuff, risk aversion to me just means placing a premium on hedging. I say this in advance because conversations about risk aversion vs risk neutrality tend to devolve into out-there comparisons like the St Petersburg paradox, and that's never struck me as a particularly resonant way to think about it. I am risk averse for the same reason that most people are: it just feels important to hedge your bets.)

By risk aversion I mean a utility function that satisfies u(E[X]) > E[u(X)]. Notably, that means that you can't just take the expected value of lives saved across worlds when evaluating a decision – the distribution of how those lives are saved across worlds matters. I describe that more here. For example, say my utility function over lives saved x is u(x) = √x. You offer me a choice between a charity that has a 10% chance to save 100 lives, and a charity that saves 5 lives with certainty. The utility of the former option to me is 0.1⋅√100 = 1, while the utility of the latter option is 1⋅√5 ≈ 2.24. Thus, I choose the latter, even though it has lower expected lives saved (E[x] = 0.1⋅100 = 10 for the former, E[x] = 5 for the latter). What's going on is that I am valuing certain impact over higher expected lives saved.

Apply this to the meat eater problem, where we have the choices:

1. spend $10 on animal charities
2. spend $10 on development charities
3. spend $5 on each of them

If you're risk neutral, 1) or 2) are the way to go – pick animals if your best bet is that animals are worth more (accounting for efficacy, room for funding, etc.), and pick development if your best bet is that humans are worth more. But both options leave open the possibility that you are terribly wrong and you've wasted $10 or caused harm. Option 3) guarantees that you've created some positive value, regardless of whether animals or humans are worth more. If you're risk
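(A minimal sketch of the expected-utility comparison above, using the comment's own numbers.)

```python
import math

# Risk-averse donor with u(x) = sqrt(x) over lives saved, per the comment above.
def u(lives):
    return math.sqrt(lives)

eu_risky = 0.1 * u(100)  # 10% chance of saving 100 lives -> EU = 1.0
eu_safe = 1.0 * u(5)     # 5 lives with certainty         -> EU ~= 2.24

ev_risky = 0.1 * 100     # expected lives saved = 10
ev_safe = 5              # expected lives saved = 5

# The safe option wins on expected utility despite losing on expected lives.
print(eu_risky, round(eu_safe, 2))  # 1.0 2.24
print(ev_risky, ev_safe)            # 10.0 5
```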
1
quila
It sounds like we agree about what risk aversion is! The term I use for what your example of valuing the square root of lives saved describes is a 'concave utility function'. I have one of these, sort of; it goes up quickly for the first x lives (I'm not sure how large x is exactly), then becomes more linear. But it's unexpected to me for other EAs to value {amount of good lives saved by one's own effect} rather than {amount of good lives per se}. I tried to indicate in my comment that I think this might be the crux, given the size of the world. (In your example of valuing the square root of lives saved (or lives per se): if there are 1,000 good lives already, then preventing 16 deaths has a utility of 4 under the former, and √1000 − √984 ≈ 0.25 under the latter; and preventing 64 deaths is twice as valuable under the former, but ~4x as valuable under the latter.)
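(A minimal sketch checking the arithmetic in the parenthetical above.)

```python
import math

sqrt = math.sqrt

# Utility over lives saved by my own action:
own_16 = sqrt(16)  # 4.0
own_64 = sqrt(64)  # 8.0

# Utility over total good lives in the world (984 without intervention):
world_16 = sqrt(1000) - sqrt(984)      # ~0.25
world_64 = sqrt(984 + 64) - sqrt(984)  # ~1.00

print(own_64 / own_16)      # 2.0: preventing 64 deaths is 2x as valuable
print(world_64 / world_16)  # ~3.95: but ~4x as valuable under the "per se" view
```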
2
Karthik Tadepalli
Your parenthetical clarifies that you just find it weird because you could add a constant inside the concave function and change the relative value of outcomes. I just don't see any reason to do that? Why does the size of the world net of your decision determine the optimal decision?
1
quila
The parenthetical isn't why it's unexpected, but clarifies how it's actually different. As an attempt at building intuition for why it matters, consider an agent who applied the 'square root of lives saved by me' function anew to each action instead of keeping track of how many lives they've saved over their existence. Then this agent would gain more utility from taking four separate actions, each of which certainly saves 1 life (for 1 utility each), than from one lone action that certainly saves 15 lives (for √15 ≈ 3.87 utility). Then generalize this example to the case where they do keep track, and progress just 'resets' for new clones of them. Or the real-world case where there are multiple agents with similar values. I describe this starting from 6 paragraphs up in my edited long comment. I'm not sure if you read it pre- or post-edit.
2
Karthik Tadepalli
I suppose that is a coherent worldview but I don't share any of the intuitions that lead you to it.
1
quila
Could you describe your intuitions? 'valuing {amount of good lives saved by one's own effect} rather than {amount of good lives per se}' is really unintuitive to me.
2
Karthik Tadepalli
To me, risk aversion is just a way of hedging your bets about the upsides and downsides of your decision. It doesn't make sense to me to apply risk aversion to objects that feature no risk (background facts about the world, like its size). It has nothing to do with whether we value the size of the world. It's just that those background facts are certain, and von Neumann-Morgenstern utility functions like we are using are really designed to deal with uncertainty. Another way to put it is that concave utility functions just mean something very different when applied to certain situations vs uncertain situations.

* In the presence of certainty, saying you have a concave utility function means you genuinely place lower value on additional lives given the presence of many lives. That seems to be the position you are describing. I don't resonate with that, because I think additional lives have constant value to me (if everything is certain).
* But in the presence of uncertainty, saying that you have a concave utility function just means that you don't like high-variance outcomes. That is the position I am taking. I don't want to be screwed by tail outcomes. I want to hedge against them. If there were zero uncertainty, I would behave as if my utility function were linear, but there is uncertainty, so I don't.
1
quila
This is so interesting to me. I introduced this topic and wrote more about it in this shortform. I wanted to give the topic its own thread and see if others might have responses. I do this too, but even despite the world's size making my choices mostly only affect value on the linear parts of my value function! Because tail outcomes are often large. (Maybe I mean something like: Kelly betting/risk aversion is often useful for fulfilling instrumental subgoals too.) (Edit: and I think 'correctly accounting for tail outcomes' is just the correct way to deal with them.) Yes, though it's not because additional lives are less intrinsically valuable, but because I have other values which are non-quantitative (narrative) and almost maxed out way before there are very large numbers of lives. A different way to say it would be that I value multiple things, but many of them don't scale indefinitely with lives, so the overall function goes up faster at the start of the lives graph.

It's a tough question and something I've tried to wrap my head around as well. All of the threads in the comments here are quite helpful!

This point you've made, Sam, is also something I have thought about:

Saving human lives doesn't just contribute to the problem of animal consumption; I hope it also accelerates the solutions to factory farming.

Awareness of animal welfare issues tends to increase as people get richer and have more space to think about something other than their immediate needs. Of course, factory farming is worse in richer societies, but... (read more)

Brian Tomasik has argued that if (a) wild animals have negative welfare on net, and (b) humans reduce wild animal populations, then that may swamp even the horrific scale of factory farming.

I personally think the meat eater problem is very serious, and the best way around it is to just donate to effective animal welfare charities! Those donations would be orders of magnitude more cost-effective than the best human-centered alternatives.

Nice question, Sammy! I worry the meat-eater problem is mostly a distraction. If one values 1 unit of welfare in animals as much as 1 unit of welfare in humans, and does not think Rethink Priorities' welfare ranges are wildly off, the best animal welfare interventions will be much more cost-effective than the best interventions to save human lives. I estimated corporate campaigns for chicken welfare, such as the ones supported by The Humane League (THL), are 1.51 k times as cost-effective as GiveWell's top charities.

It's not only the welfare ranges, but also the assumption that we are hedonistic utilitarians, which is pretty important, and I would be interested to know how many EAs are in that boat. In general though your point stands!

3
Vasco Grilo🔸
Thanks, Nick. Rejecting this implies rejecting hedonistic utilitarianism if the welfare above is interpreted as hedonic welfare. I have now changed "welfare" to "hedonic welfare" above. Holding everything else constant in my calculations, one would have to be less than 0.0662 % (= 1/(1.51*10^3)) hedonistic utilitarian to prioritise GiveWell's top charities over corporate campaigns for chicken welfare. Weighting hedonistic utilitarianism so lightly, I think:

* Current factory farming could easily be justified. In essence, because one would be able to value eating factory-farmed animals up to 1.51 k times as much as today.
* There would also be no obvious reason to support global health and development interventions. Such support makes sense assuming an additional year of healthy life or a given relative increase in income are worth roughly the same regardless of the person, but this is only obviously the case if one values hedonic welfare the same regardless of who experiences it.

I suspect people who prioritise global health and development over animal welfare implicitly endorse hedonistic utilitarianism for comparisons between interventions targeting people, but reject it for comparisons across species.
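(A minimal sketch of the threshold calculation in the comment above.)

```python
# The threshold from the comment: the maximum weight on hedonistic
# utilitarianism consistent with prioritising GiveWell's top charities,
# given the estimated 1.51k cost-effectiveness ratio.
ratio = 1.51e3
threshold = 1 / ratio
print(f"{threshold:.4%}")  # 0.0662%
```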

Hi Vasco, I first learned about the meat eater problem from your post. Thank you for your insight.

My thoughts:

(1) If building human capacity has positive long-term ripple effects (e.g. on economic growth), these could be expected to swamp any temporary negative externalities.

(2) It's also not clear that increasing population increases meat-eating in equilibrium. Presumably at some point in our technological development, the harms of factory-farming will be alleviated (e.g. by the development of affordable clean meat). Adding more people to the current generation moves forward both meat eating and economic & technological development. It doesn't necessarily change the total number of meat-eaters who exist prior to our civ developing beyond factory farming.

But also: people (including those saved via GHD interventions) plausibly still ought to offset the harms caused by their diets. (Investing resources to speed up the development of clean meat, for example, seems very good.)

There was some GiveWell-commissioned research that did find that saving lives likely leads to future population increases. I imagine there's a fair amount of uncertainty, but it seemed to be the best information available at the time I was looking into this a few years ago. I could dig it up if it's of interest and difficult to find.

2
NickLaing
Yes, there may be GiveWell research saying that, but it's still very unclear, and the mainstream public health view (for what it's worth) has generally been that better healthcare and saving lives may well lead to lower fertility rates and lower populations in the medium/long term.
1
ClimateDoc
Are there any good research articles that do a decent job of isolating the role of reducing mortality rates? Review articles would be particularly useful. Here's a link to the GiveWell-commissioned research that I have: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3635855 .

I think if you put some weight on viewpoint pluralism you should mostly not conclude that other peoples' lives aren't valuable because those people will make the wrong moral choices.

I'm getting at the fact that most people would not go out of their way to save baby Hitler. I value Hitler's life, but I also wouldn't save his life.

I don't really understand this stance, could you explain what you mean here?

Like Sammy points out with the Hitler example, it seems kind of obviously counterproductive/negative to "save a human who was then going to go torture and kill a lot of other humans".

Would you disagree with that? Or is the pluralism you are suggesting here specifically between viewpoints that suggest animal suffering matters and viewpoints that don't think it matters?

As I understand worldview diversification stances, the idea is something like: if you are uncertain about whether an... (read more)

1
JoshYou
I'm not proposing any sort of hard rule against concluding that some people's lives are net negative/harmful. As a heuristic, you shouldn't think it's bad to save the lives of ordinary people who seem to be mostly reasonable, but who contribute to harmful animal agriculture. The pluralism here is between human viewpoints in general. Very naively, if you think every human has equal insight into morality you should maximize the lifespan and resources that go to any and all humans without considering at all what they will do. That's too much pluralism, of course, but I think refraining from cheaply saving human lives because they'll eat meat is too far in the other direction.

This is where we need a broad perspective. 

Long-term, we solve the problem of meat-eating with artificial protein, which also solves many other problems. 

Medium-term, we work to end factory-farming, which needlessly increases the suffering of animals. (I don't want to get into it because there are many experts here and I'm not one of them, but it may be arguable that an animal which is bred for food but gets to live a decent life in a field is better off than if it hadn't been born because people didn't need to eat it. However, in the case of factory-farming, such an argument seems totally untenable). 

Short-term, we accept that we live in an imperfect world and that most people value saving human lives, even at the cost of animal lives. So we work to save human lives and improve health and improve quality of life, and instead of losing sleep over the calculation of the net impact on animals, we support the amazing organisations who are working to end factory-farming (like Farmkind) and to develop alternative protein (like GFI). 

It's valuable to discuss questions like this, and I absolutely do not claim to have a definitive answer - all I say is that when I think about this, that's how I rationalise it. 

This problem[1] was one I independently noticed before finding EA.

I think in some ways the actions implied under it are similar: focusing on long-term world-change so that the world is eventually organized by morality.

If 'how do you deal with it' means 'how do you make your actions compatible with this view', my answer is that I'm trying to help with {figuring out how we can make powerful AIs also be benevolent}.[2]

If 'how do you deal with it' means 'how do you convince yourself it is false, or that things some EA orgs are contributing to are still okay given it', I don't think this is a useful attitude to have towards troubling truths.

I agree with this:

It feels like EAs who value animal suffering at all must provide very clear reasons why it's okay to support saving the lives of meat-eaters and developing the third world, given the "meat eater problem."

  1. ^

That many current human lives cause extreme suffering by funding factory farming, enough to outweigh the positive value of their own lives

  2. ^

A nice fiction about coming into existence in a world with normalized moral catastrophe, and acting heroically: The Sword of Good by Yudkowsky

If 'how do you deal with it' means 'how do you convince yourself it is false, or that things some EA orgs are contributing to are still okay given it', I don't think this is a useful attitude to have towards troubling truths.

 

Well said and important :)

Good question. Does this also work in the opposite direction? ~ Worry less about catastrophic or existential risks because there'd be fewer animals in factory farms?

I have an intuition that any ASI that wipes out humans does the same to non-human animals though.

For a standard utilitarian, a benevolent superintelligence would create enough happiness (and not allow factory farming) to outweigh any current suffering due to the large length and size of the future.

For a suffering-focused altruist (such as myself), it's not that simple, although in any case it mostly revolves around (i) the possibility of locally-originating long-term s-risks (rather than factory farming, if it ends near-term), and (ii) the ability of aligned ASIs to reduce s-events in unreachable parts of the world through acausal trade; see my shortf... (read more)

We need to keep in mind that we are effective altruists (or effective anything) only to the extent that we are also respected members of our communities, not only of the EA community. We live in a world in which meat-eaters are much of the population. In this kind of world, the belief that the life of a meat-eater has a net negative value would turn us into a cult of fanatic monsters. Or at least we would be perceived that way. We would have no chance whatsoever of contributing to the end of cruelty. We need to stay human, connected to all kinds of humans. The quantitative approach to morality is powerful, one of the best ideas I can think of. But it's not the only tool we should use.

I deal with this problem by donating (almost) exclusively to animal welfare charities. Besides, I think that the suffering of animals reared for human consumption is, in general, much more intense than that of humans.

More development may at least indirectly contribute to hastening ubiquitous lab-grown meat becoming economically cheaper than non-lab-grown meat. 

A lot of uncertainty here because I have no idea how much (if at all) more development may cause this, but, if it does, it leads to fewer total moral atrocities.

I would guess the poorest, most rural parts of Africa would not be able to contribute to lab-grown meat development before it is brought efficiently to market. Furthermore, these parts of the world would likely be the last to adopt lab-grown meat.

RCT-informed interventions focused on the poorest will not increase demand for factory farmed meat - only broad based economic growth will do this. So one solution is to focus on micro interventions targeted at the extreme poor.

Another solution is to support the alternative proteins sector in LMICs, which could enable some degree of “leapfrogging” factory farmed meat and reduce carbon emissions.

I read some of the answers and they were really interesting. I like to simplify, so I will offer my modest, simple answer and my advice:

I have an amateur answer and I am not an expert, although this topic interests me a lot, as I am interested in sustainability, agronomy and EA (here, global health and animal farming):

Firstly, pro donating to animal welfare:

  • If you believe animals have a moral weight similar (1-10x) to humans', it is probable that saving a human life might be negative compared to saving 100-1000 animals.
  • We know that our way of living is not sustainable. We also know that one reason why our lifestyle is not sustainable is animal farming.

Secondly, pro donating to save a life:

  • Saving a life is good (it's one of our first moral reflexes if it happens in front of us).
  • You never know what people will do with their life. They probably eat local, non-industrial products and don't cause much pollution if they are poor.

My opinion: saving a human life will probably do a little bad, because it causes the death of some animals (not necessarily factory-farmed) and causes pollution - we are not sustainable (dependent on polluting N, P, K fertilizers). Saving animals will probably do a little good, because we would be more sustainable. So I would focus on saving non-human animals, because it will probably help all animals, and the opposite is probably false.

Now my advice:

  • If you don't know the answer, find more information. When you have your answer, keep being open-minded. I think this topic could be researched for a whole lifetime without covering every aspect of it. Keep in mind that since morals are subjective, there is no right answer. But since this question is important, you should research the subject. I believe collective intelligence is the best "moral objectivity" we could achieve if everybody is well informed.
  • Life is never black or white and is full of other dilemmas. Saving a human life or other animals are not your only two options, which brings me to my last opinion: before doing one or the other, we should focus on discussing this subject with as many people as possible. Or at least make sure everybody knows animals suffer (especially factory-farmed ones) and everybody knows we can save lives with $5,000.

Two things to consider:

  • Animal suffering is currently inverted-U-shaped in income.
  • The rise of lab-grown meat could dramatically strengthen the negative relationship between animal suffering and income at high income levels.

If you find this question truly compelling—if it’s more than just an intellectual challenge for you—I would suggest reconsidering vegetarianism. When expanding your empathy starts to hinder your basic empathy for humans, or when being a vegetarian makes you think of your aunt as a "meat-eater" rather than as a warm and kind person...

I acknowledge that I haven't provided a strong justification for my answer, and I don't know your full set of beliefs and experiences, so this is definitely not a judgment. However, I do strongly believe that the "aunt" argument is a valid one.

I don't think that expanding compassion to animals leads to reduced compassion for aunts and other humans. It could make one's choices benefit humaunts less, but that's arguably desirable.

I do agree that when we find ourselves dispassionate or hateful towards others, that's a sign that we may have stepped wrong at some point in our journey to do the most good.

This makes me think that countries which don't yet have an entrenched factory-farming lobby/industry would benefit from advocacy groups similar to the Shrimp Welfare Project (which works in the relevant countries with stakeholders to improve the wellbeing of farmed animals).

I began wondering if any org was approaching this in a way similar to SWP. There seem to be two EA groups working on this:

21 Comments

I just want to publicly state that the whole 'meat-eater problem' framing makes me incredibly uncomfortable

  • First: why not call it the 'meat-eating' problem rather than the 'meat-eater' problem? Human beliefs and behaviours are changeable and malleable; future moral attitudes are not set in stone - human history itself should be proof enough of that. Seeing other human beings as 'problems to be solved' is inherently dehumanising.
  • Second: the call on whether net human wellbeing is negated by net animal wellbeing is highly dependent on both moral weights and one's overall moral view. It isn't a 'solved' problem in moral philosophy. There's also a lot of empirical uncertainty people below have pointed out, e.g. that saving a life != increasing the population, that counterfactual wild animal welfare without humans might be even more negative, etc.
  • Third - and most importantly - this pattern matches onto very very dangerous beliefs:
    • Rich people in the Western World saying that poor people in Developing countries do not deserve to live/exist? bad bad bad bad bad
    • Belief that humanity, or a significant amount of it, ought not to exist (or the world would be better off were they to stop existing) danger danger
    • Like, already in the thread we've got examples of people considering whether murdering someone who eats meat isn't immoral, whether they ought to Thanos snap all humans out of existence, analogising average unborn children in the developing world to baby Hitler. my alarm bells are ringing
    • The dangers of the above grow exponentially if proponents are incredibly morally certain about their beliefs and unlikely to change regardless of evidence shown, believe that they may only have one chance to change things, believe that otherwise unjustifiable actions are justified in their case due to moral urgency.

For clarification, I think Factory Farming is a moral catastrophe and I think ending it should be a leading EA cause. I just think that the latent misanthropy in the meat-eater problem framing/worldview is also morally catastrophic.

In general, reflecting on this framing makes it ever more clear to me that I'm just not a utilitarian or a totalist.

A lot of people in animal advocacy circles (inside and outside EA) choose not to have children and report that this is because humans tend to be meat eaters. There are far larger numbers of environmentally minded (mostly non-EA) people who claim to choose not to have children because their children would contribute to global warming or general environmental harms. Most such environmentally minded people are not particularly animal-welfare focused. Further, most such people are not committed utilitarians.

I am not defending this view, nor even claiming that these reasons are the true drivers of personal decisions. However, the frequency with which I hear similar suggestions that having children is a moral wrong for the planet suggests to me that this sort of idea is not directed toward poorer people in particular, nor is it the result of considering animals as moral patients, nor is it idiosyncratic to EA, nor does it stem from any strict interpretation of utilitarianism.

For better or worse, a certain type of misanthropy runs deep in modern culture.

For the vast majority of people (including myself), there is a big difference between choosing not to have a child and choosing not to save a child who already exists. In the EA context, the meat-eating problem seems to come up in the context of the perceived downsides of saving existing lives.

There are many reasons for choosing not to have kids that are in no way similar to the concerns in the poor meat-eater problem.

However, I disagree that choosing not to have children specifically because you think humans are a net moral bad is so vastly different from choosing not to actively expend resources to save an existing human, in terms of the logic underlying the motivation.

The two actions have different consequences but the two beliefs imply roughly the same sorts of things that JWS finds uncomfortable when followed to their logical conclusion.

My only point was that these beliefs both stem from a similar kind of misanthropy that is not unique to EA, utilitarianism, the meat-eater problem, or poor people.

Some people think humans are on net bad and want to see fewer of them, future or existing. People who think that having a child is in any way wrong because humans are on average a net moral bad are, in my opinion, pretty ideologically aligned with people who think it's wrong to donate to human-focused charities for the same reason.

already in the thread we've got examples of people considering whether murdering someone who eats meat isn't immoral

If the question were about humans who cause an equivalent amount of harm to other humans, I would not expect to see objections to the question merely being asked or considered. When humans are at risk, this question is asked even when the price is killing (a lower number of) humans who are not causing the harm. It is true that present human culture applies such a double standard to humans versus members of other species, but this is not morally relevant and should not influence which moral questions one allows oneself to consider (though it still does, empirically; this is relevant to a principle introduced below).

I think that this question is both intuitive to ask and would be important in a neartermist frame given the animal lives at stake. It has also been discussed in at least one published philosophy paper.[1] That paper concludes (on this question) that in the current world it is a much less effective way of reducing animal torture than other ways, and so shouldn't be done, in order to avoid ending up arrested and unable to help animals in far more effective ways, but that it would likely reduce more suffering than it causes.[2] That is my belief too, by which I mean that this seems to be the way the world is, not that I like that the world is this way. (This is a core rationalist principle, which I believe is also violated by other points in your 'pattern matches to dangerous beliefs' section.)

The Litany of Tarski is a template to remind oneself that beliefs should stem from reality, from what actually is, as opposed to what we want, or what would be convenient. For any statement X, the litany takes the form "If X, I desire to believe that X".


I think there are other instances of different standards being applied to how we treat extreme harm of humans versus extreme harm of members of other species throughout your comment.[3]

For example, I think that if one believes factory farming is a moral catastrophe (as you do), and if the discomfort originated from one's morality alone, then the use of 'meat' over 'animals' or 'animal bodies' would cause more discomfort than the use of 'meat eater' over 'meat eating'.

That's not to say the term 'eating' isn't an improvement over 'eater', or, more generally, that language shouldn't drop words that refer to a being by one of their malleable behaviors or attitudes. I might favor such a general linguistic change.

However, if this were a discussion of great ongoing harm being caused to humans, such as through abuse or murder, I would not expect to find comments objecting to referring to the humans causing that harm as 'abusers' or 'murderers' on the basis that they might stop in the future.[4] (I'm solely commenting on the perceived double standard here.)


There are other examples (in section three), but I can (in general) only find words matching my thoughts very slowly (this took me almost two hours to write and revise), so I'm choosing to stop here.

  1. ^

    https://journalofcontroversialideas.org/article/2/2/206

    In the Journal of Controversial Ideas, co-founded by Peter Singer. (Wikipedia)

  2. ^

    I think this also shows that this question is importantly two questions:

    1. Is it right to kill someone who would otherwise continually cause animals to be harmed and killed, in isolation, i.e. in a hypothetical thought-experiment world where there's no better way to stop this, and doing so will not prevent you from preventing greater amounts of harm?
      • In this case, 'yes' feels like the obvious answer to me.
      • I also think it would feel like an obvious answer for most people if present biases towards members of other species were removed, for most would say 'yes' to the version of this question about a human creating and killing humans.
    2. Is it right to kill someone who would otherwise continually cause animals to be harmed and killed, in the current world, where this will lead to you being imprisoned?
      • In this case, 'no' feels like the obvious answer to me, because you could do more good just by causing two humans to go vegan for life, and even more good by following EA principles.
  3. ^

    (To preclude certain objections: these are double standards that would not be justified even if members of a given species experienced somewhat less suffering from 0-2 years of psychological desperation and physical torture than humans would in that same situation).

  4. ^

    (Relatedly, after reading your comment, one thing I tried was to read it again with reference to {people eating animals} mentally replaced with reference to {people enacting moral catastrophes that are now widely opposed}, to isolate the 'currently still supported' variable, to see if anything in my perception or your comment would be unexpected if that variable were different, despite it not being a morally relevant variable. This, I think, is a good technique for avoiding/noticing bias.)


great reply

Takes a long time to read too! Nice work, it's really interesting :)

[Epistemic status: diverse moral parliament with significant deontological and virtue-ethics representation, along with concern about fair distribution of good and bad things] 

  • Rich people in the Western World saying that poor people in Developing countries do not deserve to live/exist? bad bad bad bad bad

Relatedly, I find it concerning that the initial jump is from "increased human population may be net negative due to effects on farmed animal welfare" to "maybe we should refrain from preventing death among infants and toddlers in Africa." I'm not advocating pro-death policies like withholding lifesaving medical care anywhere, but in a sense I would find it less objectionable if consistently applied rather than being focused on developing countries?

That sense is coming from various places, including the interest in impartiality, the concern about powerful people furthering their objectives by forcing powerless people to bear the costs, and a distaste for free-riding (not having oneself or even one's society share in the risk of death "needed" to achieve the objective). There are also the empirical weaknesses (e.g., that these kids aren't nearly as likely to be eating much factory-farmed meat in the future, and that failing to prevent their deaths may have a minimal effect on total global population) that may not be immediately discernible, but don't need extensive reflection to see.
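
The population point in particular is quantitative, and a toy model makes it concrete. Here is a minimal sketch under assumed parameters; the fertility-offset values are placeholders chosen for illustration, not empirical estimates:

```python
# Toy model: long-run population effect of saving lives when fertility
# partially adjusts (families "replace" fewer expected child deaths).
# The offset values below are assumed placeholders, not empirical estimates.

def long_run_population_change(lives_saved: float, fertility_offset: float) -> float:
    """Net long-run population change once fertility responds.

    fertility_offset = 1.0 means each life saved is fully offset by one
    fewer birth; 0.0 means no fertility response at all.
    """
    return lives_saved * (1.0 - fertility_offset)

for offset in (0.0, 0.5, 0.9):
    added = long_run_population_change(1_000, offset)
    print(f"offset {offset}: saving 1,000 lives adds ~{added:.0f} people long-run")
```

If the offset is near 1, saving lives barely changes total population, and therefore barely changes future meat consumption, and the meat-eater argument loses most of its force. The whole question turns on an empirical parameter that the framing tends to skip over.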

Moreover, there are also other policies and practices I would expect to see well before withholding life-saving medical care to young children were anywhere near the table:

  • In expectancy, the kids of EAs will likely consume more factory-farmed meat than young children in developing countries, so choosing to not have kids seems a fairly obvious step.
  • Although I'm not usually one for calling non-billionaires out for not donating a large fraction of their income, "this problem is grave enough that we should consider saving the lives of innocent young children as a net negative" seemingly implies "this problem is grave enough that I am morally obliged to donate at least most of my material resources to mitigating it and should be criticized for failing to do so." If it's not worth a 51% donation rate from me, how is it possibly worth what we would be expecting the would-be AMF beneficiaries to sacrifice for the cause?

I feel that this is an interesting question.

In general, one uncomfortable realization that I feel I am approaching is kind of misanthropic. 

More than 100 billion animals are killed for meat and other animal products every year.

Considering there are 8 billion humans, if there were a hypothetical button that, when pressed, would destroy the whole world (or even just humanity), would that be preferable to the suffering of factory-farmed animals?

I know animal welfare is commonly given less weight than human welfare.

However, considering the sheer amount of animal suffering in factory farms, does it not outweigh the total amount of human flourishing?
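
Whether it outweighs it depends almost entirely on the moral weights and average welfare levels you assume, none of which are settled. Here is a minimal sensitivity sketch: the population and slaughter figures are the rough ones from this comment, while the welfare values and moral weights are placeholders invented purely for illustration:

```python
# Sensitivity sketch: does farmed-animal suffering outweigh human flourishing?
# Crudely treats the comment's figures as annual totals. Welfare values and
# moral weights are invented placeholders, not estimates.

HUMANS = 8e9           # roughly 8 billion humans
FARMED_ANIMALS = 1e11  # the comment's figure: >100 billion killed per year

def net_welfare(moral_weight: float,
                human_welfare: float = 1.0,
                animal_welfare: float = -0.5) -> float:
    """Total welfare on an arbitrary scale (positive = net-positive life).

    moral_weight: how much one animal's welfare counts relative to a human's.
    """
    return HUMANS * human_welfare + FARMED_ANIMALS * moral_weight * animal_welfare

for w in (0.001, 0.01, 0.1, 1.0):
    total = net_welfare(w)
    verdict = "humans dominate" if total > 0 else "animal suffering dominates"
    print(f"moral weight {w}: net welfare {total:+.1e} ({verdict})")
```

On these invented numbers the sign flips somewhere between a weight of 0.1 and 1.0, which is exactly the range over which serious moral-weight estimates disagree. The point is only that the "button" question is parameter-sensitive, not that any of these numbers is right.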

Without meaning to sound like an eco-terrorist: considering the amount of suffering one meat-eating person can cause to animals, should murdering a meat eater be considered "ethical"?

I feel like these are important questions to answer, and your answers to them can probably determine what to focus on (whether animal-based ethics or human improvement).

This doesn't really solve the problem, but most animal suffering is likely not in factory farms but in nature, so getting rid of humans isn't necessarily net good for animals. (To be clear, I am strongly against murdering humans even if it is net good for animals.) 

Thanks for your response; I'm in the same boat as you, it seems.

The "meat eater problem" raises an intriguing ethical question, but I'm inclined to think (with low confidence) that even if the concern is valid, the proliferation of this idea could have a negative expected value. By focusing on such a divisive concept, we risk alienating potential supporters of the animal welfare movement, which could ultimately hinder efforts to reduce animal suffering. That said, this is distinct from whether the impact of the average human on factory farming would alter personal donation decisions.

That said, this is distinct from whether the impact of the average human on factory farming would alter personal donation decisions.

The bonus question that this sentence raises for me is whether the impact of the average human on factory farming should factor into other decisions, like our votes in a democracy.

If we choose not to save infants from malaria because they may turn out to consume factory-farmed animals, should we then use the same logic to choose not to prevent deaths of adults in our own country by not voting for (e.g.) stronger auto-safety legislation or stricter tobacco regulation? Yeah, the proliferation of this idea could definitely have negative expected value!

Right, you'd also have to oppose healthcare expansion, vaccines (against lethal illnesses), pandemic mitigation efforts, etc.  I guess if you really believed it, you would take the results (more early death) to have positive expected value. It's a deeply misanthropic thesis. So it's probably worth getting clearer on why it isn't ultimately credible, despite initial appearances.

It's an extremely important topic with extreme ramifications, such as the conclusion that a large portion of global health and development could be negative utility! It also entails a degree of misanthropy which affects how we think about X-risk and the utility of society today. If EA were to ignore this problem, with my previous statements being true, most of the movement would be misguided. It is therefore an extremely important problem IMO.

I don't believe the "meat eater problem" should be ignored, but rather approached with great care. It's easy to imagine the negative press and public backlash that could arise from expressing views suggesting it might be better for people to die or discouraging support for charities that save lives in the developing world.

The Effective Altruism community is very small, with estimates around 10,000 people—a tiny fraction of the nearly 8 billion people on the planet. If we want to create a world without factory farming, we need to focus on bringing more people into the fold who care about animals. Spotlighting an analysis that essentially suggests it's good when young children die and that we should discourage saving them doesn't seem like the path to growing the movement that can end the horrors of factory farming.

By treating this problem with care, we can ensure that our efforts to improve the world are effective without alienating those who might otherwise join us in the fight against animal suffering.

I agree with this. I think these conversations are important, and we should be having them in "quiet" public places where mostly EAs go, like the Forum and the 80,000 Hours podcast. I just don't think it's the most helpful topic for a public debate.

Open and public conversations can be had about difficult topics, while we still avoid posting them on Twitter...

I initially rejected this idea, but I think I've come around to this viewpoint a lot more. EA needs to have broad appeal to become a mainstream movement and we don't always need to publicly state our distasteful utilitarian conclusions!

Hiding your conclusions feels a bit sleazy and manipulative to me. 

Why are private conclusions about moral dilemmas anyone else's business? I'm not saying it's (necessarily) good to be private about that, or that there aren't contexts where it would be relevant to disclose, but I really don't understand why it would be sleazy or manipulative.

Only an ancillary point, but some poor countries have higher baseline meat consumption than others. It stands to reason that lifting Indians out of extreme poverty would cause less additional meat consumption than doing the same for Chinese people (all else equal), although I haven't actually done the math or analysis on that one.
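
To give this a rough quantitative shape, here is a back-of-envelope sketch. The baseline consumption figures are approximate (India's per-capita meat consumption is among the world's lowest, China's is far higher), and the income elasticity of meat demand is an assumed placeholder, so the output is directional only:

```python
# Back-of-envelope: extra meat consumption per person lifted out of poverty.
# Baselines are approximate per-capita figures (kg/year); the income
# elasticity is an assumed placeholder, and applying it linearly to a
# large income change is a deliberate simplification.

BASELINE_KG_PER_YEAR = {
    "India": 4.0,   # very low baseline, partly due to widespread vegetarianism
    "China": 60.0,  # much higher baseline
}

INCOME_ELASTICITY = 0.7  # assumed: +1% income -> +0.7% meat demand

def extra_meat_kg(country: str, income_growth_pct: float) -> float:
    """Extra annual meat consumption (kg/person) for a given income rise."""
    return BASELINE_KG_PER_YEAR[country] * INCOME_ELASTICITY * income_growth_pct / 100

# Doubling someone's income (+100%) under these assumptions:
for country in BASELINE_KG_PER_YEAR:
    print(f"{country}: ~{extra_meat_kg(country, 100):.1f} kg/year extra")
```

On these assumptions the same income gain produces roughly fifteen times more extra meat consumption in China than in India, matching the intuition above; real elasticities vary with income level and food culture, so this shows only the direction and rough scale of the effect.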
