Some thoughts on whether/why it makes sense to work on animal welfare, given longtermist arguments. TLDR:
- We should only deprioritize the current suffering of billions of farmed animals if we would similarly deprioritize comparable treatment of millions of humans; and
- We should double-check that our arguments aren't distorted by status quo bias, especially power imbalances in our favor.
This post consists of six arguments:
- If millions of people were being kept in battery cages, how much energy should we redirect away from longtermism to work on that?
- Power is exploited, and absolute power is exploited absolutely
- Sacrificing others makes sense
- Does longtermism mean ignoring current suffering until the heat death of the universe?
- Animals are part of longtermism
- None of this refutes longtermism
Plus some context and caveats at the bottom.
A. If millions of people were being kept in battery cages, how much energy should we redirect away from longtermism to work on that?
Despite some limitations, I find this analogy compelling. Come on, really picture it. Check out some images of battery cages and imagine millions of humans kept in the equivalent for 100% of their adult lives, and suppose that with some work we could free them: would you stick to your longtermist guns?
Three possible answers:
- a) Yes, the suffering of both the chickens and the humans is outweighed by longtermist concerns (the importance of improving our long-term future).
- b) No, the suffering of the humans is unacceptable, because it differs from the suffering of the chickens in key ways.
- c) No, neither is acceptable: longtermism notwithstanding, we should allocate significant resources to combating both.
I lean towards c) myself, but I can see a case for a): I just think if you're going to embrace a), you should picture the caged-humans analogy so you fully appreciate the tradeoff involved. I'm less sympathetic to b) because it feels like suspicious convergence - "That theoretical bad thing would definitely make me change my behavior, but this actual bad thing isn't actually so bad" (see section B below). Still, one could sketch some plausibly relevant differences between the caged chickens and the caged humans, eg:
- "Millions of people" are subbing here for "billions of hens", implying something like a 1:1,000 suffering ratio (1 caged chicken = 0.001 caged humans): this ratio is of course debatable based on sentience, self-awareness, etc. Still, 0.001 is a pretty tiny factor (raw neuron ratio would put 1 chicken closer to 0.002-0.005 humans) and again uncertainty does some of the work for us (the argument works even if it's only quite plausible chicken suffering matters). There is a school of thought that we can be 99+% confident that a billion chickens trapped on broken legs for years don't outweigh a single human bruising her shin; I find this view ridiculous.
- Maybe caging creatures that are "like us" differs in important ways from caging creatures that are "unlike us". Like, maybe allowing the caging of humans makes it more likely future humans will be caged too, making it (somehow?) of more interest to longtermists than the chickens case. (But again, see section B.)
- A lot of longtermism involves the idea that humans (or AIs), unlike hens, will play a special role in determining the future (I find this reasonable). Maybe this makes caging humans worse.
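As a rough sanity check on that neuron-count figure, here's a minimal back-of-the-envelope sketch. The neuron counts used (~220 million for a chicken, ~86 billion for a human) are commonly cited estimates, and treating moral weight as proportional to raw neuron count is purely an illustrative assumption, not something I'm endorsing.

```python
# Back-of-the-envelope: what does a raw neuron-count ratio imply?
# Neuron counts are commonly cited estimates; scaling moral weight
# linearly with neuron count is an illustrative assumption only.
CHICKEN_NEURONS = 2.2e8   # ~220 million
HUMAN_NEURONS = 8.6e10    # ~86 billion

neuron_ratio = CHICKEN_NEURONS / HUMAN_NEURONS
print(f"1 chicken ~ {neuron_ratio:.4f} humans by raw neuron count")  # ~0.0026

# The analogy's implied exchange rate: millions of humans standing in
# for billions of hens, i.e. roughly 1 chicken = 0.001 humans.
implied_ratio = 1e6 / 1e9
print(f"Implied by the analogy: 1 chicken ~ {implied_ratio:.4f} humans")
```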
B. Power is exploited, and absolute power is exploited absolutely
A general principle I find useful: when group A is exploiting group B, group A tends to come up with rationalizations, when in fact the exploitation is often just a straight-up result of a power imbalance. I sometimes picture a conversation with a time traveler from a future advanced civilization, not too familiar with ours:
TT: So what's this "gestation crate" thing? And "chick maceration", what does that even mean?
Us: Oh, well, that's all part of how our food production industry works.
TT: *stares blankly*
Or maybe not: maybe TT thinks it's fine, because her civilization has determined that factory farming is actually justifiable. After all, I'm not from the future. But to me it seems quite likely that she'd find it barbarically inhumane whereas we broadly think it's OK, or at least spend a lot more energy getting worked up about Will Smith or mask mandates or whatnot. Why do we think it's OK? Two main reasons:
- Status quo bias: this is how things have been all our lives (though not for long before them); it's normal.
- Self-interest: we like (cheap widely available) meat, so we're motivated to accept the system that gives it to us.
In particular, we're motivated to come up with reasons why this status quo is reasonable (the chickens don't suffer that much, don't value things like freedom that we value, are physically incapable of suffering, etc). If factory farming didn't exist and there was a proposal to suddenly implement it in its current form, we might find these arguments a lot less convincing; but that's not our situation (though, see octopus farming).
In general, when group A has extra power (physical, intellectual, technological, military) relative to group B, for group A to end up pushing around group B is normal. It's what will happen unless there's some sort of explicit countereffort to prevent it. And the more extreme and lasting the power imbalance, the more extreme yet normalized the exploitation becomes.
I view many historical forms of exploitation through this lens: slavery, colonialism, military conquests, patriarchy, class and caste hierarchies, etc. To me this list is encouraging! A lot of these exploitations are much reduced from their peak, thanks at least in part to some exploiters themselves rallying against them, or finding them harder and harder to defend.
So the takeaway for me is not, "exploitation is inevitable." The main takeaway is, when we observe what looks naively like exploitation, and hear (or make) ostensibly rational defenses of that apparent exploitation, we should check those arguments carefully for steps whose flimsiness may be masked by 1. status quo bias or 2. self-interest.
(Another conclusion I would not draw is "Self-interested arguments can be dismissed." Otherwise someone advocating for plant rights or rock rights would have me trapped. Who knows, maybe it will turn out we should be taking plant/rock welfare seriously: but the fact that that conclusion would be inconvenient for us is not enough to prove it correct.)
C. Sacrificing others makes sense
My main take on self-sacrifice is a common one: it's no substitute for results. People will take cold showers for a year to help fight climate change, when a cheque to a high-impact climate org probably does much more to reduce atmospheric carbon.
That said (and this is not a fully fleshed-out thought), there is something suspicious about moral theorizing without sacrifice: especially theorizing about large sacrifices of others. There is a caricature of moral philosophers, debating the vast suffering of other species and peoples, concluding it's not worth doing much about, finishing their nice meal and heading off to a comfortable bed. (See also: the timeless final scene from Dr Strangelove.) When we edge too close to this caricature (and I certainly live something near it myself) I start to miss the social workers and grassroots activists and cold-shower-takers.
Again, the fact that a conclusion is convenient is not sufficient grounds to dismiss it. But it does mean we should scrutinize it extra closely. And the conclusion that the ongoing (at least plausible) suffering of billions of other creatures, inflicted for our species' benefit, is less pressing than relatively theoretical future suffering, is convenient enough to be worth double-checking.
I've seen elements of both extremes in the EA community: "endless fun debates over nice dinners" at one end, intense guilt-driven overwork leading to burnout at the other. We'll just have to keep watching out for both extremes.
D. Does longtermism mean ignoring current suffering until the heat death of the universe?
If current suffering is outweighed by the importance of quadrillions of potential future lives, then in say a century, won't that still be true? There's a crude inductive argument that the future will always outweigh the present, in which case we could end up like Aesop's miser, always saving for the future until eventually we die.
Of course reality might not be so crude. Eg, many have argued that we live at an especially "hingey" time (especially given AI timelines), perhaps (eg if we survive) to be followed by a "long reflection" during or after which we might finally get to take a breather and enjoy the present (and finally deal with large-scale suffering?).
But it's not really clear to me that in 100 or 1,000 years the future won't still loom large, especially if technological progress continues at any pace at all. So perhaps, like the ageing miser, or like a longevity researcher taking some time to eat well and exercise, we should allocate some of our resources towards improving the present, while also giving the future its due.
E. Animals are part of longtermism
The simplest longtermist arguments for animal work are that 1. many versions of the far future include vast numbers of animals (abrahamrowe), and 2. how they fare in the future may critically depend on values that will be "locked in" in upcoming generations (eg, before we extend factory farming to the stars - Jacy, Fai). Maybe! But the great thing about longtermist arguments is you only need a maybe. Anyway, lots of other people have written about this, so I won't here: those above plus Tobias_Baumann, MichaelA, and others.
F. None of this refutes longtermism
I probably sound like an anti-longtermism partisan animal advocate so far, but I actually take longtermist arguments seriously. Eg, some things I believe, for what it's worth:
- All future potential lives matter, in total, much more than all current lives. (But I'd argue improving current lives is much more tractable, on a per-life basis, so tractability is an offsetting factor. See also the question of how often longtermism actually diverges from short-termism in practice, and good old Pascal's mugging.)
- Giving future lives proper attention requires turning our attention away from some current suffering. It's just a question of where we draw the line.
- One human life matters much more than one chicken life.
- There are powerful biases against longtermism - above all proximity bias.
I'm not here to argue that longtermism is wrong. My argument is just that we need to watch out for the pro-longtermism biases I laid out above - biases we should, y'know, overcome...
Notes
- About me: I've been a part-time co-organizer of Effective Altruism NYC for several years, and I'm on the board of The Humane League, but I'm speaking only for myself here. I'm not an expert on any of this: after a conversation, an EA pal I respect encouraged me to write up my views.
- I'm sure many of these arguments have been made and rebutted elsewhere: kindly just link them below.
- Some of these arguments could be applied more broadly, eg to global (human) health work rather than animal welfare. Extrapolate away!
- A major motivation for this post is the piles of money and attention getting allocated to longtermism these days. Whatever we conclude, kicking the tires of longtermist arguments has never been higher-stakes than it is now.
- Battery cages are just one example: eg, broiler chickens (farmed for meat not eggs) are even more numerous and arguably have worse lives, above all because of the Frankensteinian way they've been bred to grow much larger and faster than their bodies can healthily support. I used battery cages because it's easier to picture yourself in a coffin-sized cage than to picture being bred to quadruple your natural weight.
I feel sorely misunderstood by this post and I am annoyed at how highly upvoted it is. It feels like the sort of thing one writes / upvotes when one has heard of these fabled "longtermists" but has never actually met one in person.
That reaction is probably unfair, and in particular it would not surprise me to learn that some of these were relevant arguments that people newer to the community hadn't really thought about before, and so were important for them to engage with. (Whereas I mostly know people who have been in the community for longer.)
Nonetheless, I'm writing down responses to each argument that come from this unfair reaction-feeling, to give a sense of how incredibly weird all of this sounds to me (and I suspect many other longtermists I know). It's not going to be the fairest response, in that I'm not going to be particularly charitable in my interpretations, and I'm going to give the particularly emotional and selected-for-persuasion responses rather than the cleanly analytical responses, but everything I say is something I do think is true.
> "How much current animal suffering does longtermism let us ignore?"

None of it? Current suffering is still bad! You don't get the privilege of ignoring it; you sadly set it to the side because you see opportunities to do even more good.
(I would have felt so much better about "How much current animal suffering would longtermism ignore?" It's really the framing that longtermism is doing you a favor by "letting you" ignore current animal suffering that rubs me wrong.)
> "If millions of people were being kept in battery cages, how much energy should we redirect away from longtermism to work on that?"

Yes! This is pretty close to the actual situation we are in! There is an estimate of 24.9 million people in slavery, of which 4.8 million are sexually exploited! Very likely these estimates are exaggerated, and the conditions are not as bad as one would think hearing those words, and even if they were, the conditions might not be as bad as battery cages. But my broader point is that the world really does seem like it is very broken, and there are problems of huge scale even just restricting to human welfare, and you still have to prioritize, which means ignoring some truly massive problems.
> "...it seems quite likely that she thinks it's barbarically inhumane whereas we broadly think it's OK..."

????
I don't think that animal suffering is OK! I would guess that most longtermists don't think animal suffering is OK (except for those who have very confident views about particular animals not being moral patients).
Why on earth would you think that longtermists think that animal suffering is OK? Because they don't personally work on it? I assume you don't personally work on ending human slavery; presumably that doesn't mean you think slavery is OK??
> "And the conclusion that the ongoing (at least plausible) suffering of billions of other creatures, inflicted for our species' benefit, is less pressing than relatively theoretical future suffering, is convenient enough to be worth double-checking."

Convenient??? I feel like this is just totally misunderstanding how altruistic people tend to feel? It is not convenient for me that the correct response to hearing about millions of people in sexual slavery or watching baby chicks be fed into a large high-speed grinder is to say "sorry, I need to look at these plots to figure out why my code isn't doing what I want it to do, that's more important".
Many of the longtermists I know were dragged to longtermism kicking and screaming, because of all of their intuitions telling them about how they were ignoring obvious moral atrocities right in front of them, and how it isn't a bad thing if some people don't get to exist in the future. I don't know if this is a majority of longtermists.
(It's probably a much lower fraction of people focused on x-risk reduction -- you don't need to be a longtermist to focus on x-risk reduction, I'm focusing here on the people who would continue to work on longtermist stuff even if it was unlikely to make a difference within their lifetimes.)
I guess maybe it's supposed to be convenient in that you can have a more comfortable life or something? Idk, I feel like my life would be more comfortable if I just earned-to-give and donated to global poverty / animal welfare causes. And I've had to make significantly fewer sacrifices than other longtermists; I already had an incredibly useful background and an interest in computer science and AI before buying into longtermism.
> "Does longtermism mean ignoring current suffering until the heat death of the universe?"

Obviously not? That means you never reduced suffering? What the heck was the point of all your longtermism?
(EDIT: JackM points out that longtermists could increase total suffering, e.g. through population growth that increases both suffering and happiness, so my "obviously not" is technically false. Imagine that the question was about ignoring current utility instead of ignoring current suffering, which is how I interpreted it and how I expect the OP meant it to be interpreted.)
> "But it's not really clear to me that in 100 or 1,000 years the future won't still loom large, especially if technological progress continues at any pace at all."

Yes, the future will still loom large? And this just seems fine?
Here's an analogous argument:
"You say to me that I shouldn't help my neighbor, and instead I should use it to help people in Africa. But it's not really clear to me that after we've successfully helped people in Africa, the rest of the world's problems won't still loom large. Wouldn't you then want to help, say, people in South America?"
(I generated this argument by taking your argument and replacing the time dimension with a space dimension.)
> "E. Animals are part of longtermism"

(Switching to analysis instead of emotion / persuasion because I don't really know what your claim is here)
Given that your current post title is "How much current animal suffering does longtermism let us ignore?" I'm assuming that in this section you are trying to say that reducing current animal suffering is an important longtermist priority. (If you're just saying "there exists some longtermist stuff that has something to do with animals", I agree, but I'm also not sure why you'd bother talking about that.) I think this is mostly false. Looking at the posts you cite, they seem to be in two categories:
First, claims that animal welfare is a significant part of the far future, and so should be optimized (abrahamrowe and Fai). Both posts neglect the possibility that we transition to a world of digital people that doesn't want biological animals any more (see this comment and a point added in the summary of Fai's post after I had a conversation with them), and I think their conclusions are basically wrong for that reason.
Second, moral circle expansion is a part of longtermism, and animals are plausibly a good way to currently do moral circle expansion. But this doesn't mean a focus on reducing current animal suffering! Some quotes from the posts:
Tobias: "a longtermist outlook implies a much stronger focus on achieving long-term social change, and (comparatively) less emphasis on the immediate alleviation of animal suffering"
Tobias: "If we take the longtermist perspective seriously, we will likely arrive at different priorities and focus areas: it would be a remarkable coincidence if short-term-focused work were also ideal from this different perspective."
Jacy: "Therefore, I’m not particularly concerned about the factory farming of biological animals continuing into the far future."
> "But the great thing about longtermist arguments is you only need a maybe."

That's not true! You want the best possible maybe you can get; it's not enough to just say "maybe this has a beneficial effect" and go do the thing.
I'm just making an observation that longtermists tend to be total utilitarians, in which case they will want loads of beings in the future. They will want to use AI to help fulfill this purpose.
Of course maybe in the long reflection we will think more about population ethics and decide total utilitarianism isn't right, or AI will decide this for us, in which case we may not work towards a huge future. But I happen to think total utilitarianism will win out, so I'm sceptical of this.