
Some thoughts on whether/why it makes sense to work on animal welfare, given longtermist arguments.  TLDR:

  1. We should only deprioritize the current suffering of billions of farmed animals if we would similarly deprioritize comparable treatment of millions of humans; and
  2. We should double-check that our arguments aren't distorted by status quo bias, especially power imbalances in our favor.

This post consists of six arguments:

  A. If millions of people were being kept in battery cages, how much energy should we redirect away from longtermism to work on that?
  B. Power is exploited, and absolute power is exploited absolutely
  C. Sacrificing others makes sense
  D. Does longtermism mean ignoring current suffering until the heat death of the universe?
  E. Animals are part of longtermism
  F. None of this refutes longtermism

Plus some context and caveats at the bottom.
 

A. If millions of people were being kept in battery cages, how much energy should we redirect away from longtermism to work on that?

Despite some limitations, I find this analogy compelling.  Come on, picture it.  Check out some images of battery cages and picture millions of humans kept in the equivalent for 100% of their adult lives, and suppose with some work we could free them: would you stick to your longtermist guns?

Three possible answers:

  a. Yes, the suffering of both the chickens and the humans is outweighed by longtermist concerns (the importance of improving our long-term future).
  b. No, the suffering of the humans is unacceptable, because it differs from the suffering of the chickens in key ways.
  c. No, neither is acceptable: longtermism notwithstanding, we should allocate significant resources to combating both.

I lean towards c) myself, but I can see a case for a): I just think if you're going to embrace a), you should picture the caged-humans analogy so you fully appreciate the tradeoff involved.  I'm less sympathetic to b) because it feels like suspicious convergence - "That theoretical bad thing would definitely make me change my behavior, but this actual bad thing isn't actually so bad" (see section B below).  Still, one could sketch some plausibly relevant differences between the caged chickens and the caged humans, eg:

  1. "Millions of people" are subbing here for "billions of hens", implying something like a 1:1,000 suffering ratio (1 caged chicken = 0.001 caged humans): this ratio is of course debatable based on sentience, self-awareness, etc.  Still, 0.001 is a pretty tiny factor (raw neuron ratio would put 1 chicken closer to 0.002-0.005 humans) and again uncertainty does some of the work for us (the argument works even if it's only quite plausible chicken suffering matters).  There is a school of thought that we can be 99+% confident that a billion chickens trapped on broken legs for years don't outweigh a single human bruising her shin; I find this view ridiculous.
  2. Maybe caging creatures that are "like us" differs in important ways from caging creatures that are "unlike us".  Like, maybe allowing the caging of humans makes it more likely future humans will be caged too, making it (somehow?) of more interest to longtermists than the chickens case.  (But again, see section B.)
  3. A lot of longtermism involves the idea that humans (or AIs), unlike hens, will play a special role in determining the future (I find this reasonable).  Maybe this makes caging humans worse.
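
(Rough numbers behind point 1, sketched in Python below.  The head counts are placeholder round figures and the neuron counts are approximate published estimates, so treat this as illustration rather than a real calculation.)

```python
# Illustrative only: placeholder head counts, approximate neuron counts.
caged_hens = 5e9                 # "billions of hens" (assumed round figure)
analogy_humans = 5e6             # "millions of people" in the analogy
print(analogy_humans / caged_hens)      # ~0.001 caged humans per caged hen

human_neurons = 86e9             # ~86 billion neurons in a human brain (approx.)
chicken_neurons = 0.22e9         # ~220 million neurons in a chicken brain (approx.)
print(chicken_neurons / human_neurons)  # ~0.0026, within the 0.002-0.005 range above
```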
     

B. Power is exploited, and absolute power is exploited absolutely

A general principle I find useful: when group A is exploiting group B, group A tends to come up with rationalizations, when in fact the exploitation is often just a straight-up result of a power imbalance.  I sometimes picture a conversation with a time traveler from a future advanced civilization, not too familiar with ours:

TT: So what's this "gestation crate" thing?  And "chick maceration", what does that even mean?

Us: Oh, well, that's all part of how our food production industry works.

TT: *stares blankly*

Or maybe not: maybe TT thinks it's fine, because her civilization has determined that actually factory farming is justifiable.  After all I'm not from the future.  But to me it seems quite likely that she thinks it's barbarically inhumane whereas we broadly think it's OK, or at least spend a lot more energy getting worked up about Will Smith or mask mandates or whatnot.  Why do we think it's OK?  Two main reasons:

  1. Status quo bias: this is how things have been all our lives (though not for long before them); it's normal.
  2. Self-interest: we like (cheap widely available) meat, so we're motivated to accept the system that gives it to us.

In particular, we're motivated to come up with reasons why this status quo is reasonable (the chickens don't suffer that much, don't value things like freedom that we value, are physically incapable of suffering, etc).  If factory farming didn't exist and there was a proposal to suddenly implement it in its current form, we might find these arguments a lot less convincing; but that's not our situation (though, see octopus farming).

In general, when group A has extra power (physical, intellectual, technological, military) relative to group B, for group A to end up pushing around group B is normal.  It's what will happen unless there's some sort of explicit countereffort to prevent it.  And the more extreme and lasting the power imbalance, the more extreme yet normalized the exploitation becomes.

I view many historical forms of exploitation through this lens: slavery, colonialism, military conquests, patriarchy, class and caste hierarchies, etc.  To me this list is encouraging!  A lot of these exploitations are much reduced from their peak, thanks at least in part to some exploiters themselves rallying against them, or finding them harder and harder to defend.

So the takeaway for me is not, "exploitation is inevitable."  The main takeaway is, when we observe what looks naively like exploitation, and hear (or make) ostensibly rational defenses of that apparent exploitation, we should check those arguments carefully for steps whose flimsiness may be masked by 1. status quo bias or 2. self-interest.

(Another conclusion I would not draw is "Self-interested arguments can be dismissed."  Otherwise someone advocating for plant rights or rock rights would have me trapped.  Who knows, maybe it will turn out we should be taking plant/rock welfare seriously: but the fact that that conclusion would be inconvenient for us, is not enough to prove it correct.)
 

C. Sacrificing others makes sense

My main take on self-sacrifice is a common one: it's no substitute for results.  People will take cold showers for a year to help fight climate change, when a cheque to a high-impact climate org probably does much more to reduce atmospheric carbon.

That said (and this is not a fully fleshed-out thought), there is something suspicious about moral theorizing without sacrifice: especially theorizing about large sacrifices of others.  There is a caricature of moral philosophers, debating the vast suffering of other species and peoples, concluding it's not worth doing much about, finishing their nice meal and heading off to a comfortable bed.  (See also: the timeless final scene from Dr Strangelove.)  When we edge too close to this caricature (and I certainly live something near it myself) I start to miss the social workers and grassroots activists and cold-shower-takers.

Again, the fact that a conclusion is convenient is not sufficient grounds to dismiss it.  But it does mean we should scrutinize it extra closely.  And the conclusion that the ongoing (at least plausible) suffering of billions of other creatures, inflicted for our species' benefit, is less pressing than relatively theoretical future suffering, is convenient enough to be worth double-checking.

I've seen elements of both extremes in the EA community: "endless fun debates over nice dinners" at one end, intense guilt-driven overwork leading to burnout at the other.  We'll just have to keep watching out for both extremes.
 

D. Does longtermism mean ignoring current suffering until the heat death of the universe?

If current suffering is outweighed by the importance of quadrillions of potential future lives, then in say a century, won't that still be true?  There's a crude inductive argument that the future will always outweigh the present, in which case we could end up like Aesop's miser, always saving for the future until eventually we die.

Of course reality might not be so crude.  Eg, many have argued that we live at an especially "hingey" time (especially given AI timelines), perhaps (eg if we survive) to be followed by a "long reflection" during or after which we might finally get to take a breather and enjoy the present (and finally deal with large-scale suffering?).

But it's not really clear to me that in 100 or 1,000 years the future won't still loom large, especially if technological progress continues at any pace at all.  So perhaps, like the ageing miser, or like a longevity researcher taking some time to eat well and exercise, we should allocate some of our resources towards improving the present, while also giving the future its due.
 

E. Animals are part of longtermism

The simplest longtermist arguments for animal work are that 1. many versions of the far future include vast numbers of animals (abrahamrowe), and 2. how they fare in the future may critically depend on values that will be "locked in" in upcoming generations (eg, before we extend factory farming to the stars - Jacy, Fai).  Maybe!  But the great thing about longtermist arguments is you only need a maybe.  Anyway lots of other people have written about this so I won't here: those above plus Tobias_Baumann, MichaelA, and others.
 

F. None of this refutes longtermism

I probably sound like an anti-longtermism partisan animal advocate so far, but I actually take longtermist arguments seriously.  Eg some things I believe, for what it's worth:

  1. All future potential lives matter, in total, much more than all current lives.  (But I'd argue improving current lives is much more tractable, on a per-life basis, so tractability is an offsetting factor.  See also the question of how often longtermism actually diverges from short-termism in practice, and good old Pascal's mugging.)
  2. Giving future lives proper attention requires turning our attention away from some current suffering.  It's just a question of where we draw the line.
  3. One human life matters much more than one chicken life.
  4. There are powerful biases against longtermism - above all proximity bias.

I'm not here to argue that longtermism is wrong.  My argument is just that we need to watch out for the pro-longtermism biases I laid out above - biases we should, y'know, overcome...
 

Notes

  1. About me: I've been a part-time co-organizer of Effective Altruism NYC for several years, and I'm on the board of The Humane League, but I'm speaking only for myself here.  I'm not an expert on any of this: after a conversation, an EA pal I respect encouraged me to write up my views.
  2. I'm sure many of these arguments have been made and rebutted elsewhere: kindly just link them below.
  3. Some of these arguments could be applied more broadly, eg to global (human) health work rather than animal welfare.  Extrapolate away!
  4. A major motivation for this post is the piles of money and attention getting allocated to longtermism these days.  Whatever we conclude, kicking the tires of longtermist arguments has never been higher-stakes than it is now.
  5. Battery cages are just one example: eg, broiler chickens (farmed for meat not eggs) are even more numerous and arguably have worse lives, above all because of the Frankensteinian way they've been bred to grow much larger and faster than their bodies can healthily support.  I used battery cages because it's easier to picture yourself in a coffin-sized cage than bred to quadruple your natural weight.

Comments

I feel sorely misunderstood by this post and I am annoyed at how highly upvoted it is. It feels like the sort of thing one writes / upvotes when one has heard of these fabled "longtermists" but has never actually met one in person.

That reaction is probably unfair, and in particular it would not surprise me to learn that some of these were relevant arguments that people newer to the community hadn't really thought about before, and so were important for them to engage with. (Whereas I mostly know people who have been in the community for longer.)

Nonetheless, I'm writing down responses to each argument that come from this unfair reaction-feeling, to give a sense of how incredibly weird all of this sounds to me (and I suspect many other longtermists I know). It's not going to be the fairest response, in that I'm not going to be particularly charitable in my interpretations, and I'm going to give the particularly emotional and selected-for-persuasion responses rather than the cleanly analytical responses, but everything I say is something I do think is true.

How much current animal suffering does longtermism let us ignore?

None of it? Current suffering is still bad! You don't get the privilege of ignoring it, you sadly set it to the side because you see opportunities to do even more good.

(I would have felt so much better about "How much current animal suffering would longtermism ignore?" It's really the framing that longtermism is doing you a favor by "letting you" ignore current animal suffering that rubs me wrong.)

A. If millions of people were being kept in battery cages, how much energy should we redirect away from longtermism to work on that?

[...]

Check out some images of battery cages and picture millions of humans kept in the equivalent for 100% of their adult lives, and suppose with some work we could free them: would you stick to your longtermist guns?

Yes! This is pretty close to the actual situation we are in! There is an estimate of 24.9 million people in slavery, of which 4.8 million are sexually exploited! Very likely these estimates are exaggerated, and the conditions are not as bad as one would think hearing those words, and even if they were the conditions might not be as bad as battery cages, but my broader point is that the world really does seem like it is very broken and there are problems of huge scale even just restricting to human welfare, and you still have to prioritize, which means ignoring some truly massive problems.

B. Power is exploited, and absolute power is exploited absolutely

[...]

But to me it seems quite likely that she thinks it's barbarically inhumane whereas we broadly think it's OK, or at least spend a lot more energy getting worked up about Will Smith or mask mandates or whatnot.  Why do we think it's OK?

????

I don't think that animal suffering is OK! I would guess that most longtermists don't think animal suffering is OK (except for those who have very confident views about particular animals not being moral patients).

Why on earth would you think that longtermists think that animal suffering is OK? Because they don't personally work on it? I assume you don't personally work on ending human slavery, presumably that doesn't mean you think slavery is OK??

C. Sacrificing others makes sense

[...]

And the conclusion that the ongoing (at least plausible) suffering of billions of other creatures, inflicted for our species' benefit, is less pressing than relatively theoretical future suffering, is convenient enough to be worth double-checking.

Convenient??? I feel like this is just totally misunderstanding how altruistic people tend to feel? It is not convenient for me that the correct response to hearing about millions of people in sexual slavery or watching baby chicks be fed into a large high-speed grinder is to say "sorry, I need to look at these plots to figure out why my code isn't doing what I want it to do, that's more important".

Many of the longtermists I know were dragged to longtermism kicking and screaming, because of all of their intuitions telling them about how they were ignoring obvious moral atrocities right in front of them, and how it isn't a bad thing if some people don't get to exist in the future. I don't know if this is a majority of longtermists.

(It's probably a much lower fraction of people focused on x-risk reduction -- you don't need to be a longtermist to focus on x-risk reduction, I'm focusing here on the people who would continue to work on longtermist stuff even if it was unlikely to make a difference within their lifetimes.)

I guess maybe it's supposed to be convenient in that you can have a more comfortable life or something? Idk, I feel like my life would be more comfortable if I just earned-to-give and donated to global poverty / animal welfare causes. And I've had to make significantly fewer sacrifices than other longtermists; I already had an incredibly useful background and an interest in computer science and AI before buying into longtermism.

D. Does longtermism mean ignoring current suffering until the heat death of the universe?

Obviously not? That means you never reduced suffering? What the heck was the point of all your longtermism?

(EDIT: JackM points out that longtermists could increase total suffering, e.g. through population growth that increases both suffering and happiness, so my "obviously not" is technically false. Imagine that the question was about ignoring current utility instead of ignoring current suffering, which is how I interpreted it and how I expect the OP meant it to be interpreted.)

But it's not really clear to me that in a 100 or 1,000 years the future won't still loom large, especially if technological progress continues at any pace at all.

Yes, the future will still loom large? And this just seems fine?

Here's an analogous argument:

"You say to me that I shouldn't help my neighbor, and instead I should use it to help people in Africa. But it's not really clear to me that after we've successfully helped people in Africa, the rest of the world's problems won't still loom large. Wouldn't you then want to help, say, people in South America?"

(I generated this argument by taking your argument and replacing the time dimension with a space dimension.)

E. Animals are part of longtermism

(Switching to analysis instead of emotion / persuasion because I don't really know what your claim is here)

Given that your current post title is "How much current animal suffering does longtermism let us ignore?" I'm assuming that in this section you are trying to say that reducing current animal suffering is an important longtermist priority. (If you're just saying "there exists some longtermist stuff that has something to do with animals", I agree, but I'm also not sure why you'd bother talking about that.)  I think this is mostly false. Looking at the posts you cite, they seem to be in two categories:

First, claims that animal welfare is a significant part of the far future, and so should be optimized (abrahamrowe and Fai). Both posts neglect the possibility that we transition to a world of digital people that doesn't want biological animals any more (see this comment and a point added in the summary of Fai's post after I had a conversation with them), and I think their conclusions are basically wrong for that reason.

Second, moral circle expansion is a part of longtermism, and animals are plausibly a good way to currently do moral circle expansion. But this doesn't mean a focus on reducing current animal suffering! Some quotes from the posts:

Tobias: "a longtermist outlook implies a much stronger focus on achieving long-term social change, and (comparatively) less emphasis on the immediate alleviation of animal suffering"

Tobias: "If we take the longtermist perspective seriously, we will likely arrive at different priorities and focus areas: it would be a remarkable coincidence if short-term-focused work were also ideal from this different perspective."

Jacy: "Therefore, I’m not particularly concerned about the factory farming of biological animals continuing into the far future."

But the great thing about longtermist arguments is you only need a maybe.

That's not true! You want the best possible maybe you can get; it's not enough to just say "maybe this has a beneficial effect" and go do the thing.

Thanks for writing this comment.

There is an estimate of 24.9 million people in slavery, of which 4.8 million are sexually exploited! Very likely these estimates are exaggerated, and the conditions are not as bad as one would think hearing those words, and even if they were the conditions might not be as bad as battery cages, but my broader point is that the world really does seem like it is very broken and there are problems of huge scale even just restricting to human welfare, and you still have to prioritize, which means ignoring some truly massive problems.

I agree, there is already a lot of human suffering that longtermists de-prioritize. More concrete examples include,

  • The 0.57% of the US population that is imprisoned at any given time this year. (This might even be more analogous to battery cages than slavery).
  • The 25.78 million people who live under the totalitarian North Korean regime.
  • The estimated 27.2% of the adult US population who live with more than one of these chronic health conditions: arthritis, cancer, chronic obstructive pulmonary disease, coronary heart disease, current asthma, diabetes, hepatitis, hypertension, stroke, and weak or failing kidneys.
  • The nearly 10% of the world population who lives in extreme poverty, which is defined as a level of consumption equivalent to less than $2 of spending per day, adjusting for price differences between nations.
  • The 7 million Americans who are currently having their brain rot away, bit by bit, due to Alzheimer's and other forms of dementia. Not to mention their loved ones who are forced to witness this.
  • The 6% of the US population who experienced at least one major depressive episode in the last year.
  • The estimated half a million people who are homeless in the United States.
  • The significant fraction of people who have profound difficulties with learning and performing work, who disproportionately live in poverty and are isolated from friends and family.

EDIT: I made this comment assuming the comment I'm replying to was making a critique of longtermism, but I'm no longer convinced this is the correct reading 😅. Here's the response anyway:

Well it's not so much that longtermists ignore such suffering, it's that anyone who is choosing a priority (so any EA, regardless of their stance on longtermism) in our current broken system will end up ignoring (or at least not working on alleviating) many problems.

For example the problem of adults with cancer in the US is undoubtedly tragic but is well understood and reasonably well funded by the government and charitable organizations, I would argue it fails the 'neglectedness' part of the traditional EA neglectedness, tractability, importance system. Another example, people trapped in North Korea, I think would fail on tractability, given the lack of progress over the decades. I haven't thought about those two particularly deeply and could be totally wrong but this is just the traditional EA framework for prioritizing among different problems, even if those problems are heartbreaking to have to set aside.

I upvoted OP because I think comparison to humans is a useful intuition pump, although I agree with most of your criticism here. One thing that surprised me was:

Obviously not? That means you never reduced suffering? What the heck was the point of all your longtermism?

Surprised to hear you say this. It is plausible that the EA longtermist community is increasing the expected amount of suffering in the future, but accepts this as they expect this suffering to be swamped by increases in total welfare. Remember one of the founding texts of longtermism says we should be maximising the probability that space colonisation will occur. Space colonisation will probably increase total suffering over the future simply because there will be so many more beings in total. 

When OP says :

D. Does longtermism mean ignoring current suffering until the heat death of the universe?

My answer is "pretty much yes". (Strong) longtermists will always ignore current suffering and focus on the future, provided it is vast in expectation. Of course a (strong) longtermist can simply say "So what? I'm still maximising undiscounted utility over time" (see my comment here).

(Strong) longtermists will always ignore current suffering and focus on the future, provided it is vast in expectation

But at the time of the heat death of the universe, the future is not vast in expectation? Am I missing something basic here?

(I'm ignoring weird stuff which I assume the OP was  ignoring like acausal trade / multiverse cooperation, or infinitesimal probabilities of the universe suddenly turning infinite, or already being infinite such that there's never a true full heat death and there's always some pocket of low entropy somewhere, or believing that the universe's initial state was selected such that at heat death you'll transition to a new low-entropy state from which the universe starts again.) 

It is plausible that the EA longtermist community is increasing the expected amount of suffering in the future, but accepts this as they expect this suffering to be swamped by increases in total welfare.

Oh, yes, that's plausible; just making a larger future will tend to increase the total amount of suffering (and the total amount of happiness), and this would be a bad trade in the eyes of a negative utilitarian.

In the context of the OP, I think that section was supposed to mean that longtermism would mean ignoring current utility until the heat death of the universe -- the obvious axis of difference is long-term vs current, not happiness vs suffering (for example, you can have longtermist negative utilitarians). I was responding to that interpretation of the point, and accidentally said a technically false thing in response. Will edit.

Am I missing something basic here?

No you're not missing anything that I can see. When OP says:

Does longtermism mean ignoring current suffering until the heat death of the universe?

I think they're really asking:

Does longtermism mean ignoring current suffering until near the heat death of the universe?

Certainly the closer an impartial altruist is to heat death the less forward-looking the altruist needs to be.

I have an issue with your statement that longtermists neglect suffering because they just maximize total (symmetric) welfare. I think this statement isn't actually true, though I agree if you just mean that, pragmatically, most longtermists aren't suffering-focused.

Hilary Greaves and William MacAskill loosely define strong longtermism as, "the view that impact on the far future is the most important feature of our actions today." Longtermism is therefore completely agnostic about whether you're a suffering-focused altruist, or a traditional welfarist in line with Jeremy Bentham. It's entirely consistent to prefer to minimize suffering over the long-run future, and be a longtermist. Or put another way, there are no major axiological commitments involved with being a longtermist, other than the view that we should treat value in the far-future similar to the way we treat value in the near-future.

Of course, in practice, longtermists are more likely to advocate a Benthamite utility function than a standard negative utilitarian one. But it's still completely consistent to be a negative utilitarian and a longtermist, and in fact I consider myself one.

I agree with all of that. I was objecting to the implication that longtermists will necessarily reduce suffering. Also (although I'm unsure about this), I think that the EA longtermist community will increase expected suffering in the future, as it looks like they will look to maximise the number of beings in the universe.

What I view as the Standard Model of  Longtermism is something like the following:

  • At some point we will develop advanced AI capable of "running the show" for civilization on a high level
  • The values in our AI will determine, to a large extent, the shape of our future cosmic civilization
  • One possibility is that AI values will be alien. From a human perspective, this will either cause extinction or something equally bad.
  • To avoid that last possibility, we ought to figure out how to instill human-centered values in our machines.

This model doesn't predict that longtermists will make the future much larger than it otherwise would be.  It just predicts that they'll make it look a bit different than it otherwise would.

Of course, there are other existential risks that longtermists care about. Avoiding those will have the effect of making the future larger in expectation, but most longtermists seem to agree that non-AI x-risks are small by comparison to AI.

I'm just making an observation that longtermists tend to be total utilitarians in which case they will want loads of beings in the future. They will want to use AI to help fulfill this purpose. 

Of course maybe in the long reflection we will think more about population ethics and decide total utilitarianism isn't right, or AI will decide this for us, in which case we may not work towards a huge future. But I happen to think total utilitarianism will win out, so I'm sceptical of this.

I think there is something to the claim being made in the post, which is that longtermism as it currently stands is mostly about increasing the number of people in the future living good lives. It seems genuinely true that most longtermists are prioritising creating happiness over reducing suffering. This is the key factor which pushes me towards longtermist s-risk.

I agree with this sentiment. 

As an instrumental thing, I am worried that this sort of post could backfire.

As an instrumental thing, I am worried that this sort of post could backfire. 

The original post or my comment?

In either case, why?

I agree with your comment.

  • It read to me that you were upset and offended and you wrote a lot in response.
  • The OP didn't seem good to me, either in content or rhetoric.

Below is a screenshot of a draft of a larger comment that I didn't share until now, raising my concerns. 

(It's a half written draft, it just contains fragments of thoughts).

 

I wish people could see what is possible and what has been costly in animal welfare. 

I wish they knew how expensive it is to carry around certain beliefs and I wish they could see who is bearing the cost for that.

Thanks. One response:

  • It read to me that you were upset and offended and you wrote a lot in response.

I wouldn't say I was offended. Even if the author is wrong about some facts about me, it's not like they should know those facts about me? Which seems like it would be needed for me to feel offended?

I was maybe a bit upset? I would have called it annoyance but "slightly upset" is reasonable as a descriptor. For A, B, D and E my reaction feels mostly like "I'm confused why this seems like a decent argument for your thesis", and for C it was more like being upset.

Related to the funding point (note 4): 

It seems important to remember that even if high status (for lack of a more neutrally-valenced term) longtermist interventions like AI safety aren't currently "funding constrained," animal welfare at large most definitely is. As just one clear example, an ACE report from a few months ago estimated that Faunalytics has room for more than $1m in funding.

That means there remains a very high (in absolute terms) opportunity cost to longtermist spending, because each dollar spent is one not being donated to an animal welfare org. This doesn't make liberal longtermist spending wrong, but it does make it costly in terms of expected nearterm suffering. 

This is the main reason big longtermist spending gives me pause, even though I just about entirely buy the longtermist thesis. EA is, by and large, pretty good at giving due concern to non-salient opportunity costs, but this seems to be an area in which we're falling short. 

even though I just about entirely buy the longtermist thesis

If you buy into the longtermist thesis why are you privileging the opportunity cost of giving to longtermist causes and not the opportunity cost of giving to animal welfare?

Are you simply saying you think the marginal value of more money to animal welfare is greater than to longtermist causes?

I'm not intending to, although it's possible I'm using the term "opportunity cost" incorrectly or in a different way than you. The opportunity cost of giving a dollar to animal welfare is indeed whatever that dollar could have bought in the longtermist space (or whatever else you think is the next best option). 

However, it seems to me that at least some parts of longtermist EA, some of the time, to some extent, disregard the animal suffering opportunity cost almost entirely. Surely the same error is committed in the opposite direction by hardcore animal advocates, but the asymmetry comes from the fact that this latter group controls a way smaller share of the financial pie.

However, it seems to me that at least some parts of longtermist EA, some of the time, to some extent, disregard the animal suffering opportunity cost almost entirely.

I'm not sure how you come to this conclusion, or even what it would mean to "disregard the opportunity cost". 

Longtermist EAs generally know their money could go towards reducing animal suffering and do good. They know and generally acknowledge that there is an opportunity cost of giving to longtermist causes. They simply think their money could do the most good if given to longtermist causes.

Are you simply saying you think the marginal value of more money to animal welfare is greater than to longtermist causes?

Note that with diminishing returns, marginal utility per dollar (MU/$) is a function of the level of spending. So it could be the case that the MU/$ for the next $1M to Faunalytics is really high, but drops off above $1M. So I would rephrase your question as:

do you think the marginal value of more money to animal welfare right now is greater than to longtermist causes?
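
(To make the MU/$ point concrete, here's a minimal toy sketch.  The logarithmic returns curve and every number in it are made up for illustration; nothing below is an estimate of Faunalytics or any real intervention.)

```python
# Toy model of diminishing returns: utility from funding f is a * ln(1 + f / s),
# so marginal utility per dollar (MU/$) is a / (s + f).  All parameters are invented.
def mu_per_dollar(f, a, s):
    return a / (s + f)

animal = dict(a=3.0, s=0.5e6)        # starts with high MU/$, diminishes quickly
longtermist = dict(a=10.0, s=20e6)   # starts lower, diminishes slowly

for funding in [0, 1e6, 5e6, 50e6]:
    print(f"${funding:>12,.0f}: animal {mu_per_dollar(funding, **animal):.2e} "
          f"vs longtermist {mu_per_dollar(funding, **longtermist):.2e}")
```

On these made-up curves the animal-welfare side has the higher MU/$ for the first few million dollars and the longtermist side overtakes it at larger funding levels, which is why the question has to be asked about money right now rather than in the abstract.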

Solving factory farming is different from other near term causes. It's not like EA-focused animal charities are saving animals one by one by buying them off a farm, where it won't make any future impact.  It's about making systematic changes that will impact not just the lives of animals today but the lives of animals for future generations to come.  I fear that if AGI is developed while the practice of factory farming still exists, there could be a high probability that it perpetuates the problem rather than solving it.  We need to eliminate factory farming as soon as possible.

It's not clear to me why we should expect (e.g.) corporate campaigns, alt-protein investments, or vegan outreach to have larger counterfactual flow-through effects than (e.g.) antimalarial bednets, deworming pills, or South Asian air quality improvements. Without a more detailed model, I can see it go either way.

As I read Bryan's point, it's that eg malaria is really unlikely to be a major problem of the future, but there are tailwinds to factory farming (though also headwinds) that could make it continue as a major problem.  It is after all a much bigger phenomenon than a century ago, and malaria isn't.

But fwiw, although other people have addressed the future/longtermist implications of factory farming (section E), and I take some of those arguments seriously, in this post I was by contrast focused on arguments for working on current animal suffering, for its own sake.


Cynical thoughts related to the point that “there is something suspicious about moral theorizing without sacrifice” and that “we should scrutinize [a conclusion that is extra convenient] extra closely”:

It often feels too convenient for self-identified EAs who have always enjoyed geeking out over machine learning and AI to conclude they are also doing the most important thing in the world. Sacrifice-less moral theorizing about the far future is much sexier than agonizing over the suffering of mere mortals (especially voiceless mere mortals), but it does feel suspicious at times and hardly builds trust and cohesion as the community continues to grow.

Strongly agreed, and I think it’s one of the most important baseline arguments against AI risk. See Linch’s motivated reasoning critique of effective altruism:

https://forum.effectivealtruism.org/posts/pxALB46SEkwNbfiNS/the-motivated-reasoning-critique-of-effective-altruism

I agree that theorizing is more fun than agonizing (for EA types), but I feel like the counterfactual should be theorizing vs theorizing, or agonizing vs agonizing.

Theorizing: Speaking for myself, I bounced off of both AI safety and animal welfare research, but I didn't find animal welfare research less intellectually engaging, nor less motivating, than AI safety research. If anything the tractability and sense of novel territory makes it more motivating. Though maybe I'd find AI safety research more fun if I'm better at math. (I'm doing my current research on longtermist megaprojects partially because I do think it's much more impactful than what I can do in the animal welfare space, but also partially because I find it more motivating and engaging, so take that however you will).

Agonizing: Descriptively, I don't think the archetypical x-risk-focused researcher type is less neurotic or prone to mental health issues than the archetypical EAA. I think it's more likely that their agonizing is different in kind rather than degree. To the extent that there is a difference that favors x-risk-focused researchers, I would guess it's more due to other causes, e.g. a) demographic differences in which groups different cause areas draw from, b) the (recent) influx of relative wealth/financial stability for x-risk researchers, or c) potentially cause-area- and institution-specific cultural factors.

Re point A and B: one question is how sensitive the claim is to scope. It seems to me that we're in a number of ongoing moral catastrophes (including current commonly discussed EA cause areas like hundreds of thousands of people dying annually from malaria, but also stuff like unjust penal systems, or racism in the developing world, or civil wars, or people dying from poisoned air, or genital mutilation of children), so I see two possible clusters of beliefs:

1. You are scope sensitive, in which case this just reduces back to an argument about expected value. Certainly the hypothetical of humans in battery cages is unusually bad, but how it compares to longtermism isn't structurally dissimilar to how e.g. we should relate to clean water access vs longtermism.

2. You take more of a justice-based, "any moral catastrophe is too much" approach. In that case I'm not sure how you can prioritize ("How can you focus on chickens when thousands of innocents are unjustly imprisoned?")

Re Point C: I'm not sure I understand this point. I think the argument is that there's self-serving bias in the form of "motivated reasoning is more likely to push us towards incorrectly believing that things that benefit us and are costly to others are worth the cost-benefit tradeoff." I basically think this is correct. So, on the margin this should push us to be slightly more willing to be self-sacrificial than if we didn't factor in this bias. In particular, this should slightly push us to favor being (e.g.) slightly more frugal on personal consumption than we otherwise would be, or slightly more willing to do hard/unpleasant work, for longer hours. Though of course there are also good arguments against frugality or overwork.

But the bias claim should basically be neutral on the question of how we ought to judge what sacrifices others ought to make. This particular angle doesn't (to me) clearly address why we should expect there are more biases in favor of us overrepresenting future generations' interests over the interests of existing animals.

Interesting! Another argument that we could make is:

  • Longtermism is probably not really worth it if the far future contains much more suffering than happiness
  • If we take into account humans and farmed animals, it's probable that suffering greatly outweighs happiness in our current world (depending on how you measure things, of course, but if you take into account farmed hens and fish, there are 100 times more of them than humans)
    • (Note that things get much harder to measure if you include wild animals)
  • Before caring about longtermism, we should probably care more about making the world a place where humans are not causing more suffering than happiness (so no factory farming)

Longtermism is probably not really worth it if the far future contains much more suffering than happiness

Longtermism isn't synonymous with making sure more sentient beings exist in the far future. That's one subset, which is popular in EA, but an important alternative is that you could work to reduce the suffering of beings in the far future.

Oh yeah, I just remembered that moral circle expansion was part of longtermism, that's true. 

It's just that I mostly hear about longtermism when it comes to existential risk reduction - my point above was more about that topic.

Before caring about longtermism, we should probably care more about making the world a place where humans are not causing more suffering than happiness (so no factory farming)

No, I'd argue longtermism merits significant attention right now.  Just that factory farming also merits significant attention.

I agree with you that protecting the future (eg mitigating existential risks) needs to be accompanied by trying to ensure that the future is net positive rather than negative.  But one argument I find pretty persuasive is, even if the present was hugely net negative, our power as a species is so great and still increasing (esp if you include AI), that it's quite plausible that in the future we could turn that balance positive - and, the future being such a big place, that could outweigh all present and near-term negativity.  Obviously there are big question marks here but the increasing power trend at least is convincing, and relevant.

I think the strength of these considerations depends on what sort of longtermist intervention you're comparing to, depending on your ethics. I do find the abject suffering of so many animals a compelling counter to prioritizing creating an intergalactic utopia (if the counterfactual is just that fewer sentient beings exist in the future). But some longtermist interventions are about reducing far greater scales of suffering, by beings who don't matter any less than today's animals. So when comparing to those interventions, while of course I feel really horrified by current suffering, I feel even more horrified by those greater scales in the future—we just have to triage our efforts in this bad situation.

Related thoughts in the recent 80k SBF interview (which I have only half finished, but is excellent). This link should take you directly to the audio of that part, or you can expand the full transcript on the page and ctrl/cmd-f to "Near-term giving" to read.

This is great, thank you!  I'm so behind...

Really pretty much everything Sam says in that section sounds reasonable to me, though I'd love to see some numbers/%s about what animal-related giving he/FTX are doing.

In general I don't think individuals should worry too much about their cause "portfolio": IMHO there are a lot of reasonable problems to work on (eg on the reliable-but-lower-EV to unreliable-higher-EV spectrum) - though also many other problems that are nowhere near that efficient frontier.  But like it's fine for the deworming specialist (or donor) to mostly just stay focused on that rather than fret about how much to think about chickens, pandemics, AI...  100 specialists will achieve more than 100 generalists, etc.

This just becomes less true for a behemoth donor like Sam/FTX, or leaders like MacAskill & Ord.  They have such outsized influence that if they don't fine-tune their "portfolio" a bit, important issues can end up neglected.  And at the level of the EA movement, or broader society itself, the weighting of the portfolio becomes key.

My underlying thesis above is that the movement may be underweighting animals/factory farming right now, relative to longtermism, due to the biases I laid out.  I didn't explicitly argue this: my post is about "biases to be aware of," not "proof that these biases are currently resulting in misallocation" - perhaps another day.  But anyway even if this thesis is correct, it doesn't imply a) that areas like AI safety and pandemic prevention don't deserve a significant chunk of our portfolio (I think they do), or b) that broader society isn't hugely underweight those risks (I think it is).

Thanks for this! Important questions.

I see how (B) and (C) could be arguments for veganism / for thinking factory farming is really bad, but I don't yet see how they're arguments for working on factory farming rather than for future generations.

  • (B) suggests we should consider factory farming morally horrific and inexcusable, but I think that view is totally compatible with doing (non-animal-focused) longtermist work. Since we have to triage, we can see factory farming as an immense problem without considering it the top priority.
    • And insofar as we're worried about exploitative power relationships, that's also a reason to worry about how humanity treats future generations, since we're arguably screwing over powerless future generations pretty badly - skepticism toward exploitation is not just a consideration that suggests focusing on animals.
  • (C) suggests we should be more willing to make sacrifices. This suggests veganism, but does it also suggest doing (non-animal-focused) longtermist work? My intuition is not really, since many of the latter jobs don't involve more sacrifice than many jobs focused on animal welfare.
    • If the point was one about what sacrifices humanity should collectively be willing to make, I think that's also a reason to worry about how humanity treats future generations, since giving up some things for them may be in order.

(Edited for clarity/detail)

My arguments B and C are both of the form "Hey, let's watch out for this bias that could lead us to misallocate our altruistic resources (away from current animal suffering)."  For B, the bias (well, biases) is/are status quo bias and self-interest.  For C, the bias is comfort.  (Clearly "comfort" is related to "self-interest" - possibly I should have combined B and C, I did ponder this.  Anyway...)

None of this implies we shouldn't do longtermist work!  As I say in section F, I buy core tenets of longtermism, and "Giving future lives proper attention requires turning our attention away from some current suffering.  It's just a question of where we draw the line."  The point is just to ensure these biases don't make us draw the line in the wrong place.

The question from A is meant as a sanity check.  If millions of humans were in conditions comparable to battery cages, and comparably tractable, how many of "our" (loosely, the EA movement's) resources should we devote to that - even insofar as that pulls away resources from longtermism?  I'd argue "A significant amount, more than we are now."  Some would probably argue "No, terrible though that is, the longtermist work is even more important" - OK, we can debate that.  The main stance I'd push back on is "The millions of humans would merit resources; the animals don't."

Btw none of this is meant as an argument for veganism (ie personal dietary/habit change), at all.  How best to help farmed animals, if we agreed to, is a whole nother topic (except yes, I am assuming it's quite tractable, happy to back that up).

Yup, I'm mostly sympathetic to your last three paragraphs.

What I meant to argue is that biases like status quo bias, self-interest, and comfort are not biases that could lead us to (majorly) misallocate careers away from current animal suffering and toward future generations, because (I claim) work focused on future generations often involves roughly as much opposition to the status quo, self-sacrifice, and discomfort as work focused on animals. (That comparison doesn't hold for dietary distinctions, of course, so the effects of the biases you mention depend on what resources we're worried about misallocating / what decisions we're worried about messing up.)

picture millions of humans kept in the equivalent for 100% of their adult lives, and suppose with some work we could free them: would you stick to your longtermist guns?

What is "longtermism" here? Only working on interventions aimed at helping future generations? 

If we take longtermism to mean "not discounting future lives just because they're in the future", then it seems perfectly consistent to allocate some resources to alleviating suffering of currently-living people.

I don’t think this is that informative or substantive to the issue. Longtermism says that we should value/consider generations distant in the future no differently than beings who are physically far away or of a different species.

In most calculations/beliefs/discounting used in Longtermism, this is plausibly much larger than the consideration of current generations.

(The expression of this attitude is not fanatical or unreasonable; longtermists generally take normal virtuous actions, in the same way that neartermist EAs do not loot or pillage to donate to AMF).

Longtermists probably view the current resources dedicated to these causes as small (limited to current EA monies), and while that’s technically true, it’s unlikely longtermists will find the statement that “things exist now to spend money on” convincing in isolation.

In most calculations/beliefs/discounting used in Longtermism, this is plausibly much larger than the consideration of current generations.

Accounting for importance, tractability (and diminishing marginal returns), and crowdedness, I'm skeptical that every single dollar has a higher marginal utility per dollar (MU/$) when allocated to longtermist interventions (compared to other causes). More plausibly, longtermist interventions start out with the highest MU/$, but as they are funded and hit diminishing returns, other causes come to have the highest MU/$ and the marginal funding dollar is optimally directed to them.

Would you completely defund animal welfare causes and redirect their funding/workers to longtermist interventions?

Would you completely defund animal welfare causes and redirect their funding/workers to longtermist interventions?

I'm not sure that was the claim in the parent comment you responded to.

 

To see this another way, using an entirely adversarial framework, if that is easier to communicate: 

If you take the worldview/mindset/logic in your comment (or the one being formalized and maybe codified in your posts) and move on to a duel (on the spreadsheets) with certain perspectives in longtermism, there is a danger of "losing" heavily, if a person just relies on "numbers" or "marginal utility" per dollar.

 

Sort of with that background in mind, I thought the parent comment was helpful for giving perspective.

duel (on the spreadsheets) with certain perspectives in longtermism, there is a danger of "losing" heavily, if a person just relies on "numbers" or "marginal utility" per dollar.

Can you elaborate? What do you mean by "losing"? Isn't the case for longtermism that longtermist interventions have the highest combination of importance, tractability, and (lack of) crowdedness (i.e. the highest MU/$)?

No, neither is acceptable: longtermism notwithstanding, we should allocate significant resources to combating both.

Note that in practice, the EA community does allocate resources across a portfolio of causes.

The post implies a large trade off between animal welfare and other cause areas.

It seems crucial to consider the actual trade off in energy/resources/attention and it seems plausible the trade off isn’t large at all. It’s important to explain this, but it’s difficult to communicate the underlying situation, for very compelling reasons.

I think some elaboration of the current state of farm animal welfare in EA is important to show why this tradeoff might not be large.

This elaboration or further explanation of the tradeoff won’t appear in this comment. (Someone else can/should do this but I know someone who can give some insight at some point).

Some considerations, not directly related:

If there needs to be (careful, proportionate) adjustment of the political climate of animal activism in certain circles, so that the resulting activity is collegial to EA, that is possible and may not be overly costly. This should be acceptable to altruistic animal activists if it provides EA resources and results in impactful activity.

I don’t think “flow through” effects is a good argument in general but it’s worth noting that farm animal welfare was a cause that brought on some of the strongest longtermists today.

Thanks for writing this! I like the analogy to humans. I did something like this recently with respect to dietary choice. My thought experiment specified that these humans had to be mentally challenged so that they have similar capacities for welfare to non-human animals, which isn’t something you have done here but I think is probably important. I do note that you have been conservative in terms of the number of humans, however.

Your analogy has given me pause for thought!

There's a crude inductive argument that the future will always outweigh the present, in which case we could end up like Aesop's miser, always saving for the future until eventually we die.

I would just note that, if this happens, we’ve done longtermism very badly. Remember longtermism is (usually) motivated by maximising expected undiscounted welfare over the rest of time.

Right now, longtermists think they are improving the far future in expectation. When we actually get to this far future it should (in expectation) be better than it otherwise would have been if we hadn’t done what we’re doing now i.e. reducing existential risk. So yes longtermists may always think about the future rather than the present (until the future is no longer vast in expectation), but that doesn’t mean we will never reap the gains of having done so.

EDIT: I may have misunderstood your point here

Yeah, this wasn't my strongest/most serious argument here.  See my response to @Lukas_Gloor.

Point D sounds like a fair worry, but it can be avoided just by thinking carefully at each step (it only applies to very naive implementations). And you mention other counter considerations yourself. Some more thoughts in reply:

  • If we don't get longtermism right, we'll no longer be in a position to deliberately affect the course of the future (accordingly, "future neartermists" won't be in a position to do any good, either)
    • Even worse, if we get things especially wrong, we might accidentally lock in unusually bad futures
  • If we get longtermism right, we'd use the transition to TAI to gain better control over the future so that we no longer live in a state where the world is metaphorically burning (in other words, future near-termists won't be as important anymore)
  • Intermediate states where things continue as they are now (people can affect things but don't have sufficient control to get the world they want) seem unstable.

The last bullet point seems right to me because technological progress increases the damage of things going wrong and it "accelerates history" – the combination of these factors leads to massive instability. Technological progress also improves our potential reach for attaining control over things and making them stable, up to the point where someone messes it up irreversibly.

I'm pessimistic about attaining the high degrees of control it would require to make the future go really well. In my view, one argument for focusing on ongoing animal suffering is "maybe the long-term future will be out of our control eventually no matter what we do." (This point applies especially to people whose comparative advantage might be near-term suffering reduction.) However, other people are more optimistic.

Point E. seems true and important, but some of the texts you cite seem one-sided to me. (Here's a counter consideration I rarely see mentioned in these texts; it relates to what I said in reply to your point D.)

The other arguments/points you make sound like "longtermists might be biased/rationalizing/speciesists." 

I wonder where that's coming from? I think it would be more potentially persuasive to focus on direct reasons why reducing animal suffering is a good opportunity for impact.  We all might be biased in various ways, so appeals to biases/irrationality rarely do much. (Also, I don't think there's "one best cause" so different people will care about different causes depending on their moral views.) 

I don't take point D that seriously.  Aesop's miser is worth keeping in mind; the "longevity researcher eating junk every day" is maybe a more relatable analogy.  I'm ambivalent on hinginess because I think the future may remain wide-open and high-stakes for centuries to come, but I'm no expert on that.  But anyway I think A, B and E are stronger.

Yeah, "Longtermists might be biased" pretty much sums it up.  Do you not find examining/becoming more self-aware of biases constructive?  To me it's pretty central to cause prioritization, drowning children, rationalism, longtermism itself...  Couldn't we see cause prioritization as peeling away our biases one by one?  But yes, it would be reasonable to accompany "Here's why we might be biased against nonhumans" with "Here are some object-level arguments that animal suffering deserves attention."
