
In this piece, I present a short, pragmatic, EA-targeted case for the complete abolition of animal exploitation, and for using abolitionist approaches to achieve it. I show that (1) a longtermist perspective leads one to aim for complete abolition as a goal and, given one key assumption, to use abolitionist approaches to get there, and (2) contrary to prior work, abolition would help efforts to reduce wild animal suffering (and, conversely, welfarism could hinder them).

Preface

This post is part of a series: Abolitionist in the Streets, Pragmatist in the Sheets: New Ideas for Effective Animal Advocacy. The series' main claim is that (broadly speaking) animal advocacy within Effective Altruism is uniformly welfarist in its thinking and approach, and that it has assumed, with insufficient reason, that all abolitionist thinking and approaches are ineffective.

I (a pragmatic, abolitionist-leaning animal advocate and vegan) wrote this series voluntarily, in rare spare time, over the course of 14 months, with help from three other vegans, all of whom have been familiar with EA for a few years. It is intended to be a big-picture piece, surfacing and investigating common beliefs within EA animal advocacy. Necessarily, I deal with generalisations of views, which will not cover all variations, organisations, or advocates.

Introduction

To my knowledge, this is only the second time such a case has (independently[1]) been made. Prior work[2][3][4][5][6] has (also independently) argued for the importance of farmed animal suffering to longtermists.

(Weak) Longtermism

The scale of factory farming (and its potentially exponentially larger scale in several possible futures) is so huge that its continuation constitutes an s-risk. If welfarist approaches are subject to diminishing marginal returns (see our section on Corporate Welfare Campaigns) and abolitionist approaches are not (see our section on Unexplored Abolitionist Approaches), then that should push advocates very strongly towards abolitionist approaches: even if tractability is low, the scale is so large that it remains a crucial problem.
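
To make the scale-versus-tractability logic concrete, here is a toy sketch. All numbers are invented purely for illustration, and the function is a hypothetical simplification, not a real prioritisation model:

```python
# Toy illustration of the scale/tractability argument (all numbers invented).
# Marginal impact per unit of effort ~ scale * tractability, shrinking each
# round by `returns_decay` (1.0 = constant returns, <1.0 = diminishing).

def cumulative_impact(scale, tractability, returns_decay, effort_units):
    impact, marginal = 0.0, scale * tractability
    for _ in range(effort_units):
        impact += marginal
        marginal *= returns_decay
    return impact

# Welfarist: modest scale, high tractability, strongly diminishing returns.
welfarist = cumulative_impact(scale=1, tractability=0.5,
                              returns_decay=0.5, effort_units=20)

# Abolitionist: vastly larger scale (all future farmed animals), low
# tractability, but returns that do not diminish.
abolitionist = cumulative_impact(scale=100, tractability=0.02,
                                 returns_decay=1.0, effort_units=20)

print(f"welfarist: {welfarist:.2f}")        # ~1.00: gains flatten out quickly
print(f"abolitionist: {abolitionist:.2f}")  # 40.00, and still growing linearly
```

Under these invented numbers, the lower-tractability approach dominates once effort accumulates; that is the shape of the argument above.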

If welfarist approaches lead only to a reduction of (or higher-welfare) animal farming, there is a risk that we get stuck at a point where a large part of the population perceives animal agriculture as ‘humane’, and/or still holds on to a speciesist worldview, and does not have the motivation to abolish animal exploitation entirely. Consequently, focusing on welfare improvements could end factory farming sooner, but delay abolition for a very long time, ultimately affecting more individuals. The gravity of such a risk is then compounded by considerations of wild animal suffering.

In contrast, as well as hastening or helping bring about abolition, abolitionist approaches may be better suited to preventing factory farming from re-emerging once abolished. I suspect that in emergency situations (say, barely surviving an existential catastrophe), a strong moral culture against exploiting animals would make it less likely for society to resume animal exploitation, especially factory farming.

Wild Animal Suffering (WAS)

It is well established that suffering in the wild, on almost any reasonable assumptions, dominates all other forms of suffering, so from a longtermist perspective WAS is more important than farmed animal suffering. However, WAS is beyond the limits of our current moral circle and social acceptance. Furthermore, there is psychological evidence[7][8] that eating meat reduces empathy for animals. It may be that giving up animal products in general, especially when accompanied by the adoption of a non-speciesist worldview, will have greater psychological effects in terms of extending empathy to wild animals. Hence, it may become far easier to attract support for efforts to reduce WAS if abolition is achieved. While strict veganism might not be necessary for this, it seems intuitively unlikely that eating only high-welfare animals would have the same psychological effect as giving up meat entirely. In other words, even though tackling WAS may not depend physically on abolishing all animal farming, it plausibly depends on it psychologically - a claim which can be tested empirically.

Such thinking conflicts with a view put forward by Brian Tomasik, who has argued that when we take WAS into account, certain types of animal agriculture, like grass-fed beef production, might be net positive in terms of total animal suffering, because they reduce the number of wild animals in such areas. However, Tomasik’s argument relies on the claim that most wild animals live ‘net negative’ lives, a claim which has been disputed[9][10]. Above all, the argument also presents a false dichotomy between natural ecosystems and cattle farming; it would also be possible to have pastures without raising animals for slaughter, or simply to have humans live on the land.

While a focus on WAS and farmed animal welfare can be seen to conflict, a focus on non-speciesism implicitly supports both. However, taking non-speciesism seriously points towards abolitionism. Advocating for welfare improvements arguably perpetuates speciesist ideas (that animal interests or rights matter less), and thereby undermines, or at least does not support, efforts to reduce WAS.

Learning Tactics for Moral Circle Expansion

Lastly, I note that while waiting for (or in the absence of) a techno-fix, researchers have a good opportunity to study and collect data on moral circle expansion. Such information might be quite useful preparation for advocating for the inclusion of robots and/or aliens in our moral circles, reducing another type of s-risk. A recently published (Dec 2022) Sentience Institute blog post arrived at a similar conclusion independently, and goes into more detail about potential counter-arguments too.

“Waiting for technology to end animal farming may set a dangerous precedent for scenarios where technology cannot solve moral problems as quickly as social change or cannot solve moral problems at all. Socially driven trajectories seem to have better outcomes for spillover into attitudes towards future farmed animals, wild animals, and artificial sentience because they would set better historical precedent for human morality.”

Comments



On the part about longtermism, Tobias Leenaert from ProVeg seems to think more along the lines that behaviour change (for whatever reason: health, environment, or the cheaper relative price of plants due to welfare reforms for animals) will make attitude change for animals and wild animals much easier[1].
This makes me think that they would not agree with what you said about "focusing on welfare improvements could end factory farming sooner, but delay abolition for a very long time", but would rather think there would be welfare improvements sooner and abolition sooner. What are your thoughts on that idea?

I am in agreement, though, that attitude change for animals is most important in the long term.

  1. ^

My thoughts are that it's sensible (and an important component) but insufficient by itself.

Thanks for writing this, Dhruv.

The scale of factory farming (and its potentially exponentially larger scale in several possible futures) is so huge that its continuation constitutes an s-risk.

Note that all farmed animals excluding arthropods have only about 3% of the neurons of all humans (see here). So, if neurons are a decent proxy for moral weight, human welfare may dominate. However, as argued by Adam Shriver here, neurons do not account for all relevant factors. All in all, I think the 3% figure is an underestimate.

Another point is that ending factory farming is only good to the extent that the lives of factory-farmed animals are bad. I believe this is true now, but welfarist approaches may ultimately lead to net positive lives in the future.

It is well-established that suffering in the wild, on all expectations, dominates all other forms of suffering, and so from a longtermist perspective WAS is more important than farmed animal suffering.

Although there is lots of uncertainty, I agree the total moral weight of wild animals dominates. All marine arthropods have 50,000 times as many neurons as all humans (see here). However, this is from a neartermist perspective. In the long term, I expect the number of humans (or digital minds) to continue to increase relative to the number of wild animals. This has been the case over the last 300,000 years, so we can expect the importance of human welfare to increase relative to that of wild animal welfare. Of course, this does not mean wild animal welfare should be ignored; I actually think it is underrated.
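
For concreteness, here is a minimal sketch of the arithmetic behind these neuron comparisons, with rough order-of-magnitude figures made up for illustration (not the estimates linked above):

```python
# Minimal sketch of the "neurons as a proxy for moral weight" arithmetic.
# Every figure below is a rough, order-of-magnitude assumption for
# illustration only - not the estimates linked above.

HUMAN_POPULATION = 8e9
NEURONS_PER_HUMAN = 8.6e10                            # widely cited ~86 billion
human_total = HUMAN_POPULATION * NEURONS_PER_HUMAN    # ~6.9e20 neurons

groups = {
    # group: (individuals alive at any time, neurons per individual)
    "farmed chickens": (2.5e10, 2.2e8),
    "farmed pigs": (1e9, 4.3e8),
    "marine arthropods": (1e21, 1e4),  # e.g. copepods; highly uncertain
}

for name, (count, neurons_each) in groups.items():
    total = count * neurons_each
    print(f"{name}: {total:.1e} neurons = {total / human_total:.2g}x all humans")
```

The point of the proxy is just this multiplication: enormous populations of tiny-brained animals can swamp the human total even when each individual counts for very little.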

Advocating for welfare improvements arguably perpetuates speciesist ideas (that animal interests or rights matter less), and thereby undermines, or at least does not support, efforts to reduce WAS.

I am not sure about this, and guess it may depend on the magnitude of the improvement. If it is large enough to imply net positive lives in the improved conditions, welfarist approaches would be more likely to be robustly good. For example, laying hens arguably have negative lives in both conventional cages and cage-free aviaries (see here), so pushing for not eating eggs (in which case hens would not exist, and therefore have null welfare) would tend to be better than pushing for cage-free aviaries. However, transitioning to cage-free aviaries is much easier, and could also increase the likelihood of a future transition to net positive conditions (maybe free range hens).

“Waiting for technology to end animal farming may set a dangerous precedent for scenarios where technology cannot solve moral problems as quickly as social change or cannot solve moral problems at all. Socially driven trajectories seem to have better outcomes for spillover into attitudes towards future farmed animals, wild animals, and artificial sentience because they would set better historical precedent for human morality.”

I tend to agree. In addition, it seems unlikely that having factory-farmed animals with net positive lives is an efficient way to produce welfare, but I do not know.

What do you think of the risk of re-emergence and the psychological argument (linked in the post) by Jeff Sebo? I believe they outweigh the benefits of any potential net-positive high-welfare farming (if one thinks non-existence is comparable to, and neutral with respect to, negative/positive existence).

And yes, I mentioned a slightly different take on your last point when I pointed out Tomasik's false dichotomy (either not slaughtering animals, or putting those resources to better use by having happy humans live on the land instead).

I have now watched Jeff's talk.

If I understood correctly, the argument is that eating animals can lead people to disregard the welfare of animals. I agree this is currently the case: as most farmed animals have net negative lives, disregarding their welfare is useful for avoiding cognitive dissonance.

However, if people started eating animals with net positive lives out of concerns about animal welfare, I would expect animal welfare to remain in people's minds. I am also unsure about whether there is a conflict between animal rights and eating high welfare animals. If these had super good lives, and were killed without any pain (this could even occur at the end of their healthy lives, in which case the killing would actually be preventing their suffering, like euthanasia), I guess no rights would be violated.

Humans have a right to life, but whenever a human is born, they are being sentenced to death (inasmuch as we think the lifespan of the universe is finite). This is still fine as long as the human has a good life, so I would guess the same applies to animals.

That being said, I am open to abolitionist approaches being more effective than welfarist ones. I do not think it is obvious either way.

"so pushing for not eating eggs (in which case hens would not exist, and therefore have null welfare) would tend to be better than pushing for cage-free aviaries."

I often hear this argument: animal X would not exist if they were not intensively farmed for human products. But why wouldn't they exist? I think they would exist, but in much smaller, healthier numbers, and their genetics would be able to recover slowly. Many people love animals and would keep them, just as many keep cats and dogs. They can be good for the land, etc., as well. There are also many vegan farm animal sanctuaries that would keep them. Post-farming, they would only stop existing over time if breeding was strictly outlawed or they were outright banned. The same goes for many other intensively farmed animals. Some vets thought horses would go extinct when the automobile was first mass-produced.

Hi Brendon,

In that sentence, I just meant to point out that not existing is better than existing in negative conditions. I agree the animals which are currently factory-farmed could continue to exist in better conditions.
