
[Edit 2024-11-05: my views have changed quite a lot since I wrote this. See here.]

This is the story of how I came to see Wild Animal Welfare (WAW) as a less promising cause than I did initially. I summarise three articles I wrote on WAW: ‘Why it’s difficult to find cost-effective WAW interventions we could do now’, ‘Lobbying governments to improve WAW’, and ‘WAW in the far future’. I then draw some more general conclusions. The articles assume some familiarity with WAW ideas. See here or here for an intro to WAW ideas.

My initial opinion

My first exposure to EA was reading Brian Tomasik’s articles about WAW. I couldn’t believe that despite constantly watching nature documentaries, I had never realized that all this natural suffering is a problem we could try solving. When I became familiar with other EA ideas, I still saw WAW as by far the most promising non-longtermist cause. I thought that EA individuals and organizations continued to focus most of the funding and work on farmed animals because of the status quo bias, risk-aversion, failure to appreciate the scale of WAW issues, misconceptions about WAW, and because they didn’t care about small animals despite evidence that they could be sentient.

There seem to be no cost-effective interventions to pursue now

In 2021, I was given the task of finding a cost-effective WAW intervention that could be pursued in the next few years. I was surprised by how difficult it was to come up with promising WAW interventions. Also, most ideas were very difficult to evaluate and their impacts were highly uncertain. To my surprise, most WAW researchers that I talked to agreed that we’re unlikely to find WAW interventions that could be as cost-effective as farmed animal welfare interventions within the next few years. It’s just much easier to change conditions and observe consequences for farmed animals because their genetics and environment are controlled by humans. I ended up spending most of my time evaluating interventions to reduce aquatic noise. While I think this is promising compared to other WAW interventions I considered, there are quite a few farmed animal interventions that I would prioritize over reducing aquatic noise. I still think there is about a 20% chance that someone will find a direct WAW intervention in the next ten years that is more promising than the marginal farmed animal welfare intervention at the current funding level.

I discuss direct short-term WAW interventions in more detail here.

Influencing governments

Some WAW advocates promote research on WAW in academia. For some of them, the aim is twofold: to identify effective interventions and to establish WAW as a legitimate field of study. The hope is that by gaining greater legitimacy, WAW advocates can influence government policy. For example, governments could control wild populations more humanely, vaccinate animals against some diseases, and eradicate some parasites.

I am somewhat skeptical of this because:

  • The argument for the importance of WAW rests on the enormous numbers of small wild animals. It’s difficult to imagine politicians and voters wanting to spend taxpayer money on improving wild fish or insect welfare, especially in a scope-sensitive way. But it could have been similarly difficult to imagine governments funding species conservation efforts until it happened.
  • The consequences on the welfare of all affected wild animals seem nearly impossible to determine, even with a lot of research. Also, research in one ecosystem might not generalize to other ecosystems.
    • However, this is the same concern about cluelessness that applies to all causes. That is, all interventions have complicated indirect effects that are impossible to predict. To me, cluelessness seems a bigger problem in WAW because first-order effects are usually dwarfed by second- and third-order effects. For example, vaccinations may increase the population of that species, which could be bad if their lives are still full of suffering. Also, when the population of one species changes, it changes the populations of other species too. But overall, I’m confused about cluelessness.
  • Even if we determine consequences, people with different moral views might disagree on which consequences they prefer. For example, people may disagree on how to weigh the welfare of different animal species, happiness versus suffering, short and intense suffering versus chronic but less intense suffering, etc. This may eventually divide the WAW movement into many camps and hurt overall efforts.

See here for further discussion of the goals of lobbying governments to improve WAW, and obstacles to doing this.

Long-term future

Others have argued that what matters most in WAW is moral circle expansion and the effect we may have on the far future. But what exactly do we want to achieve in the far future with our current WAW work? In this article, I listed all the far-future scenarios where WAW seemed very important. The most important ones included scenarios where wildlife is spread beyond Earth. For example, we might develop an aligned transformative AI and the humans in charge might want to colonize space with biological human-like beings and animals, rather than machines. In that case, we could end up with quadrillions of animals suffering on billions of planets for billions of years. Compared to that, WAW interventions on Earth seem much less important.

However, to me, WAW doesn’t seem to be the most important thing for the far future - not even close. Digital minds could be much more efficient, thrive in environments where biological beings can’t, utilize more resources, and seem more likely to exist in huge numbers. Hence, some other longtermist work seems much more promising to me than longtermist animal welfare work.

If you think that the future is likely to be good, then I think that reducing x-risks is much much more promising. If you are a negative utilitarian (i.e., you only care about reducing suffering) or you are pessimistic about the future, you may want to prioritize work that aims to reduce the potential suffering of future digital minds instead (for example, the work of organizations like the Center on Long-Term Risk). The tractability of trying to reduce digital mind suffering might be even lower than for longtermist animal welfare work, but the scale is much much higher. I think that there may be some worthwhile things to do in the intersection of longtermism and animal welfare but I don’t think that it should become a major focus for EA.

I discuss WAW and the far future in more detail here.

Overall opinion

After looking into these topics, I now tentatively think that WAW is not a very promising EA cause because:

  • In the short-term (the next ten years), WAW interventions we could pursue to help wild animals now seem less cost-effective than farmed animal interventions.
  • In the medium-term (10-300 years), trying to influence governments to do WAW work seems similarly speculative to other longtermist work but far less important.
  • In the long-term, WAW seems important but not nearly as important as preventing x-risks and perhaps some other work.

All that said, I’m unsure how seriously my opinions should be taken because:

  • I don’t have an ecology/biology/conservation background to competently evaluate direct short-term WAW interventions,
  • I don’t know enough about the history of social movements to evaluate how likely WAW is to succeed as a social movement, and
  • I’m not very knowledgeable about longtermism.

Hence, I see my articles on WAW as the start of a conversation, not the end of it.

Despite my concerns, if I were in charge of all EA funding, I still wouldn’t set WAW funding to zero. Since it’s very difficult to predict which interventions will be important in the future, I think it makes sense to try many different approaches. I still believe that WAW is promising enough to do some further research and movement building. For example, even though I think that corporate farmed animal welfare campaigns are very cost-effective, I would not choose to drop all WAW funding in order to fund even more corporate campaigns, because WAW work could open an entirely new world of possibilities. We won't know what's there unless we try.

However, I wouldn’t spend much more money on WAW than EA is currently spending either. My subjective probability that the WAW movement will take off with $8 million per year of funding is not that much higher than the probability that it will take off with $2 million per year of funding, as the movement’s success probably mostly depends on factors other than funding. But with $2 million, the probability would be much higher than with $0 (I’m using somewhat random numbers here to make the point). And ideally, the money that we do spend on WAW would be used to fund people with different visions about WAW to try multiple different approaches so that we could see which approaches work best. I see some of this happening now, so I mostly support the status quo. Of course, my opinion on how much funding WAW should receive might change upon seeing concrete funding proposals.

[EDIT 2023-02-21: I criticized the version of the WAW movement I saw being pursued by organizations. To my knowledge, no organization currently works on WAW by trying to help microorganisms, or decrease wild animal populations (which perhaps could be done in relatively uncontroversial ways). I simply don’t have an opinion about a WAW movement that would focus on such things. There were some restrictions on the kinds of short-term interventions I could recommend in my intervention search. Interventions that would help microbes or help wild animals by reducing their populations simply didn’t qualify. Thank you to the commenters who made me realize this.]

Opinions expressed here are solely my own. I’m not currently employed by any organization.

Comments

My views on WAW have changed quite a lot since I wrote this. I think there are things within WAW that could be very promising. I hope to write more about that in the future. 

How soon in the future? :P

Could you share a list of things now, without elaborating?

Sure (^-^) I'll do it in comments below. Note that these are a little more than shower thoughts. I'd love some discussion and back and forth on these. Perhaps I will write a post with conclusions after these discussions.

I haven’t examined screwworm eradication in detail. Someone told me that gene drives are politically infeasible. People working on it told me that it’s totally feasible. ¯\_(ツ)_/¯, political feasibility is not something I can evaluate. The cost-effectiveness in the linked article seems a lot more conservative than my estimates.

If the screwworm eradication intervention is promising, then maybe there are other promising WAW interventions. Yes, so far the experience of researchers has been that it’s more difficult to find cost-effective WAW interventions compared to farmed animal interventions. This is partly because it’s so difficult to think about the indirect effects of WAW. But someone told me “unknown unknowns cancel each other out.” In other words, maybe we don’t need to think about third-order effects because they might be canceled out by fourth-order effects, and so on. I feel very confused about this; I’d like to think more about it at some point.

Also, perhaps if we find WAW interventions, they might have a bigger scale than typical farmed animal welfare interventions. So maybe searching for farmed animal welfare interventions is easier and more immediately rewarding, but it’s still just as worthwhile to search for WAW interventions.

I would not want to ignore higher-order effects, and would rather try to bound their expected values, do sensitivity analysis, consider what we do at the level of portfolios of interventions instead of just interventions in isolation, and hedge.

Spreading the idea/meme that we should care about wild animals seems potentially very important. We could have AGI that might be able to do magic-like stuff soon. Or at least unprecedented AI-fuelled economic growth. It seems possible that this would create a situation of abundance, where problems like poverty and climate change are fully solved. If the values of society remain as they are, a lot of resources might be used for conservation, species preservation, and so on, with almost no care for the welfare of individual animals. Wildlife could also be spread to other planets with little or no thought given to the vast amount of suffering it would create. All of this seems a bit less likely to happen if we just try to spread the idea of wild animal welfare more. I’d be excited to see things like documentaries for mainstream audiences about WAW. Humane Hancock mentioned a plan for a WAW documentary, and I’m excited about it.

There may or may not be even more cost-effective things to do for the far future, like reducing x-risks and thinking about how to help digital minds. But that doesn’t mean that spreading the idea of WAW is not worthwhile. I don’t think that x-risk and digital mind stuff would get significantly less funding or talent if someone also worked on spreading the idea of WAW. So perhaps there’s not much point in comparing the two :)

On the other hand, spreading awareness of Wild Animal Welfare ideas could lead to even more polarization. In the U.S., for example, this idea could potentially resonate with some liberals but could easily become a target for ridicule in conservative media. It’s the kind of concept that could be framed as an example of 'extreme' liberal values, fueling outrage and reinforcing the perception that progressive causes are becoming increasingly detached from reality.

Hmm, ya, I could buy that more WAW support could help prevent some policies and other work that's bad for wild animal welfare, and perhaps most importantly space colonization with wild animals (or with little regard for their welfare).

I'm skeptical that WAW support would actually lead to actively intervening in the wild for wild animal welfare at a large scale, through things like gene drives, engineering ecosystems or eliminating species or reducing their populations, given the values I expect people to continue to hold. People might do these things in some cases for perceived human benefits, like screwworm eradication and some wild animal vaccines. Or adjust how we treat wild animals we're already dealing with, especially how we manage their populations.

I think that these interventions by Brian Tomasik could be promising, though I haven't examined them in detail. They’d reduce insect numbers by doing things like opposing irrigation subsidies using environmental and economic arguments. It’s unclear to me if insects live net negative lives, but this makes sense for negative utilitarians, or if you think there’s a >50% chance that they live net negative lives and you’re ok with the uncertainty. We discussed this in these comments, where we worried about PR risks because our true motivations would be different from the stated ones. But I now know multiple other organizations that do similar stuff without any problems.

No problems so far! The more such risks we take, the more likely one is to be realized. And PR risk could blow back against all of effective altruism by association and do long-lasting damage to EA and its perception, if the work is not adequately distanced from EA. The downside is not limited to the organization itself.

We might not want these kinds of interventions to be funded by the biggest EA/EAA grantmakers, or at least for these grants to be reported publicly. We might also not want them to give talks or have career booths at EAGs or animal advocacy conferences (but they could still attend and fundraise at them).

Maybe donors could coordinate so that some private donors who are not too prominent in EA take on most of the funding.

And all of this leads to worse transparency, which can mean less scrutiny of the work and the rationale behind it, increasing the risk that the work is ineffective or net negative. You can also get into unilateralist curse territory.

 

I'm not saying it couldn't be worth it anyway, maybe with some mitigating measures. But it's worth keeping all of this in mind.

I'm not sure if I agree. The worst-case scenario seems like an article titled, 'Organization Opposes Irrigation Subsidies Due to Insect Harm, Not Environmental Impact.' Realistically, would that provoke much anger? It might just come off as quirky or amusing rather than headline material. Often, lobbying arguments don’t fully reveal the underlying motivations. I think it's common for people and companies to lobby for policies that benefit them financially while framing them as sustainable or taxpayer-friendly.

What about an article along the lines of "Effective Altruists are trying to reduce insect populations"?

Hmm, yes, that is a scarier headline. But I think that as long as we do it in ways that are also good from a sustainability point of view, we would look really benign. Like we do a thing that many people agree is good, for an unusual reason. There are definitely much more outrageous-sounding scandals going around all the time.

Great post!

As comments by Max and Vasco hint at, I think it might still be the case that considering effects on wild animals is essential when evaluating any short-termist intervention (including those for farmed animals and human welfare). For example, I remain uncertain whether vegetarianism increases or decreases total suffering because of wild-animal side effects, mainly because beef may (or may not!) reduce a lot of suffering even if all other meat types increase it. (I still hope people avoid eating chicken and other small farmed animals.)

In my opinion, the most important type of WAW research is getting more clarity on big questions, like what the net impact is of cattle grazing, climate change, and crop cultivation on total invertebrate populations. These are some of the biggest impacts that humanity has on wild animals, and the answers would inform analysis of the side effects of various other interventions like meat reduction or family planning.

I haven't followed a lot of the recent WAW work, but my experience is that many other people working on WAW are less focused on these questions about how humans change total population sizes. Researchers more often think about ways to improve welfare while keeping population size constant. Those latter interventions may have more public support and are more accommodating to non-suffering-focused utilitarians who don't want to reduce the amount of happiness in nature. But as you mention, those interventions also seem more subject to cluelessness (is vaccination net good or bad considering side effects? it's super unclear) and often target big animals rather than invertebrates. From this perspective, I think efforts to improve the welfare of farmed chickens and fish may be more cost-effective (though it's still worth exploring wild-animal interventions too, as you say). Research and advocacy of less painful killing of wild-caught fish also seems extremely important, and it's unclear whether to count this as a farmed-animal or wild-animal intervention.

Apart from less painful killing of wild animals (fish, rodents, insects on crop fields), or maybe some other large-scale interventions like reducing aquatic noise, I think the cost-effectiveness of work on wild animals would come from trying to reduce (or avoid increasing) population sizes, via reducing plant productivity. Reducing the amount of plant growth in an ecosystem helps invertebrates (including mites, springtails, and nematodes, which are extremely numerous but also hard to help in ways other than preventing their existence) and is somewhat less subject to cluelessness problems because you don't have to model internal ecosystem dynamics as much -- you just have to reduce the productivity of the first trophic level. But I haven't found a lot of people who are interested in working on population-reduction interventions. This apparent lack of interest in reducing populations is one reason I've done less thinking about WAW in recent years. Another reason is that I respect the efforts of WAW organizations to put a more mainstream face on the WAW movement, and I wonder if my continuing to harp on why we should actually be focusing on reducing populations would seem counterproductive to them.

I compiled a list of possible interventions to reduce total invertebrate populations that could possibly be lobbied for at the government level in some fashion. Some of them don't seem super cost-effective, but some might be, such as trying to reduce irrigation subsidies, which is an intervention that could be argued for on other grounds as well. Taxing fertilizer and/or water use on crop fields, pastures, and/or lawns might be pretty valuable if it could be achieved. (Some local regions do have subsidies for people who reduce their lawn's water use.) If geoengineering to fertilize oceans ever happens, opposing it would be extremely important, though doing a campaign about that now might be net harmful via increasing the salience of the idea.

Thank you for your thoughtful comment, Brian. I should’ve mentioned that I think that WAW might be tractable for people who think that reducing wild animal populations is good. I don’t think that reducing populations is good because:

  1. I remain very uncertain whether wild animals experience more suffering than happiness (see this talk). I still think it’s more likely that there is more suffering due to painful deaths but not by much. This is partly because I give less weight to short but very intense pain than you do.
  2. Reducing wild animal populations usually goes against various human interests [EDIT: actually, this doesn't apply to some interventions in your list. I'm now thinking about whether any of them are promising.]
  3. It’s not what most people want. Hence, even if I did think that reducing wild populations is good, I’d be afraid that I’d change my mind in 10 years.

I worry that researching the big questions you mention might be intractable. You wrote a detailed analysis about the impact of climate change on wild animal suffering, and concluded that your “probabilities are basically 50% net good vs. 50% net bad when just considering animal suffering on Earth in the next few centuries (ignoring side effects on humanity's very long-term future).” Correct me if I’m wrong, but your analysis rests on the assumption that reducing wild populations is good. An analysis without this assumption would be vastly more difficult because it would require analyzing whether various populations would become happier (instead of just analyzing changes in population). I worry that we wouldn’t have enough confidence in that analysis to inform our decisions.

All that said, I think that attempting to research WAW impacts of vegetarianism might still be worth it, though I’m unsure.

Hi Saulius, I wonder if you have factored in your points 2 & 3 above into your view that digital beings are a priority for longtermism, and factory farming a priority for non-longtermist animal welfare. It seems that both cause areas, if taken consistently and seriously enough, would go against (organic) human interests and are not what most people want.

I imagine that few people would say that it’s actively harmful to try to decrease s-risks to digital minds (especially when it involves trying to prevent escalating conflicts, sadism, and retributivism). Most people would say it’s just a waste of money and effort. Most people agree that it’s important that animals used for food are well cared for. Not everyone votes for welfare improvements in ballot initiatives but a significant proportion of people do. And if we had infinite money, I don't think anyone would mind improving conditions for farmed animals. But if there was a ballot initiative asking “shall we actively try to decrease wild animal numbers?”, I imagine that almost everyone would passionately oppose it. I don't feel comfortable working on things most people would passionately oppose (and not just because they think it's a waste of resources, but because they think that our desired outcome is bad). It also makes it difficult to work on it as an EA cause and it could repulse some people from EA. But I have now weakened (or changed) my position on reducing populations after realizing that it doesn’t always lead to more environmental issues (see this comment). Also, people might not mind if we are only decreasing populations of tiny animals.

Hi again Brian. I agree that your vision for the WAW movement is different from what WAW organizations are currently doing. I criticized the latter and don’t have a strong opinion on your vision of focusing on very small animals and reducing populations. I said that I don’t want to reduce populations partly because that usually includes reducing plant productivity which in turn causes more climate change, which might increase s-risks, x-risks, poverty, etc. But perhaps some interventions in your list could reduce populations without causing more environmental issues. I hadn’t considered them because they didn't qualify for my WAW intervention search, and I had forgotten about them. 

I’m unsure how one would go about lobbying for these things. I’d be a bit afraid of PR risks too. Imagine a farmer lobby figuring out that the people funding lobbying against their irrigation subsidies are weirdos from Effective Altruism who are worried about small invertebrate suffering. That could cause some bad press for EA. I also think that the few potential WAW funders I talked to wouldn’t have funded such interventions, but there could be other funders.

(Sorry for being slow to return here!)

Yeah, I think some ways of reducing plant growth are often supported by environmentalists, including

  • less growing of crops in dry areas requiring irrigation (and instead growing more crops in regions where rain provides more of the water)
  • less irrigation of pastures and lawns
  • fewer artificial fertilizers
  • less nutrient pollution into water bodies
  • lowering atmospheric CO2 concentrations, which reduces the "CO2 fertilization effect" (though as you note, the overall impact of climate change on wild-animal suffering is unclear)
  • not genetically engineering plants to have higher yields.

Some other activities like encouraging palm-oil production (which destroys rainforests) are bad for the environment but may reduce poverty. (I should note that I'm unsure about the net impact of palm-oil production for wild-animal suffering.)

I agree that the question of how to lobby for these things without seeming like weirdos is tricky. It would be easier if society cared more about wild animals from a suffering-focused perspective, which can be one argument for starting with philosophical advocacy regarding those topics, though it seems unlikely that concern for wild-animal welfare or suffering-focused ethics will ever become mainstream (apart from weak forms of these things, like caring about charismatic megafauna or Buddhist philosophy about suffering). These philosophical views would also help for various far-future scenarios. But from the standpoint of trying to reduce some short-term suffering, especially if we worry about cluelessness for longer-term efforts, then this approach of doing philosophical advocacy would be too slow and indirect (except insofar as it contributes to movement building, leading some other people to pursue more concrete interventions).

So overall I may agree with you that for short-term, concrete impact, we should plausibly focus on things like stunning of wild-caught fish and so on. This is why I feel a lot of fuzzies about the Humane Slaughter Association and related efforts. That said, it does seem worth pondering more whether there are ways to direct money toward opposing irrigation subsidies and the like.

After re-reading this article, I noticed that my original summary about influencing governments was very uncharitable and heavily overstated my skepticism. I apologize for that. I have now rewritten that summary to reflect what I wrote in my article about influencing governments. You can see the original summary here.

In the short-term (the next ten years), WAW interventions we could pursue to help wild animals now seem less cost-effective than farmed animal interventions.

Out of curiosity: When making claims like this, are you referring to the cost-effectiveness of farmed animal interventions when only considering the impacts on farmed animals? Or do you think this claim still holds if you also consider the indirect effects of farmed animal interventions on wild animals?


(Sorry if you say this somewhere and I missed it.)

Ok, let’s consider this for each type of farmed animal welfare intervention:

  • Humane slaughter of farmed animals and wild-caught fish. I’m guessing that it doesn’t impact WAW that much.
  • Reducing animal product production. E.g., diet change advocacy, meat alternatives. Such interventions increase wild populations a lot. If you believe that wild animals live bad lives (which is questionable but I’d give it a 65% probability), then it follows that reducing meat production is likely bad for short-term animal welfare. I personally still think that reducing meat production is good but not for short-term animal welfare reasons. For example, it reduces climate change. According to 80,000 Hours, climate change could "destabilise society, destroy ecosystems, put millions into poverty, and worsen other existential threats such as engineered pandemics, risks from AI, or nuclear war." But I probably wouldn’t myself fund reducing animal product production, at least not with money dedicated to animal welfare, partly due to WAW issues.
  • Improving farmed animal welfare. Raising higher-welfare animals usually requires more resources and more land.[1] So it probably decreases wild animal populations (which is maybe good for WAW?) but causes a bit more of those environmental problems. But I’d say that if environmental costs are ever worth it, this is the case (especially for chickens because they don’t require that many resources per individual either way). I’m not going to sit here with my radiator on and say that chickens should continue suffering bone fractures and not being able to extend their wings because of the small environmental cost.

It seems I hadn’t considered this enough, so thank you very much for the question :)

  1. ^

     For example, the National Chicken Council argues that slower-growing (and hence higher-welfare) broiler breeds will have higher environmental costs: more feed, fuel, land, and water will be needed (I think it’s a biased source but the general conclusion makes sense). Similarly, according to Xin et al. (2011), “hens in noncage houses are less efficient in resource (feed, energy, and land) utilization, leading to a greater carbon footprint.” (I adapted this text from this post of mine).

Thanks for pointing this out, Max!

Based on this, I think it is plausible that the nearterm effects of any intervention are driven by the effects on wild animals, namely arthropods and nematodes. For example, in the context of global health and development (see here):

I think GiveWell’s top charities may be anything from very harmful to very beneficial accounting for the effects on terrestrial arthropods.

If this is so, the expected nearterm effects of neartermist interventions (including ones attempting to improve the welfare of farmed animals) are also quite uncertain, in the sense they can easily be positive or negative. I still expect neartermist interventions to be positive due to their longterm effects. However, expecting them to be better than longtermist ones would be a surprising and suspicious convergence.

Great question. Yes, I think the claim still holds. It’s a bit tricky to explain why, so you will have to stick with me. Let’s assume that:

  • Chicken welfare reforms are the most cost-effective intervention we found if we only consider the direct impact on chickens,
  • The indirect impacts of these welfare reforms on WAW are so bad that they outweigh the impact on chickens,
  • Each $1 we spend to oppose welfare reforms negates $1 spent on welfare reforms.

It would follow that if we ignored the impact on chickens, then opposing welfare reforms would be the new most cost-effective intervention because of its impact on WAW. But that would be a very surprising coincidence. I’d call it surprising divergence (as opposed to surprising convergence).

But ah, I’m now realizing that there is much more to this problem. It gets a lot messier. I’ll write more about this later.

Good argument. It might not work if one maintains an act-omission distinction regarding harms to the environment. For example, imagine that

  • value to farm animals of veg outreach = 1 util/$
  • value to wild animals of veg outreach = -2 util/$ (due to reducing environmental impact).

If this were the case, we shouldn't do veg outreach. It would seem that we should try to increase environmental impact by promoting beef consumption, but maybe our act-omission distinction prevents us from wanting to do that. It could also be difficult to explain why one is promoting beef production or get other people concerned for animal welfare on board.
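To make the numbers concrete (adding one assumption of my own for illustration: that promoting beef has effects of equal magnitude but opposite sign to veg outreach):

$$\text{veg outreach: } 1 + (-2) = -1\ \text{util}/\$, \qquad \text{beef promotion: } (-1) + 2 = +1\ \text{util}/\$.$$

So the raw arithmetic would favor beef promotion, and only the act-omission distinction (or the practical worries above) blocks that conclusion.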

(This example is just for illustration. In reality, if I could push a button to increase vegetarianism in the world, I probably would.)

This is a great question. I totally missed this consideration while reading this post, but it's imperative to keep this question in mind while thinking about this topic.

Thank you, that was very interesting, Saulius. You talk a bit about comparisons with other cause areas, but I'm still not entirely sure which cause area you would personally prioritise the most right now?

Thanks for the question Denise. Probably x-risk reduction, although on some days I’d say farmed animal welfare. Farmed animal welfare charities seem to significantly help multiple animals per dollar spent. It is my understanding that global health (and other charities that help currently living humans) spend hundreds or even thousands of dollars to help one human in the same way. I think that human individuals are more important than other animals but not thousands of times.

Sometimes when I lift my eyes from spreadsheets and see the beauty and richness of life, I really don’t want all of this to end. And then I also think about how digital minds could have even richer and better experiences: they could be designed for extreme happiness in the widest sense of the word. And if only a tiny fraction of the world’s resources could be devoted to the creation of such digital minds, there could be bazillions of them thriving for billions of years. I’m not sure if we can do much to increase this possibility, maybe just spread this idea a little bit (it’s sometimes called hedonium or utilitronium). So I was thinking of switching my career to x-risk reduction if I could manage to find a way to be at least a tiny bit useful there.

But then at my last meditation retreat, I had this powerful 15-minute experience of feeling nothing but love for all beings. And then it was so clear that helping farmed animals is the most important cause, or at least the cause I personally should continue working on since I have 5 years of experience in it. It was partly because we don’t know if the future we are trying to save with x-risk reduction will contain more happiness than suffering. I don’t trust my reasoning on this topic; my opinion on the question might flip many times if I were to think about it deeply. But I know that I can help millions of animals now.

I don't know what I'll choose to do yet.

It would be amazing if you keep on working on farmed animals. Your work on it so far was extremely helpful and partially led to the creation of some cost-effective charities. The field is also extremely talent-constrained, and I want to cry whenever I hear "I was into animals but now I want to work on AI" at EA conferences. I know you can still change your mind, but I just want to say that counterfactually it seems to me that you are much more needed on the farmed animals side than you will ever be in x-risk reduction.

Hi Ula. I just somehow want to let you know that I used to work on animal welfare and I moved on to work on AI. But I didn't stop doing animal welfare, because I do AI&animals.

Beautiful answer, indeed!

I'd also strongly recommend working for farm animals: the long-term stuff is so uncertain when it comes to determining the net impact.

I second the recommendation for Saulius to continue to work on farmed animal welfare. But I disagree with the view that uncertainty alone can undermine the whole case for longtermism.

Thank you, that was a beautiful response. I'm glad I asked!

I share the experience that sometimes my personal experiences and emotions affect how I view different causes. I do think it's good to feel the impacts occasionally, though overall it leads me to be more strict about relying on spreadsheets.

Hmm, I think I ultimately rely only on my emotions. I’ve always been a proponent of “Do The Math, Then Burn The Math and Go With Your Gut”. When it comes to personal cause prioritization, the question is basically “what do I want to do with my life?” No spreadsheet will tell me an answer to that; it’s all emotions. I use spreadsheets to inform my emotions because if I didn’t, a part of me would be unhappy and would nag me to do it.

This is getting very off-topic but I’m now thinking that maybe all decisions are like that. Maybe my only goal in life is to be happy in the moment. I do altruistic things because when I don’t do them for a while, a part of me nags me to do them and that makes me less happy. I don’t eat candy constantly because I’d be unhappy in the moment before eating it (or buying it) since it might ruin my health. I think that 2+2=4 because it feels good and 2+2=3 feels bad. If you disagree with some of that (and there are probably good reasons to disagree, I partly just made that up), then you might disagree with what I said in the parent comment (the one starting with "Hmm") for the same reason.[1]

  1. ^

    [EDIT, Feb 17th: I expressed this in a confusing way. Most of what I meant is that I try to drop the "shoulds", which is what many therapists recommend. I use spreadsheets for prioritizing causes but I do it because I want to, not because I should. I felt I needed to say this probably because I misinterpreted what Denise said in a weird way because I was confused. The question of how much to trust feelings vs spreadsheets does make sense. There is something else I'm saying in this comment that I still believe but I won't get into it because it's off-topic.]

I wonder if you're Goodharting yourself (as in Goodhart's law) or oversimplifying. Your emotions reflect what you care about and serve to motivate you to act on what you care about. They're one particular way your goals (and your impressions of how satisfied/frustrated they are or will be) are aggregated, but you shouldn't forget that there are separately valuable goals there.

I wouldn't say someone can't be selfless just because they want to help others and helping others satisfies this desire or makes them happy. And I definitely wouldn't say their only goal is to be happy in the moment. They have a goal to help others, and they feel good or bad depending on how much they think they're helping others.

EDIT: Also, it could be that things might feel more right/less wrong without feeling emotionally/hedonically/affectively better. I'm not sure all of my judgements have an affective component, or one that lines up with how preferable something is.

Maybe part of the brain just wants to be happy, and other parts of the brain condition rewards of happiness on alignment with various other goals like helping others or using spreadsheets.

That was beautifully put, Saulius.

Great post! It inspired me to write this, because I worry that such posts might accidentally discourage others from working on this cause area. https://forum.effectivealtruism.org/posts/e8ZJvaiuxwQraG3yL/don-t-over-update-on-others-failures

(to be clear: I really appreciate postmortems and want more content like it!)

I worry about the same thing and it's one of the reasons why I hesitated to post this for a long time. Thank you for your comment and your post. I want to paste a comment I wrote under your post because I want people who work on WAW to read it even though it's kind of trivial:

I worry less about giving up (because that can lead to people working on more important causes) and more about working on the same cause with less passion and dedication, half-assing it. It reminds me of a LessWrong post, The correct response to uncertainty is *not* half-speed. Ideally, under uncertainty about whether to continue working on something, you should decide what to do, and then do it with the same dedication as before. Maybe also put into your calendar a monthly reminder to consider if you should continue working on it and try not to think about it at other times.

Of course, we are human, and that can be difficult. If we think that what we are working on is less important, we might end up prioritizing other aspects of life more, at the expense of work. But it's important to remember that even if we are not working on the most important EA cause, it is still very important and can help many many people or animals. There's no need to compare yourself to EAs who might be having even more impact most of the time, in the same way there is no need to keep comparing yourself with billionaires when trying to earn money. Let's just all do what we can.

And to reiterate, I think that most WAW work is still very promising compared to most other altruistic work, especially when you are one of only a few people working on it. I just don't think we have enough evidence that it's impactful yet to massively scale it up. But it is important to test it.

EDIT: I just want to also add that I might still recommend WAW as a career choice for some people. For example, if you are an expert in ecology and have an aptitude for handling messy research problems.

FWIW, I think there are some complicating factors, which makes me think some WAW interventions could be among my top 1-3 priorities.

Some factors:

  • Maybe us being in short-lived simulations means that the gap between near-term and long-term interventions is smaller than expected.
  • Maybe there are nested minds, which might complicate things?
  • There may be some overlap between the digital minds issue you pointed out and WAW, e.g., simulated ecosystems which could contain a lot of suffering. It seems that a lot of people might more easily see "more complex" digital minds as sentient and deserving of moral consideration, but digital minds "only" as complex as insects wouldn't be perceived as such (which might be unfortunate if they can indeed suffer).

Also,

A while ago I wrote a post on the possibility of microorganism suffering. It was probably a bit too weird and didn't get too much attention -- but given sufficient uncertainty about philosophy of mind, the scope of the problem is potentially huge.[1] I kind of suspect this could really be one of the biggest near term issues. To quote the piece, there are roughly "10^27 to 10^29 [microbe] deaths per hour on Earth" (~10 OOMs greater than the number of insects alive at any time, I believe).
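As a rough sanity check on that order-of-magnitude comparison (assuming the commonly cited estimate of roughly 10^18 to 10^19 insects alive at any time, a figure not stated in the comment itself):

$$\frac{10^{27} \text{ to } 10^{29}\ \text{microbe deaths per hour}}{10^{18} \text{ to } 10^{19}\ \text{insects alive}} \approx 10^{8} \text{ to } 10^{11},$$

which is consistent with the ~10 OOMs figure.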

The problem with possibilities like these is that it complicates the entire picture. 

For instance, if microorganism suffering was the dominant source of suffering in the near term, then the near term value of farm animal interventions is dominated by how they change microbe suffering, which makes microbe suffering a factor to consider when choosing between farm animal interventions.

I think it's less controversial to do some WAW interventions through indirect effects and/or by omission (e.g., changing the distribution of funding on different interventions that change the amount of microbe suffering in the near term). If there's the risk of people creating artificial ecosystems extraterrestrially/simulations in the medium term, then maybe advocacy of WAW would help discourage creating that wild animal suffering. And in addition to that, as Tomasik said, "actions that reduce possible plant/bacteria suffering [in a Tomasik sense of limiting NPP] are the same as those that reduce [wild] animal suffering"[2], which could suggest maintaining prioritization of WAW to potentially do good in this other area as well.

  1. ^

    FYI, I don't consider this a "Pascal's mugging". It seems wrong for me to be very confident that microbes don't suffer, but at the same time think that other humans and non-human animals do, despite huge uncertainties due to being unable to take the perspectives of any other possible mind (problem of other minds).

  2. ^

    To be clear, Tomasik gives less weight than I do to microbes: "In practice, it probably doesn't compete with the moral weight I give to animals". I think he and I would both agree that ultimately it's not a question that can be resolved, and that it's, in some sense, up to us to decide.

Hi Elias. Thank you for raising many interesting points. Here is my answer to the first part of your comment:

  • I agree about short-lived simulations. But as I said, for short-term work, farmed animal welfare currently seems more promising to me. Also, the current WAW work of research and promoting WAW in academia will have actual impact on animals much later than most farmed animal advocacy work. Hence, farmed animal advocacy impacts are better protected against our simulation shutting down.
  • If there are nested minds, then it’s likely that there are more of them in big animals rather than in small animals. And my guess would be that nested minds are usually happy when the animal they are in is healthy and happy. So this would be an argument for caring about big animals more. This would make me deprioritize WAW further because the case for WAW depends on caring about zillions of small animals. I’m not sure how ecosystems being conscious would change things though.
  • I find it somewhat unlikely that a large fraction of computational resources of the future will be used for simulating nature. Hence, I don’t think that it’s amongst the most important concerns for digital minds. I discuss this in more detail here.
  • I also worry about people not caring about small digital minds. I'm not convinced that work on WAW is best suited for addressing it. For example, promoting welfare in insect, fish, and chicken farms might be better suited for that because then we don't have to argue against arguments  like "but it's natural!!" Direct advocacy for small digital minds might be even better. I don't think that the time is ripe for that though, so we could just invest money to use it for that purpose later, or simply tackle other longtermist issues.

You could find many more complications by reading Brian Tomasik’s articles. But Brian himself seems to prioritize digital minds to a large degree these days. This suggests that those complications don’t change the conclusion that digital minds are more important.

I plan to read and think about microbes soon :)

I've updated somewhat from your response and will definitely dwell on those points :) 

And glad you plan to read and think about microbes. :) Barring the (very complicated) nested minds issue, the microbe suffering problem is the most convincing reason for me to put some weight to near-term issues (although disclaimer that I'm currently putting most of my efforts on longtermist interventions that improve the quality of the future).

Sorry, I still plan to look into microbes someday but now I don’t know when I’ll get to it anymore. I suddenly got quite busy and I am extremely slow at reading. For now I’ll just say this: I criticized the WAW movement as I currently see it. That is, a WAW movement that doesn’t focus on microbes, nor on decreasing wild animal populations. I currently simply don’t have an opinion about a WAW movement that would focus on such things. There were some restrictions on the kind of short-term interventions I could recommend in my intervention search. Interventions that would help microbes (or help wild populations just by reducing their populations) simply didn’t qualify.

Thank you for writing this! I have major disagreements with you on this.

However, to me, WAW doesn’t seem to be the most important thing for the far future - not even close. Digital minds could be much more efficient, thrive in environments where biological beings can’t, utilize more resources, and seem more likely to exist in huge numbers. 

(on a separate paragraph) The tractability of trying to reduce digital mind suffering might be even lower than for longtermist animal welfare work, but the scale is much much higher.

The first passage I quoted is plausible, or even likely to be true (I don't have informed views on this yet). But even assuming this is true, there is something wrong with using this argument to claim that "Hence, some other longtermist work seems much more promising to me than longtermist animal welfare work." That something wrong is the difference in standards of rigor you applied to the two cause areas. You applied a high level of rigor in evaluating the tractability of WAW as a non-longtermist cause area (so much that you even wrote a short form on it) and concluded that "There seem to be no cost-effective interventions to pursue now". But you didn't use the same level of rigor in evaluating the tractability of helping future digital minds; in fact, I believe you didn't attempt to evaluate it at all. If you use the same standard for WAW and digital minds as cause areas, either you evaluate neither cause area, which leads to conclusions like "WAW is far more important than factory farming" (which I believe is a view you moved away from partly because you evaluated tractability), or you evaluate both of them, in which case you might not necessarily conclude that WAW is far less important than digital minds from the longtermist perspective.

In fact, I think it's likely that your prioritization between digital minds and WAW might switch. First, there are still huge uncertainties about whether there will actually be digital minds who are actually sentient. The uncertainties are much higher than for wild animals. We know both that sentient wild animals can exist, and that a lot of them will exist (for a certain amount of time), but we have uncertainties on whether sentient digital minds are possible, and also, if they are possible, whether they will actually be produced in huge numbers. Also, in terms of tractability, there is little evidence for most people to think that there is anything we can do now to help future digital minds. As far as my knowledge goes, Holden Karnofsky, Sentience Institute (SI), and the Center on Long-Term Risk (CLR) are the only three EA-affiliated entities that work on digital minds. They might provide some evidence that something can be done, but I suspect the update is not much, as CLR doesn't disclose most of their research, SI is still in a very early stage of their digital mind research, and Holden Karnofsky doesn't seem to have said much about what we can do to help digital minds particularly. Of course, research to figure out whether there could be interventions could itself be an impactful intervention. But that's true for WAW too. If this is a reason for digital minds being more important than longtermist animal welfare (note: this would imply digital minds' welfare is also more important than "longtermist human welfare"), then I wonder why the same argument form won't make WAW way more important than factory farming, and lead you to conclude: "The tractability of trying to reduce wild animal suffering might be lower than work in tackling factory farming, but the scale is much much higher."

Also, if you do use CLR and SI as your main evidence in believing that helping digital minds is tractable, I am afraid you might have to change another conclusion in your post. SI is not entirely optimistic that the future with digital minds is going to be positive (and from chatting with their people I believe they seem pessimistic), and CLR seems to think that astronomical suffering from digital minds is pretty much the default future scenario. If you put high credence in their views about digital minds, I can't see how you would conclude that "reducing x-risks is much much more promising". To be fair to SI and CLR, my understanding is that they are strongly opposed to holding extremely unpopular and disturbing ideas such as increasing X-risk for the reason that this will actually increase suffering-risks. I believe this is the correct position to hold for people who think the future is in expectation negative. But I think at the minimum, if you put high credence in SI and CLR's views, you should probably be at least skeptical about the view that decreasing X-risk is a top priority. 

 

NOTE 1: On the last paragraph: I struggled a lot in writing the last sentence because I am clearly being self-defeating by saying this sentence right after expressing what I called "the correct position".

NOTE 2: Some longtermists define X-risk as the extinction of intelligent lives OR the "permanent and drastic destruction of its potential for desirable future development". In this definition S-risk seems quite clearly a form of X-risk.  So it is possible for someone who solely cares about S-risk to claim that their priority is reducing X-risk. But operationally speaking it seems that S-risk and X-risk are used entirely separately.

NOTE 3: Personally I have a different argument against increasing extinction risk than cooperative reasons. Even if one holds that the future is in expectation negative, it doesn't necessarily follow that it is better for earth-originated intelligent beings to go extinct now, because it is possible that most suffering in the future will be caused by intelligent beings not originating from earth. In fact, if there are many non-earth-originated intelligent beings, it seems extremely likely that most of the future suffering (or well-being) will be created by them, not "us". Given that we are a group of intelligent beings who are already thinking about S-risk (after all, we have SI and CLR), and by that have proved to be the kind of intelligent beings who could at least possibly develop into beings who care about S-risk, maybe this justifies humanity continuing even under the negative-future view.

Hi Fai, I appreciate your disagreements. Regarding the tractability of helping digital minds, I participated in CLR’s six-week S-risk Intro Fellowship and I thought that their stuff is quite promising. For example, many s-risks to digital minds come from an advanced AI that could be developed soon. CLR has connections with some of the organizations that might develop advanced AI. So it seems plausible to me that they could reduce s-risks by impacting how AI is developed. You can see CLR’s ideas on how to influence the development of AI to reduce s-risks on their publications page [edit 2023-02-21: actually, I'm unsure if it is easy to learn their ideas from that page, this is not how I learnt them, so I regret mentioning it]. Some of their other stuff seems promising to me too. I don’t see such powerful levers for animals in longtermism, perhaps you can convince me otherwise. I am not familiar with the work of SI. (I'll address your other points separately)

 First, there are still huge uncertainties about whether there will actually be digital minds who are actually sentient. The uncertainties are much higher than for wild animals. We know both that sentient wild animals can exist, and that a lot of them will exist (for a certain amount of time), but we have uncertainties on whether sentient digital minds are possible, and also, if they are possible, whether they will actually be produced in huge numbers.


I agree but I don’t think this changes much. There can be so many digital minds for so long that, in terms of expected value, I think digital minds dominate even if you think that there is only a 10% chance that they can be sentient, and a 1% chance that they will exist in high numbers (which I think is unreasonable). I explain why I think that here. Although, having just skimmed it, I don’t think I did a great job of it. I remember reading a much better explanation somewhere; I'll try to find it later.
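To spell out that expected-value claim with a toy number (the 10% and 1% figures are from the paragraph above; the population ratio is a made-up figure for illustration): if a future full of digital minds would contain, say, 10^6 times as many minds as the wild animals we could otherwise help, then even after applying both discounts,

$$0.1 \times 0.01 \times 10^{6} = 10^{3},$$

the digital-mind scenario still carries roughly a thousand times the expected moral weight.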

For now, I'll just add one more argument: Stuart Armstrong makes it seem like it's not that difficult to build a Dyson Sphere by disassembling a planet like Mercury. I imagine that the materials and energy from disassembling planets could probably also be used to build A LOT of digital minds. Animals can only use resources from the surface layer of a small fraction of planets, and they don't use even those very efficiently. Anyway, I want to look into this topic more deeply myself; I may write more here when I do.

Thank you for your replies, Saulius.

Participating in CLR's fellowship does make you more informed about their internal views; thank you for sharing that. I am personally not convinced by CLR's public publications that their proposals would in expectation reduce s-risk substantially. But maybe that's due to my lack of mathematical and computer science capabilities.

I agree, but I don't think this changes much. There can be so many digital minds for so long that, in terms of expected value, I think digital minds dominate even if you think there is only a 10% chance that they can be sentient and a 1% chance that they will exist in high numbers (and I think those probabilities are unreasonably low).

I would reach the same conclusion if I had the same probabilities you assigned and the same meaning of "high numbers". I believe my credence on this should depend on whether ours is the only planet with a civilization now. If it is, and if "high numbers" means >10,000x the expected number of wild animals there will be in the universe, my current credence that a high number of digital beings will actually be created is <1/10,000 (in fact, contrary to what you believe, I think a significant portion of this would come from the urge to simulate the whole universe's history of wild animals). By the way, I change my credence on these topics rapidly and by orders of magnitude, and there are many related considerations, so I might have changed my mind by the next time we discuss this.

But I do have other considerations that would likely make me conclude that, if there are ways to reduce digital-being suffering, this is a priority, or even the priority. These considerations can be summarized in one question: if sentient digital beings can exist and will exist, how deeply will they suffer? It seems to me that on digital (or even non-biological analog) hardware, suffering can be much more intense and run much faster than on biological hardware.

Great to get your takes, Saulius; I appreciate it.

I've thought about WAW much less than you, but my take is:

  • At the moment, the only WAW-related work we can do involves researching the topic. A lot. Probably for a long time.
  • That's because any real-world implementation work on WAW would be phenomenally complex, and its sign would be very hard to know most (all?) of the time.
  • But the scale is big enough that it's worth it (except, perhaps, from a longtermist perspective)

As far as I can tell, there's nothing in your post to update away from this opinion? (I read it quickly, so sorry if I missed something)

Thanks Sanjay, that’s a great question! Here are my thoughts:

  • The scale of WAW is big because it encompasses millions of sub-problems. But unless you are looking into destroying nature (which is politically infeasible, and I don't want to do it), you are looking at things like a particular pigeon disease, or how noise from ships affects haddock. And then the scale doesn't look that big. Ambiguity about what constitutes a cause is one of the reasons why I now rarely think in terms of scale-neglectedness-tractability. And I'm skeptical of these sub-questions eventually leading to some grand conclusions, because nature is so complicated and messy, and things from one ecosystem often don't generalize to another.
  • In the best-case scenario, there would be as much research done on WAW as there is on ecology. But even that probably wouldn't be nearly enough. An ecologist once told me that it's still very difficult to predict how animal population sizes will change given an intervention, despite there being a relatively large amount of research on this. Predicting how overall welfare will change is likely to be many times more difficult. See more on this here. Maybe an advanced AI could solve these complexities, but if we're going to have an AI that powerful soon, then WAW is not what I'd focus on.
  • Even if we figured out all the consequences, there might still be no agreement on what to do. The conclusions about an intervention might look like this: “if we do this, it will be worse for the human economy, better for climate change, increase intense fox suffering during death by 30%, decrease chronic lower-intensity fox suffering by 40%, increase the rabbit population by 20%, decrease the ant population by 10%...” And we might not know what to do with that. How to weigh these things against each other depends on moral intuitions, and people seem to disagree on these a lot (though I haven't seen surveys on it, maybe I'm wrong). See more on this here.
  • Whenever I research something, I have an intuition about how useful the research is. When researching WAW interventions, my intuition was that it was less useful per hour spent than my research on farmed animals. Part of that is that I have no expertise in ecology or other fields related to WAW, but I felt there was more to it. Research by others also felt somewhat less useful, though I wasn't exposed to much of it. It's just very unclear what the right thing to do in WAW is, so the things we research and do seem somewhat random, and I don't know what impact they will ultimately have on WAW. So I'd rather try to improve farmed animal welfare.

I strong-upvoted this comment. I found the beginning of the comment particularly helpful:

The scale of WAW is big because it encompasses millions of sub-problems. But unless you are looking into destroying nature (which is politically infeasible, and I don't want to do it), you are looking at things like a particular pigeon disease, or how noise from ships affects haddock. And then the scale doesn't look that big.

Thanks very much for writing this. I always appreciate "why I updated against what I was working on" posts, and I thought this was very clear, even for someone who hasn't followed WAW closely.

Hi Saulius, thank you for the interesting post. When you consider wild animal interventions, do you include wild-caught fish?


e.g.

https://forum.effectivealtruism.org/posts/tykEYESbJkqT39v64/directly-purchasing-and-distributing-stunning-equipment-to

Hi Tyner. This is one of the questions that I decided to not clarify in the article for the sake of conciseness, so thank you for asking. 

Wild-caught fish die under human control, so working on killing them more humanely doesn't involve the complicated, uncertain consequences of the WAW interventions I discuss. Relative to WAW issues, it is easy to research, and it is unambiguously good if we can do it right. To me, it is precisely the kind of intervention we should focus on first, before tackling super-complex WAW issues. So everything that I say about farmed animal welfare applies to humane fish slaughter.

Decreasing the catch of wild fish (e.g., by buying catch shares) does have complicated WAW consequences, and it is very unclear whether they are good or bad. Those fish would've died anyway. Would their deaths have been better or worse if they weren't caught? Maybe we can answer that question. But more importantly, the fish catch changes the populations of various wild animals. Are those changes good or bad? ¯\_(ツ)_/¯ Also, if we catch fewer fish now, maybe we make the fishery more sustainable, and hence more fish will be caught in that fishery in the long term... It feels like we are doing a random thing here. The things I say about WAW apply to decreasing the catch of fish.

To clarify, we might be doing much more good by decreasing the catch, but it seems equally possible that we are doing a lot of harm. If stunning fish has an impact of +1, then I think that decreasing the catch has an impact somewhere from -100 to +100, and my probability weights average out at 0.

If I could very confidently say that the impact of reducing the catch was -98 to +102, and that the median outcome was +2, I would prioritize reducing the catch over stunning. Some risk-averse people wouldn't; it's a matter of personal preference. But that would only be the case if these were casino-style odds, and that can't happen here.

What might happen is that I work on a very complex cost-effectiveness model of reducing the catch for a year. In the model, I'd try to determine all the impacts on animal populations and well-being, assign subjective weights to each of them, and then average them out. In the end, I'd say that, according to the model, the impact of reducing the catch could be anywhere from -98 to +102, but the model's best guess is +2.

This is very different from casino odds. I'm unsure whether I modeled the situation correctly, whether my subjective weights are right, and whether my model is free of mistakes. In Bayesian terms, I'd say that this model wouldn't update me much from my prior of 0. In layman's terms, I'd still choose stunning over reducing the catch for the same reason I'd choose a hotel with 1,000 reviews averaging 4.5 stars over a hotel with one five-star review (I'm borrowing this illustration from this blog, I think). The evidence that reducing the catch (or the hotel with one review) is a good choice is just too weak.
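To put numbers on the "weak evidence barely updates the prior" point, here is a minimal sketch of a standard normal-normal Bayesian update. The prior and likelihood spreads are illustrative assumptions I chose to roughly match the -100 to +100 range above; nothing here comes from an actual model:

```python
# Normal-normal Bayesian update: why a noisy model barely moves the prior.
# All numbers are illustrative assumptions.

prior_mean, prior_sd = 0.0, 10.0   # prior on the impact of reducing the catch
model_mean, model_sd = 2.0, 50.0   # model's best guess is +2, with huge uncertainty

# Standard conjugate update for a normal prior and a normal likelihood.
w = prior_sd**2 / (prior_sd**2 + model_sd**2)  # weight given to the model
posterior_mean = prior_mean + w * (model_mean - prior_mean)
posterior_sd = (prior_sd**2 * model_sd**2 / (prior_sd**2 + model_sd**2)) ** 0.5

print(f"weight on the model: {w:.3f}")             # 0.038
print(f"posterior mean:      {posterior_mean:.2f}")  # 0.08 - barely moved from 0
print(f"posterior sd:        {posterior_sd:.2f}")    # 9.81
```

With a likelihood this wide, the model gets under 4% of the weight, so the posterior stays almost exactly at the prior of 0 - the "one five-star review" intuition in numbers.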


Also, note that in the end, we are still clueless about the butterfly effects of both options, because we are always clueless about those. I'm just choosing to ignore 100th-order effects because I want to avoid analysis paralysis.

I wrote the article on buying catch shares to reduce the catch, and I just wanted to say that I strongly agree with Saulius's analysis here.

Currently, implementing humane slaughter for wild-caught fish seems like a slam dunk.

Currently, reducing the catch of wild fish seems extremely ambiguous. My catch shares article mostly concluded with "we should do more research on this to reduce these uncertainties". I also wrote a later article about subsidies: abolishing fisheries subsidies seems like a fairly easy way to reduce the catch, but in many cases it would cause the size of the target fish population to increase, causing more deaths from fishing over time even if fishing effort remains low. (Plus, there are the effects on other wild animals...)

So I strongly agree with Saulius that:

  1. Humane slaughter seems fantastic, and
  2. We probably shouldn't try to reduce the fish catch yet, because we don't know whether doing so is good or bad - though I do believe that dedicated research could quite readily make substantial progress on this question.

Thanks so much for sharing this; I'm curating it. 

I'd also encourage people to read the comments and this exchange (and also look at "The correct response to uncertainty is *not* half-speed").

Some particularly good qualities of this post:

  • +1 to "I always appreciate 'why I updated against what I was working on posts'" from Larks
  • The info and opinions expressed in the post were useful
  • This was easy to follow for people with little experience in wild animal welfare
  • This was carefully caveated

This isn't a summary, but in case people are looking for the overall opinion, I found the following a helpful excerpt (bold mine): 

After looking into these topics, I now tentatively think that WAW is not a very promising EA cause because:

  • In the short-term (the next ten years), WAW interventions we could pursue to help wild animals now seem less cost-effective than farmed animal interventions.
  • In the medium-term (10-300 years), trying to influence governments to do WAW work seems similarly speculative to other longtermist work but far less important. 
  • In the long-term, WAW seems important but not nearly as important as preventing x-risks and perhaps some other work.

[...]

My subjective probability that the WAW movement will take off with $8 million per year of funding is not that much higher than the probability that it will take off with $2 million per year of funding, as the movement’s success probably mostly depends on factors other than funding. But with $2 million, the probability would be much higher than with $0 (I’m using somewhat random numbers here to make the point). And ideally, the money that we do spend on WAW would be used to fund people with different visions about WAW to try multiple different approaches so that we could see which approaches work best. I see some of this happening now, so I mostly support the status quo. Of course, my opinion on how much funding WAW should receive might change upon seeing concrete funding proposals.

It is important to note that by far most of the animals killed for human consumption are wild animals: circa 8 trillion wild animals are retrieved from their environments by the fishing industry (a conservative scenario), versus circa 1.5 trillion animals from aquaculture and 70 billion animals from land-based production. Also, removing wild animals in large quantities from their environments has severe ripple effects on those environments and all other animals (and humans). Therefore, focusing only on farmed animals as the most cost-effective way to protect animals is the largest blind spot in this whole conversation. So if we talk about lives saved, you have a winner in WAW. If it is about improved lives, we should sum all the cruel hours of fishing processes (slowly dying from water pressure change, slowly dying from asphyxiation, slowly dying pressed against other animals' bodies, etc.) multiplied by the number of animals caught, and then compare that to farming. One other thing to consider is the suffering that comes from ecosystem destruction (ocean acidification, coral bleaching, depletion of prey, etc.), all of which has a lot of pain as a consequence.

Hi Nathalie. Thank you for engaging with my post. I’ll clarify my thinking.

As I clarified here, I do think that humane slaughter reforms for wild-caught fish and invertebrates are promising. This type of work preceded the WAW movement, so I didn't really associate the two.

Also, removing wild animals in large quantities from their environments has severe ripple effects on those environments and all other animals (and humans)

I agree. Also, humans reduce animal populations far more with things like habitat destruction. According to this report, “population sizes of wildlife decreased by 60% globally between 1970 and 2014”. But when it comes to the welfare of the animals themselves, I think these effects are more likely positive: I think (with about 60% confidence) that animals in the wild experience more suffering than happiness. Hence, reducing their populations is good for the animals themselves. Yes, it's bad for humans and causes many other complications, but those are concerns for the sustainability and environmental protection movements, not for the animal advocacy movement, of which WAW is a part.

So if we talk about lives saved, you have a winner in WAW.

I think you are comparing very different things when you say “lives saved”. For farmed animals, you probably mean sparing animals from living on farms where they suffer a lot. For wild animals, you probably mean allowing animals to live lives that may or may not involve more suffering than happiness. I think these things are too different for the comparison to work.

Personally, I just care about decreasing suffering and increasing happiness. By the way, I did try to estimate how many hours fish suffer due to fishing processes here. It's a very incomplete estimate, but my impression is that the numbers are much lower than for farmed animals, although the intensity of suffering is obviously higher. But as I said, I think that wild fish slaughter reforms are worth pursuing.

I hope this is helpful, let me know if you still disagree with any of my points.

Interesting to hear this update. Presumably many of these are views that people working in WAW have heard before from critics. If you were to try to persuade someone who currently feels strongly about WAW as a cause to shift focus, what would you say are the key factors that might sway them?

If you are a negative utilitarian (i.e., you only care about reducing suffering) or you are pessimistic about the future, you may want to prioritize work that aims to reduce the potential suffering of future digital minds instead (for example, the work of organizations like The Center on Long-term Risk).

I would love to see more work done by regular/totalising utilitarians on how we could improve the expected quality (rather than quantity) of future life, even on the assumption that it will be generally positive!

Regarding the first question, I’d just say what I wrote here and in the linked posts. I don’t know what they hear from other critics, I haven’t asked them that.

It seems that most people who work on improving the expected quality of future life are negative-leaning utilitarians who work on s-risks. And I think that makes sense, because if you assume the future will be positive, working on x-risks seems more promising. It's very difficult to predict how actions we take now will affect life millions of years from now (unless a value lock-in happens soon). It seems much easier to predict what will decrease x-risks in the next 50 years, and work on x-risks seems to be higher leverage.

But maybe there is some promising work to be done on improving the expected quality of future life. Do you have anything concrete in mind? Some time ago, I was excited about popularizing the idea of hedonium/utilitronium (but not the hedonium shockwave - we don't want to sound like terrorists, and I wouldn't personally even want such a shockwave). But then I became too worried about various backfire risks, and I wasn't sure how realistic a future is in which people have the means to make hedonium but just don't think about doing it enough.

I think that makes sense, because if you assume the future will be positive, working on x-risks seems more promising. It's very difficult to predict how actions we take now will affect life millions of years from now (unless a value lock-in happens soon). It seems much easier to predict what will decrease x-risks in the next 50 years, and work on x-risks seems to be higher leverage.

This is the standard justification for working on immediate extinction risks, but I think it's weak. It seems reasonable as a case for looking at them as a cause area first, but "it's hard to predict EV" is a very poor proxy for "actually having low EV" - IMO the movement has been very lazy about moving on from this early heuristic.

I don't have anything concrete in mind about quality of life. I've been doing some work on refining existential risk concerns beyond short-term extinction; you can see the published work here. I'm currently looking for people to have a look at the next post in that sequence, and the Python estimation script it describes. If you'd be interested in having a look, it's here :)

I do wonder whether a similar approach could be useful for quality of life, but haven't put any serious thought into it.

The consequences on the welfare of all affected wild animals seem nearly impossible to determine, even with a lot of research. Also, research in one ecosystem might not generalize to other ecosystems. 

However, this is the same concern of cluelessness that applies to all causes. To me, cluelessness seems a bigger problem in WAW because first-order effects are usually dwarfed by second- and third-order effects. For example, vaccinations may increase the population of that species, which could be bad if their lives are still full of suffering. But overall, I'm confused about cluelessness.


I wanted to emphasise this point and how important I think it is. I feel that cluelessness about the effects of wild animal interventions (particularly as it relates to wild animal population dynamics) is one of the most important topics in EA that could be resolved by further research.

Cluelessness about wild animals comes up a lot even in my research on farmed animals - e.g. the effects of reducing meat consumption on fish caught for fishmeal, or the effects of reducing fisheries subsidies on wild fish and other wild animals.

These dynamics are extremely non-intuitive (e.g. catching fewer fish does, weirdly, seem bad for fish in many contexts under some philosophical views), and they're strongly context-dependent. But with some dedicated research in ecological modelling and experimental ecology, I do think that we could make substantial progress on understanding this topic.

Thanks so very much for this. I wish I could give it more upvotes. As I've written about elsewhere, the obsession with expected value while ignoring tractability is one of the worst aspects of that corner of EA. (Which is why I love https://forum.effectivealtruism.org/posts/GXzT2Ei3nvyZEdWef/every-moment-of-an-electron-s-existence-is-suffering)

But didn't the OP also use an expected value calculation to conclude that digital minds are going to dominate the value of the future, while admitting that the tractability of helping digital minds might be even lower than that of helping wild animals?
