This is the story of how I came to see Wild Animal Welfare (WAW) as a less promising cause than I did initially. I summarise three articles I wrote on WAW: ‘Why it’s difficult to find cost-effective WAW interventions we could do now’, ‘Lobbying governments to improve WAW’, and ‘WAW in the far future’. I then draw some more general conclusions. The articles assume some familiarity with WAW; see here or here for an introduction to the ideas.
My initial opinion
My first exposure to EA was reading Brian Tomasik’s articles about WAW. I couldn’t believe that despite constantly watching nature documentaries, I had never realized that all this natural suffering is a problem we could try solving. When I became familiar with other EA ideas, I still saw WAW as by far the most promising non-longtermist cause. I thought that EA individuals and organizations continued to focus most of the funding and work on farmed animals because of status quo bias, risk-aversion, failure to appreciate the scale of WAW issues, misconceptions about WAW, and because they didn’t care about small animals despite evidence that they could be sentient.
There seem to be no cost-effective interventions to pursue now
In 2021, I was given the task of finding a cost-effective WAW intervention that could be pursued in the next few years. I was surprised by how difficult it was to come up with promising WAW interventions. Also, most ideas were very difficult to evaluate and their impacts were highly uncertain. To my surprise, most WAW researchers that I talked to agreed that we’re unlikely to find WAW interventions that could be as cost-effective as farmed animal welfare interventions within the next few years. It’s just much easier to change conditions and observe consequences for farmed animals because their genetics and environment are controlled by humans. I ended up spending most of my time evaluating interventions to reduce aquatic noise. While I think this is promising compared to other WAW interventions I considered, there are quite a few farmed animal interventions that I would prioritize over reducing aquatic noise. I still think there is about a 20% chance that someone will find a direct WAW intervention in the next ten years that is more promising than the marginal farmed animal welfare intervention at the current funding level.
I discuss direct short-term WAW interventions in more detail here.
Influencing governments [rewritten on 2023-02-16]
Some WAW advocates promote research on WAW in academia. For some of them, the aim is twofold: to identify effective interventions and to establish WAW as a legitimate field of study. The hope is that by gaining greater legitimacy, WAW advocates can influence government policy. For example, governments could control wild populations more humanely, vaccinate animals against some diseases, and eradicate some parasites.
I am somewhat skeptical of this because:
- The argument for the importance of WAW rests on the enormous numbers of small wild animals. It’s difficult to imagine politicians and voters wanting to spend taxpayer money on improving wild fish or insect welfare, especially in a scope-sensitive way. But it could have been similarly difficult to imagine governments funding species conservation efforts until it happened.
- The consequences for the welfare of all affected wild animals seem nearly impossible to determine, even with a lot of research. Also, research in one ecosystem might not generalize to other ecosystems.
- However, this is the same concern as cluelessness, which applies to all causes: all interventions have complicated indirect effects that are impossible to predict. To me, cluelessness seems like a bigger problem in WAW because first-order effects are usually dwarfed by second- and third-order effects. For example, vaccinations may increase the population of the vaccinated species, which could be bad if their lives are still full of suffering. Also, when the population of one species changes, the populations of other species change too. But overall, I’m confused about cluelessness.
- Even if we determine consequences, people with different moral views might disagree on which consequences they prefer. For example, people may disagree on how to weigh the welfare of different animal species, happiness versus suffering, short and intense suffering versus chronic but less intense suffering, etc. This may eventually divide the WAW movement into many camps and hurt overall efforts.
See here for further discussion of the goals of lobbying governments to improve WAW, and obstacles to doing this.
WAW in the far future
Others have argued that what matters most in WAW is moral circle expansion and the effect we may have on the far future. But what exactly do we want to achieve in the far future with our current WAW work? In this article, I listed all the far-future scenarios where WAW seemed very important. The most important ones included scenarios where wildlife is spread beyond Earth. For example, we might develop an aligned transformative AI and the humans in charge might want to colonize space with biological human-like beings and animals, rather than machines. In that case, we could end up with quadrillions of animals suffering on billions of planets for billions of years. Compared to that, WAW interventions on Earth seem much less important.
However, to me, WAW doesn’t seem to be the most important thing for the far future - not even close. Digital minds could be much more efficient, thrive in environments where biological beings can’t, utilize more resources, and seem more likely to exist in huge numbers. Hence, some other longtermist work seems much more promising to me than longtermist animal welfare work.
If you think that the future is likely to be good, then I think that reducing x-risks is much much more promising. If you are a negative utilitarian (i.e., you only care about reducing suffering) or you are pessimistic about the future, you may want to prioritize work that aims to reduce the potential suffering of future digital minds instead (for example, the work of organizations like The Center on Long-term Risk). The tractability of trying to reduce digital mind suffering might be even lower than for longtermist animal welfare work, but the scale is much much higher. I think that there may be some worthwhile things to do at the intersection of longtermism and animal welfare, but I don’t think that it should become a major focus for EA.
I discuss WAW and the far future in more detail here.
After looking into these topics, I now tentatively think that WAW is not a very promising EA cause because:
- In the short-term (the next ten years), WAW interventions we could pursue to help wild animals now seem less cost-effective than farmed animal interventions.
- In the medium-term (10-300 years), trying to influence governments to do WAW work seems similarly speculative to other longtermist work but far less important.
- In the long-term, WAW seems important but not nearly as important as preventing x-risks and perhaps some other work.
All that said, I’m unsure how seriously my opinions should be taken because:
- I don’t have an ecology/biology/conservation background to competently evaluate direct short-term WAW interventions,
- I don’t know enough about the history of social movements to evaluate how likely WAW is to succeed as a social movement, and
- I’m not very knowledgeable about longtermism.
Hence, I see my articles on WAW as the start of a conversation, not the end of it.
Despite my concerns, if I were in charge of all EA funding, I still wouldn’t set WAW funding to zero. Since it’s very difficult to predict which interventions will be important in the future, I think it makes sense to try many different approaches. I still believe that WAW is promising enough to do some further research and movement building. For example, even though I think that corporate farmed animal welfare campaigns are very cost-effective, I would not choose to drop all WAW funding in order to fund even more corporate campaigns, because WAW work could open an entirely new world of possibilities. We won’t know what’s there unless we try.
However, I wouldn’t spend much more money on WAW than EA is currently spending either. My subjective probability that the WAW movement will take off with $8 million per year of funding is not that much higher than the probability that it will take off with $2 million per year of funding, as the movement’s success probably mostly depends on factors other than funding. But with $2 million, the probability would be much higher than with $0 (I’m using somewhat random numbers here to make the point). And ideally, the money that we do spend on WAW would be used to fund people with different visions about WAW to try multiple different approaches so that we could see which approaches work best. I see some of this happening now, so I mostly support the status quo. Of course, my opinion on how much funding WAW should receive might change upon seeing concrete funding proposals.
[EDIT 2023-02-21: I criticized the version of the WAW movement I saw being pursued by organizations. To my knowledge, no organization currently works on WAW by trying to help microorganisms, or decrease wild animal populations (which perhaps could be done in relatively uncontroversial ways). I simply don’t have an opinion about a WAW movement that would focus on such things. There were some restrictions on the kinds of short-term interventions I could recommend in my intervention search. Interventions that would help microbes or help wild animals by reducing their populations simply didn’t qualify. Thank you to the commenters who made me realize this.]
Opinions expressed here are solely my own. I’m not currently employed by any organization.
As comments by Max and Vasco hint at, I think it might still be the case that considering effects on wild animals is essential when evaluating any short-termist intervention (including those for farmed animals and human welfare). For example, I remain uncertain whether vegetarianism increases or decreases total suffering because of wild-animal side effects, mainly because beef may (or may not!) reduce a lot of suffering even if all other meat types increase it. (I still hope people avoid eating chicken and other small farmed animals.)
In my opinion, the most important type of WAW research is getting more clarity on big questions, like what the net impact is of cattle grazing, climate change, and crop cultivation on total invertebrate populations. These are some of the biggest impacts that humanity has on wild animals, and the answers would inform analysis of the side effects of various other interventions like meat reduction or family planning.
I haven't followed a lot of the recent WAW work, but my experience is that many other people working on WAW are less focused on these questions about how humans change total population sizes. Researchers more often think about ways to improve welfare while keeping population size constant. Those latter interventions may have more public support and are more accommodating to non-suffering-focused utilitarians who don't want to reduce the amount of happiness in nature. But as you mention, those interventions also seem more subject to cluelessness (is vaccination net good or bad considering side effects? it's super unclear) and often target big animals rather than invertebrates. From this perspective, I think efforts to improve the welfare of farmed chickens and fish may be more cost-effective (though it's still worth exploring wild-animal interventions too, as you say). Research and advocacy of less painful killing of wild-caught fish also seems extremely important, and it's unclear whether to count this as a farmed-animal or wild-animal intervention.
Apart from less painful killing of wild animals (fish, rodents, insects on crop fields), or maybe some other large-scale interventions like reducing aquatic noise, I think the cost-effectiveness of work on wild animals would come from trying to reduce (or avoid increasing) population sizes, via reducing plant productivity. Reducing the amount of plant growth in an ecosystem helps invertebrates (including mites, springtails, and nematodes, which are extremely numerous but also hard to help in ways other than preventing their existence) and is somewhat less subject to cluelessness problems because you don't have to model internal ecosystem dynamics as much -- you just have to reduce the productivity of the first trophic level. But I haven't found a lot of people who are interested in working on population-reduction interventions. This apparent lack of interest in reducing populations is one reason I've done less thinking about WAW in recent years. Another reason is that I respect the efforts of WAW organizations to put a more mainstream face on the WAW movement, and I wonder if my continuing to harp on why we should actually be focusing on reducing populations would seem counterproductive to them.
I compiled a list of possible interventions to reduce total invertebrate populations that could possibly be lobbied for at the government level in some fashion. Some of them don't seem super cost-effective, but some might be, such as trying to reduce irrigation subsidies, which is an intervention that could be argued for on other grounds as well. Taxing fertilizer and/or water use on crop fields, pastures, and/or lawns might be pretty valuable if it could be achieved. (Some local regions do have subsidies for people who reduce their lawn's water use.) If geoengineering to fertilize oceans ever happens, opposing it would be extremely important, though doing a campaign about that now might be net harmful via increasing the salience of the idea.
Thank you for your thoughtful comment, Brian. I should’ve mentioned that I think that WAW might be tractable for people who think that reducing wild animal populations is good. I don’t think that reducing populations is good because:
I worry that researching the big questions you mention might be intractable. You wrote a detailed analysis about the impact of climate change on wild animal suffering, and concluded that your “probabilities are basically 50% net good vs. 50% net bad when just considering animal suffering on Earth in the next few centuries (ignoring side effects on humanity's very long-term future).” Correct me if I’m wrong, but your analysis rests on the assumption that reducing wild populations is good. An analysis without this assumption would be vastly more difficult because it would require analyzing whether various populations would become happier (instead of just analyzing changes in population). I worry that we wouldn’t have enough confidence in that analysis to inform our decisions.
All that said, I think that attempting to research WAW impacts of vegetarianism might still be worth it, though I’m unsure.
Hi Saulius, I wonder if you have factored your points 2 & 3 above into your view that digital beings are a priority for longtermism, and factory farming a priority for non-longtermist animal welfare. It seems that both cause areas, if taken consistently and seriously enough, would go against (organic) human interests and are not what most people want.
I imagine that few people would say that it’s actively harmful to try to decrease s-risks to digital minds (especially when it involves trying to prevent escalating conflicts, sadism, and retributivism). Most people would say it’s just a waste of money and effort. Most people agree that it’s important that animals used for food are well cared for. Not everyone votes for welfare improvements in ballot initiatives but a significant proportion of people do. And if we had infinite money, I don’t think anyone would mind improving conditions for farmed animals. But if there was a ballot initiative asking “shall we actively try to decrease wild animal numbers?”, I imagine that almost everyone would passionately oppose it. I don’t feel comfortable working on things most people would passionately oppose (and not just because it’s a waste of resources, but because they think that our desired outcome is bad). It also makes it difficult to work on it as an EA cause and it could repulse some people from EA. But I have now weakened (or changed) my position on reducing populations after realizing that it doesn’t always lead to more environmental issues (see this comment). Also, people might not mind if we are only decreasing populations of tiny animals.
Hi again Brian. I agree that your vision for the WAW movement is different from what WAW organizations are currently doing. I criticized the latter and don’t have a strong opinion on your vision of focusing on very small animals and reducing populations. I said that I don’t want to reduce populations partly because that usually includes reducing plant productivity which in turn causes more climate change, which might increase s-risks, x-risks, poverty, etc. But perhaps some interventions in your list could reduce populations without causing more environmental issues. I hadn’t considered them because they didn't qualify for my WAW intervention search, and I had forgotten about them.
I’m unsure how one would go about lobbying for these things. I’d be a bit afraid of PR risks too. Imagine a farmer lobby figuring out that the real motivation of the people funding lobbying against their irrigation subsidies is the concern of weirdos from Effective Altruism about small invertebrate suffering. That could cause some bad press for EA. I also think that the few potential WAW funders I talked to wouldn’t have funded such interventions, but there could be other funders.
(Sorry for being slow to return here!)
Yeah, I think some ways of reducing plant growth are often supported by environmentalists, including
Some other activities like encouraging palm-oil production (which destroys rainforests) are bad for the environment but may reduce poverty. (I should note that I'm unsure about the net impact of palm-oil production for wild-animal suffering.)
I agree that the question of how to lobby for these things without seeming like weirdos is tricky. It would be easier if society cared more about wild animals from a suffering-focused perspective, which can be one argument for starting with philosophical advocacy regarding those topics, though it seems unlikely that concern for wild-animal welfare or suffering-focused ethics will ever become mainstream (apart from weak forms of these things, like caring about charismatic megafauna or Buddhist philosophy about suffering). These philosophical views would also help for various far-future scenarios. But from the standpoint of trying to reduce some short-term suffering, especially if we worry about cluelessness for longer-term efforts, then this approach of doing philosophical advocacy would be too slow and indirect (except insofar as it contributes to movement building, leading some other people to pursue more concrete interventions).
So overall I may agree with you that for short-term, concrete impact, we should plausibly focus on things like stunning of wild-caught fish and so on. This is why I feel a lot of fuzzies about the Humane Slaughter Association and related efforts. That said, it does seem worth pondering more whether there are ways to direct money toward opposing irrigation subsidies and the like.
After re-reading this article, I noticed that my original summary about influencing governments was very uncharitable and heavily overstated my skepticism. I apologize for that. I have now rewritten that summary to reflect what I wrote in my article about influencing governments. You can see the original summary here.
Out of curiosity: When making claims like this, are you referring to the cost-effectiveness of farmed animal interventions when only considering the impacts on farmed animals? Or do you think this claim still holds if you also consider the indirect effects of farmed animal interventions on wild animals?
(Sorry if you say this somewhere and I missed it.)
Ok, let’s consider this for each type of farmed animal welfare intervention:
It seems I hadn’t considered this enough, so thank you very much for the question :)
For example, the National Chicken Council argues that slower-growing (and hence higher-welfare) broiler breeds will have higher environmental costs: more feed, fuel, land, and water will be needed (I think it’s a biased source but the general conclusion makes sense). Similarly, according to Xin et al. (2011), “hens in noncage houses are less efficient in resource (feed, energy, and land) utilization, leading to a greater carbon footprint.” (I adapted this text from this post of mine).
Thanks for pointing this out, Max!
Based on this, I think it is plausible the nearterm effects of any intervention are driven by the effects on wild animals, namely arthropods and nematodes. For example, in the context of global health and development (see here):
If this is so, the expected nearterm effects of neartermist interventions (including ones attempting to improve the welfare of farmed animals) are also quite uncertain, in the sense they can easily be positive or negative. I still expect neartermist interventions to be positive due to their longterm effects. However, expecting them to be better than longtermist ones would be a surprising and suspicious convergence.
Great question. Yes, I think the claim still holds. It’s a bit tricky to explain why, so bear with me. Let’s assume that:
It would follow that if we ignored the impact on chickens, then opposing welfare reforms would be the new most cost-effective intervention because of its impact on WAW. But that would be a very surprising coincidence. I’d call it surprising divergence (as opposed to surprising convergence).
But ah, I’m now realizing that there is much more to this problem. It gets a lot messier. I’ll write more about this later.
Good argument. It might not work if one maintains an act-omission distinction regarding harms to the environment. For example, imagine that
If this were the case, we shouldn't do veg outreach. It would seem that we should try to increase environmental impact by promoting beef consumption, but maybe our act-omission distinction prevents us from wanting to do that. It could also be difficult to explain why one is promoting beef production or get other people concerned for animal welfare on board.
(This example is just for illustration. In reality, if I could push a button to increase vegetarianism in the world, I probably would.)
This is a great question. I totally missed this consideration while reading this post but this question is imperative to keep in mind while thinking about this topic.
Thank you, that was very interesting Saulius. You talk a bit about comparisons with other cause areas, but I'm still not entirely sure which cause area you would personally prioritise the most right now?
Thanks for the question Denise. Probably x-risk reduction, although on some days I’d say farmed animal welfare. Farmed animal welfare charities seem to significantly help multiple animals per dollar spent. It is my understanding that global health charities (and other charities that help currently living humans) spend hundreds or even thousands of dollars to help one human in the same way. I think that human individuals are more important than other animals, but not thousands of times more important.
Sometimes when I lift my eyes from spreadsheets and see the beauty and richness of life, I really don’t want all of this to end. And then I also think about how digital minds could have even richer and better experiences: they could be designed for extreme happiness in the widest sense of the word. And if only a tiny fraction of the world’s resources could be devoted to the creation of such digital minds, there could be bazillions of them thriving for billions of years. I’m not sure if we can do much to increase this possibility, maybe just spread this idea a little bit (it’s sometimes called hedonium or utilitronium). So I was thinking of switching my career to x-risk reduction if I could manage to find a way to be at least a tiny bit useful there.
But then at my last meditation retreat, I had this powerful 15-minute experience of feeling nothing but love for all beings. And then it was so clear that helping farmed animals is the most important cause, or at least the cause I personally should continue working on since I have 5 years of experience in it. It was partly because we don’t know if the future we are trying to save with x-risk reduction will contain more happiness than suffering. I don’t trust my reasoning on this topic, my opinion on the question might flip many times if I were to think about it deeply. But I know that I can help millions of animals now.
I don't know what I'll choose to do yet.
It would be amazing if you keep on working on farmed animals. Your work on it so far has been extremely helpful and partially led to the creation of some cost-effective charities. The field is also extremely talent-constrained, and I want to cry whenever I hear "I was into animals but now I want to work on AI" at EA conferences. I know you can still change your mind, but I just want to say that, counterfactually, it seems to me that you are much more needed on the farmed animals side than you will ever be in x-risk reduction.
Hi Ula. I just somehow want to let you know that I used to work on animal welfare and I moved on to work on AI. But I didn't stop doing animal welfare, because I do AI&animals.
Beautiful answer, indeed!
I'd also strongly recommend working for farm animals: the long-term stuff is so uncertain when it comes to determining the net impact.
I second the recommendation for Saulius to continue to work on farmed animal welfare. But I disagree with the view that uncertainty alone can undermine the whole case for longtermism.
Thank you, that was a beautiful response. I'm glad I asked!
I share the experience that sometimes my personal experiences and emotions affect how I view different causes. I do think it's good to feel the impacts occasionally, though overall it leads me to being more strict about relying on spreadsheets.
Hmm, I think I ultimately rely only on my emotions. I’ve always been a proponent of “Do The Math, Then Burn The Math and Go With Your Gut”. When it comes to the question of personal cause prioritization, the question is basically “what do I want to do with my life?” No spreadsheet will tell me an answer to that, it’s all emotions. I use spreadsheets to inform my emotions because if I didn’t, a part of me would be unhappy and would nag me to do it.
This is getting very off-topic, but I’m now thinking that maybe all decisions are like that. Maybe my only goal in life is to be happy in the moment. I do altruistic things because when I don’t do them for a while, a part of me nags me to do them and that makes me less happy. I don’t eat candy constantly because I’d be unhappy in the moment before eating it (or buying it) since it might ruin my health. I think that 2+2=4 because it feels good and 2+2=3 feels bad. If you disagree with some of that (and there are probably good reasons to disagree, I partly just made that up), then you might disagree with what I said in the parent comment (the one starting with "Hmm") for the same reason.
[EDIT, Feb 17th: I expressed this in a confusing way. Most of what I meant is that I try to drop the "shoulds", which is what many therapists recommend. I use spreadsheets for prioritizing causes but I do it because I want to not because I should. I felt I needed to say this probably because I misinterpreted what Denise said in a weird way because I was confused. The question of how much to trust feelings vs spreadsheets does make sense. There is something else I'm saying in this comment that I still believe but I won't get into it because it's off-topic.]
I wonder if you're Goodharting yourself (as in Goodhart's law) or oversimplifying. Your emotions reflect what you care about and serve to motivate you to act on what you care about. They're one particular way your goals (and your impressions of how satisfied/frustrated they are or will be) are aggregated, but you shouldn't forget that there are separately valuable goals there.
I wouldn't say someone can't be selfless just because they want to help others and helping others satisfies this desire or makes them happy. And I definitely wouldn't say their only goal is to be happy in the moment. They have a goal to help others, and they feel good or bad depending on how much they think they're helping others.
EDIT: Also, it could be that things might feel more right/less wrong without feeling emotionally/hedonically/affectively better. I'm not sure all of my judgements have an affective component, or one that lines up with how preferable something is.
Maybe part of the brain just wants to be happy, and other parts of the brain condition rewards of happiness on alignment with various other goals like helping others or using spreadsheets.
That was beautifully put, Saulius.
FWIW, I think there are some complicating factors, which makes me think some WAW interventions could be among my top 1-3 priorities.
A while ago I wrote a post on the possibility of microorganism suffering. It was probably a bit too weird and didn't get too much attention -- but given sufficient uncertainty about philosophy of mind, the scope of the problem is potentially huge. I kind of suspect this could really be one of the biggest near term issues. To quote the piece, there are roughly "10^27 to 10^29 [microbe] deaths per hour on Earth" (~10 OOMs greater than the number of insects alive at any time, I believe).
The problem with possibilities like these is that it complicates the entire picture.
For instance, if microorganism suffering were the dominant source of suffering in the near term, then the near-term value of farm animal interventions would be dominated by how they change microbe suffering, which makes it a factor to consider when choosing between farm animal interventions.
I think it's less controversial to do some WAW interventions through indirect effects and/or by omission (e.g., changing the distribution of funding on different interventions that change the amount of microbe suffering in the near term). If there's the risk of people creating artificial ecosystems extraterrestrially/simulations in the medium term, then maybe advocacy of WAW would help discourage creating that wild animal suffering. And in addition to that, as Tomasik said, "actions that reduce possible plant/bacteria suffering [in a Tomasik sense of limiting NPP] are the same as those that reduce [wild] animal suffering", which could suggest maintaining prioritization of WAW to potentially do good in this other area as well.
FYI, I don't consider this a "Pascal's mugging". It seems wrong for me to be very confident that microbes don't suffer, but at the same time think that other humans and non-human animals do, despite huge uncertainties due to being unable to take the perspectives of any other possible mind (problem of other minds).
To be clear, Tomasik gives less weight than I do to microbes: "In practice, it probably doesn't compete with the moral weight I give to animals". I think he and I would both agree that ultimately it's not a question that can be resolved, and that it's, in some sense, up to us to decide.
Hi Elias. Thank you for raising many interesting points. Here is my answer to the first part of your comment:
You could find many more complications by reading Brian Tomasik’s articles. But Brian himself seems to prioritize digital minds to a large degree these days. This suggests that those complications don’t change the conclusion that digital minds are more important.
I plan to read and think about microbes soon :)
I've updated somewhat from your response and will definitely dwell on those points :)
And glad you plan to read and think about microbes. :) Barring the (very complicated) nested minds issue, the microbe suffering problem is the most convincing reason for me to put some weight to near-term issues (although disclaimer that I'm currently putting most of my efforts on longtermist interventions that improve the quality of the future).
Sorry, I still plan to look into microbes someday, but I no longer know when I’ll get to it. I suddenly got quite busy and I am extremely slow at reading. For now I’ll just say this: I criticized the WAW movement as I currently see it. That is, a WAW movement that doesn’t focus on microbes, nor on decreasing wild animal populations. I currently simply don’t have an opinion about a WAW movement that would focus on such things. There were some restrictions on the kind of short-term interventions I could recommend in my intervention search. Interventions that would help microbes (or that would help wild populations just by reducing their populations) simply didn’t qualify.
Great post! It inspired me to write this, because I worry that such posts might accidentally discourage others from working on this cause area. https://forum.effectivealtruism.org/posts/e8ZJvaiuxwQraG3yL/don-t-over-update-on-others-failures
(to be clear: I really appreciate postmortems and want more content like it!)
I worry about the same thing and it's one of the reasons why I hesitated to post this for a long time. Thank you for your comment and your post. I want to paste a comment I wrote under your post because I want people who work on WAW to read it even though it's kind of trivial:
And to reiterate, I think that most WAW is still very promising compared to most other altruistic work, especially when you are one of only a few people working on it. I just don't think we have enough evidence that it's impactful yet to massively scale it up. But it is important to test it.
EDIT: I just want to also add that I might still recommend WAW as a career choice for some people. For example, if you are an expert in ecology and have an aptitude for handling messy research problems.
Thank you for writing this! I have major disagreements with you on this.
The first passage I quoted is plausible, or even likely to be true (I don't have informed views on this yet). But even assuming this is true, there is something wrong with using this argument to claim that "Hence, some other longtermist work seems much more promising to me than longtermist animal welfare work." That something is the difference in the standards of rigor you applied to the two cause areas. You applied a high level of rigor in evaluating the tractability of WAW as a non-longtermist cause area (so much that you even wrote a short form on it) and concluded that "There seem to be no cost-effective interventions to pursue now". But you didn't use the same level of rigor in evaluating the tractability of helping future digital minds; in fact, I believe you didn't attempt to evaluate it at all. If you used the same standard for WAW and digital minds as cause areas, you would either evaluate neither of them, leading to conclusions like "WAW is far more important than factory farming" (a view I believe you moved away from partly because you evaluated tractability), or you would evaluate both of them, in which case you might not necessarily conclude that WAW is far less important than digital minds from the longtermist perspective.
In fact, I think it's likely that your prioritization between digital minds and WAW might switch. First, there are still huge uncertainties about whether there will actually be digital minds who are sentient. The uncertainties are much higher than for wild animals. We know both that sentient wild animals can exist, and that a lot of them will exist (for a certain amount of time), but we have uncertainties about whether sentient digital minds are possible, and also, if they are possible, whether they will actually be produced in huge numbers. Also, in terms of tractability, there is little evidence for most people to think that there is anything we can do now to help future digital minds. As far as my knowledge goes, Holden Karnofsky, Sentience Institute (SI), and the Center on Long-Term Risk (CLR) are the only three EA-affiliated entities that work on digital minds. They might provide some evidence that something can be done, but I suspect the update isn't large, as CLR doesn't disclose most of their research, SI is still in a very early stage of their digital mind research, and Holden Karnofsky doesn't seem to have said much about what we can do to help digital minds in particular. Of course, research to figure out whether there could be interventions could itself be an impactful intervention. But that's true for WAW too. If this is a reason for digital minds being more important than longtermist animal welfare (note: this would imply digital minds welfare is also more important than "longtermist human welfare"), then I wonder why the same argument form won't make WAW way more important than factory farming, and lead you to conclude: "The tractability of trying to reduce wild animal suffering might be lower than work in tackling factory farming, but the scale is much much higher."
Also, if you do use CLR and SI as your main evidence in believing that helping digital minds is tractable, I am afraid you might have to change another conclusion in your post. SI is not entirely optimistic that the future with digital minds is going to be positive (and from chatting with their people I believe they seem pessimistic), and CLR seems to think that astronomical suffering from digital minds is pretty much the default future scenario. If you put high credence in their views about digital minds, I can't see how you would conclude that "reducing x-risks is much much more promising". To be fair to SI and CLR, my understanding is that they are strongly opposed to holding extremely unpopular and disturbing ideas such as increasing X-risk for the reason that this will actually increase suffering-risks. I believe this is the correct position to hold for people who think the future is in expectation negative. But I think at the minimum, if you put high credence in SI and CLR's views, you should probably be at least skeptical about the view that decreasing X-risk is a top priority.
NOTE 1 (on the last paragraph): I struggled a lot in writing the last sentence because I am clearly being self-defeating by saying it right after expressing what I called "the correct position".
NOTE 2: Some longtermists define X-risk as the extinction of intelligent lives OR the "permanent and drastic destruction of its potential for desirable future development". In this definition S-risk seems quite clearly a form of X-risk. So it is possible for someone who solely cares about S-risk to claim that their priority is reducing X-risk. But operationally speaking it seems that S-risk and X-risk are used entirely separately.
NOTE 3: Personally I have a different argument against increasing extinction risk than cooperative reasons. Even if one holds that the future is in expectation negative, it doesn't necessarily follow that it is better for earth-originated intelligent beings to go extinct now, because it is possible that most suffering in the future will be caused by intelligent beings not originating from earth. In fact, if there are many non-earth-originated intelligent beings, it seems extremely likely that most of the future suffering (or well-being) will be created by them, not "us". Given that we are a group of intelligent beings who are already thinking about S-risk (after all, we have SI and CLR), and have thereby proved to be the kind of intelligent beings who could at least possibly develop into beings who care about S-risk, maybe this justifies humanity continuing even under the negative-future view.
Hi Fai, I appreciate your disagreements. Regarding the tractability of helping digital minds, I participated in CLR’s six week S-risk Intro Fellowship and I thought that their stuff is quite promising. For example, many digital minds s-risks come from an advanced AI that could be developed soon. CLR has connections with some of the organizations that might develop advanced AI. So it seems plausible to me that they could reduce s-risks by impacting how AI is developed. You can see CLR’s ideas on how to influence the development of AI to reduce s-risks in their publications page [edit 2023-02-21: actually, I'm unsure if it is easy to learn their ideas from that page, this is not how I learnt them, so I regret mentioning it]. Some of their other stuff seems promising to me too. I don’t see such powerful levers for animals in longtermism, perhaps you can convince me otherwise. I am not familiar with the work of SI. (I'll address your other points separately)
I agree, but I don’t think this changes much. There can be so many digital minds for so long that, in terms of expected value, I think digital minds dominate even if you think there is only a 10% chance that they can be sentient, and a 1% chance that they will exist in high numbers (probabilities I think are unreasonably low). I explain why I think that here, although having just skimmed it, I don’t think I did a great job of it. I remember reading a much better explanation somewhere; I'll try to find it later.
For now, I’ll just add one more argument to it: Stuart Armstrong makes it seem like it’s not that difficult to build a Dyson Sphere by disassembling a planet like Mercury. I imagine that the materials and energy from disassembling planets could probably also be used to build A LOT of digital minds. Animals are only able to use resources from the surface layer of a small fraction of planets, and they are not doing it that efficiently. Anyway, I want to look into this topic deeper myself; I may write more here when I do.
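To make the expected-value arithmetic in my previous comment concrete, here's a toy calculation. All the numbers are made-up placeholders (including the population figures), chosen only to show how a huge scale can swamp low probabilities:

```python
# Toy expected-value comparison with illustrative, made-up numbers.
p_sentient = 0.10   # chance digital minds can be sentient
p_many = 0.01       # chance they are produced in huge numbers
n_digital = 1e30    # hypothetical count of digital minds in the "huge numbers" scenario
n_wild = 1e20       # hypothetical count of wild animals over the same period

# Expected numbers of moral patients in each scenario
# (treating wild animal sentience as certain for simplicity).
ev_digital = p_sentient * p_many * n_digital
ev_wild = 1.0 * n_wild

print(ev_digital / ev_wild)  # ratio of roughly 1e7: digital minds dominate
```

Even after discounting by both probabilities, the digital-minds scenario comes out far ahead, which is the sense in which the scale argument is robust to sentience and "will they exist" uncertainty. Of course, this all hinges on the placeholder counts, which is exactly where our disagreement about credences lives.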
Thank you for your replies Saulius.
Participating in CLR's fellowship does make you more informed about their internal views. Thank you for sharing that. I am personally not convinced by CLR's open publications that those are things that would in expectation reduce s-risk substantially. But maybe that's due to my lack of mathematical and computer science capabilities.
I would reach the same conclusion if I had the same probabilities you assigned, and the same meaning of "high numbers". I believe my credence on this should depend on whether we are the only planet with civilization now. If yes, and if "high numbers" means >10,000x the expected number of wild animals there will be in the universe, my current credence that there will actually be a high number of digital beings created is <1/10,000 (in fact, contrary to what you believe, I think a significant portion of this would come from the urge to simulate the whole universe's history of wild animals). BTW, I change my credence on these topics rapidly, often by orders of magnitude, and there are many considerations related to this. So I might have changed my mind by the next time we discuss this.
But I do have other considerations that would likely make me conclude that, if there are ways to reduce digital being suffering, this is a (or even the) priority. These considerations can be summarized in one question: if sentient digital beings can exist and will exist, how deeply will they suffer? It seems to me that on digital (or even non-biological analog) hardware, suffering could be much more intense and run much faster than on biological hardware.
Great to get your takes Saulius, appreciate it.
I've thought about WAW much less than you, but my take is:
As far as I can tell, there's nothing in your post to update away from this opinion? (I read it quickly, so sorry if I missed something)
Thanks Sanjay, that’s a great question! Here are my thoughts:
I strong-upvoted this comment. I found the beginning of the comment particularly helpful:
Thanks very much for writing this. I always appreciate "why I updated against what I was working on posts", and I thought this was very clear, even for someone who hasn't followed WAS closely.
Hi Saulius, thank you for the interesting post. When you consider wild animal interventions do you include wild-caught fish?
Hi Tyner. This is one of the questions that I decided to not clarify in the article for the sake of conciseness, so thank you for asking.
Wild-caught fish die under human control, so working on killing them more humanely doesn't have any of the complicated, uncertain consequences of the WAW interventions that I discuss. Relative to WAW issues, it is easy to research and is unambiguously good if we can do it right. To me, it is precisely the kind of intervention we should focus on first before tackling super-complex WAW issues. So everything that I say about farmed animal welfare applies to humane fish slaughter.
Decreasing the catch of wild fish (e.g., by buying catch shares) does have complicated WAW consequences and it is very unclear whether they are good or bad. Those fish would've died anyway. Would their deaths have been better or worse if they weren't caught? Maybe we can answer that question. But more importantly, fish catch changes the populations of various wild animals. Are those changes good or bad? ¯\_(ツ)_/¯ Also, if we catch fewer fish now, maybe we make the fishery more sustainable, and hence more fish will be caught in the fishery in the long term... It feels like we are doing a random thing here. Things that I say about WAW apply to decreasing the catch of fish.
To clarify, we might be doing much more good by decreasing catch but it seems equally possible that we are doing a lot of harm. If stunning fish has an impact of +1, then I think that decreasing catch has an impact somewhere from -100 to +100, and my probability weights average out at 0.
If I could very confidently say that the impact of reducing the catch was -98 to +102, and that the median outcome is +2, I would prioritize reducing the catch over stunning. Some risk-averse people wouldn't; it's a matter of personal preference. But that would only hold if these were like casino odds, and that can't happen here.
What might happen is that I might work on a very complex cost-effectiveness model of reducing the catch for a year. In the model, I'd try to determine all the impacts on animal populations and well-being, assign subjective weights to each of them, and then average them out. In the end, I'd say that according to the model, the impact of reducing catch could be anywhere from -98 to +102 but the best guess of the model is +2.
This is very different from casino odds. I’m unsure that I modeled what would happen correctly, whether my subjective weights are correct, and whether my model is free from mistakes. In terms of Bayesianism, I’d say that this model wouldn’t update me much on my prior of 0. In layman’s terms, I’d say that I’d still choose stunning over reducing catch for the same reason I’d choose a hotel with 1,000 reviews that average out to 4.5 stars over a hotel with one five-star review (I’m borrowing this illustration from this blog, I think). The evidence that reducing catch (or a hotel with one review) is a good choice is just too weak.
Also, note that in the end, we are still clueless about the butterfly effects of both because we are always clueless about that. I'm just choosing to ignore 100th order effects because I want to avoid analysis paralysis.
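The Bayesian point above can be sketched with a toy normal-normal update. All numbers here are made up for illustration: a sceptical prior centered on 0, and a model estimate of +2 whose huge error bars (roughly the -98 to +102 range) give it very little weight:

```python
# Conjugate normal-normal update with made-up illustrative numbers:
# a very noisy model estimate barely moves a sceptical prior.
prior_mean, prior_sd = 0.0, 5.0    # prior on the impact of reducing catch
model_mean, model_sd = 2.0, 50.0   # model's best guess +2, with huge uncertainty

# Precision-weighted average of prior and model estimate.
prec_prior = 1 / prior_sd**2
prec_model = 1 / model_sd**2
post_mean = (prior_mean * prec_prior + model_mean * prec_model) / (prec_prior + prec_model)

print(round(post_mean, 2))  # about 0.02: far below the confident +1 from stunning
```

Under these illustrative numbers, the model's +2 moves the posterior only to about +0.02, so the confidently-estimated +1 from stunning still wins. That's the hotel-reviews intuition in arithmetic form: weak evidence barely shifts the prior.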
I wrote the article on reducing catch shares, and just wanted to comment saying that I strongly agree with Saulius's analysis here.
Currently, implementing humane slaughter for wild-caught fish seems like a slam dunk.
Currently, reducing the catch of wild fish seems extremely ambiguous. My catch share article mostly concluded with "we should do more research on this to reduce these uncertainties". I also wrote a later article about subsidies - abolishing fisheries subsidies seems like a fairly easy way to reduce the catch. But in many cases, it would cause the population size of the target fish population to increase, causing more deaths by fishing over time even if effort remains low. (Plus, the effects on other wild animals...)
So I strongly agree with Saulius that:
Thanks so much for sharing this; I'm curating it.
I'd also encourage people to read the comments and this exchange (and also look at "The correct response to uncertainty is *not* half-speed").
Some particularly good qualities of this post:
This isn't a summary, but in case people are looking for the overall opinion, I found the following a helpful excerpt (bold mine):
Interesting to hear this update. Presumably many of these are views that people working in WAW have heard before from critics. If you were to try to persuade someone who currently feels strongly about it as a cause that they would shift focus, what would you say are the key factors that might sway them?
I would love to see more work done by regular/totalising utilitarians on how we could improve the expected quality (rather than quantity) of future life, even on the assumption that it will be generally positive!
Regarding the first question, I’d just say what I wrote here and in the linked posts. I don’t know what they hear from other critics, I haven’t asked them that.
It seems that most people who work on improving expected quality of future life are negative (leaning) utilitarians who work on s-risks. And I think it makes sense because if you have an assumption that the future will be positive, working on x-risks seems more promising. It’s very difficult to predict how actions we take now will affect life millions of years from now (unless a value lock-in happens soon). It seems much easier to predict what will decrease x-risks in the next 50 years, and work on x-risks seems to be higher leverage.
But maybe there can be some work to improve the expected quality of future life that is promising. Do you have anything concrete in mind? I was excited about popularizing the idea of hedonium/utilitronium some time ago (but not a hedonium shockwave; we don’t want to sound like terrorists, and I wouldn’t even personally want such a shockwave). But then I was too worried about various backfire risks, and I wasn’t sure how realistic a future is where people have the means to make hedonium but just don’t think about doing it enough.
This is the standard justification for focusing on immediate extinction risks, but I think it's weak. It seems reasonable as a case for looking at them as a cause area first, but 'it's hard to predict EV' is a very poor proxy for 'actually having low EV'. IMO the movement has been very lazy about moving on from this early heuristic.
I don't have anything concrete in mind about quality of life. I've been doing some work on refining existential risk concerns beyond short term extinction, which you can see the published stuff on here. I'm currently looking for people to have a look at the next post in that sequence, and the Python estimation script it describes. If you'd be interested in having a look at that, it's here :)
I do wonder whether a similar approach could be useful for quality of life, but haven't put any serious thought into it.
I wanted to emphasise this point and how important I think it is. I feel that cluelessness about the effects of wild animal interventions (particularly as it relates to wild animal population dynamics) is one of the most important topics in EA that could be resolved by further research.
Cluelessness about wild animals comes up a lot even in my research on farmed animals - e.g. the effects of reducing meat consumption on fish caught for fishmeal, or the effects of reducing fisheries subsidies on wild fish and other wild animals.
These dynamics are extremely non-intuitive (e.g. catching fewer fish does weirdly seem bad for fish in many contexts under some philosophical views). And they're strongly context-dependent. But with some dedicated research in ecological modelling and experimental ecology, I do think that we could make substantial progress on understanding this topic.
Thanks so very much for this. I wish I could give it more upvotes. As I've written about elsewhere, the obsession with expected value while ignoring tractability is one of the worst aspects of that corner of EA. (Why I love https://forum.effectivealtruism.org/posts/GXzT2Ei3nvyZEdWef/every-moment-of-an-electron-s-existence-is-suffering )
But didn't the OP also use expected value calculation to conclude that digital minds are going to dominate the value in the future, while admitting the tractability for helping digital minds might be even lower than helping wild animals?