
A note on status: The following post is Wild Animal Initiative’s first foray into thinking about the intersection of transformative AI and wild animal welfare. I’m not an AI safety expert, but I’ve been broadly familiar with the main ideas since 2012. I’ve done my best to read around the issues and have asked several more AI-informed folks to give feedback on this draft; however, our thinking at Wild Animal Initiative is still at a very early stage, so please take this all with a grain of salt.


We welcome additional feedback on how we should be preparing for potential societal changes due to AI. Please feel free to contact strategy@wildanimalinitiative.org to share suggestions or thoughts; I’ll also be engaging with comments on this post.

(edited to correct some typos on 28 April)

Executive Summary

Many in the AI safety community believe transformative AI (TAI; defined here as AI tools with cognitive capabilities surpassing highly trained humans) could arrive within decades or even years. This report explores the implications of such short timelines for wild animal welfare (WAW), both for wild animals directly and for scientific research in the field.

The timing of TAI's arrival could significantly influence the optimal strategy for WAW. If TAI arrives within 20 years, the value of traditional academic field building diminishes somewhat, though establishing legitimacy for WAW considerations remains crucial. A technological explosion would likely accelerate theoretical and modeling work more quickly than experimental fieldwork, suggesting a strategic shift toward prioritizing physiological research that AI agents can't easily perform.

Four key TAI challenges appear particularly relevant for wild animals: space exploration (risking animal suffering spreading beyond Earth), AI misalignment (necessitating efforts to instill good WAW values), abundance (requiring preparations to utilize AI tools for WAW), and the reality of unpredictable outcomes (demanding strategic flexibility). Some specific suggestions for wild animal welfare strategy include increased emphasis on gathering welfare-relevant datasets, fostering AI literacy within the community, addressing potential value conflicts more urgently, and maintaining strategic flexibility in the face of uncertainty.

It seems that investing in WAW science remains valuable under many scenarios despite short TAI timelines, although specific research priorities may vary. The scientific progress we make today in establishing welfare metrics, gathering baseline data, and building conceptual clarity will hopefully guide AI systems toward better outcomes for wild animals regardless of timeline. 

Transformative AI and wild animals: An exploration.

Many people in the AI safety community believe that transformative AI (TAI) — here defined as AI tools with cognitive capabilities surpassing the most highly trained human minds — could be imminent. Based on advances in AI over the past 5 or so years, leading AI scholars predict that TAI is very likely to arrive this century, and may be here in a decade or less.

While others have made significant efforts to predict the effects of transformative AI on humans, I have not seen any detailed attempts to predict its effects on wild animals. In this report, I explore two related concepts: the ways TAI and associated outcomes may impact wild animal welfare directly, and the ways TAI and associated outcomes may impact the pursuit of scientific research on wild animal welfare. Because I’m the strategy director for Wild Animal Initiative, which is currently focused on accelerating the scientific field of wild animal welfare, my primary focus is on the latter issue.

When will TAI get here, and what will it bring?

The implications of TAI for wild animal welfare and the strategy of the associated movement are sensitive both to the timing of TAI’s arrival and to the nature of the changes that TAI will bring. You can skip to the next section if you’re already pretty familiar with the AI safety discourse in general, and this report from Forethought in particular.

When will TAI get here?

Of the changes that the arrival of TAI might bring, many may have implications for wild animal welfare. But the precise implications of these changes for current wild animal welfare strategy depend heavily on exactly when TAI arrives. Generally, I think it is useful to divide the option space into two buckets: TAI arrives in < 20 years, or TAI arrives more than ~ 20 years from now (including potentially never arriving). These buckets are based on my expectations about baseline progress in wild animal welfare science and how long it will take to translate into impact for animals: I suspect that it will take about 20 years for wild animal welfare science to develop from the small scientific field it is now into a robust scientific field whose regular contributions to wild animal welfare advocacy are actually taken seriously by relevant decision-makers. There will be some things we can do for wild animal welfare in the meantime, but this is the dividing line around which I suspect we would be better off prioritizing field growth over direct impact, or vice versa.

Understanding the timeline for TAI matters because it helps us think about what we should be trying to accomplish before it gets here, both in terms of scientific discovery and in terms of application to animal welfare. If the timeline is extremely short, there probably aren’t many transformative outcomes for wild animal welfare we can achieve in that time, and we should mostly be thinking about how to get the fruits of AI progress to benefit wild animals. If the timeline is quite long, there’s probably a lot of wild animal welfare progress that can be made in that time, so we don’t want to punt urgent things to the TAI to solve. In the following sections, I also explore how timelines influence strategy across wild animal welfare work areas.

What will TAI bring?

I am not an AI expert. As such, I’m basing the scenarios I present on a recent report from Forethought, a research group focused on preparing for TAI. The report authors posit two major effects of the AI transformation: a technology explosion and a manufacturing explosion. In the technology explosion, TAI outperforms human researchers and multiplies, in terms of both the number of AI actors and the quality of the intellectual activities they can perform, leading to significant increases in global capacity for cognitive labor. In the manufacturing explosion, TAI becomes better than humans at figuring out how to build and use machines, leading to significant increases in the pace of manufacturing of things like drones and specialized robots.

In addition to the technology and manufacturing explosions described above, the authors posit several “grand challenges” that TAI could bring. These grand challenges are outcomes that TAI makes possible or that the two explosions could push us toward. The authors think we can start preparing now to try to avoid or take advantage of these potential outcomes. For a detailed overview of the grand challenges, I encourage folks to read the report or listen to the related podcast episode; this recent EA forum post might also be helpful. In the rest of this report, I mostly assume the reader is familiar with the meanings of terms like value lock-in, AI misalignment, and epistemic disruption. If that’s not you, I recommend reading the Forethought report or a different AI safety explainer before reading this report; there is also a brief explanation of each term in the Appendix.

How will a technological & manufacturing explosion affect wild animal welfare science?

TAI and the strategic importance of academic fields.

The question most relevant to my work at Wild Animal Initiative is what TAI implies for the idea of academic field acceleration. Essentially, we wanted to reflect on the possibility that short TAI timelines should incline us to do something besides our main strategy.

First, it will help to explain why Wild Animal Initiative focuses on academic field growth. Ultimately, what determines the ability of humans to help improve wild animal welfare is whether we can figure out how to measure wild animal welfare, how animal welfare is affected by various features of life in the wild, and how to improve wild animal welfare while navigating network effects and inter-animal tradeoffs. All of these questions require scientific progress. Whether this progress is made inside or outside of academia may have minimal or no impact on our ability to improve wild animal welfare, as long as that scientific progress is actually used to inform practice. And it is certainly possible to do non-academic field building. Progress studies (e.g., as practiced by the Institute for Progress Studies) is an example of a new-ish research field that has actively chosen not to pursue academic status, rejecting much of the bureaucracy and separation from impact associated with academic institutions.

Generally, I think fostering your research field outside of academia is the right course when:

  1. You’re pretty close to understanding how to achieve the outcomes you want to achieve: Progress studies is much closer to being able to translate research → policy than wild animal welfare is to being ready to translate research → welfare interventions (with a few exceptions).
  2. The work you’re doing requires more thinking than experimenting: Progress studies researchers generally don’t need access to expensive scientific equipment — already owned by universities — to do their work, while wild animal welfare scientists often do.
  3. The decision-makers you care about influencing don’t care whether you’re in academia: I don’t know enough about progress studies to say whether this is the case for them, but it seems less true in wild animal management.

In the case of wild animal welfare science, we’re pretty far away from easily translatable insights, except in a few specific cases like wildlife fertility control or pest management practice improvements. The work that needs to be done is as much experimental as it is theoretical, and requires lab space and expensive equipment. You can get around this issue if you’re extremely well-resourced, and have ~ $10M to fund your own non-academic research center; wild animal welfare, as a movement, is not in this position. Finally, wildlife biologists and wildlife academics seem to interact a lot, and have related intellectual influences (at least in the US — the context I am focused on). I’m sure academics would complain that practitioners don’t listen to them enough, but I’ve seen firsthand how academic themes often flow out into wildlife practice, particularly at conferences like The Wildlife Society’s Annual Conference, where practitioners and scientists regularly interact.

Do these conditions still hold in a world of transformative AI? Potentially not. First, and to my mind most importantly, technological advances in AI may vastly speed up technological progress in wild animal welfare, such that we can solve in a handful of years the problems we thought would take 50 to 100 years of sustained scientific research effort to address. I see the ultimate value of a scientific field — with its sense of identity, continuity, and community — as being that it helps to keep progress going over time scales longer than any particular scientist’s interest in the topic. But if we can solve major roadblocks in wild animal welfare on a relatively short timeline, it may be less important that a field develops around wild animal welfare science at all.

The two other considerations on my list may also be influenced by TAI. In the case of infrastructure needs, if TAI also makes enormous advances in manufacturing, it could reduce the need to rely on university resources — most likely because of decreased costs for the relevant equipment. In the case of getting the “academic stamp of approval”, transformative AI may change the role of academia in society. If policy-makers are all taking instruction directly from AI tools, they may not care what academic scientists have to say on the topic. However, after discussing with a few AI experts I know and consulting with Claude (the LLM) about TAI generally, I think both these outcomes are much less useful to think about than the first one, so I focus the next section on timelines to impact.

Finally, another consideration is the possibility that technology and manufacturing explosions will be asymmetric: That some work areas will experience explosive growth more quickly than others. The potential for asymmetry doesn’t obviously impact the value of an academic field, but it does point to a potential reorganization of research priorities. I describe that issue in more detail below, as well.

Shortened impact timelines for wild animal welfare science

It seems that in most of the worlds in which TAI is coming soon, technological progress towards wild animal welfare will also advance more quickly than it would have otherwise (unless we fail to take advantage of the progress; more on that in this section). If so, the most obvious issue for academic-field-building strategies would be that no field is necessary. However, I think that the possible worlds in which this is true are far fewer than the possible worlds in which academic field building has value. This is because academic field building has two kinds of value: increasing cognitive capacity dedicated to solving WAW problems, and increasing legitimacy. Currently, the best way to build cognitive capacity is to bring more people into your field. Certainly, if each of a handful of well-aligned, thoughtful wild animal welfare scientists is suddenly able to work with dozens of AI research assistants, the value of bringing more people into the field may be reduced. The extent to which this easy-to-obtain additional cognitive labor benefits the field depends on when you start reaching diminishing returns. Because WAW is a fairly new field with a tremendously broad scope, it might be quite some time before we reach the point at which more scientists means only minimally more progress. As a result, it could still be beneficial to have a larger scientific community ready to use AI assistants — 500 scientists working with 10 AI assistants each might still be quite a bit better than 50 scientists with 10 AI assistants each.

The second benefit of having a field — increasing legitimacy — is much more salient, even in worlds with short AI timelines. Right now, only a handful of decision makers even know what wild animal welfare is (in the ~ hedonistic utilitarian sense), let alone take it seriously. It seems to me that in the majority of the option space post-TAI, this is still a factor preventing good outcomes for wild animals, and the risk is decreased if, by the time TAI arrives, the field is well-established as a potentially small but meaningful and appropriate direction of inquiry. We already optimize quite a lot of the work we do for legitimacy-building, attempting to conduct and fund research which showcases the intellectual value of wild animal welfare and helps people understand what it is and why they should care about it. We also try to normalize other important points, like the idea that low-charisma, highly numerous animals might matter just as much as cute endangered species. Although building academic legitimacy isn’t the only way to get these ideas out there, it still seems like quite an important one across many of the possible worlds that seem likely post-TAI.

That said, I do think reflecting on these possibilities points to a greater value of communications work and awareness building for wild animal welfare perspectives. Currently, Animal Ethics is the only organization I am aware of that is dedicated to growing awareness and familiarity with WAW thinking. But since their core focus is on existing animal advocates, it could be valuable for the WAW community to expand its efforts at building awareness, familiarity, and legitimacy among other audiences. Those with long timelines might think it is not valuable to work on public advocacy right now, because we have little to request of the public at this time and it will take a long time to get to the point where we do. In contrast, those with short timelines might think that we’ll be close to WAW interventions at scale in a matter of years; if that’s right, public communications might be far more valuable.

Another important issue if timelines to impact are short is the potential for values-based disagreement among relevant actors. There are three value sets that come up a lot in human-wild animal relations: Biodiversity conservation values, consumptive use of nature values, and animal rights values (with the former two having much more power than the latter). Each value set conflicts in at least a few areas with a value set focused entirely on wild animal welfare. For example, both conservationists and animal rights advocates generally oppose human intervention in nature, with certain exceptions specific to their values: Conservationists may intervene to cull populations of one species in favor of another; animal rights advocates may intervene to protect animals from human activities such as poaching. But foundationally, both groups believe the best thing for wild animals is to leave them alone and let nature be nature (generally; of course, there is value heterogeneity in any community). This directly contradicts the wild animal welfarist view that we should act to reduce suffering when we can do so responsibly, regardless of its cause.

Currently, there are few (if any) specific interventions WAW folks are confident they should advocate for, but lots of research areas that are palatable to almost all these value sets. Because there aren’t many wild animal welfare interventions that are obviously possible, there also isn’t too much to disagree about among the various wild animal focused factions. If suddenly quite a lot of interventions were possible, this would change. In the interest of field growth, many in the wild animal welfare community primarily focus on showcasing how much we have in common with other communities. But if a huge number of interventions that are unappealing to some of these groups are going to be available in merely a few years’ time, resolving these conflicts or figuring out how to work around them becomes much more urgent.

Takeaway: Although I would still probably want the community’s portfolio of activities to include academic field growth, an increased allocation of resources toward public communications is more valuable if your timelines are short. Resolving issues of contradictory values between other wild animal actors (like the conservation movement) is also more urgent, and something potentially best addressed with communications activities that are not currently prevalent in the community.

Asymmetric acceleration and research priorities

The Forethought report has a lot of useful information about the idea of asymmetric takeoff: The possibility that AI will accelerate different work areas at different rates. One likely asymmetry, according to the reports I’ve read and some discussion with relevant experts, is that the technological explosion is likely to arrive before the manufacturing explosion. This has interesting implications for research prioritization within the field of wild animal welfare science.

Taking a step back to current WAW prioritization: wild animal welfare science needs to resolve some key bottlenecks to make progress.

  • For the vast majority of wild animals, we don’t know the extent to which they are sentient or how to measure their welfare if they are. Even when we do know how to measure their welfare in theory, the methods may not be usable with free-ranging animals.
  • In part because we don’t know how to measure welfare, and in part because so few people have tried to do it, we broadly have no idea what quality of life is like for the vast majority of wild animals. We also don’t understand the fundamental relationships between various parameters and welfare for almost any animal, so we can’t predict how a wild animal’s welfare will respond to any kind of change, except in extremely obvious cases.
  • In part due to this complete lack of knowledge of the fundamentals, and in part due to highly restricted modeling capacity compared to what we would like, we can’t predict how things we do in nature will influence wild animal welfare, so we can’t confidently act to improve it.

These challenges are surmountable with time, but they require a combination of theoretical, modeling, and experimental work. Of these, theoretical and modeling research seem most amenable to AI acceleration, while experimental work — of the kind that involves directly interacting with animals — will likely accelerate more slowly. Existing AI tools are already helping to speed up data analysis in areas like behavioral research: A researcher can train an AI model to recognize a behavior, then have it analyze large amounts of video in far less time than it would take the researcher to do it manually. These sorts of analytical advances could make it possible to process large volumes of video and audio data in much shorter periods of time. To the extent that we can draw welfare-relevant content from audio and video recordings of animals, AI can probably help us speed that up.
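
To make this concrete, here is a minimal sketch of that kind of video-scanning workflow in Python. The classifier itself is a hypothetical stand-in (`classify_frame` is not a real library function; a real version would be a model trained on researcher-labeled clips), while the video handling uses standard OpenCV calls.

```python
# Sketch: scanning camera-trap footage for a target behavior.
# classify_frame is a hypothetical stand-in for a trained model
# (e.g., one fine-tuned on researcher-labeled clips); the video
# handling uses standard OpenCV calls.
import cv2


def classify_frame(frame) -> bool:
    """Placeholder: a real version would run a trained behavior model."""
    raise NotImplementedError("plug in a trained classifier here")


def detection_times(video_path: str, stride: int = 5) -> list[float]:
    """Return timestamps (in seconds) of frames flagged by the classifier.

    stride skips frames to trade temporal resolution for speed.
    """
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if metadata missing
    times, frame_idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % stride == 0 and classify_frame(frame):
            times.append(frame_idx / fps)
        frame_idx += 1
    cap.release()
    return times
```

The point is that once a classifier exists, hundreds of hours of footage reduce to a short list of timestamps a researcher can review.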

Right now, AI cannot send an agent into the field to collect blood, and while it could potentially, in the near future, identify candidate welfare metrics, it cannot conduct the experiments to validate those metrics. These kinds of physiological experiments broadly do not involve automatable machines at the collection level, and so seem unlikely to be immediately accelerated without dramatic advances in human-level-capacity robotics. Thus, I expect that progress on experimental work, particularly related to physiology, will accelerate much less than other areas of wild animal welfare science.

Two areas that seem likely to accelerate much more quickly are the monitoring and modeling of wild animal welfare. In the first case, the paired technological and manufacturing explosions may make autonomous drones more feasible, as well as leading to advances in existing tech like biologging, audio recording, video recording, or satellite monitoring. Imagine being able to use satellites to track the health of smaller and smaller animals, being able to deploy nearly-undetectable, animal-borne biologging devices, or using audio recorders sensitive enough to record breathing or heart rates of individual animals. Although these possibilities challenge current technological capabilities, they are extensions of things we can already do, and broadly exist in areas that AI seems to be helping with to at least some extent. So, they seem the most likely areas for TAI to have significant effects.
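
As one concrete illustration of the audio idea: if a recording really did capture an animal’s breathing, estimating a rate from it is a routine signal-processing task. Below is a minimal sketch; the 0.1–1.5 Hz band and all other numbers are assumptions for illustration, not validated parameters for any species.

```python
# Sketch: estimating a breathing rate from a (hypothetically) sensitive
# audio recording. All numbers are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks


def breaths_per_minute(signal: np.ndarray, fs: float) -> float:
    """Band-pass around plausible breathing frequencies, then count peaks."""
    b, a = butter(2, [0.1, 1.5], btype="bandpass", fs=fs)  # assumed band
    filtered = filtfilt(b, a, signal)
    # Require peaks to be at least 0.5 s apart to avoid double-counting.
    peaks, _ = find_peaks(filtered, distance=int(0.5 * fs))
    return len(peaks) / (len(signal) / fs / 60.0)


# Synthetic demo: a 0.4 Hz "breathing" oscillation buried in noise.
fs = 200.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
demo = np.sin(2 * np.pi * 0.4 * t) + 0.5 * rng.standard_normal(t.size)
print(f"estimated rate: {breaths_per_minute(demo, fs):.1f} breaths/min")
```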

We are currently quite limited in how much we can predict wild animal welfare outcomes using models. This is partly due to a lack of welfare-relevant data to put into the models, and to philosophical uncertainty about how to evaluate welfare at the population and community levels. But it’s also due to computational limitations. Assuming the monitoring explosion theorized above helps us understand correlations between welfare and various ecological variables, we may be able to use technological advances in computing to make increasingly accurate and holistic models of welfare outcomes. If we can readily analyze vast amounts of monitoring data, we can potentially start looking for welfare patterns — things like whether certain species have better welfare than others, whether welfare is influenced by stage of life, how environmental factors shape welfare, and more. But this will be possible only insofar as we know how to interpret this data in welfare terms.
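
For instance, once monitoring data exist and some welfare proxy is defensible, the pattern-finding step itself is straightforward. A toy sketch follows, where every column name and the welfare proxy are invented placeholders; the hard part is justifying the proxy, not running the analysis.

```python
# Sketch: looking for welfare patterns in (hypothetical) monitoring data.
# Column names and the "welfare_proxy" score are invented placeholders.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1_000
obs = pd.DataFrame({
    "species": rng.choice(["vole", "sparrow", "frog"], size=n),
    "life_stage": rng.choice(["juvenile", "adult"], size=n),
    "temperature_c": rng.normal(12, 6, size=n),
    "welfare_proxy": rng.normal(0, 1, size=n),  # stand-in welfare score
})

# Do some species or life stages score systematically better or worse?
summary = obs.groupby(["species", "life_stage"])["welfare_proxy"].agg(
    ["mean", "sem", "count"]
)
print(summary)

# How does an environmental variable relate to the proxy?
corr = obs["temperature_c"].corr(obs["welfare_proxy"])
print(f"corr(temperature, welfare proxy): {corr:.3f}")
```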

Strategically, the potential asymmetric acceleration of technology across wild animal welfare inquiry areas suggests at least a modest amount of prioritization towards less “puntable” projects. The idea behind this is that there is no point investing a lot of slow human energy in advancing welfare modeling technology if TAI will be much faster at that project five years from now, so we should punt the problem to the TAI. If that’s right, it might make sense to focus more on experimental and physiological work that won’t be as likely to be accelerated in the near future. If we were certain of near-term TAI, wild animal welfare scientists would need to look at their research agendas through an asymmetrical acceleration lens and potentially prioritize work that (1) makes it easier for future TAI to interpret data in welfare terms and (2) is physiological in nature, such that we are unlikely to be worse than AI or robots at collecting the data for a longer period of time.

The extent to which this prioritization makes sense depends in part on the timelines (both of TAI arrival and of how long the relevant research would take) and the extent to which progress on one question is blocking the others. Even if parts of identifying welfare metrics could be done faster by TAI, it’s hard to do anything else until you have at least some validated welfare measure for at least some species. So it could make sense to continue working on the lowest-hanging fruit in “blocking” categories of research, even if they are otherwise puntable questions. Additionally, if TAI is coming in a year, punting research questions to it is much less costly than if it’s coming in 100 years — there are quite a lot of animals we should be able to help in 100 years’ time, so it doesn’t make sense to punt in that timeline. Finally, if all questions are roughly equally valuable and we don’t have enough capacity to answer all of them, there’s almost no cost to punting the puntable questions for the next decade while we work on the seemingly less puntable questions. However, the latter doesn’t seem to be true: If I put 0% likelihood on TAI, I’d be very likely to prioritize modeling research, for a few reasons that go beyond the scope of this article. A next step here could be to do more research on what is likely to be puntable or not, and what the costs of punting might be.
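
One way to make that punting logic explicit is a back-of-the-envelope expected-value comparison. The following toy model uses entirely made-up numbers and is only meant to show how the arrival date changes the cost of punting a question.

```python
# Toy model: expected research progress from working on a question now
# versus punting it to TAI. All numbers are illustrative assumptions.

def strategies(p_tai: float, tai_year: int, horizon: int = 100,
               human_rate: float = 1.0, ai_speedup: float = 20.0) -> dict:
    """Progress units over `horizon` years under two strategies.

    If TAI arrives (with probability p_tai) in year tai_year, work after
    that point runs ai_speedup times faster than human-only work.
    """
    post_tai = (horizon - tai_year) * human_rate * ai_speedup
    # Work now: human-rate progress before TAI, accelerated progress after.
    work = (p_tai * (tai_year * human_rate + post_tai)
            + (1 - p_tai) * horizon * human_rate)
    # Punt: no progress unless and until TAI arrives.
    punt = p_tai * post_tai
    return {"work_now": work, "punt": punt, "cost_of_punting": work - punt}


for year in (1, 20, 100):
    r = strategies(p_tai=0.5, tai_year=year)
    print(f"TAI in {year:>3} yr: cost of punting = {r['cost_of_punting']:.0f}")
```

In this toy setup, punting costs little if TAI is imminent and a great deal if it arrives late or never; the real work is in estimating the inputs, which is why researching puntability matters.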

Finally, in addition to prioritizing differently among research projects, a person with short AI timelines might also prioritize differently in the area of workforce development. Currently, it seems more useful to have great thinkers in wild animal welfare than it does to have great field scientists (even though the latter is still quite important) because so many of the issues are at least partly conceptual. But if you think a few scientists with a lot of AI help could solve those conceptual issues quickly, then less well-aligned field data collectors could end up being the bottleneck to progress. This consideration might speak to focusing more on early-career scientists (who often have the skills to do fieldwork even if they aren’t experienced enough to make broad conceptual progress), and worrying less about spreading conceptual alignment among a large cohort.

Takeaways: In proportion to our confidence in short timelines, we should research which areas of work are most likely to be puntable and which are least likely to be, as well as continuing existing work on identifying bottlenecks. We should continue to reflect on the implications of AI for workforce development, and potentially increase community outreach to expand versatility and the community’s ability to respond to a range of outcomes. Broadly, paying attention to the extent to which AI is accelerating different types of wild animal welfare research could help guide priorities going forward, and test whether the assumptions made in this section are right. For example, if AI facilitates expansions in robotics or automated data collection from animals much sooner than I anticipate, the appropriate prioritization would change.

How does TAI affect WAW directly?

The previous section focused on how TAI-related explosions in progress might affect wild animal welfare science. In general, such explosions also seem relevant to direct work on wild animal welfare (i.e., wild animal welfare interventions), but in fairly unpredictable ways. My guess is that the broadly likely outcomes from TAI are semi-unpredictable increases in human or AI capacity and general destabilization. When used well, increased capacity could be great for wild animal welfare science, and to the extent that it increases abundance, it could increase human willingness to help wild animals. When used poorly, and when paired with general destabilization, it could distract humankind from helping animals at all, directly increase suffering through war, and make it harder to do interdisciplinary and cross-border science.

Since I couldn’t easily make predictions about how the technology and manufacturing explosions, broadly, would affect wild animal welfare interventions, I approached this problem by analyzing each of the “grand challenges” described in the Forethought report. For around half of the grand challenges, it was not clear to me that there were any relevant takeaways for the wild animal welfare community. There were two reasons for this: either the issue seemed like it would not be particularly relevant to the WAW community compared to other priorities, or the issue seemed like it could affect WAW a lot, but with completely uncertain sign (i.e., no way to know if the effect would be positive or negative). In contrast, four of the grand challenges had at least some clear takeaway for WAW strategy. I discuss the most relevant challenges below; if you want my thoughts on the less relevant challenges, see the Appendix (which also includes a table of lower-confidence musings about each challenge).

Space exploration

First, space exploration seems like an issue that is of high relevance to the wild animal welfare community, and potentially the one with the least current work and the fewest actors prepared to work on it. Basically, many in the AI community think a technological and manufacturing explosion caused by TAI could lead to major advances in space travel. If this is right, it at least somewhat increases the risk of animal life being propagated on more planets. This seems extremely bad, since we have no idea how to ensure that those animals will live good lives. For more on this topic, see here.

Currently, the work in this space is done mostly by a few people writing posts on the internet, and a few philosophers. Because talking about it makes you sound really weird, and most of the wild animal welfare organizations that currently exist are trying to show how wild animal welfare can actually be a really normal thing to care about, no current organization is in a great position to work on this. I am not an expert in communications or media or policy, but some thinking about how best to spread the idea that bringing animals to space is bad, among the people most likely to be in charge of those outcomes, seems worth working on.

Alternatively, rather than openly trying to spread pro-animal values (which has historically been challenging, to say the least), one could conduct research into which methods of space travel are more or less likely to lead to animals being involved. For example, Wild Animal Initiative’s Science Director (who has a side interest in astrobiology) told me he would be much more concerned about plans to colonize Mars than Venus. Mars is a planet similar to Earth in many ways, and colonization would likely be planet-based. On Venus, the safe temperature zone would be in the atmosphere, so colonization would likely be more space-station based, and it’s reasonable to think that station-based life would be less likely to involve live animals. Researchers who identify themes like this could then work with advocates to push for more animal-friendly space plans.

I think this issue could be quite important, but I don’t have much else to add at this stage; I say this just to be clear that the level of concern I have is not proportional to the number of words dedicated to the issue in this piece :)

Takeaway: The community should invest at least some additional resources in explaining clearly to relevant actors why we shouldn’t bring animals to space or advance animal life in space, or researching which approaches to space exploration are least likely to harm animals.

Misalignment

Broadly, I think the major implication of possible AI misalignment is to try to contribute to AI systems having good positions toward wild animal welfare. There are two approaches to this: gathering more information to inform decision-making about wild animals, and trying to work directly on AI values themselves.

In the latter case, there are some things that might work if done carefully (although mostly, it won’t be to our community’s comparative advantage to work on AI alignment generally). Advocates with good connections to AI labs or high familiarity with the AI alignment space might be able to push AI developers to pay attention to wild animal welfare values. If done poorly, this could backfire: I don’t think saying “care about wild animals” will have a good outcome if that caring takes the place of guarding autonomy over wellbeing, for example. So, it could be worth some folks with the appropriate connections and advantages working on figuring out how to test AI for appropriate wild animal welfare values, and how to make the issue one of relevance to AI developers.

However, even if you succeed in getting AI to have a good philosophy toward wild animals, the things it suggests will not be right if it’s basing them on bad data. Right now, we know basically nothing about status-quo wild animal welfare for the most numerous animals. We haven’t studied this in any branch of science; even related disciplines like stress physiology have focused on only a few animals (in the scheme of things) and aren’t actually that welfare-relevant (stress can increase with pleasure or displeasure). So, given that people might be using AI to make decisions about wild animals sometime in the near future, we had better make sure the AI has good inputs into its models, so it’s not basing its reasoning on the very limited information currently available.

In terms of actions, one might think that we could just wait for the AI to develop and then use it to gather the data. But I think this overlooks the issue of asymmetric takeoff. Basically, it seems likely to me (and several other people I spoke with) that a technology transformation will take place before a manufacturing explosion. If that’s right, there could be a meaningful amount of time when the AI is very good at making models but not good at gathering data in the field. Luckily, the wild animal welfare science community is already quite focused on this topic; the implication from TAI is merely to weigh it more in our prioritization than other priorities that might be less urgent under short TAI timelines.
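
As a concrete, entirely hypothetical illustration of what “good inputs” could look like: welfare observations stored with enough provenance that a future model can judge how much weight to give them. The fields below are invented for illustration.

```python
# Sketch: a hypothetical record format for status-quo welfare data.
# The fields are invented; the point is pairing each measurement with
# enough context and provenance for future models to weigh it properly.
from dataclasses import dataclass
from datetime import datetime


@dataclass
class WelfareObservation:
    species: str             # e.g., "Microtus agrestis"
    timestamp: datetime
    latitude: float
    longitude: float
    metric: str              # e.g., "fecal glucocorticoid (ng/g)"
    value: float
    method: str              # how the sample was collected
    validated: bool = False  # has the metric been welfare-validated?
    notes: str = ""


example = WelfareObservation(
    species="Microtus agrestis",
    timestamp=datetime(2025, 4, 1, 6, 30),
    latitude=57.7,
    longitude=11.9,
    metric="fecal glucocorticoid (ng/g)",
    value=142.0,
    method="non-invasive scat sampling",
)
print(example)
```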

Takeaway: Increase the emphasis on gathering status-quo data on wild animal welfare, and other welfare-relevant datasets, to feed into future AI models. Spend at least some resources figuring out if we can help future AI have nuanced and useful WAW values that genuinely promote wild animal wellbeing at scale.

Abundance

Abundance is the idea that, used correctly, TAI could vastly increase the wellbeing of the world through improved decision-making, better technologies, and more. There is a risk that AI doesn’t get used well, and we miss out on all that value (even if obviously negative outcomes don’t occur) or that the abundance isn’t shared.

A few ideas here seem relevant to wild animal welfare but not very actionable right now. Perhaps if we succeed in getting most people on earth to have a very good life through TAI, they will be more willing to think about things like wild animal welfare. Perhaps if we succeed at wild animal welfare AI alignment, the AI will share that abundance with wild animals. But these seem like remote enough possibilities with low enough potential for value-add by the WAW community that they aren’t my core focus here.

Instead, I think the biggest risk is that there are all sorts of advantages AI (transformative or not) could bring to wild animal welfare science and advocacy, and a huge risk that that value is not taken up because of political beliefs, bad values in human society, or technophobia. Already, some scientists are benefiting from the existing productivity enhancements offered by AI, while others are not. In fact, a few scientists are actively hostile toward AI: I have seen a small number of left-leaning scientists (a community that includes myself) dismissing all AI out of hand, saying it has no value, because they dislike AI-generated art or think AI tools contribute excessively to climate change. I suspect that most of these people, if asked directly, would acknowledge that AlphaFold is valuable, so I hope that some of this is just BlueSky posturing. But if the “meme” that all AI is evil spreads, and AI opens up entirely new pathways to studying or helping wild animals, the community will fight against taking them up. This could be a disaster for wild animal welfare because of the value left on the table.

We already have some evidence that technophobia among relevant actors can be a problem. I believe that gene drives and other CRISPR technologies could already be used to significantly improve the welfare of wild animals. While some opponents of that view hold appropriately risk-averse positions, others are simply wholly averse to human technological intervention or afraid of totally unrealistic risks. It is extremely difficult to speak openly about the potential value of these technologies for wild animal welfare without being taken to be a techno-utopian (which is seen as naive). I could easily imagine a similar problem arising with AI technologies.

On the other hand, some have argued that excessive adoption of mediocre AI tools could also be a significant failure mode. The solution, then, is not unthinking adoption of novel technologies, but careful thinking about the usefulness of tools, along with the relevant skills to assess when something is working and when it isn’t.

Takeaway: Increase AI literacy in the wild animal welfare community and encourage the use of AI tools where they create value for wild animal welfare science or interventions, while remaining wary of the tools’ potential shortcomings. Encourage reasoned, nuanced positions on AI tools rather than extreme positions.

Unknown unknowns

Finally, there are almost certainly a large number of unknown unknowns. Most folks in the AI community believe that the advent of TAI will bring rapid society-level changes. There will certainly be a lot going on that we can’t predict. The implications for the wild animal welfare community are the same as for any other: prepare for uncertainty. There’s lots of literature out there on what “preparing for uncertainty” should look like, so I won’t spend too much time on it here.

Takeaway: Prepare for uncertainty in the normal ways people prepare for uncertainty. That might look like making fewer high-upfront-cost bets with long payoff times (either financially or with strategy), determining what your tripwires are for guiding pivots, and generally being ready to shift focus if the world changes around you.

Summary

Overall, writing this report has made me more confident that the wild animal welfare community should take at least some actions to prepare for the possibility of near-term TAI. As described in the report, I think the following activities are low enough cost to be worth doing even if your probability of TAI is relatively low. They also don’t create enormous potential harms if TAI does not arrive soon. I divide my takeaways into “things that make sense for the wild animal welfare science community, specifically” and “things that make sense for the wild animal welfare advocacy community, generally.”

Summary of takeaways for wild animal welfare science

  • Increase emphasis on gathering status-quo data on wild animal welfare and other welfare-relevant datasets to feed into future AI models
  • Increase AI literacy in the wild animal welfare science community and encourage the use of AI tools where they create value
  • Foster reasoned, nuanced positions on AI technologies rather than extreme positions
  • Prepare for uncertainty through strategic flexibility, establishing tripwires for pivots, and avoiding high upfront cost bets with long payoff times
  • Potentially shift prioritization toward experimental and physiological work that may be less easily accelerated by AI
  • Research which areas of work might be puntable to future AI versus which require immediate human attention
  • Continue to reflect on the implications of AI for workforce development in wild animal welfare
  • Monitor how AI is accelerating different types of wild animal welfare research to guide future priorities

Summary of takeaways for the wild animal welfare community in general

  • Spend resources on determining how to align future AI systems with nuanced wild animal welfare values
  • Invest resources in explaining clearly to relevant actors why we shouldn't bring animals to space or advance animal life in space
  • Research which approaches to space exploration are least likely to harm animals
  • Increase AI literacy in the wild animal welfare community and encourage the use of AI tools where they create value
  • Foster reasoned, nuanced positions on AI technologies rather than extreme positions
  • Prepare for uncertainty through strategic flexibility, establishing tripwires for pivots, and avoiding high upfront cost bets with long payoff times
  • Consider increasing allocation of resources toward public communications if timelines to impact are shortened
  • Address potential value conflicts with other stakeholders (conservation, animal rights) more urgently if intervention timelines are shortened
  • Continue to reflect on the implications of AI for workforce development in wild animal welfare
  • Increase community outreach to expand versatility and the ability to respond to a range of outcomes

Closing thoughts

All the above said, it’s a little trickier to decide what kinds of research to prioritize if your estimate of the probability of TAI coming in < 20 years is anything other than zero or one, because there could be more direct trade-offs between different prioritization choices. A short summary of my best guess, depending on what you expect from TAI, is as follows:

  • If you expect TAI will not arrive in < 20 years, focusing research on the field-building impact seems appropriate.
  • If you expect TAI will arrive in < 20 years, and believe that there are a range of possibilities for what could occur after that, you should have a balanced research portfolio that includes some research that is expected to benefit animals directly before TAI arrives, some research targeted at accelerating the growth of the field, and some research designed to make us more ready to take advantage of TAI when it arrives.
  • If you expect TAI will arrive in < 20 years, and you strongly expect there to be a post-TAI world in which an academic field of WAW is totally irrelevant, you should mostly be working on research that prepares us to take advantage of TAI or which might benefit animals directly before TAI arrives.

The middle scenario seems most reasonable to me: In many possible worlds, even post-TAI, it seems like wild animals will still exist, academic credibility will still matter, and scientists will still need to do essential fieldwork. The scientific progress we make today—establishing welfare metrics, gathering baseline data, and building conceptual clarity—may guide AI systems toward better outcomes for wild animals.

Overall, the extent to which someone should adjust their work on the basis of these suggestions depends on their confidence in near-term TAI, and exactly how short their timeline is. At Wild Animal Initiative, we don’t yet have an institutional position on this, although we plan to work on one in the near future.

Regardless of exact timelines, developing legitimacy for wild animal welfare considerations before major technological transformations occur seems essential. By maintaining strategic flexibility, investing in AI literacy, and continuing to gather crucial welfare data, the wild animal welfare community can position itself to navigate an uncertain but potentially transformative future for the quadrillions of animals living in the wild.

Acknowledgments

Thanks to Simon Eckerström Liedholm, Luke Hecht, Cameron Meyer Shorb, Peter Wildeford, Jesse Clifton, Michael St. Jules, Miranda Zhang, and Fin Moorhouse for feedback and discussion. Thanks to Cat Kerr and Shannon Ray for copy-editing and clarity suggestions. Note that feedback does not imply endorsement; these folks suggested many smart edits I did not incorporate, because I’m stubborn.

AI acknowledgement: I used Claude for help with writing the Executive Summary and with proofreading. It was much better at the summary than the proofreading.

Appendix

TAI and wild animal welfare science: potential effects.

I made this table when doing my initial brainstorming. I think the summary in the report is better, but am leaving the table here in case it is a useful format for some folks.

Effect of TAI: Modeling and monitoring technological progress happens much more quickly.

Corresponding optimal strategy: Regardless of when TAI occurs, it will be useful to have done the following before it happens:

  • Increase emphasis on AI tool use in the scientific community to be able to take advantage of developments.
  • Start encouraging the development of relevant training data and AI models that could be beneficial for wild animal welfare, so that if AI accelerates, so will wild animal welfare tools.
  • Adjust workforce & research prioritization in response to asymmetric acceleration.

Likelihood: Seems highly likely to be at least partially true within 10 years, with increasing likelihood and increasing degree as time goes on. These activities are fairly low cost, so they seem worth doing now anyway.

Effect of TAI: Timelines to impact for wild animal welfare science are dramatically shortened.

Corresponding optimal strategy: If it’s coming in < 20 years:

  • Increase focus on growing a community of wild animal welfare scientists outside of academia to avoid bureaucracy and lag.
  • Address values issues sooner rather than later.

Regardless of when TAI occurs, it will be useful to have done the following before it happens:

  • Increase comfort with intervening in nature to help wild animals, in advance of the capacity to do so arising.

Likelihood: Some degree of shortening appears highly likely; however, shortening to the extent that focusing on academic field building wasn’t a good idea seems only 5% likely to me, in part because we’re not that far away from having a robust academic field anyway.

Effect of TAI: TAI brings unpredictable consequences, but likely involves increases in human capacity and decreases in international stability.

Corresponding optimal strategy: If it’s coming in < 20 years:

  • Prepare for uncertainty:
    • Don’t invest in high-upfront-cost, long-time-to-payoff strategies or financial investments.
    • Monitor AI developments.
    • Maintain strategic flexibility and responsiveness.
  • Try to ensure AI cares about WAW:
    • Connect with AI experts on how to do this.
    • Flood likely training data sources with pro-WAW content.
    • Treat AI well so it likes us.
  • Do more research on likely acceleration asymmetries, and prioritize research agendas accordingly, potentially as follows:
    • Focus on experimental and physiological research.
    • Punt modeling issues to future AI.
    • Punt monitoring issues to future AI.

If it’s coming in > 20 years:

  • It might still be worth punting certain particularly challenging questions, but generally, business as usual.

Likelihood: How much of a strategy adjustment to make here depends on how likely you think TAI is to arrive in < 20 years. If you think it’s very unlikely, that points to only a modest favoring of non-AI-amenable research agendas, a very minor disfavoring of long-term bets and investments, and perhaps an allocation of a small percentage of resources towards AI wild animal welfare alignment. However, leading AI experts (1, 2, 3) put the probability of TAI within the next 20 years at more like 10%–50%. If that’s right, then we should take these recommendations much more seriously.

Grand challenges

Table 2: Definitions of the grand challenges and low-confidence musings on their relevance to WAW. Please note that the below are just my quick, low-confidence reflections. It could be useful for more thinkers to reflect on how these sorts of challenges would influence wild animal welfare and wild animal welfare science.
AI takeover: Misaligned AI suppresses human efforts to detect misalignment and ultimately disempowers humans, taking partial or total control of global governance.

Thoughts: Depends on the goals of the AI and the scale of misalignment. Global domination scenarios leave very little room for human response. Scenarios with lesser degrees of takeover, with AI that doesn’t care about wild animal welfare, could lead to challenges implementing interventions or decreased quality of decision-making.

Category: Unclear sign (although the alignment part is high relevance)

Highly destructive technologies: AI enables technological innovations, like new bioweapons, drone swarms, huge arsenals of nuclear weapons, or atomically precise manufacturing, that could make it much easier than before to have a massive destructive impact on the world.

Thoughts: Increases in conflicts (nuclear and otherwise) would lead to more wild animals harmed by such conflicts. If humans become functionally disempowered, and AI doesn’t care about wild animal welfare, no one will work on preventing wild animal suffering. Certain technological advances could benefit wild animal welfare and the associated science in many ways, if used wisely. Used unwisely, the same advances could harm wild animal welfare significantly (e.g., advances in pest control technology could have welfare consequences and/or unintended effects; unintentional releases of bioagents could affect wild animals), although net effects are broadly unclear. Generally: Human capacity to make things both worse and better for wild animals will improve, with ensuing strategic consequences.

Category: Unclear sign

Power-concentrating mechanisms: AI-enabled concentration of power due to things like increased surveillance and automated soldiers programmed to be loyal.

Thoughts: Depends on the WAW values of the factions in power. Given that WAW values are not widespread, the likelihood of a WAW-focused faction being in power seems small. So, this could lead to increased challenges pursuing WAW research agendas or implementing interventions. If those in power broadly favor wild animal welfare, it would make wild animal welfare interventions easier — but given the predominant value sets in the world today I find this much less likely.

Category: Lower relevance

Value lock-in mechanisms: Technologies that could entrench particular views and values for extremely long periods through surveillance, permanent AI values, commitment technology, or preference-shaping.

Thoughts: If values that tend to improve the wellbeing of as many animals as possible are locked in, this could be very good. If values toward wild animals that favor various anthropocentric values like autonomy or freedom are favored, and it turns out that these things don’t actually improve the lives of wild animals, this would be very bad. Given that WAW values are evolving in response to improving empirical knowledge, many positions that seem good now could be overturned, and lock-in of any sub-optimal positions would therefore be bad. So it seems the best orientation toward wild animal welfare would be one of continuous learning, openness to the idea that some lives might not be worth living, and promotion of positive welfare states. If AI provides tools for manipulating humans, that seems generally bad for society, even if it could, in theory, be used to manipulate people into caring about WAW.

Category: Unclear sign

AI agents and digital minds: Challenges around infrastructure for AI agents and moral questions about rights and welfare of potentially sentient AI systems.

Thoughts: In theory, no relationship: We could treat AI/digital minds terribly and wild animal minds well, or vice versa. In practice, precedent-setting and multinational agreement work on digital minds could open a door to getting wild animal minds included as well, if we’re lucky (although given the extent to which I’ve observed various people get excited about digital minds while caring absolutely not at all about animals, I’m skeptical).

Category: Lower relevance

Space governance: Issues around acquiring off-world resources and interstellar settlement that could lock in power distributions for the long-term future.

Thoughts: If we replicate wild animal suffering in space, that could be very bad.

Category: High relevance

New competitive pressures: Safety-growth tradeoffs that reward reckless actors, value erosion through automation, and new coercion technologies.

Thoughts: Race-to-the-bottom dynamics could lead to ecological exploitation at greater scales. For example, the Forethought report illustrates a scenario where a nation with first-mover advantages converts its temporary lead into permanent dominance by securing physical resources like those needed for semiconductors, or securing land. This could lead to massive ecosystem-level “damage” to natural systems, but as I explained above, this would have unknown effects on wild animal welfare if you take the possibility of net-negative lives seriously.

Category: Unclear sign

Epistemic disruption: Challenges to collective reasoning from AI super-persuasion, human stubbornness against persuasion, viral ideologies, and the risk of missing crucial considerations.

Thoughts: AI persuasion tools could be used to persuade relevant decision-makers toward or away from wild animal welfare values. A particularly pernicious form of this was explained to me by Luke Hecht (our Science Director), so I’ll just copy his comment here:

“AI taking over many other important and/or high-profile research areas (e.g., drug discovery, or policy forecasting in general) could generally lead to skepticism about the marginal value of real empirical research and lead decision-makers who don't necessarily have a good grasp of the science (especially if future decision-makers are less qualified than they have been in the past...) to feel that an AI's 'best guess' at these difficult questions in wild animal welfare science are good enough to make the actual empirical research not worth funding.”

Other potential issues include:

  • Generally, it may become more difficult to convince people of views distant from their own, which applies to WAW.
  • AI thought partners could help individuals understand wild animal welfare values, reasoning, and interventions.
  • AI forecasts could far outperform existing human predictions about how interventions in nature will affect wild animal welfare on net.

Category: Lower relevance

Abundance: The challenge of ensuring that massive material gains from AI are widely shared and lead to positive societal outcomes.

Thoughts: More human flourishing could lead to humans being less opposed to spending resources on wild animals. Increases in robotic or AI labor could massively increase capacity to collect and analyze wild animal welfare data and monitor the effects of various actions in nature on wild animal welfare. Advances in technologies that are currently not even on the horizon could lead to direct reductions in wild animal suffering.

Category: High relevance

Unknown unknowns: Generally, there’s no way to predict most of the effects of TAI.

Thoughts: No predictable effects on wild animal welfare; generally pushes towards flexibility and shorter-term planning.

Category: High relevance

Issues of lower relevance

The possibilities around digital sentience, epistemic disruption, and power-concentration do not appear to be particularly relevant to the wild animal welfare community.

Power concentration seems most likely to act at the level of states; while I suppose that certain wild animal-relevant actors could concentrate their power by accessing AI tools earlier than other relevant actors, this doesn’t seem particularly likely to me given the relatively similar inclinations towards AI across the space. I suppose if one thought it was inevitable that some country would get AI first and concentrate power, you could try to do things to ensure that it was a country with better WAW values than other options (India over the United States, for example). But it doesn’t seem likely to me that WAW advocates have much chance of being impactful here, nor of preparing much for the options on the table — most AI people think the US or China will get there first, and both countries have pretty bad WAW norms.

Epistemic disruption seems likely to affect everything at once rather than certain communities piecemeal. I think there is a reasonable argument to be made that, if epistemic disruption is occurring at large scales, we should try to insulate our communities from it as much as we can (e.g., by encouraging AI literacy). But I think there are other reasons to encourage AI literacy which are more important, and which I detail below.

Finally, it would be nice to think that digital sentience work could benefit wild animals. For example, if states actually work through the implications of digital sentience and take action to protect digital minds from harm, the resulting policy openings could be a chance to also advocate for animal sentience to be taken more seriously. But given how often I meet people who are very interested in digital sentience and not at all interested in animal sentience, I find this unlikely. It seems that once we’re already torturing large numbers of animals, no one finds it that fun to think about the fact that they are sentient. Given that we aren’t torturing digital minds yet (as far as I know), people seem much more willing to consider how to prevent it from happening in the future, presumably in part because (1) it’s less sad and (2) our economy and eating habits don’t yet depend on the bad behavior continuing.

Issues of uncertain sign

Highly destructive technologies and new competitive pressures appear highly relevant for wild animal welfare, despite the fact that the sign of their impact may be uncertain. In the first case, if AI tools are used to develop extremely deadly bioweapons, accidental human population declines seem much more likely (because the more deadly your tools, the more harm they cause if they are used mistakenly). I don’t think it is particularly likely that an AI would intentionally kill all the humans or all the animals — it’s not clear what value would be gained from that under almost any value system except extreme negative utilitarianism. However, an accidental release of a biological weapon could certainly reduce the population of, say, all mammals by an enormous degree.

What would that mean for wild animal welfare? Unfortunately, we don’t know. If it results in the disempowerment of humans, such that AIs are left holding the reins of the world, the implications for wild animal welfare will depend on what the AI does with its power, and we’re back to the alignment question. Even the direct deaths of many animals could be of uncertain sign, if you subscribe to value sets that hold that wild animals mostly suffer in nature.

Similarly, in the case of competitive pressures, the part that is relevant to wild animal welfare is that TAI may increase the likelihood of nations massively disrupting their natural ecosystems, either in a race to get relevant minerals to support AI needs, or after getting TAI to support a potential manufacturing explosion. Because we don’t know the quality of animal lives in the wild, and we don’t know whether some ways of extracting natural resources are more harmful to wild animals than others (especially when the possibility of net-negative lives is taken into account), we don’t know how this will affect wild animal welfare. And if we push this question to the AI, by assuming that any such actor would ask the TAI what to do, we’re basically back to the alignment problem, with the same implications as I discussed in that section.

Much like the issues above, value lock-in and AI takeover are very important issues with no clear sign for the wild animal welfare community. Based on my discussions with various researchers and leaders in the AI safety field, there is no common understanding of what alignment will look like, and in particular, whether “well-aligned” or “misaligned” AI would care about wild animals. Just as an illustration, an AI system that was “well-aligned” with the median American would probably not care at all about wild animal welfare. Does this count as misaligned AI or not? Overall, a misaligned AI could be misaligned in ways that are good for wild animal welfare or in ways that are not, those values might get locked in or they might not, and there doesn’t seem to be much that the wild animal welfare community can usefully do to prepare for such a totally uncertain set of possibilities.

Similarly, when most people think about AI takeover, they seem to be thinking about the total disempowerment of human actors in favor of AI agents, such that AI agents are making all the decisions about what happens in the world. But there can be degrees of disempowerment (some of which are illustrated in this report on Gradual Disempowerment), and some of those variations could be more or less relevant to wild animal welfare. Here again, though, the extent to which disempowerment is bad for wild animal welfare depends on what the AI does with its power, and once again, that brings us back to the actions I suggested in relation to the alignment issue.

Other than those suggestions, I don’t think it makes sense for the community as a whole to put substantial resources toward AI misalignment and takeover issues. Firstly, it simply isn’t our comparative advantage, since there are other people far better placed to work on this. Secondly, I think a lot of the things we are doing (developing wild animal welfare measures, gathering data on what life is like in the wild) remain relevant even in short-timeline AI scenarios, and even if the AI is somewhat misaligned. Finally, it doesn’t make much sense to worry about the more dramatic misalignment or takeover scenarios, since there is basically nothing the wild animal welfare community can do to stop them, or to prepare for them if that’s the path we’re on.

Comments

Thanks for this excellent post! The distinction between 'puntable' and 'less puntable' ideas seems like a really helpful way for advocates to think about tactic prioritisation. 

On the point about AI-enabled modelling of wild animal welfare and implications of different interventions: are there any existing promising examples of this? The one example I've come across is the model described in the paper 'Predicting predator–prey interactions in terrestrial endotherms using random forest', but the predictions seem pretty basic and not necessarily any better than non-AI modelling.

Also, why did you decide that TAI's roles in 'infrastructure needs' and 'getting the “academic stamp of approval”' weren't useful to think about?

Hi Max, thanks for the positive feedback and for the question. 

I will ask our research team if they are aware of any specific papers I could point to; several of them are more familiar with this landscape than I am. My general sense that AI-enabled modeling would be beneficial comes more from the very basic guess that, since AI is already pretty good at coding, work that relies heavily on coding might get a lot better if we had TAI. If that's right, then even if we don't see great examples of modeling work being useful now, it could nevertheless get a lot better sooner than we think.
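To make that guess concrete, here is a minimal sketch of the kind of random-forest interaction model the paper you mention gestures at. Everything below (the feature names, the simulated data, the signal structure) is a hypothetical placeholder I made up for illustration, not the paper's actual inputs or method:

```python
# Minimal sketch: training a random forest to predict whether a predator-prey
# interaction occurs between a species pair. All features and labels here are
# simulated placeholders, purely to illustrate the shape of this kind of work.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)
n_pairs = 1000

# Hypothetical per-pair features: body-mass ratio, range overlap,
# activity-time overlap, and habitat similarity (all scaled to [0, 1]).
X = rng.random((n_pairs, 4))

# Hypothetical labels: 1 if an interaction is recorded, 0 otherwise,
# simulated so that the first two features carry most of the signal.
y = (0.5 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 0.15, n_pairs) > 0.45).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Evaluate on held-out species pairs.
probs = model.predict_proba(X_test)[:, 1]
print(f"Held-out AUC: {roc_auc_score(y_test, probs):.2f}")
```

The point isn't that this toy model is useful; it's that the whole pipeline, from feature engineering through validation, is exactly the kind of coding-heavy work that could scale up quickly with better AI tools.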

Thanks for bringing up the usefulness sentence; I think I could have been a lot clearer there and will revise it in future versions. I think I mainly meant that I was less confident about what TAI would mean for infrastructure and academic influence, and so any possible implications for WAW strategy would be more tentative. However, thinking about it a bit more now, I think the two cases are a bit different.

For infrastructure:  In part, I down-weighted this issue because I find the idea that the manufacturing explosion will allow every scientist to have a lab in their house less probable, at least on short timelines, than software-based takeoffs. But also, and perhaps more importantly, I generally think that on my list of reasons to do science within academia, 1 and 3 are stronger reasons than 2. Infrastructure can be solved with more money, while the others can't. So even if thinking about TAI caused me to throw out the infrastructure consideration, I might still choose to focus on growing WAWS inside academia, and that makes figuring out exactly what TAI means for infrastructure less useful for strategy. 

For "academic stamp of approval": I think I probably just shouldn't have mentioned this here, because I do end up talking about legitimacy in the piece quite a bit. But here's an attempt at articulating more clearly what I was getting at: 

  • Assume TAI makes academic legitimacy less important after TAI arrives.
  • You still want decision-makers to care about wild animal welfare before TAI arrives, so that they use it well etc.
  • Most decision-makers don't know much about WAW now, and one of the main pathways now that wildlife decision-makers become familiar with a new issue is through academia.
  • So, academic legitimacy is still useful in the interim.
  • And, if academic legitimacy is still important after TAI arrives, you also want to work on academic legitimacy now.
  • So, it isn't worth spending too much time thinking about how TAI will influence academic legitimacy, because you'd do the same thing either way. 

That said, I find this argument suspiciously convenient, given that as an academic, of course I'm inclined to think academic legitimacy is important. This is definitely an area where I'm interested in getting more perspectives. At minimum, taking TAI seriously suggests to me that you should diversify the types of legitimacy you try to build, to better prepare for uncertainty. 

“it at least somewhat increases the risk of animal life being propagated on more planets. This seems extremely bad, since we have no idea how to ensure that those animals will live good lives.”

Do you assume that wild animal life is net negative?

If given a magic button that instantaneously wiped out all wild animals, ignoring the consequences for humans of doing this, would you press it?

Hi Henry, thanks for your question. I should be clear that I am speaking about my own opinions in this comment, not any institutional position of Wild Animal Initiative.

I do not assume that wild animal life is net negative. I feel pretty clueless about the typical quality of life in the wild. I work on wild animal welfare science in part because I think people have been way too quick to jump from hypothesis to conclusion about the quality of life in the wild, and empirical studies are important to fill that knowledge gap.

Given the above, the main reason for my comment about space propagation is that I feel risk averse about spreading life we don't understand well onto other planets (although I suspect there are a number of philosophical positions besides my own that could make one skeptical of bringing wild animal life to space in a thoughtless way). It seems very likely that even if life on Earth for wild animals were knowably great, it could still be quite bad on other planets or in space, depending on which animals are brought to space, how they are treated, what kinds of experiments are tried on the way to successful propagation, etc.

People are very thoughtless about wild animal welfare when reintroducing animals to habitats on Earth already (there are a number of conservation failures that come to mind), so I suspect that humans might be equally thoughtless about animal welfare when bringing animals to space. I might think the average pet dog has a great life and still be hesitant to suggest that really inexperienced owners buy pet dogs they don't know how to take care of. 

Maybe I'm misunderstanding you, but your last statement seems to imply that anyone who is concerned about wild animals having potentially net-negative lives should be a button-pusher? I'm not sure that follows except under very pure-EV-chasing utilitarianism, which is not my moral position nor a position I recommend. Personally, I would not push the button. 

The uncertainty about the net utility of wild animal lives also applies to human life. It’s an open question whether the average human life is net negative or net positive.

Would you therefore also say that propagating human lives on other planets is “extremely bad”?

Interesting questions, Henry! I strongly upvoted your comment[1].

“it at least somewhat increases the risk of animal life being propagated on more planets. This seems extremely bad, since we have no idea how to ensure that those animals will live good lives.”

Do you assume that wild animal life is net negative?

I share your scepticism about expanding wildlife being extremely bad. I am uncertain not only about whether wild animals have positive or negative lives, but also about whether increasing their population is easier or harder than decreasing it. I guess many are also uncertain about whether wild animals have positive or negative lives, but think that increasing the population of wild animals is easier than decreasing it, in which case not expanding wildlife to other planets makes sense to keep options open.

If given a magic button that instantaneously wiped out all wild animals, ignoring the consequences for humans of doing this, would you press it?

Many are against expanding wildlife based on the assumption that expanding it is easier than decreasing it. This suggests decreasing wildlife is beneficial, but not necessarily until there is none at all. At some point, expanding wildlife could become easier than decreasing it, such that decreasing it further would close off options overall.

I think people like me who are very uncertain about whether future welfare is positive or negative should not have strong views about whether the permanent elimination of all sentient beings would be beneficial or harmful, which is counterintuitive. However, I believe it has the very commonsensical implication of focussing on improving existing lives instead of increasing or decreasing the number of lives (even if one strongly endorses maximising total welfare like I do).

  1. ^

    It had -2 karma before my vote. Maintaining a scout mindset is not easy!
