
Note: I'm probably using the wrong words / being imprecise at some point, because I have limited knowledge of AI and longtermist concepts.

This is a question anticipating the Existential Choices Debate Week, but I've been asking myself this for the past few months - and when I've tried to look up an answer or ask friends, I haven't really found anything that pointed in either direction.

It seems to me that, broadly defined, "existential risks" are very likely in the coming centuries. Many scenarios could "disempower" humanity and hinder our current technological civilization. The first thing I think of when considering these scenarios is what would happen to the trillions of non-human animals that will continue to live on Earth at any given time, with or without human or AI interference. This might not be where most of the value lies, compared to evaluating counterfactual scenarios containing an even more astronomical number of individuals. Nonetheless, when I think about what a trajectory where sufficiently intelligent life does go extinct entails, this is what I think of - in fact, my only post on the Forum is about this, and it already asks, at some point, the question I'm asking here.

However, I've been told on some occasions that the reason these trajectories get relatively little discussion (though there has been some) is that most plausible scenarios where humans go extinct in the next centuries are ones where all non-human sentience would be wiped out too: in particular, the case of an ASI taking control of nearly all of the Earth's energy and leaving none for current forms of biological life (I'm probably not using the right words; I apologize, I'm not very knowledgeable in this field but hoping to learn). Then, of course, there are cosmic risks, but EAs currently seem to consider those quite unlikely in the next centuries.

In your distribution of existential risks, what share of scenarios entails the total extinction of animal sentience, as opposed to the "mere" disempowerment, radical population reduction, or extinction of humans in particular?

I also have a vaguer question that initially motivated me to finally ask my main question on the Forum. Feel free to ignore this part, as it's not the main question; in any case, I'd very much appreciate hearing your thoughts on any of this. Here is the second question: if you believe that a substantial part of X-risk scenarios entail animal sentience being left behind, do you then think that estimating the current and possible future welfare of wild animals is an important factor in evaluating the value of both existential risk reduction and interventions aimed at influencing the future? I'm asking this because I was planning to write a post on invertebrate sentience as a possible crucial consideration when evaluating the value and disvalue of X-risk scenarios. But since this factor is rarely brought up, I wondered whether I was simply uninformed about reasons why the experiences of invertebrates (if they are sentient) might not actually matter much in future trajectories - aside from the possibility that they will all go extinct soon, which is why this question hinges on the prior belief that sentient animals will likely continue existing on Earth for a long time.

Answers

This is a very difficult question to answer, as it depends heavily on the specifics of each scenario, on which groups of animals you consider sentient, and on your default estimates of how worthwhile their lives are. For AI, I think the standard paperclipper / misaligned superintelligence probably doesn't go so far as to kill all complex biological life immediately, since, unlike humans, most animals would not really pose a threat to its goals or compete with it for resources. However, in the long run, I assume a lot of life would die off as the AI develops industry without regard for the environmental effects (robots do not need much clean air, or water, or low-acidity oceans). In the long, long run, I do not see why an AI system would not construct a Dyson sphere.


Ultimately, however, I do not think this really changes the utility of these scenarios, as human civilization is also mostly indifferent to animals. The existence of factory farming (which will last longer with humans, as humans enjoy meat while an AI probably will not care about it) will probably outweigh any pro-wild-animal-welfare efforts pursued by humanity.
 

For non-AI extinction risks (nuclear war, asteroids, supervolcanoes), sentient animal populations would sharply decline and then gradually recover, just as they have after previous mass extinction events.


TL;DR:

For essentially all extinction scenarios, the utility calculation comes down to weighing long-term and short-term human flourishing against the short-term factory farming of animals raised for humans. Wild animals have similar expected utility in all scenarios, especially if you think their lives are about net-neutral on average, as they will either persist unaffected or die (maybe at some point humanity will want to intervene to help wild animals have net-positive lives, but this is highly uncertain).


 

Thank you very much for answering both questions! This was clear and helpful.
