Note: I'm probably using the wrong words / being imprecise in places, as I have limited knowledge of AI and longtermist concepts.
This question anticipates the Existential Choices Debate Week, but I've been asking myself this for the past few months - and when I've tried to look up an answer or ask friends, I haven't found anything that pointed clearly in either direction.
It seems to me that, broadly defined, "existential risks" are very likely in the coming centuries. Many scenarios could "disempower" humanity and set back our current technological civilization. The first thing I think of when considering these scenarios is what would happen to the trillions of non-human animals that would continue to live on earth, with or without human or AI interference. This might not be where most of the value lies, compared to evaluating counterfactual scenarios containing an even more astronomical number of individuals. Nonetheless, when I think about what a trajectory where sufficiently intelligent life does go extinct entails, this is what I am thinking of - in fact, my only post on the Forum is about this, and it already asks, at some point, the question I'm asking here.

However, I've been told on some occasions that the reason these trajectories get relatively little discussion (though there has been some) is that most plausible scenarios where humans go extinct in the next few centuries are ones where all non-human sentience would be wiped out too: in particular, scenarios where an ASI takes control of nearly all of the earth's energy and leaves none for current forms of biological life (I'm probably not using the right words; I apologize, I'm not very knowledgeable in this field but am hoping to learn). Then, of course, there are cosmic risks, but EAs currently seem to consider those quite unlikely in the next few centuries.
In your distribution of existential risks, what share of scenarios entails the total extinction of animal sentience, as opposed to the "mere" disempowerment, radical population reduction, or extinction of humans in particular?
I also have a vaguer question that initially motivated me to finally ask my main question on the Forum. Feel free to ignore this part, as it's not the main question; in any case, I'd very much appreciate hearing your thoughts on any of this. Here is the second question: if you believe that a substantial share of X-risk scenarios entail animal sentience being left behind, do you then think that estimating the current and possible future welfare of wild animals is an important factor in evaluating the value of both existential risk reduction and interventions aimed at influencing the future?

I'm asking this second question because I was planning to write a post on invertebrate sentience as a possible crucial consideration when evaluating the value and disvalue of X-risk scenarios, but thought that if this factor is rarely brought up, it could be that I'm simply uninformed about the reasons why the experiences of invertebrates (if they are sentient) might not actually matter that much in future trajectories (aside from the possibility that they will all go extinct soon, which is why this question hinges on the prior belief that sentient animals are likely to continue existing on earth for a long time).
Thank you very much in advance for any answers to either question!