If humanity goes extinct due to an existential catastrophe, it is possible that aliens will eventually colonize Earth and the surrounding regions of space that Earth-originating life would otherwise have colonized. If the aliens' values are sufficiently aligned with human values, the relative harm of an existential catastrophe may be significantly lessened if it leaves open the possibility of such alien colonization.
I think the probability of such alien colonization varies substantially with the type of existential catastrophe. An existential catastrophe due to rogue AI would make such colonization unlikely, since it would probably be in the AI's interest to keep the resources of Earth and the surrounding regions for itself. I suspect that an existential catastrophe due to a biotechnology or nanotechnology disaster, however, would leave alien colonization relatively probable.
I think there's a decent chance that alien values would be at least somewhat aligned with humans'. Human values, for example fun and learning, exist because they were evolutionarily beneficial. This weakly suggests that aliens would also have them, due to similar evolutionary advantages.
My above reasoning suggests that we should devote more effort to averting existential risks that make such colonization less likely, for example risks from rogue AI, than to other risks.
Is my reasoning correct? Has what I'm saying already been thought of? If not, would it be worthwhile to inform people working on existential risk strategy, e.g. Nick Bostrom, about this?
I'm not really considering AI ending all life in the universe. If I understand correctly, it is unlikely that we or a future AI will be able to influence the universe outside our Hubble sphere. However, there may be aliens that exist now or will exist in the future within our Hubble sphere, and I think it would more likely than not be good if they were able to make use of our galaxy and the ones surrounding it.
As a simplified example, suppose there is on average one technologically advanced civilization for every group of 100 galaxies, and that each civilization can access its own 100 galaxies as well as the 100 galaxies of each neighboring civilization.
If rogue AI takes over the world, then it would probably also be able to take over those 100 galaxies. Colonizing some galaxies sounds feasible for an agent that can single-handedly take over the world. If the rogue AI did take over the galaxies, then I'm guessing they would be converted into paperclips or something of the like, and thus have approximately zero value to us. The AI would be unlikely to let any neighboring alien civilization do anything we would value with the 100 galaxies.
Suppose instead there is an existential catastrophe due to a nanotechnology or biotechnology disaster. Then even if intelligent life never re-evolved on Earth, a neighboring alien civilization may be able to colonize those 100 galaxies and do something we would value with them.
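To make the comparison concrete, here is a minimal sketch of the expected-value calculation I have in mind. Every number in it (the chance the rogue AI claims the galaxies, the chance a neighboring civilization colonizes them, and how much of the value aliens would realize relative to us) is made up purely for illustration.

```python
# Toy expected-value comparison of two existential catastrophes,
# measured in "galaxies put to valuable use".
# Every number here is an illustrative assumption, not an estimate.

GALAXIES = 100                # galaxies reachable in the toy model
p_aliens_colonize = 0.5       # chance a neighboring civilization colonizes them, if free to
alien_value_alignment = 0.5   # fraction of value aliens realize relative to human colonization

# Scenario 1: rogue AI takes over. Assume it almost certainly claims
# the surrounding galaxies for itself, leaving ~zero value to us.
p_ai_claims_galaxies = 0.95
value_rogue_ai = (1 - p_ai_claims_galaxies) * p_aliens_colonize * alien_value_alignment * GALAXIES

# Scenario 2: a bio/nano catastrophe ends Earth-originating life but
# leaves the galaxies open to neighboring civilizations.
value_bio_nano = p_aliens_colonize * alien_value_alignment * GALAXIES

print(f"Expected valuable galaxies after rogue AI:          {value_rogue_ai:.1f}")
print(f"Expected valuable galaxies after bio/nano disaster: {value_bio_nano:.1f}")
```

On these made-up numbers, the non-AI catastrophe leaves roughly twenty times as much expected value, which is the asymmetry that makes me want to prioritize AI risk relatively more.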
Thus, for my reasoning to be relevant, I don't think the first two ifs you listed are essential.
As for the third if, it would take quite a conjunction for there not to be a single other alien civilization in the Universe, so that seems unlikely. However, if the density of present or future alien civilizations is so low that none will ever be within our Hubble sphere, then my reasoning would be less relevant.
Thoughts?