If humanity goes extinct due to an existential catastrophe, it is possible that aliens will eventually colonize Earth and the surrounding regions of space that Earth-originating life otherwise would have. If aliens' values are sufficiently aligned with human values, the relative harm of an existential catastrophe may be significantly lessened if it allows for the possibility of such alien colonization.
I think the probability of such alien colonization varies substantially with the type of existential catastrophe. An existential catastrophe due to rogue AI would make such alien colonization unlikely, since it would probably be in the AI's interest to keep the resources on Earth and in the surrounding regions for itself. An existential catastrophe due to a biotechnology or nanotechnology disaster, however, would leave the resources intact, so I suspect alien colonization would remain relatively probable.
I think there's a decent chance that alien values would be at least somewhat aligned with humans'. Human values such as fun and learning exist because they were evolutionarily beneficial, which weakly suggests that aliens would have them too, having faced similar evolutionary pressures.
My reasoning above suggests that we should devote more effort to averting existential risks that make such colonization less likely, for example risks from rogue AI, than to other risks.
Is my reasoning correct? Has what I'm saying already been thought of? If not, would it be worthwhile to inform people working on existential risk strategy, e.g. Nick Bostrom, about this?