If humanity goes extinct in an existential catastrophe, it is possible that aliens will eventually colonize Earth and the surrounding regions of space that Earth-originating life would otherwise have colonized. If the aliens' values are sufficiently aligned with human values, the relative harm of an existential catastrophe may be significantly lessened if it leaves open the possibility of such alien colonization.

I think the probability of such alien colonization varies substantially with the type of existential catastrophe. An existential catastrophe due to rogue AI would make alien colonization unlikely, since it would probably be in the AI's interest to keep the resources of Earth and the surrounding regions for itself. I suspect that an existential catastrophe due to a biotechnology or nanotechnology disaster, however, would leave alien colonization relatively probable.

I think there's a decent chance that alien values would be at least somewhat aligned with humans'. Human values, for example fun and learning, exist because they were evolutionarily beneficial. This weakly suggests that aliens would also have them, due to similar evolutionary advantages.

My above reasoning suggests that we should devote more effort to averting existential risks that make such colonization less likely, for example risks from rogue AI, than to other risks.

Is my reasoning correct? Has what I'm saying already been thought of? If not, would it be worthwhile to inform people working on existential risk strategy, e.g. Nick Bostrom, about this?

Regarding the likelihood (not the value) of intergalactic alien civilizations, you might be interested in this post on Quantifying anthropic effects on the Fermi paradox by Lukas Finnveden. E.g., he concludes:

If you accept the self-indication assumption, you should be almost certain that we’ll encounter other civilisations if we leave the galaxy. In this case, 95 % of the reachable universe will already be colonised when Earth-originating intelligence arrives, in expectation.

The quote continues:

Of the remaining 5 %, around 70 % would eventually be reached by other civilisations, while 30 % would have remained empty in our absence.

I think the 70%/30% numbers are the relevant ones for comparing human colonization vs. extinction vs. misaligned AGI colonization. (The 95% that is already colonized is unaffected in every scenario, so restricting attention to the remaining 5% cuts the importance of everything equally.)

...assuming defensive dominance in space, where you get to keep space that you acquire first. I don't know what happens without that.

This would suggest that if we're indifferent between space being totally uncolonized and being colonized by a certain misaligned AGI, and if we're indifferent between aliens and humans colonizing space, then preventing that AGI is ~3x as good as preventing extinction.

If we value aliens less than humans, it's less. If we value the AGI positively, it's also less. If we value the AGI negatively, it'd be more.
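
For concreteness, here is a minimal sketch of the arithmetic behind that "~3x" figure, using the 70%/30% split from the quote above. The value weights simply encode the two indifference assumptions (misaligned AGI ≈ empty space, aliens ≈ humans) and are illustrative, not anyone's estimates.

```python
# Back-of-the-envelope for the "~3x" claim, restricted to the 5% of the
# reachable universe that other civilisations haven't claimed when we arrive.
# Value weights are illustrative assumptions: aliens ~ humans, misaligned AGI ~ empty space.

frac_aliens_reach_later = 0.70   # of that region, eventually reached by aliens anyway
frac_stays_empty = 0.30          # of that region, never colonized in our absence

value_if_humans_colonize = 1.0
value_if_extinct = frac_aliens_reach_later * 1.0 + frac_stays_empty * 0.0   # = 0.70
value_if_misaligned_agi = 0.0    # AGI grabs everything; valued like empty space

loss_from_extinction = value_if_humans_colonize - value_if_extinct            # 0.30
loss_from_agi_takeover = value_if_humans_colonize - value_if_misaligned_agi   # 1.00

print(loss_from_agi_takeover / loss_from_extinction)  # ~3.3, i.e. roughly "3x"
```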

We wrote a bit about a related topic in part 2.1 here: https://forum.effectivealtruism.org/posts/NfkEqssr7qDazTquW/the-expected-value-of-extinction-risk-reduction-is-positive

In there, we also cite a few posts by people who have thought about similar issues before. Most notably, as so often, this post by Brian Tomasik:

https://foundational-research.org/risks-of-astronomical-future-suffering/#What_if_human_colonization_is_more_humane_than_ET_colonization

I think your reasoning is basically correct (as far as I can tell), at least your conclusion that "we should devote more effort into averting existential risks that make such colonization less likely" (given your premise that alien civs would be somewhat valuable). I feel like your appeal to evolution is a somewhat convincing first-pass argument for your premise that alien civilizations would likely instantiate value, though I wouldn't be that surprised if that premise turned out to be wrong after more scrutiny. I feel less sure about your claims regarding which catastrophes would make alien civs more or less likely, though I would agree at least crudely/directionally. (But e.g. I think there are many 'AI catastrophes' that would be quite compatible with alien civs.)

Anecdotally, I've frequently talked about this and similar considerations with EAs interested in existential risk strategy (or "cause prioritization" or "macrostrategy"). (I don't recall having talked about this issue with Nick Bostrom specifically, but I feel maybe 95% confident that he's familiar with it.) My guess is that the extent to which people have considered questions around aliens is significantly underrepresented in published written sources, though on the other hand I'm not aware of thinking that goes much beyond your post in depth.

I'm wondering what you mean when you say, "I think there are many 'AI catastrophes' that would be quite compatible with alien civs." Do you think that there are relatively probable existential catastrophes from rogue AI that would allow for alien colonization of Earth? I'm having a hard time thinking of any and would like to know your thoughts on the matter.

Do you think that there are relatively probable existential catastrophes from rogue AI that would allow for alien colonization of Earth?

Yes, that's what I think.

First, consider the 'classic' Bostrom/Yudkowsky catastrophe scenario in which a single superintelligent agent with misaligned goals kills everyone and then, for instrumental reasons, expands into the universe. I agree that this would be a significant obstacle to alien civilizations (though it wouldn't make them totally impossible - e.g. there's some, albeit perhaps tiny, chance that an expanding alien civilization could be a more powerful adversary, or there could be some kind of trade, or ...).

However, I don't think we can be highly confident that this is what an existential catastrophe due to AI would look like. Cf. Christiano's What failure looks like, Drexler's Reframing Superintelligence, and also recent posts on AI risk arguments/scenarios by Tom Sittler and Richard Ngo. For some of the scenarios discussed there, I think it's hard to see whether they'd result in an obstacle to alien civilizations or not.

More broadly, I'd be wary of assigning very high confidence to any feature of a post-AI-catastrophe world. AI that could cause an existential catastrophe is a technology we don't currently possess and cannot anticipate in all its details - therefore, I think it's quite likely that an actual catastrophe based on such AI would have at least some unanticipated properties, i.e., it would not fall completely into any category of catastrophe we currently anticipate. Relatively robust high-level considerations such as Omohundro's convergent instrumental goals argument can give us good reasons to nevertheless assign significant credence to some properties (e.g., a superintelligent AI agent seems likely to acquire resources), but I don't think they suffice for >90% credence in anything.

That sounds reasonable.

I agree with the argument. If you buy into the idea of evidential cooperation in large worlds (formerly multiverse-wide superrationality), then this argument might go through even if you don't think alien values are very aligned with humans'. Roughly, ECL is the idea that you should be nice to other value systems because that will (acausally, via evidential/timeless/functional decision theory) make it more likely that agents with different values will also be nice to our values. Applied to the present argument: if we focus more on averting existential risks that would take resources away from other (potentially unaligned) value systems, then this makes it more likely that elsewhere in the universe other agents will focus on averting existential risks that would take resources away from civilizations that happen to be aligned with us.

I'm not that deep into AI safety myself, so keep that in mind. That being said, I haven't heard that thought before, and I basically agree with the idea of "if we fall victim to AI, we should at least do our best to ensure it doesn't end all life in the universe" (which is how I took your post - correct me if that's a bad summary). There certainly are a few ifs involved, though, and the outlined scenario may very well be unlikely (a rough sketch of how these probabilities multiply follows the list):

  • probability of the AI managing to spread through the universe (I'd intuitively assume that, of the set of possible AIs ending human civilization, the subset of AIs that also conquer space is notably smaller; I may well be wrong here, but it may be something to take into account)
  • probability of such an AI spreading far enough, and in such a way, as to effectively prevent the emergence of what would otherwise become a space-colonizing alien civilization
  • probability of alien civilizations existing and ultimately colonizing space in the first place (or at least developing the potential to do so, which they would realize if it were not for our ASI preventing it)
  • probability of aliens having values sufficiently similar to ours
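
Here is that rough sketch of how the ifs compound. The numbers are placeholders I made up purely for illustration, not estimates from anyone in this thread, and treating the conditions as independent is itself a simplification.

```python
# Placeholder probabilities for each "if" above (purely illustrative).
# Multiplying them (assuming independence, a simplification) shows how
# quickly a conjunction of uncertain conditions shrinks.

p_ai_spreads_through_space = 0.5     # an AI that ends human civilization also colonizes space
p_ai_preempts_alien_civ = 0.5        # ...and actually blocks a would-be alien colonizer
p_alien_colonizers_in_reach = 0.3    # such an alien civilization exists within reach at all
p_alien_values_overlap_ours = 0.3    # its values are sufficiently similar to ours

p_whole_scenario = (p_ai_spreads_through_space
                    * p_ai_preempts_alien_civ
                    * p_alien_colonizers_in_reach
                    * p_alien_values_overlap_ours)

print(p_whole_scenario)  # ~0.02 with these made-up numbers
```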

I guess there's also a side to the Fermi paradox that's relevant here: it's not only that we don't see alien civilizations out there, we also don't see any signs of an ASI colonizing space. While there may be many explanations for that, we're still here, seemingly on the brink of becoming/creating just the kind of thing an ASI would instrumentally like to prevent. That is at least some evidence that such an ASI does not yet exist in our proximity, which in turn is minor evidence that we might not create such an ASI either.

In the end I don't really have any conclusive thoughts (yet). I'd be surprised though if this consideration were a surprise to Nick Bostrom.

I'm not really considering AI ending all life in the universe. If I understand correctly, it is unlikely that we or a future AI will be able to influence the universe outside of our Hubble sphere. However, there may be aliens that exist, or will in the future exist, in our Hubble sphere, and I think it would more likely than not be good if they were able to make use of our galaxy and the ones surrounding it.

As a simplified example, suppose there is on average one technologically advanced civilization for every group of 100 galaxies, and that each civilization can access its own 100 galaxies as well as the 100 galaxies of each neighboring civilization.

If rogue AI takes over the world, then it would probably also be able to take over those 100 galaxies. Colonizing some galaxies sounds feasible for an agent that can single-handedly take over the world. If the rogue AI did take over the galaxies, then I'm guessing they would be converted into paperclips or something of the like, and thus have approximately zero value to us. The AI would be unlikely to let any neighboring alien civilization do anything we would value with the 100 galaxies.

Suppose instead there is an existential catastrophe due to a nanotechnology or biotechnology disaster. Then even if intelligent life never re-evolved on Earth, a neighboring alien civilization may be able to colonize those 100 galaxies and do something we would value with them.
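
To put rough numbers on that toy comparison (the per-galaxy values are illustrative assumptions of mine, not figures from anywhere in this thread):

```python
# Toy model from the example above: ~100 galaxies reachable from Earth, with a
# neighboring alien civilization able to colonize them if we never do.
# Per-galaxy values are placeholder assumptions, not anyone's estimates.

GALAXIES = 100
value_per_galaxy_human = 1.0   # Earth-originating life colonizes: full value (by definition here)
value_per_galaxy_alien = 0.7   # somewhat-aligned aliens colonize: partial value
value_per_galaxy_agi = 0.0     # paperclip-style rogue AI colonizes: ~no value to us

value_if_we_survive = GALAXIES * value_per_galaxy_human           # 100
value_if_bio_nano_extinction = GALAXIES * value_per_galaxy_alien  # 70 (aliens move in)
value_if_rogue_agi = GALAXIES * value_per_galaxy_agi              # 0 (the AI keeps the galaxies)

print(value_if_we_survive - value_if_rogue_agi)            # loss from a rogue-AI catastrophe: 100
print(value_if_we_survive - value_if_bio_nano_extinction)  # loss from a bio/nano catastrophe: 30
```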

Thus, I don't think the first two ifs you listed are essential for my reasoning to be relevant.

As for the third if, it would be quite the conjunction for there not to be a single other alien civilization in the Universe, so that seems unlikely. However, if the density of current or future alien civilizations is so low that we will never be in the Hubble sphere of any of them, then that would make my reasoning less relevant.

Thoughts?
