
A significant portion of our impact may occur through future interactions with, or influences on, civilizations or superpowers beyond Earth. Therefore, our potential impact on other cosmic actors plausibly deserves to be a key consideration in our decisions, and it plausibly deserves greater attention and priority than it receives currently.

In this essay, I will briefly seek to support and explore these claims. The core point I am making about devoting greater consideration to our potential effects on cosmic actors is not limited to AI safety. In principle, it applies more broadly, such as when it comes to how we devise and implement our institutions, norms, decision procedures, and even our values. However, the consideration becomes particularly weighty in contexts where our decisions are likely to influence a long-lasting value lock-in, and hence the consideration seems especially important in the context of AI safety. 

(The term “cosmic AI safety” might sound overly grand or lofty, yet one could alternatively call it “non-parochial AI safety”: AI safety that does not exclude or neglect potential impacts on other cosmic actors.)

The ideas I explore in this essay are similar to ideas explored in Nick Bostrom’s recent draft “AI Creation and the Cosmic Host” as well as in Anders Sandberg’s “Game Theory with Aliens on the Largest Scales”, though my focus is somewhat different. I am not claiming to say anything particularly original here; I am simply trying to say something that seems to me important and neglected.

 

1. Why influence on cosmic actors seems plausible

There are various reasons and world models that suggest that future influence on cosmic actors is quite plausible.

1.1 Our apparent earliness

We have emerged after around 13.8 billion years in a universe that may support the emergence of life for hundreds of billions of years or more. This apparent earliness lends some plausibility — perhaps even a high probability — to the prospect of encountering powerful cosmic actors in the future, even if no such actors exist today.

To be sure, it is not clear whether we are in fact early, as this depends partly on which kinds of star systems can give rise to life (cf. Burnetti, 2016, 2017). Yet it still seems quite plausible that we have appeared early in the window in which the universe can support the emergence of life, which in itself seems a sufficient reason to take potential impacts on cosmic actors into account.
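To make the arithmetic behind this point explicit, here is a minimal sketch with purely illustrative numbers (the length of the habitable window is an assumption, not a known quantity):

```python
# Toy illustration of our apparent earliness (illustrative numbers only).
# Assumes, crudely, that civilizations arrive roughly uniformly over the
# window in which the universe can support the emergence of life.

current_age_gyr = 13.8        # current age of the universe, billions of years
habitable_window_gyr = 200.0  # assumed length of the habitable era (assumption)

fraction_elapsed = current_age_gyr / habitable_window_gyr
print(f"Fraction of the habitable window elapsed: {fraction_elapsed:.1%}")
print(f"Fraction of civilizations arriving after us: {1 - fraction_elapsed:.1%}")
# -> ~6.9% elapsed: under these contestable assumptions, we would be earlier
#    than roughly 93% of all civilizations that will ever arise.
```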

1.2 Grabby aliens

A more specific model that implies future influence on cosmic actors is the grabby aliens model (Olson, 2015; Hanson et al., 2021; Cook, 2022). The grabby aliens model is to some extent built on our apparent earliness, so it is not wholly independent of the point made above, yet it still helps to provide some added structure and salience as to what encounters with future cosmic actors might roughly look like, as can be seen in the video below.

[Video illustrating the grabby aliens model]
In particular, the grabby aliens model implies a future in which any hypothetical Earth-originating cosmic actor would most likely encounter several other cosmic actors. This could potentially result in cooperation or conflict (or anything in between) depending in part on the actors’ respective aims and values.
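To give a rough feel for this structure, here is a crude toy simulation (not the actual Hanson et al. model, just a sketch with made-up parameters) in which civilizations appear at random times and places, expand as spheres at a common speed, and we count how many neighbors each one meets within a fixed time horizon:

```python
import math
import random

# Crude toy sketch in the spirit of the grabby aliens model (made-up numbers).
random.seed(0)

N = 50          # number of expansionist civilizations (assumption)
BOX = 1000.0    # side length of the simulated region, arbitrary distance units
SPEED = 0.5     # expansion speed, distance units per time unit (assumption)
T_MAX = 1500.0  # time horizon of the simulation

birth = [random.uniform(0, 500) for _ in range(N)]
pos = [tuple(random.uniform(0, BOX) for _ in range(3)) for _ in range(N)]

def meeting_time(i, j):
    """First time at which the expanding spheres of i and j touch."""
    d = math.dist(pos[i], pos[j])
    # If both spheres are already expanding, they touch when
    # SPEED*(t - birth[i]) + SPEED*(t - birth[j]) = d.
    t_both = (d / SPEED + birth[i] + birth[j]) / 2
    # If one sphere covers the other's site before the other is born,
    # contact instead happens at the later birth; max() handles both cases.
    return max(t_both, birth[i], birth[j])

encounters = [sum(meeting_time(i, j) <= T_MAX for j in range(N) if j != i)
              for i in range(N)]
print(f"Average neighbors met by T_MAX: {sum(encounters) / N:.1f} of {N - 1}")
```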

Indeed, it is conceivable that such eventual “inter-cosmic-actor” encounters are the most consequential events in the cosmos from an ethical standpoint. By extension, it is conceivable that our influence on such eventual cosmic encounters, even if only a marginal nudge, could ultimately be the most consequential part of our impact. (Though note that this is not my core claim here; my claim is about our potential impact on other cosmic actors in general — not the encounters of grabby aliens in particular — and my claim is merely that such potential impact is plausibly a key consideration.)

Additionally, as Robin Hanson has noted, even if humanity goes extinct and is not succeeded by expansionist AI, it is still conceivable that we could have a non-negligible influence on other grabby aliens. For instance, we might provide relevant information, and perhaps even serve as a coordination point, for other grabby civilizations that have each observed us but not yet encountered each other (Hanson, 2021).

1.3 Silent cosmic rulers

An alternative to the grabby aliens model is one that allows expansionist aliens to be quiet (cf. silent cosmic rulers). This model is different in some key respects. For example, on this model, we are likely to encounter just one cosmic superpower, and that superpower could already be here. In fact, if a notable fraction of expansionist aliens are quiet, such a quiet superpower would likely be here already (cf. the anthropic argument found here).

In such a scenario, there would all but surely be a vast power difference between that superpower and humanity, yet there could still be potential for meaningful human influence, including, and perhaps especially, through AI safety. After all, given the potentially vast reach of such a hypothetical enveloping superpower, even just a marginal influence on that superpower could still be large in absolute terms.
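The anthropic point above can also be given a crude quantitative sketch. What follows is my own simplified rendering with made-up numbers, not the exact argument from the linked post: if expansionist civilizations have by now reached a large fraction of the habitable volume, and only the quiet ones leave an ordinary-looking sky, then observers like us are fairly likely to already sit inside a quiet superpower's domain.

```python
# Toy Bayesian sketch of the "quiet superpower is likely here" argument
# (my own simplification; all numbers are assumptions).
coverage = 0.9        # fraction of habitable volume reached by expanders so far
quiet_fraction = 0.5  # fraction of expansionist civilizations that are quiet

p_inside_quiet = coverage * quiet_fraction
p_inside_loud = coverage * (1 - quiet_fraction)
p_outside = 1.0 - coverage

# Assumption: observers who see an ordinary, untransformed sky can exist
# inside quiet domains and outside all domains, but not inside loud ones,
# and arise at similar rates in both permitted regions.
p_enveloped = p_inside_quiet / (p_inside_quiet + p_outside)
print(f"P(already inside a quiet superpower's domain): {p_enveloped:.0%}")
# -> ~82% under these illustrative assumptions
```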

1.4 Evidential cooperation in large worlds

On some decision theories, direct contact and causal influence are not required for the possibility of influencing and cooperating with other cosmic actors in some real sense. This is a key idea behind what has been termed “evidential cooperation in large worlds” — cooperation with causally disconnected actors elsewhere in the universe (Oesterheld, 2017; Treutlein, 2023). If such influence is possible, it is plausibly of great importance given its potential scope, including in the context of AI safety.
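To illustrate the underlying decision-theoretic idea with a minimal toy calculation (my own illustration with made-up numbers, not anything from the cited works): if my choice is strongly correlated with the choices of many causally disconnected agents, then cooperating can carry higher evidential expected value than defecting, even though my choice causally affects no one else.

```python
# Toy evidential-cooperation calculation (all parameters are assumptions).
N = 1_000_000  # number of correlated agents across the universe
c = 1.0        # my local cost of cooperating
b = 1e-5       # benefit I receive per other agent that cooperates
r_coop = 0.9   # P(a given other agent cooperates | I cooperate)
r_def = 0.1    # P(a given other agent cooperates | I defect)

# Conditioning on my own choice, as evidential decision theory recommends:
ev_cooperate = -c + r_coop * (N - 1) * b
ev_defect = 0.0 + r_def * (N - 1) * b
print(f"EV(cooperate) = {ev_cooperate:.2f}")  # ~8.0
print(f"EV(defect)    = {ev_defect:.2f}")     # ~1.0
# Cooperating wins here despite having no causal effect on anyone else.
```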

1.5 Other kinds of cosmic actors

In his draft on “AI Creation and the Cosmic Host”, Bostrom lists various other kinds of cosmic actors that could conceivably exist, including ones that fall outside a “conventional” naturalistic worldview, such as simulators and theistic gods. Even if we place a very low probability on the existence of these kinds of actors, their possibility might still provide a weak additional reason to give consideration to our potential impact on other cosmic actors, broadly construed.

 

2. Objections

2.1 This seems overly speculative

I sympathize strongly with this sentiment, and part of me indeed has a negative reaction to this entire topic for this very reason. That said, while many of the possibilities raised above are speculative, the probability that we can have a meaningful influence on cosmic actors in at least one of the ways outlined above does not seem extremely low. For example, it seems fairly likely that advanced life will be more prevalent in the future than it is today (e.g. since it will have had more time to emerge), and our existence at this point in time makes it seem plausible that powerful cosmic actors will be sufficiently prevalent to eventually run into each other (as suggested, for example, by the grabby aliens model).
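To spell out the “at least one of the ways” point with placeholder numbers: even if each individual channel of influence seems unlikely, the chance that at least one of them is real need not be small.

```python
# Placeholder probabilities for each channel of possible cosmic influence
# (all numbers are made up purely for illustration):
p_channels = {
    "future grabby encounters": 0.15,
    "quiet superpower already here": 0.05,
    "evidential cooperation": 0.10,
    "other cosmic actors (simulators, etc.)": 0.02,
}

p_none = 1.0
for p in p_channels.values():
    p_none *= 1 - p  # treats the channels as roughly independent (assumption)

print(f"P(at least one channel is real): {1 - p_none:.0%}")  # -> ~29%
```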

2.2 This seems intractable

The tractability of influencing potential cosmic actors is admittedly unclear, yet this is hardly a good reason to dismiss the issue. On the contrary, given the apparent plausibility and the potential scale of such influence, we appear to have good reason to explore the issue further, so as to get a better sense of its tractability and how we may act wisely in light of this prospect.

 

3. Potential biases

It might be easy to neglect considerations about our potential impact on other cosmic agents, even for reasons that we ourselves do not endorse on reflection. For example, it may be easy to neglect such potential impact because it feels far away in time and space; such distance might make it feel less important, even if we at another level hold that what happens far away in time and space is not inherently less important than what happens nearby. In short, spatial and temporal biases may distort our assessments of the importance of this consideration.

Similarly, there might be hidden biases of partiality that distort our priorities (a bias relative to a stated commitment to impartiality, or even just to some measure of impartiality). For instance, in the context of AI safety, we might be inclined to downplay considerations about far-future impacts beyond Earth, and to instead focus exclusively on, say, maximizing the chances of our own survival, or reducing human suffering. And while aims like “maximize the chances of our own survival” and “create the best outcomes over all time and space” may be convergent to some extent, it would be extremely surprising if they were anything close to perfectly convergent.

Impartial AI safety would plausibly give strong consideration to our potential impact on other cosmic agents, whereas AI safety that exclusively prioritizes, say, human survival or human suffering reduction would probably not give it strong consideration, if indeed any consideration at all. So the further we diverge from ideals of impartiality in our practical focus, the more likely we may be to neglect our potential impact on other cosmic agents.

 

4. What can we do?

In broad terms, perhaps the main thing we can do at this point to improve our expected impact on other cosmic actors is simply to focus more on it: to make such potential impact more salient, to orient more toward it, to do more research on it, and so on. In particular, it seems worth giving greater prominence to this consideration when we seek to design or otherwise shape future AI systems, such as by exploring how future AI systems could be better optimized for potential interactions with other cosmic actors.

Relatedly, it might be helpful to look at familiar problems and focus areas through the lens of “potential impact on cosmic actors”. For instance, how can we best reduce risks of astronomical future suffering (s-risks) if most of our impact will occur through, or in interaction with, other cosmic actors? This question could be explored for a variety of potential cosmic actors and types of influence.

In addition, as Bostrom also stresses, it may be worth adopting an attitude of humility, such as by updating toward an attitude of cosmic considerateness, as well as by being “modest, willing to listen and learn” (Bostrom, 2022, sec. 39d; 2024).

Those loose suggestions notwithstanding, it is very much an open question how to improve our expected impact on other cosmic actors. More research is needed.

 

5. Potential impact on cosmic actors as an opportunity

Finally, it is worth noting that while the prospect of influencing other cosmic actors might feel dizzyingly abstract and perhaps even overwhelming, it can also be viewed as an opportunity and as something that can be approached in concrete and realistic steps. After all, if our counterfactual influence on other cosmic actors represents a large fraction of our expected impact, it would indeed, almost by definition, be a massive opportunity.



Comments (3)

Influence on cosmic actors seems not only "plausible" but inevitable to me. Everything we do influences them in expectation, even if extremely indirectly (e.g., anything that directly or indirectly reduces X-risks reduces the likelihood of alien counterfactuals and increases that of interaction between our civilization and alien ones). The real questions seem to be i) how crucial this influence is for evaluating whether the work we do is good or bad; and ii) whether we can predictably influence them (right now, we know we are influencing them; we simply have no idea if this is in a way that makes the future better or worse). I think your first section gives good arguments in favor of answering "plausibly quite crucial" to (i). As for (ii), your fourth section roughly responds "maybe, but we'd still have to figure out precisely how", which seems fair (although, fwiw, I think I'm more skeptical than you that we'll ever find evidence robust enough to warrant updating away from radical agnosticism on whether our influence on cosmic actors makes the future better or worse).

Also, this is unrelated to the point of your post but I think your second section should invite us to reflect on whether longtermists can/should ignore the unpredictable (see, e.g., this recent comment thread and the references therein) since this may be a key -- and controversial -- assumption behind the objections you respond to.

Thanks for this interesting post :)

Thanks for your comment :)

“fwiw, I think I'm more skeptical than you that we'll ever find evidence robust enough to warrant updating away from radical agnosticism on whether our influence on cosmic actors makes the future better or worse”

I guess there are various aspects that are worth teasing apart there, such as: humanity's overall influence on other cosmic actors, a given altruistic community's influence on cosmic actors, individual actions taken (at least partly) with an eye to having a beneficial influence on (or together with) other cosmic actors, and so on. I guess our analyses, our degrees of agnosticism, and our final answers can differ greatly across different questions like these. For example, individual actions might be less difficult to optimize given their smaller scale and given that we have greater control over them (even if they're still very difficult to predict and optimize in absolute terms).

I also think a lot depends on the meaning of "radical agnosticism" here. A weak interpretation might be something like "we'll generally be pretty close to 50/50, all things considered". I'd agree that, in terms of long-term influence, that's likely to be the best we can do for the most part (though I also think it's an open question, and I don't see much reason to be firmly convinced of, or committed to, the view that we won't ever be able to do better).

A stronger interpretation might be something like "we'll practically always be exactly at — or indistinguishably close to — 50/50, all things considered". That version of radical agnosticism strikes me as too radical. On its face, it can seem like a stance of exemplary modesty, yet on closer examination, I actually think it's the opposite, namely an extremely strong claim. I mean, it seems to me like a striking "throw a ball in the air and have it land and balance perfectly on a needle" kind of coincidence to end at exactly — or indistinguishably close to — 50/50 (or at any other position of complete agnosticism, e.g. even if one rejects precise credences).[1]

In fact, I think the point about how we can't rule out that we might find better, more confident answers in the future (e.g. with the help of new empirical insights, new conceptual frameworks, better AI tools, and so on) is alone a reason not to accept such "strong" radical uncertainty, as this point suggests that further exploration is at least somewhat beneficial in expectation.

[1] For example, if you've weighed a set of considerations that point vaguely in one direction, it would seem like quite a coincidence if “unknown considerations” were to exactly cancel out those considerations. I see you've discussed whether unknown considerations might be positively correlated with known considerations, but it seems that even zero correlation (which is arguably a defensible prior) would still lead you to go with the conclusion drawn from the known considerations; you'd seemingly need to assume a (weakly) negative correlation to consistently get back to a position of complete agnosticism.
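To put a quick number on this footnote's point (a toy model of my own, not anything from the original discussion): if the known considerations sum to a small positive value and the unknown considerations are drawn independently of them with mean zero, then the expected total stays positive and slightly more than half of the draws come out positive, so zero correlation does not get one back to complete agnosticism.

```python
import random

# Toy model of the footnote's argument (illustrative assumptions only).
random.seed(0)

known = 1.0   # net weight of known considerations (slightly positive)
sigma = 10.0  # unknown considerations may dwarf the known ones

# Unknown considerations drawn independently of the knowns (zero correlation):
totals = [known + random.gauss(0.0, sigma) for _ in range(100_000)]

mean_total = sum(totals) / len(totals)
share_positive = sum(t > 0 for t in totals) / len(totals)
print(f"Mean total: {mean_total:.2f}")                     # stays near +1.0
print(f"Share of positive totals: {share_positive:.1%}")   # a bit above 50%
```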

Executive summary: Our potential influence on future cosmic civilizations and actors beyond Earth deserves greater consideration in AI safety and other decisions, as these interactions may represent a significant portion of our long-term impact.

Key points:

  1. Several models suggest future cosmic interactions are plausible: our early emergence in the universe's timeline, the "grabby aliens" model, the possibility of silent cosmic rulers, and evidential cooperation across space-time.
  2. While speculative, the combined probability of meaningful cosmic influence isn't negligibly low, and the potential scale warrants serious consideration despite uncertain tractability.
  3. Common biases (temporal, spatial, and partiality) may cause us to undervalue potential impacts on cosmic actors relative to more immediate human-centric concerns.
  4. Key recommendation: Incorporate cosmic impact considerations more explicitly into AI safety work and research, while maintaining an attitude of humility and cosmic considerateness.
  5. This perspective represents an opportunity - if cosmic influence comprises a large portion of our expected impact, focusing on it could significantly increase our positive influence on the future.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
