
Starting last year, the EA community has put more focus on space governance. This piece is part of that process: it seeks to collect the reasons why bad or non-existent space governance might be very harmful.

The distinction between near- and long-term risks is fuzzy. I am differentiating between the risks we face in the next few centuries and the risks which will be faced in the very long-run future. 

The individual risks are not fully disjoint. They should be viewed as pointers toward a type of risk rather than as fully distinct buckets.

Since long-termist space governance is a very young field, it’s not clear that this typology will stand the test of time. This post is meant to give us more clarity now, so that we can conduct the relevant research more effectively.

Near Term

Unsustainability

Space expansion could be conducted in a way that later proves unsustainable: resources in Earth’s immediate vicinity could be overexploited early, and their absence would make further expansion harder than it otherwise would have been. This could set back the project of space expansion by a significant amount of time.

Examples of resources in near-Earth space which could be depleted include the limited number of comets that come close to Earth and the ice at the lunar poles. Intensive utilization of Earth orbit might lead to accidents causing Kessler syndrome, which would be costly to clean up. Particularly valuable orbits (GEO, SSO, and the ISS zone) could also become crowded.

We believe that risks in this category are considerably less important than the others.

 

Negative Impact on Governance on Earth 

Even before significant parts of human civilization reside in space, space expansion has the potential to negatively impact the geopolitical balance on Earth.

A malevolent actor in a subordinate military position on Earth could use strategic investment in space capabilities to put itself in a superior position. As satellite capabilities become ever more crucial to modern warfare, an actor could make strategic and focused investments in that area.

In general, the politics of space are a potentially controversial issue. A distribution of benefits that is perceived as unfair could introduce political instability. Since there is no precedent for humanity distributing resources from space, and violence has historically accompanied the conquest of new frontiers, this risk should not be neglected. Disagreement about it could make cooperation on other existential-risk-relevant issues harder.


Increasing Existential Risk

As more human activity shifts off Earth, it may become harder to regulate, since humanity has little experience with space governance and the proper regulatory regime has yet to be set up. This makes the governance of a variety of dangerous technologies harder, so we can expect space expansion to increase all risks whose mitigation requires global governance.

We may expect space expansion to accelerate the deployment of AI systems, as they are more important in space than on Earth: space is quite inhospitable to humans, and ensuring they are protected at all times while executing tasks can be expected to be quite expensive. Making AI systems harder to regulate and increasing humanity’s reliance on them might increase the risk of the "going out with a whimper" scenario described by Paul Christiano.

As development of the relevant technologies proceeds, more and more actors will have access to space and may be able to use it to do harm. One example is the deliberate diversion of asteroids towards Earth. As Carl Sagan noted, any method capable of deflecting an asteroid away from Earth could also be used by malevolent actors to steer an asteroid towards it.

For more details on this topic I recommend this paper by Carson Ezell. 

Long Term

Suboptimal Resource Use

Whatever goals humanity may have in the future, almost everything we might want to accomplish is limited by the amount of matter in the universe and the physical and cosmological laws governing it. On some moral views, the amount of energy we can utilize is directly proportional to the amount of value we can create. If space expansion is done in a chaotic, uncoordinated way, resources may be wasted, causing us to fall short of our potential.

 

Harmful Irreversible Measures

On the current view of humanity's possible futures, there are actions ahead that - once executed - might be reversible only at very high cost. Examples include accidental biological contamination, deliberate panspermia, and space expansion via automated self-replicating probes. Such measures could later prove suboptimal or even immoral upon reflection.

Any future space governance framework should ensure that such actions are always preceded by a long period of reflection or are executed in a more reversible manner. We can expect this to be a hard challenge, since many of the examples listed above are subject to unilateral action, and a lot of coordination or enforcement might be needed to avoid them.

 

Using Control over Space to Enforce Totalitarianism

On Earth, it has so far been impossible for a single actor to establish control over the entire planet; totalitarian regimes have always been limited to a fraction of the Earth’s surface. If humanity establishes more control over outer space, or if human life eventually plays out primarily in space, the obstacles we have seen on Earth so far may no longer apply.

Establishing total control over space would likely give an actor a decisive advantage over any great power limited to Earth. From there, it might be possible to establish a space governance framework that ensures totalitarian power for a long period of time.

This risk is distinct from the claim that moving the political playing field from Earth to space increases the risk of all-encompassing totalitarianism - the opposite might be the case. But space expansion could present one additional opportunity for it.

 

Permanent War 

A different scenario we should try to steer away from is one where space is inhabited by many different actors in permanent military confrontation. Instability could come from a lack of trust due to limited communication over long distances, or from an offense-defense balance that heavily favors offense and strongly pushes actors towards a first strike.

 

Dangerous Competition Dynamics

A politically unstable governance order could also lead to harmful competition dynamics.

For instance, if there is no credible way of ensuring peaceful coexistence, different coalitions may be forced to allocate most of their resources to defense instead of ensuring a flourishing civilization.

Another example: different actors could expend large amounts of resources to increase their speed of expansion, as described by Robin Hanson in “Burning the Cosmic Commons”.

 

Inability to Coordinate on Large-Scale Projects

In general, we want to be able to coordinate on important large-scale projects. One example is aestivation - the idea that civilization should delay most of its computation until the average temperature of the universe is lower, allowing for more efficiency.
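The efficiency argument behind aestivation is usually grounded in Landauer's principle. A minimal sketch of the relationship, assuming the standard idealized bound and ignoring practical overheads:

```latex
% Landauer's principle: the minimum energy required to erase one bit of
% information at background temperature T is
E_{\min} = k_B T \ln 2
% With a fixed energy budget E, the number of irreversible bit operations
% a civilization can perform therefore scales roughly as
N_{\text{bits}} \sim \frac{E}{k_B T \ln 2},
% so waiting until the cosmic background temperature T is lower allows
% far more computation from the same energy reserves.
```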

Another example could be moving galaxies into so-called hyperclusters, which could counteract the increasing distances between them caused by the expansion of the universe.

Inability to coordinate on such projects could lead to wasted resources and leave us unable to reach the most desired end states of space expansion.   
 

Inability to Enforce Rules

While we currently can’t be sure what an optimal governance order beyond Earth should look like, we know that we want it to be able to enforce certain rules over all inhabited parts of the universe. We want to stop malevolent actors from deliberately or instrumentally producing a lot of suffering - in physical as well as digital form. We might also want to ban actors from unilaterally moving stars and galaxies, as well as from wasting a lot of energy in general. Actors should also not be able to launch dangerous self-replicating devices or to deliberately or accidentally trigger vacuum decay.

 

Suffering Risks

Space expansion allows for the existence of many more sentient beings than would be possible on Earth - in physical and potentially in digital form. This means space expansion can enable a lot of good, but it also carries the risk of astronomical suffering.

Humanity may choose to terraform other planets and spread wildlife to them, or wildlife may be part of artificial settlements.

Digital beings might exist for a variety of purposes, and for reasons of entertainment, economic productivity, or information gain they may experience a lot of suffering.

If it is possible to create universes in a laboratory, this could also constitute an s-risk under certain circumstances. 
 

Bad Response to Extraterrestrial Intelligence

There is a non-negligible chance that, in the process of expanding to other stars and galaxies, humanity’s descendants will encounter extraterrestrial intelligence (ETI). Such an encounter would be very hard to plan for and entails the potential for catastrophic outcomes.

The worst outcome is probably a very destructive war, but scenarios in which one party quickly defeats the other and forces its preferences upon it could also be very bad.

Ideally, space expansion and the accompanying governance framework would be set up so as to minimize the chance of a very bad outcome. The framework should also retain the flexibility to arrange for the best outcome given the specific conditions of the encounter.

 

Unknown Unknowns

Finally, there could be risks that we do not yet have the ability to fully grasp. A good space governance framework should be able to respond appropriately to such unknown unknowns.

 

This is one of the pieces written as part of the CHERI Summer Research Program 2022. I wholeheartedly thank the organisers for giving me this opportunity and Joseph Levin for being my mentor.

A significant proportion of work during that time went into the Space Futures Initiative research agenda. I am planning to publish the rest of my work going forward. 

Originally this was meant to be part of a larger sequence I planned to write. Since then I have decided against continuing that work and am publishing the drafts as they are. I might write about my view on space governance as a cause area in the future.

Comments (1)



I'm very glad to see you working and thinking about this—it seems pretty neglected within the EA community. (I'm aware of and agree with the thought that speeding up space settlement is not a priority, but making sure it goes well if it happens does seem important.)
