JordanStone

Astrobiologist @ Imperial College London
602 karma · Joined · Pursuing a doctoral degree (e.g. PhD) · London, UK
www.imperial.ac.uk/people/j.stone22

Participation: 3
Space governance

Sequences: 1
Actions for Impact | Offering services examples

Comments: 69

Disclaimer: I have also applied to Forethought and won't comment on the post directly due to competing interests. 

 

On space governance, you assume two scenarios:

  1. We don’t solve all the alignment/safety problems and everything goes very badly
  2. We solve all the problems and AGI leads to utopian effects

I agree that early space governance work is plausibly not that important in those scenarios, but in what fraction of futures do you see us reaching one of these extremes? Capabilities allowing rapid technological progress can be achieved under various alignment and control scenarios that are not at the extremes:

  • Scenario A: Fast progress without AGI (scaling limits overcome, algorithmic breakthroughs, semi-autonomous robotics).
  • Scenario B: Uneven AGI (not catastrophically misaligned, multipolar, corporate-controlled).
  • Scenario C: AGI that’s aligned to someone but not globally aligned.
  • Scenario D: Multiple AGIs controlled by different actors with competing goals.

And the capabilities that allow rapid technological progress can be developed independently of the wisdom needed to solve and reshape all our space governance problems. This decoupling of capabilities could happen under any of those scenarios:

  • Maybe AI remains under control but important people don't listen to or trust the wisdom from the ASI.
  • Some challenges associated with developing AGI are not solved, and AI emerges as narrowly intelligent but still capable of advanced robotics like autonomous construction and self-replication (so we still get rapid technological progress), without the wisdom to solve all our governance problems.
  • Legal or societal forces prevent AI from taking a leading role in governance.

So, under many scenarios, I don't expect AGI to just solve everything and reshape all our work on space governance. But even if it does reshape governance, some space-related lock-ins remain binding even with AGI:

  • Resource distribution lock-in: which states and corporations have physical access to asteroids, propellant depots, lunar poles, launch capacity.
  • Institutional lock-in: whatever coordination mechanisms exist at AGI creation time are what AGI-augmented institutions will inherit.
  • Strategic stability lock-in: early military architectures (Lagrange-point sensors, autonomous interceptors) become entrenched.

In all these scenarios, early space industrialisation and early high-ground positions create durable asymmetries that AGI cannot trivially smooth over. AGI cannot coordinate global actors instantly. Some of these lock-ins occur before or during the emergence of transformative AI. Therefore, early space-governance work affects the post-AGI strategic landscape and cannot simply be postponed without loss. 

The disagreement could then be over whether we reach AGI in a world where space industrialisation has already begun creating irreversible power asymmetries. If a large-scale asteroid-mining industry or a significant lunar industry emerges before AGI, then a small group controlling that infrastructure could have a huge first-mover advantage: using it to exploit rapid technological progress and lock in their power, or take control of the long-term future, through a primitive Dyson swarm or advanced space-denial capabilities. So, if AI timelines are not as fast as many in this community think, and an intelligence explosion happens closer to 2060 than 2030, then space governance work right now is even more important.

Space governance is also far from an arbitrary choice and is an essential element of AGI preparedness. AGI will operate spacecraft, build infrastructure, and manage space-based sensors. Many catastrophic failure modes (post-AGI power grabs, orbital laser arrays, autonomous swarms, asteroid deflection misuse) require both AGI and space activity. If conceptual breakthroughs don't arrive and we need enormous amounts of energy and compute to train superintelligence, then space expansion is also a potential pathway to achieving superintelligence. Google is already working on Project Suncatcher to scale machine learning in space, and Elon Musk, whose SpaceX has launched around 9,000 Starlink satellites into Earth orbit, has also discussed the value of solar-powered satellites for machine learning. All of this ongoing activity is linked to the development of AGI and locks in physical power imbalances post-AGI.

As I argued in my post yesterday, even without the close links between space governance and AGI, it isn't an arbitrary choice of problem. I think that if a global hegemony doesn't emerge soon after the development of ASI, then it will likely emerge in outer space through the use of AI or self-replication to create large-scale space infrastructure (allowing massive energy generation and access to interstellar space). So, under many scenarios related to the development of AI, competition and conflict will continue into outer space, where the winner could set the long-term trajectory of human civilisation, or the ongoing conflict could squander the resources of the galaxy. This makes space governance more important than drought-resistant crops.

All you have to admit for space governance to be exceptionally important is that some of these scenarios where AGI initiates rapid technological progress but doesn't reshape all governance are fairly likely. 

This is quite a large range, but yeah I get that this comes out of nowhere. The range I cite is based on a few things:

  1. Anders and Stuart's 2013 paper about building a Dyson swarm from Mercury and colonising the galaxy. They estimate a ~36-year construction time. However, they don't include the costs of refining materials or building infrastructure; they assume a 5-year construction time for each solar captor, which I think is too long for a post-AGI world; and I think the disassembly of Mercury is unlikely.
  2. So I have replicated their model and played around with it a lot to look at different strategies, like disassembling asteroids instead, building the swarm much closer to the Sun, including costs for refining materials, and changing the construction time for each solar captor (a toy illustration of how that construction-time parameter drives the overall timescale is sketched after this list). This is what the range I cite is mostly based on, but I don't want to share it publicly because (1) I'm still working on it and (2) I'm unsure how to handle the potential info hazard associated with arguing that Dyson swarms are easier to build than previously thought.
  3. I've also talked about the above with people who have spent many years researching Dyson swarms and similar megastructures, like Anders Sandberg and James Giammona, but I have no idea whether they would endorse the timeframe I cite. 
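
To make the general point concrete without sharing anything from my own model, here is a minimal toy sketch of the exponential-replication arithmetic that drives estimates like Anders and Stuart's. Every number below is a placeholder assumption (hypothetical seed mass, target captor mass, and doubling times), not a value from their paper or from my work; the only point is that total construction time scales roughly linearly with the assumed replication/construction time.

```python
import math

def construction_time_years(target_mass_kg, seed_mass_kg, doubling_time_years):
    """Years for self-replicating manufacturing capacity to grow from a seed mass
    to the target captor mass, assuming purely exponential growth in which each
    doubling of capacity takes doubling_time_years."""
    doublings = math.log2(target_mass_kg / seed_mass_kg)
    return doublings * doubling_time_years

# Placeholder example: a hypothetical 1,000-tonne seed factory building 1e20 kg
# of solar captors, comparing a 5-year and a 1-year doubling time.
for t_double in (5.0, 1.0):
    years = construction_time_years(target_mass_kg=1e20, seed_mass_kg=1e6,
                                    doubling_time_years=t_double)
    print(f"doubling time {t_double} yr -> ~{years:.0f} yr total")
```

The real disagreement is then over what replication/construction time is plausible in a post-AGI world, which is exactly the parameter I think the 2013 estimate sets too conservatively.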

So yeah, good point, that Dyson swarm construction time is not well justified within the post, and the timeline I cite should be taken as just the subjective opinion of someone who's spent a decent amount of time researching it. 

From my own experience and from what I've seen, I think it's common for new contributors to the forum to underestimate the amount of previous work that the discourse here builds on. And downvotes aren't meant to signal disagreement with a post; they're supposed to be something like a quality assessment. So my guess, from a read of your downvoted post, is that the downvotes reflect the fact that the argument you're making has been made before on the forum and within the wider EA community, and you haven't engaged with that.

Maybe search for stuff like "AI-enabled coups", "power grabs", and "gradual disempowerment".

On "Effective altruists should spend more time and money on global systemic change": 20% disagree.

I'm mostly worried about low tractability and, even if it is tractable, our inability to predict the final outcome of advocating for a world government. Maybe the safer option is to pursue traditional methods of promoting international collaboration: treaties, information sharing, panel discussions, etc.

Thanks for this comment! I broadly agree with it all and it was very interesting to read. Thanks in particular for advancing my initial takes on governance (I'm far more comfortable discussing quantum physics than governance systems). 

a) Preventing catastrophe seems much more important for advanced civilizations than I realized, and it's not enough for the universe to be defense-dominated.

b) Robustly good governance seems attainable? It may be possible to functionally 'lock out' catastrophic risk and tyranny risk on the approach to tech maturity, and it seems conceivable (albeit challenging) to softly lock in definitions of 'catastrophe' and 'tyranny' which can then be amended in future as cultures evolve and circumstances change.

Agreed on both. Locking stuff out seems possible, and then as knowledge advances (in terms of moral philosophy or fundamental physics) and new possibilities come to light and priorities change, the "governance system" could be updated from a centralised position, like a software update expanding at the speed of light. Then the main tradeoff is between ensuring no possibility of a galactic x-risk or s-risk you know about could ever happen, and being adaptable to changing knowledge and emerging risks. 

At the scale of advanced civilizations, collapse or catastrophe for even a single star system seems unbearable.

I strongly agree with this. We (as in, humanity) are at a point where we can control what the long-term future in space will look like. We should not tolerate a mostly great future with some star systems falling into collapse or suffering: we are responsible for preventing that, and allowing it to happen at all is inconceivably terrible even if the EV calculation is positive. We're better than naive utilitarianism.

If we buy your argument here Jordan or my takeaways from Joe's talk then we're like, ah man we may need really strong space governance. Like excellent, robust space governance. But no, No! This is a tyranny risk.

There are ways to address the risks I outlined without a centralised government that might be prone to tyranny (echoing your "hand waving" section later):

  • Digital world creation – a super-capable machine with blueprints (not an AGI superintelligence) goes to each star system and creates digital sentient beings. That's it. No need for governance of independent civs.
  • We only send out probes to collect resources from the galaxy and bring them back to our solar system. We can expand in the digital realm here and remain coordinated.
  • Right from the beginning we figure out a governance system with 100% existential security and 0% s-risk (whatever that is). The expansion into space is supervised to ensure that each independent star system begins with this super governance system, but other than that they have liberty.
  • Just implement excellent observation of inhabited star systems. Alert systems that flag bad behaviour to nearby star systems would prevent s-risks from lasting millennia (but, of course, carry conflict risks).

Maybe if we find the ultimate moral good and are coordinated enough to spread it, then the universe will be homogeneous, and there is no need for governance to address unpredicted behaviour.

In particular it seems possible to forcibly couple the power to govern with goodness.

I think this is a crucial point. I'm hopeful of that. If it's possible to lock in that strong correlation, then does that ensure absolute existential security and no s-risks? I think it depends on the goodness. If the "goodness" is based on panbiotic ethics, then we have a universe full of suffering Darwinian biology. If the "goodness" is utilitarian, then the universe becomes full of happiness machines... maybe that's bad. Don't know. It seems that the goodness in your USA example is defined by Christian values, which maybe don't give us the best possible long-term future. Maybe I'm reading too far into your simple model (I find it conceptually very helpful though).

There's also the sort of hand-off or die roll wherein you cede/lose power to something and can't get it back unless so willed by the entity in question. I prefer my sketch of marching to decouple governmental power from competitiveness.

Yeah, I agree. But I think it depends on the way society evolves. If we're able to have a long reflection (which I think is unlikely), then maybe we can build a good God more confidently. But your model sounds more realistic.

I had a discussion with Robin Hanson about this post, available here: 

Hi Josh. I'm not a careers advisor but I'm working on some space governance projects.

I would recommend checking out the sections on space governance in this recent report from William MacAskill and Fin Moorhouse to get an idea of what some effective altruists are currently thinking about in relation to space governance: https://www.forethought.org/research/preparing-for-the-intelligence-explosion

I'd also really recommend getting involved with the Space Generation Advisory Council if you'd like to work on challenges in space tech and governance. They have lots of project groups you can get involved in, covering many different topics like space law and policy, and space safety and sustainability.

I'm happy to have a chat about space governance and effective altruism if you want to book a chat: https://savvycal.com/AstroJordanStone/2cb3cbdb

Could you expand on why you think space would be defense-dominant?

Thanks for the in-depth comment. I agree with most of it. 

if interstellar colonization would predictably doom the long-term future, then people would figure out solutions to that.

Agreed, I hope this is the case. I think there are some futures where we send lots of ships out to interstellar space for some reason, or act too hastily (maybe a scenario where transformative AI speeds up technological development, but not so much our wisdom). Just one mission (or set of missions) capable of self-propagating to other star systems almost inevitably leads to galactic civilisation in the end, and we'd have to catch up to it to ensure existential security, which would become challenging if those missions create von Neumann probes.

"50%" in the survey was about vacuum decay being possible in principle, not about it being possible to technologically induce (at the limit of technology). The survey reported significantly lower probability that it's possible to induce. This might still be a big deal though!

Yeah, this is my personal estimate based on that survey and its responses. I was particularly convinced by one respondent who put 100% probability on it being possible to induce (conditional on the vacuum being metastable), as anything that's permitted by the laws of physics is possible to induce with arbitrarily advanced technology (so, roughly 50% overall, based on the ~50% chance that the vacuum is metastable).
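
To spell out that arithmetic (this is just my reading of the estimate, not a calculation reported by the survey):

\[
P(\text{inducible}) = P(\text{metastable}) \times P(\text{inducible} \mid \text{metastable}) \approx 0.5 \times 1 = 0.5
\]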

Thanks Jacob.

I really like this idea as a way to get around the problem of liberty. Though, I'm not sure how rapid the response from others would have to be to someone initiating vacuum decay - could a 'bad actor' initiate vacuum decay in the time it takes for the system to send an alert and for a response to arrive? With light-speed signalling, even a neighbouring star system a few light-years away implies a round trip of the better part of a decade. I think having a non-intrusive surveillance system would work in a world where near-instant communication between star systems is possible (e.g. wormholes or quantum coupling).
