JordanStone

Astrobiologist @ Imperial College London
611 karma · Pursuing a doctoral degree (e.g. PhD) · London, UK
www.linkedin.com/in/stonescience

Bio

Participation (3)

Hello! I'm Jordan and I'm working on space governance and longtermism. I'm interested in finding interventions in the space domain that could lead to a flourishing long-term future.

How I can help others

If you want to know more about longtermist space governance, I'm a good person to talk to :)

Comments (74)

Thanks Tom, yeah the threat model for stage two is quite similar to your post, where I'm expecting one actor to potentially outgrow the rest of the world by grabbing space resources. However, I do think there might be dynamics in space that feed into a first mover advantage, like Fin's recent post about shutting off space access to other actors, or some way to get to resources first and defend them (not sure about this yet), or just initiating an industrial explosion in space before anyone else (which maybe pays off in the long-term because Earth eventually reaches a limit or slows down in growth compared to Dyson swarm construction). 

As for the threat model of stage 1, I don't have strong opinions on whether a decisive strategic advantage on Earth is more likely to be achieved with superexponential growth or conflict, though your post is very compelling in favour of the former.

My current guess is that there are so many orders of magnitude of growth still available on Earth that superexponential growth would lead to a decisive strategic advantage without even going to space. If that's right (which it might not be), then it's unclear that stage two adds that much. 

I'm thinking about this sort of thing at the moment in terms of ~what percentage of worlds a decisive strategic advantage is achieved on Earth vs in space, which informs how important space governance work is. I find the 3 stages of competition model to be useful to figure that out. It's not clear to me that Earth obviously dominates and I am open to stage 2 actually not mattering very much, but I want to map out strategies here. 

I do already think that stage 3 doesn't matter very much, but I include it as a stage because I may be in the minority in believing this; e.g. Will and Fin imply that races to other star systems are important in "Preparing for an Intelligence Explosion", which I think is an opinion based on work by Anders Sandberg and Toby Ord.

Super speculative, probably highly intractable thought that I haven't done any research on: there seem to be a lot of reasons to think we might be living in a simulation besides just Nick Bostrom's simulation argument, like:

  • All the fundamental constants and properties of the universe are perfectly suited to the emergence of sentient life. This could be explained by the Anthropic principle, or it could be explained by us living in a simulation that has been designed for us.
  • The Fermi Paradox: there don't seem to be any other civilizations in the observable universe. There are many explanations for the Fermi Paradox, but one additional explanation might be that whoever is simulating the universe created it for us, or they don't care about other civilizations, so haven't simulated them.
  • We seem to be really early on in human history. Only on the order of 100 billion people have ever lived, but we expect many trillions to live in the future. This can be explained by the Doomsday argument - that in fact we are in the period of human history where most people will live because we will soon go extinct. However, this phenomenon can also be explained by us living in a simulation - see next point.
  • Not only are we really early, but we seem to be living at a pivotal moment in human history that is super interesting. We are about to create intelligence greater than ourselves, expand into space, or probably all die. Like if any time in history were to be simulated, I think there's a high likelihood it would be now. 

If I were pushed into a corner, I might say the probability we are living in a simulation is something like 60%, where most of the evidence seems to point towards us being in a simulation. However, the doubt comes from the high probability that I'm just thinking about this all wrong - like, of course I can come up with a motivation for a simulation to explain any feature of the universe... it would be hard to find a feature that couldn't be explained by the simulators just being interested in that particular thing. But in any case, that's still a really high probability of everyone I love potentially not being sentient or even real (fingers crossed we're all in the simulation together). Also, being in a simulation would change our fundamental assumptions about the universe and life, and it would be really weird if that had no impact on moral decision-making. 

But everyone I talk to seems to have a relaxed approach to it, like it's impossible to make any progress on this and that it couldn't possibly be decision-relevant. But really, how many people have worked on figuring it out with a longtermist or EA-mindset? Some reasons it might be decision-relevant:

  • We may be able to infer from the nature of the universe and the natural problems ahead of us what the simulators are looking to understand or gain from the simulation (or at least we might attach percentage likelihoods to different goals). Maybe there are good arguments to aim to please the simulators, or not. Maybe we end the simulation if there are end-conditions?
  • Being in a simulation bears on the probability that aliens exist (they're probably less likely to exist if we are in a simulation), which helps with long-term grand planning. Like, we wouldn't need to worry about integrating defenses against alien attacks or engaging in acausal trade with aliens.
  • We can disregard arguments like the Doomsday Argument, lowering our p(doom).

Some questions I'd ask are:

  • How much effort have we put into figuring out if there is something decision-relevant to do about this from a moral impact perspective? How much effort should we put into this?
  • How much effort has gone into figuring out empirically whether we are, in fact, in a simulation? What might we expect to see in a simulated universe vs a real one? How can we search for and detect that? 

Overall, this does sound nuts to me and it probably shouldn't go further than this quick take, but I do feel like there could be something here, and it's probably worth a bit more attention than I think it has gotten (at least one person doing a proper research project on it). Lots of other stuff sounded crazy but now has significant work and (arguably) great progress behind it, like trying to help people billions of years in the future, working on problems associated with digital sentience, and addressing wild animal welfare. There could be something here and I'd be interested in hearing thoughts (especially a good counterargument to working on this so I don't have to think about it anymore) or learning about past efforts. 

Yeah, agreed on that point. Folks at Forethought aren't necessarily thinking about what a near-optimal future should look like, they're thinking about how to get civilisation to a point where we can make the best possible decisions about what to do with the long-term future. 

Yeah, lists exist for all the people working on space governance from a longtermist perspective, and they tend to list about 10-15 people. I'm like 90% sure I know of everyone working on longtermist space governance, and I'd estimate that there are the equivalent of ~3 people working full time on this. There's not as much undercover work required for space governance, but I don't like to share lists of names publicly without permission.

At the moment, the main hub for space governance is Forethought and most people contact Fin Moorhouse to learn more about space governance as he's the author of the 80K problem profile on space governance and has been publishing work with Forethought on or related to space governance. From there, people tend to get a lay of the land, introductions are made, and newcomers will get a good idea of what people are working on and where they might be able to contribute.

Thanks for this comment, very useful feedback.

In A&S's paper, they assume a 5 year construction time for solar captors, which is essentially the doubling time. That is actually extremely conservative, especially if we're considering post-AGI robotics. I imagine the construction time from asteroid material to solar captors might be on the order of days to weeks, but I definitely want to look into that. Great point. There might be other fundamental constraints though: the rate-limiting factor is more likely to be a rare material required for something like onboard computers, or argon for ion thrusters, something like that. 
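
To give a feel for why that per-captor construction time dominates the overall timeline, here's a minimal toy sketch (not A&S's actual model; the number of doublings is a made-up illustrative value, and the whole thing just assumes manufacturing capacity copies itself once per doubling period):

    # Toy model: if manufacturing capacity self-replicates, total build time is
    # roughly (number of doublings needed) x (doubling time). The per-captor
    # construction time sets the doubling time, so cutting it from years to
    # weeks shortens the whole programme by the same factor.

    def total_build_time_years(doubling_time_years: float, n_doublings: int) -> float:
        return doubling_time_years * n_doublings

    N_DOUBLINGS = 7  # illustrative only; really depends on the initial seed capacity

    for label, doubling_time in [("5 years (A&S assumption)", 5.0),
                                 ("1 month", 1 / 12),
                                 ("1 week", 7 / 365)]:
        years = total_build_time_years(doubling_time, N_DOUBLINGS)
        print(f"{label:>25}: ~{years:6.2f} years total")

Whatever the true number of doublings turns out to be, the linear scaling is the takeaway: days-to-weeks per captor instead of five years compresses the same growth curve by a factor of a few hundred.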

I think the comparison to economic might is interesting. But a weaker actor on Earth trailing far behind the leading economic power could initiate Dyson swarm construction and overtake the leader with either a huge investment or a head start. It's not so much that they would wait around, but that they would change their strategy from building economic might on Earth to power generation in space[1]. In the end, if it's the long-term future that has the most moral value, then the most important strategic outcome is who has control of that future. All the economic might of Earth cannot compete with a Dyson swarm[2], so the Dyson swarm owner has control of the future. Economic might on Earth should be seen as instrumental to providing an advantage in stage 2 competition, or to denying others access to stage 2 competition by winning stage 1 on Earth. 

 

  1. ^

    I think one nation could get away with this if:

    1. They have existing infrastructure in space related to mining and manufacturing, allowing them to go big fast.
    2. Other nations are in a competition for survival and don't have resources to spare for outer space or long-term investment.
    3. They make a ridiculously large investment in space manufacturing that others wouldn't consider because of the costs to the nation's economy and security on Earth.
    4. They do it slowly and stealthily using existing space assets until a critical mass is reached. 
  2. ^

    The Sun is 99.8% of the mass of the Solar System. All the energy is there.

    Aside from sheer power, I think a huge intelligence advantage on Earth could win out against a Dyson swarm. Like if I was controlling a Dyson swarm with AGI but there was an ASI on Earth, I would be scared. But I wouldn't be worried about Earth's economic might. 

Disclaimer: I have also applied to Forethought and won't comment on the post directly due to competing interests. 

 

On space governance, you assume 2 scenarios:

  1. We don’t solve all the alignment/safety problems and everything goes very badly
  2. We solve all the problems and AGI leads to utopian effects

I agree that early space governance work is plausibly not that important in those scenarios, but in what percentage of futures do you see us reaching one of these extremes? Capabilities allowing for rapid technological progress can be achieved under various scenarios related to alignment and control that are not at the extremes:

  • Scenario A: Fast progress without AGI (scaling-limits-overcome, algorithmic breakthroughs, semi-autonomous robotics).
  • Scenario B: Uneven AGI (not catastrophically misaligned, multipolar, corporate-controlled).
  • Scenario C: AGI that’s aligned to someone but not globally aligned.
  • Scenario D: Multiple AGIs controlled by different actors with competing goals.

And capabilities allowing for rapid technological progress can be developed independently of the wisdom needed to solve and reshape all our space governance problems. This decoupling could happen under any of those scenarios: 

  • Maybe AI remains under control but important people don't listen to or trust the wisdom from the ASI.
  • Some challenges associated with developing AGI are not solved, and AI emerges as narrowly intelligent but with advanced robotic capabilities like autonomous construction and self-replication (so we still get rapid technological progress), yet without the wisdom to solve all our governance problems.  
  • Legal or societal forces prevent AI from taking a leading role in governance.

So, under many scenarios, I don't expect AGI to just solve everything and reshape all our work on space governance. But even if it does reshape the governance, some space-related lock-ins remain binding even with AGI:

  • Resource distribution lock-in: which states and corporations have physical access to asteroids, propellant depots, lunar poles, launch capacity.
  • Institutional lock-in: whatever coordination mechanisms exist at AGI creation time are what AGI-augmented institutions will inherit.
  • Strategic stability lock-in: early military architectures (Lagrange-point sensors, autonomous interceptors) become entrenched.

In all these scenarios, early space industrialisation and early high-ground positions create durable asymmetries that AGI cannot trivially smooth over. AGI cannot coordinate global actors instantly. Some of these lock-ins occur before or during the emergence of transformative AI. Therefore, early space-governance work affects the post-AGI strategic landscape and cannot simply be postponed without loss. 

The disagreement could then be over whether we reach AGI in a world where space industrialisation has already begun creating irreversible power asymmetries. If a large-scale asteroid mining industry or significant industry on the Moon emerges before AGI, then a small group controlling this infrastructure could have a huge first-mover advantage: they could use it to exploit rapid technological progress and lock in their power forever, or take control of the long-term future, through the creation of a primitive Dyson swarm or advanced space denial capabilities. So, if AI timelines are not as fast as many in this community think they are, and an intelligence explosion happens closer to 2060 than 2030, then space governance work right now is even more important. 

Space governance is also totally not arbitrary and is an essential element of AGI preparedness. AGI will operate spacecraft, build infrastructure, and manage space-based sensors. Many catastrophic failure modes (post-AGI power grabs, orbital laser arrays, autonomous swarms, asteroid deflection misuse) require both AGI and space activity. If it turns out that conceptual breakthroughs don't come about and we need ridiculous amounts of energy/compute to train superintelligence, then space expansion is also a potential pathway to achieving superintelligence. Google is already working on Project Suncatcher to scale machine learning in space, and Elon Musk, whose company SpaceX has launched 9000 Starlink satellites into Earth orbit, has also discussed the value of solar-powered satellites for machine learning. All of this ongoing activity is linked to the development of AGI and locks in physical power imbalances post-AGI.

As I argued in my post yesterday, even without the close links between space governance and AGI, it isn't an arbitrary choice of problem. I think that if a global hegemony doesn't emerge soon after the development of ASI, then it will likely emerge in outer space through the use of AI or self-replication to create large scale space infrastructure (allowing massive energy generation and access to interstellar space). So, under many scenarios related to the development of AI, competition and conflict will continue into outer space, where the winner could set the long-term trajectory of human civilisation or the ongoing conflict could squander the resources of the galaxy. This makes space governance more important than drought-resistant crops. 

All you have to admit for space governance to be exceptionally important is that some of these scenarios where AGI initiates rapid technological progress but doesn't reshape all governance are fairly likely. 

This is quite a large range, but yeah I get that this comes out of nowhere. The range I cite is based on a few things:

  1. Anders and Stuart's paper from 2013 about building a Dyson swarm from Mercury and colonising the galaxy. They estimate a ~36 year construction time. However, they don't include costs to refine materials or build infrastructure; they estimate a 5 year construction time for solar captors, which I think is too long for post-AGI; and I think the disassembly of Mercury is unlikely.
  2. So I have replicated their paper and played around with the model a lot to look at different strategies, like disassembling asteroids instead, building the swarm a lot closer to the Sun (see the rough sketch after this list for why that matters), including costs for refining materials, and changing the construction time for each solar captor. This is mostly what the range I cite is based on, but I don't want to share it publicly because (1) I'm still working on it and (2) I'm unsure how to handle the potential info hazard associated with arguing that Dyson swarms are easier to build than previously thought.
  3. I've also talked about the above with people who have spent many years researching Dyson swarms and similar megastructures, like Anders Sandberg and James Giammona, but I have no idea whether they would endorse the timeframe I cite. 
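
A quick illustration of the closer-to-the-Sun point referenced above (the orbital radii, target power, and efficiency below are arbitrary example values, not numbers from my model or from A&S): solar flux follows an inverse-square law, so the collector area, and hence the material, needed for a fixed power output scales with the square of the swarm's orbital radius.

    import math

    SOLAR_LUMINOSITY_W = 3.8e26   # total radiated power of the Sun
    AU_M = 1.496e11               # one astronomical unit in metres
    TARGET_POWER_W = 1e22         # arbitrary example target output

    def collector_area_m2(orbital_radius_au: float, target_power_w: float,
                          efficiency: float = 0.2) -> float:
        """Collector area needed to deliver target_power_w at a given orbital
        radius, assuming a flat 20% conversion efficiency (illustrative only)."""
        flux_w_per_m2 = SOLAR_LUMINOSITY_W / (4 * math.pi * (orbital_radius_au * AU_M) ** 2)
        return target_power_w / (flux_w_per_m2 * efficiency)

    for r_au in [1.0, 0.39, 0.1]:   # Earth's orbit, Mercury's orbit, a close-in swarm
        print(f"r = {r_au:4.2f} AU -> collector area ~ {collector_area_m2(r_au, TARGET_POWER_W):.2e} m^2")

A swarm at 0.1 AU needs about 1% of the collector area (and roughly that fraction of the material) that the same power output would require at 1 AU, though thermal and material constraints get harsher that close to the Sun.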

So yeah, good point, that Dyson swarm construction time is not well justified within the post, and the timeline I cite should be taken as just the subjective opinion of someone who's spent a decent amount of time researching it. 

From my own experience and from what I've seen, I think it's common for new contributors to the forum to underestimate the amount of previous work that the discourse here builds on. And downvotes aren't used to disagree with a post, but are supposed to be used as something like a quality assessment. So my guess from a read of your downvoted post is that the downvotes reflect the fact that the argument you're making has been made before on the forum and within the wider EA community and you haven't engaged with that.

Maybe search for stuff like "AI-enabled coups", "power grabs", and "gradual disempowerment".

20% disagree

Effective altruists should spend more time and money on global systemic change

I'm mostly worried about low tractability and, even if it is tractable, our inability to predict the final outcomes of advocating for a world government. Maybe the safe option is to pursue traditional methods to promote international collaboration: treaties, information sharing, panel discussions, etc. 

Thanks for this comment! I broadly agree with it all and it was very interesting to read. Thanks in particular for advancing my initial takes on governance (I'm far more comfortable discussing quantum physics than governance systems). 

a) Preventing catastrophe seems much more important for advanced civilizations than I realized and it's not enough for the universe to be defense-dominated. 

b) Robustly good governance seems attainable? It may be possible to functionally 'lock-out' catastrophic-risk and tyranny-risk on approach to tech maturity and it seems conceivable (albeit challenging) to softly lock-in definitions of 'catastrophe' and 'tyranny' which can then be amended in future as cultures evolve and circumstances change. 

Agreed on both. Locking stuff out seems possible, and then as knowledge advances (in terms of moral philosophy or fundamental physics) and new possibilities come to light and priorities change, the "governance system" could be updated from a centralised position, like a software update expanding at the speed of light. Then the main tradeoff is between ensuring no possibility of a galactic x-risk or s-risk you know about could ever happen, and being adaptable to changing knowledge and emerging risks. 

At the scale of advanced civilizations collapse/catastrophe for even a single star system seems unbearable.

I strongly agree with this. We (as in, humanity) are at a point where we can control what the long-term future in space will look like. We should not tolerate a mostly great future with some star systems falling into collapse or suffering - we are responsible for preventing that and allowing it to happen at all is inconceivably terrible even if the EV calculation is positive. We're better than naive utilitarianism. 

If we buy your argument here Jordan or my takeaways from Joe's talk then we're like, ah man we may need really strong space governance. Like excellent, robust space governance. But no, No! This is a tyranny risk.

There are ways to address the risks I outlined without a centralised government that might be prone to tyranny (echoing your "hand waving" section later):

  • Digital World creation – super capable machine with blueprints (not an AGI superintelligence) goes to each star system and creates digital sentient beings. That’s it. No need for governance of independent civs.
  • We only send out probes to collect resources from the galaxy and bring them back to our solar system. We can expand in the digital realm here and remain coordinated.
  • Right from the beginning we figure out a governance system with 100% existential security and 0% s-risk (whatever that is). The expansion into space is supervised to ensure that each independent star system begins with this super governance system, but other than that they have liberty.
  • Just implement excellent observation of inhabited star systems. Alert systems for bad behaviour to nearby star systems prevents s-risks from lasting millennia (but, of course, carries conflict risks). 

Maybe if we find the ultimate moral good and are coordinated enough to spread that, then the universe will be homogeneous, so there is no need for governance to address unpredicted behaviour.

In particular it seems possible to forcibly couple the power to govern with goodness.

I think this is a crucial point. I'm hopeful of that. If it's possible to lock in that strong correlation then does that ensure absolute existential security and no s-risks? I think it depends on goodness. If the "goodness" is based on panbiotic ethics, then we have a universe full of suffering Darwinian biology. If the "goodness" is utilitarian then the universe becomes full of happiness machines... maybe that's bad. Don't know. It seems that the goodness in your USA example is defined by Christian values, which maybe don't give us the best possible long-term future. Maybe I'm reading too far into your simple model (I find it conceptually very helpful though).

There's also the sort of hand-off or die roll wherein you cede/lose power to something and can't get it back unless so willed by the entity in question. I prefer my sketch of marching to decouple governmental power from competitiveness.

Yeah I agree. But I think it's dependent on the way that society evolves. If we're able to have a long reflection (which I think unlikely), then maybe we can build a good God more confidently. But your model sounds more realistic. 
