JordanStone

Astrobiologist @ Imperial College London
414 karma · Pursuing a doctoral degree (e.g. PhD) · London, UK
www.imperial.ac.uk/people/j.stone22

Bio

Participation (3)

Searching for life on Mars @ Imperial College London

Lead of the Space Generation Advisory Council, Cosmic Futures project. 

Interested in: Space Governance, Great Power Conflict, Existential Risk, Cosmic threats, Academia, International policy

 

Chilling the f*** out is the path to utopia

How others can help me

If you'd like to chat about space governance or existential risk please book a meeting!

How I can help others

Wanna know about space governance? Then book a meeting!! - I'll get an email and you'll make me smile because I love talking about space governance :D

Sequences (1)

Actions for Impact | Offering services examples

Comments (51)

Edit: changed the title from "Galactic x-risks: Obstacles to accessing the cosmic Endowment" to "Interstellar travel will probably doom the long-term future". I wanted to highlight the main practical suggestion from the post :)

Thanks Oscar :)

Yeah there are so many horrible trade-offs to figure out around long-term resilience and liberty/diversity. I'm hopeful that these are solvable with a long reflection (and superintelligence!).

Hi Joseph, glad you found the post interesting :)

Yeah, for "the way forward" section I explicitly assume that alien civilisations have not already developed. This might be wrong, I don't know. One possible argument in line with my reasoning around galactic x-risks is that aliens don't exist because of the Anthropic principle - if they had already emerged then we would have been killed a long time ago, so if we exist then it's impossible for aliens civilisations to have emerged already. No alien civilisations exist for the same reason that the fine structure constant allows biochemistry. 

 

I'm not sure if any of these galactic level existential risks are tractable in any meaningful way at our current level of development. Maybe we should take things one step at a time?

I totally agree with this statement. I have huge uncertainty about what awaits us in the long-term future (in the post I compared myself to an Ancient Roman trying to predict AI alignment risks). But it seems that, since the universe is currently conducive to life, the unknowns may be more likely to end us than help us. So the main practical suggestion I have is that we take things one step at a time and hold off on interstellar travel (which could plausibly occur in the next few decades) until we know more about galactic x-risks and galactic governance. 

It's not necessarily that these galactic-scale considerations will happen soon or are tractable, but that we might set off a series of events (i.e., the creation of a self-propagating spacefaring civilisation) that interferes with the best possible strategy for avoiding them in the long-term future. I don't claim to know what that strategy is, but I suggest some requirements a governance system may have to meet.

Yeah that's true. 

I think 1000 is where I would start to get very worried intuitively, but there could be hundreds of millions of habitable planets in the Milky Way, so theoretically a galactic civilisation could eventually comprise that many colonies if it didn't kill itself before then.

I guess the probability of one of these civilisations initiating an s-risk or galactic x-risk would just increase with the size of the galactic civilisation. So the more that humanity expands throughout the galaxy, the greater the risk.

Yeah sure, it's like the argument that if you put infinite chimpanzees in front of typewriters, one of them will eventually write Shakespeare. A galactic civilisation would be very dispersed, and most likely each 'colony' occupying a solar system would govern itself independently. So they could be treated as independent actors sharing the same space, and there might be hundreds of millions of them. In that case, the probability that at least one of those millions of independent actors creates astronomical suffering becomes extremely high, near 100% (see the sketch below). I used digital sentience as an example because it's the risk of astronomical suffering that I find most terrifying - IF digital sentience is possible, then the number of suffering beings it would be possible to create could conceivably outweigh the value of a galactic civilisation. That 'IF' contains a lot of uncertainty on my part.
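To make the "it only takes one" point concrete, here's a rough back-of-the-envelope sketch. The per-colony probabilities and colony counts are purely illustrative numbers I've picked for the example, not figures I'm confident in: if each of N independent colonies has some small probability p of ever creating astronomical suffering, the chance that at least one does is 1 − (1 − p)^N, which climbs towards 100% very quickly as N grows.

```python
# Back-of-the-envelope: probability that at least ONE of N independent
# colonies ever creates astronomical suffering, assuming each colony
# does so independently with an illustrative (made-up) probability p.
def p_at_least_one(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

for n in [1_000, 1_000_000, 100_000_000]:   # number of independent colonies (assumed)
    for p in [1e-6, 1e-4]:                  # per-colony probability (assumed)
        print(f"N={n:>11,}  p={p:.0e}  ->  P(at least one) = {p_at_least_one(p, n):.6f}")
```

Even with a one-in-a-million chance per colony, a civilisation of 100 million colonies ends up with an aggregate probability of roughly 1 − e^(−100), i.e. effectively certain under these assumptions.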

But this also applies to tyrannous governments: how many of those independent civilisations across a galaxy will become tyrannous and cause great suffering to their inhabitants? How many of them will terraform other planets and start biospheres of suffering beings?

The same logic also applies to x-risks that affect a galactic civilisation:

all it takes is one civilization of alien ass-hat griefers who send out just one Von Neumann Probe programmed to replicate, build N-D lasers, and zap any planet showing signs of technological civilization, and the result is a galaxy sterile of interplanetary civilizations until the end of the stelliferous era (at which point, stars able to power an N-D laser will presumably become rare). (Charlie Stross)

Stopping these things from happening seems really hard. It seems like a galactic civilisation would need to be designed right from the beginning to make sure that no future colony ever does this.

40% agree

Do you expect to be more of a mentor or a mentee?

 

I'm very active in space governance and I'm excited to chat about how that crosses over with many other EA cause areas. 

Link to my swapcard

Elon Musk recently presented SpaceX's roadmap for establishing a self-sustaining civilisation on Mars (by 2033 lol). Aside from the timeline, I think there may be some important questions to consider with regard to space colonisation and s-risks:

  1. In a galactic civilisation of thousands of independent and technologically advanced colonies, what is the probability that one of those colonies will create trillions of suffering digital sentient beings? (probably near 100% if digital sentience is possible… it only takes one)
  2. Is it possible to create a governance structure that would prevent anyone in a whole galactic civilisation from creating digital sentience capable of suffering? (sounds really hard, especially given the huge distances and time delays in messaging, roughly sketched below… no idea)
  3. What is the point of no return, where a domino is knocked over that inevitably leads to self-perpetuating human expansion and the creation of a galactic civilisation? (somewhere around a self-sustaining civilisation on Mars, I think)
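On the distances and time delays in question 2, a rough sketch of the numbers (assuming light-speed messaging and a Milky Way roughly 100,000 light-years across, order-of-magnitude figures only):

```python
# Rough signalling delays across a galactic civilisation, assuming messages
# travel at light speed and the Milky Way is ~100,000 light-years across
# (an order-of-magnitude assumption, not a precise figure).
GALAXY_DIAMETER_LY = 100_000

for separation_ly in [4.2, 1_000, 25_000, GALAXY_DIAMETER_LY]:
    one_way_years = separation_ly        # 1 ly of separation = 1 year at light speed
    round_trip_years = 2 * one_way_years
    print(f"separation {separation_ly:>9,.1f} ly -> one-way {one_way_years:>9,.1f} yr, "
          f"round trip {round_trip_years:>9,.1f} yr")
```

A "detect a violation, send an instruction, receive confirmation" loop between colonies on opposite sides of the galaxy would take on the order of 200,000 years, so any governance structure would have to work without anything like real-time oversight.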

If the answer to question 3 is "Mars colony", then it's possible that creating a colony on Mars is a huge s-risk if we don't first answer question 2. 

Would appreciate some thoughts. 

 

Stuart Armstrong and Anders Sandberg’s article on expanding rapidly throughout the galaxy, and Charlie Stross’ blog post about griefers, influenced this quick take.

Hey! I'm requesting some help with "Actions for Impact", a Notion page of activities people can get involved in that take less than 30 minutes and contribute to EA cause areas. These include signing petitions, emailing MPs, voting for effective charities in competitions, responding to 'calls for evidence', or sharing something online. EA UK has the Notion page linked on their website: https://www.effectivealtruism.uk/get-involved

It should serve as a hub to leverage the size of the EA community when it's needed. 

I'm excited about the idea and I thought I'd have enough time to keep it updated and share it with organisations and people, but I really don't. If the idea sounds exciting and you have an hour or two spare per week, please DM me - I'd really appreciate a couple of extra hands to get the ball rolling a bit more (especially if you have involvement in EA community building, as I don't at all).

I didn't write this post with the intention of criticising the importance of space governance, so I wouldn't go as far as you. I think reframing space governance in the context of how it supports other cause areas reveals how important it really is. But space governance also has its own problems to deal with, so it's not just a tool or a background consideration. Some (pressing) stuff that could be very bad in the 2030s (or earlier) without effective space governance:

  • China/Russia and the USA disagree over how to claim locations for a lunar base, and both want to build one at the lunar south pole. High potential for conflict in space (which would also increase tensions on Earth), and a really bad precedent for the long-term future.
  • I think space mining companies have a high chance of accidentally changing the orbits of multiple asteroids, which risks short warning times from asteroids with suddenly altered orbits (or the creation of lots of fragments that could damage satellites). No policy exists to protect against this risk.
  • Earth's orbit is getting very full of debris and satellites. Another few anti-satellite weapon tests, or a disaster involving a meteoroid shower, may trigger Kessler syndrome. Will Elon Musk de-orbit all of his thousands of Starlink satellites?
  • The footprints of the first humans ever to set foot on another celestial body still exist on the Moon. They will be destroyed by dust plumes from lunar mining in the 2030s - this would be a huge blow to the long-term future (I think they could even be the greatest cultural heritage of all time to a spacefaring civilisation, and we're gonna lose them). All it takes is one small box around some of the footprints to protect 90% of the value.
  • Earth's orbit is filled with debris. The useful volume of lunar orbits is smaller, and we can't just get rid of satellites by burning them up in an atmosphere, because the Moon doesn't have one. No policy exists yet to set a good precedent, so the Moon's orbital environment will probably end up even worse than Earth's - operators are already dodging each other's satellites around the Moon, and ESA and NASA want to build whole networks for lunar internet.