JordanStone

Astrobiologist @ Imperial College London
300 karma · Joined · Pursuing a doctoral degree (e.g. PhD) · London, UK
www.imperial.ac.uk/people/j.stone22

Bio

Participation
3

Astrobiologist @ Imperial College London

Lead of the SGAC Cosmic Futures project. 

Partnerships Coordinator @ SGAC Space Safety and Sustainability Project Group

Interested in: Space Governance, Great Power Conflict, Existential Risk, Cosmic threats, Academia, International policy

How others can help me

If you'd like to chat about space governance or existential risk please book a meeting!

Sequences
1

Actions for Impact | Offering services examples

Comments
43

I didn't write this post with the intention of criticising the importance of space governance, so I wouldn't go as far as you. I think reframing space governance in the context of how it supports other cause areas reveals how important it really is. But space governance also has its own problems to deal with, so it's not just a tool or a background consideration. Some (pressing) stuff that could be very bad in the 2030s (or earlier) without effective space governance:

  • China/Russia and the USA disagree over how to claim locations for a lunar base, and both want to build one at the south pole. High potential for conflict in space (which would also increase tensions on Earth). Really bad precedent for the long term future.
  • I think space mining companies have a high chance of accidentally changing the orbits of multiple asteroids, increasing the risk of short warning times from asteroids with suddenly altered orbits (or creation of lots of fragments that could damage satellites). No policy exists to protect against this risk.
  • Earth's orbit is getting very full of debris and satellites. Another few anti-satellite weapons tests or a disaster involving a meteoroid shower may trigger Kessler syndrome. Will Elon Musk de-orbit all of his thousands of Starlink satellites?
  • The footprints of the first humans ever to set foot on another celestial body still exist on the moon. They will be destroyed by lunar plumes caused by mining in the 2030s - a huge blow to the long term future (I think they could even be the greatest cultural heritage site of all time to a spacefaring civilisation, and we're gonna lose it). All it takes is one small box around some of the footprints to protect 90% of the value.
  • Earth's orbit is filled with debris. The usable orbital space around the moon is smaller, and we can't just get rid of satellites by burning them up in an atmosphere. No policy exists yet to set a good precedent around that, so lunar orbit will probably end up even worse than Earth's - people are already dodging each other's satellites around the moon, and ESA & NASA want to build whole networks for moon internet.

My conclusions are different throughout the post, including in the title! I'm still not sure whether space governance is more like international policy or more like EA community building - maybe it's a mix of the two, where it's actually like international policy but we should treat it more like EA community building.

So either space governance is a "meta-cause area" or an "area of expertise", but not a "cause area" in the sense that the term is most often used (i.e. a cause to address). 

I disagree with the suggestion, but I upvoted as I think it is an important discussion to have on the forum. Especially with the Musk example, longtermism gets a lot of criticism for ideas that aren't actually part of it (even in the space policy literature). But I agree with @Davidmanheim's comment. Thanks for making the post!

Thanks for your reply, lots of interesting points :)

Consciousness may not be binary, in that case, we don't know if humans are low, medium, or high consciousness, I only know that I am not at zero. We should then likely assume we are average. Then, the relevant comparison is no longer between P(humanity is "conscious") and P(aliens creating SFCs are "conscious") but between P(humanity's consciousness > 0) and P(aliens-creating-SFC's consciousness > 0)

I particularly appreciate that reframing of consciousness. I think it's probably both binary and continuous though. Binary in the sense that you need "machinery" capable of producing consciousness (neurons in a brain seem to work). And if you have that capable machinery, you then get the range from low to high consciousness, like we see on Earth. If intelligence is related to consciousness level, as it seems to be on Earth, then I would expect that any alien with "capable machinery" that's intelligent enough to become spacefaring would have consciousness high enough to satisfy my worries (though not necessarily at the top of the range).

So then any alien civilisation would either be "conscious enough" or "not conscious at all", conditional on (a) the machinery of life being binary in its ability to produce a scale of consciousness and (b) consciousness being correlated with intelligence.

So I'm not betting on it. The stakes are so high (a universe devoid of sentience) that I would have to meet and test the consciousness of aliens with a 'perfect' theory of consciousness before I updated any strategy towards reducing P(ancestral-human SFC), even if there's an extremely high probability of the Civ-Similarity Hypothesis being true.

The validity of this hypothesis can be studied using models estimating the frequency of Space-Faring Civilizations (SFCs) in the universe (Sandberg 2018, Finnveden 2019, Olson 2020, Hanson 2021, Snyder-Beattie 2021, Cook 2022). The validity will also depend on which decision theory we use and on our beliefs behind these

I'm very wary of making moral decisions concerning the donations of potentially millions of dollars based on something so speculative. I think it's too far down the EA crazy train to prioritise different causes based on the density of alien civilisations. It's probably more speculative than the simulation hypothesis (which, if true, significantly increases the likelihood that you are the only sentient being in this universe), but we don't make moral decisions based on that.

I get that there's been a lot of work on this and that we can make progress on it (I know, I'm an astrobiologist), but I'm sure there are so many unknown unknowns associated with the origin of life, the development of sentience, and spacefaring civilisation that we just aren't there yet. The universe is so enormous and bonkers and our brains are so small - we can make numerical estimates, sure, but producing a number doesn't necessarily mean we have more certainty.

How much counterfactual value Humanity creates then depends entirely on the utility Humanity’s spacefaring civilisation creates relative to all spacefaring civilisations. 

I've got a big moral circle (all sentient beings and their descendants), but it does not extend to aliens because of cluelessness.

I think you're posing a post-understanding-of-consciousness question. Consciousness might be very special, or it might be an emergent property of anything that synthesises information - we just don't know. But it's possible to imagine aliens with complex behaviour similar to ours, but without the consciousness aspect ever having evolved, much as superintelligent AI probably will be. For now, the safe assumption is that we're the only conscious life, and I think it's very important that we act like it until proven otherwise.

So for now, I'm quite confident that if we're thinking about the moral utility of spacefaring civilisation, we should at least limit our scope to our own civilisation - more specifically, our own sentience and its descendants (I personally prefer to limit that scope even further, to the next few thousand years, or just to our Solar System, to reduce the ambiguity a bit - longtermism still stands strong even with this huge limitation).

I think the main value in looking into the potential density of aliens in the universe is that it helps us figure out what our own future might look like. Even if humans only colonise the Solar System because alien SFCs colonise the galaxy, that's still 10^27 potential future lives (1.2 sextillion over the next 6000 years; future life equivalents based on the Solar System's carrying capacity; as opposed to 100 trillion if we stay on Earth until its destruction). We can control and predict that to an extent, and there's already enough ambiguity and cluelessness associated with how to make human civilisation's future in space good in the context of AI - but we can at least make some concrete decisions (e.g. work by the Simon Institute & CLR).
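For what it's worth, here's the kind of back-of-envelope arithmetic sitting behind a figure like that - a minimal sketch only, where the steady-state population and 100-year average lifespan are illustrative assumptions of mine, not numbers from any published estimate:

```python
# Back-of-envelope: cumulative "future life equivalents" from a steady-state population.
# Illustrative assumptions (mine): the population sits at the carrying capacity the
# whole time, and the average lifespan is 100 years.

def cumulative_lives(carrying_capacity: float, years: float, lifespan: float = 100.0) -> float:
    """Total lives lived over `years` with a constant population at `carrying_capacity`."""
    return carrying_capacity * years / lifespan

# Working backwards, 1.2 sextillion lives over 6,000 years would imply a Solar System
# carrying capacity of roughly 2e19 people under these assumptions:
implied_capacity = 1.2e21 * 100 / 6000                # = 2.0e19
print(f"implied capacity: {implied_capacity:.1e}")
print(f"lives over 6,000 years: {cumulative_lives(implied_capacity, 6000):.1e}")  # ≈ 1.2e21
```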

 

Very interesting post though! Lots to think about and I can see that this could be the most important moral consideration... maybe... I look forward to your series and I definitely think it's worthwhile to try and figure out what that consideration might be. 

Other currently neglected agendas may increase P(Alignment | Humanity creates an SFC) while not increasing P(Alignment AND Humanity creates an SFC). Those include agendas aiming at decreasing P(Humanity creates an SFC | Misalignment). An example of intervention in such an agenda is overriding instrumental goals for space colonization and replacing them with an active desire not to colonize space. This defensive preference could be removed later, conditional on achieving corrigibility.

What's the difference between "P(Alignment | Humanity creates an SFC)" and "P(Alignment AND Humanity creates an SFC)"? 
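To spell out the decomposition I'm working from, in case my confusion is about something else (this is just standard probability, not anything from your post):

P(Alignment AND Humanity creates an SFC) = P(Alignment | Humanity creates an SFC) × P(Humanity creates an SFC)

so raising the conditional without raising the joint would seem to require lowering P(Humanity creates an SFC) - is that the intended reading?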

I don't get it either. Can you maybe run us through 2 worked examples for bullet point 2? Like what is someone currently doing (or planning to do) that you think should be deprioritised? And presumably, there might be something that you think should be prioritised instead? 

 

I'm imagining here that you want to deprioritise an AI safety regime if it focuses on making AIs that create technology that can be used for spacefaring civilisation but aren't aligned? That wouldn't be an AI safety regime, would it? That's just creating AI that wants to leave Earth.

I would be really interested in a post that outlined 1-3 different scenarios for post-AGI x-risk based on increasingly strict assumptions. So the first one would assume that misaligned superintelligent AI would almost instantly emerge from AGI, and describe the x-risks associated with that. Then the assumptions would become stricter and stricter, e.g. AGI would only be able to improve itself slowly, we would be able to align it to our goals, etc.

I think this could be a valuable post to link people to, as a lot of debates around whether AI poses an x-risk seem to hinge on accepting or rejecting potential scenarios, but they're usually unproductive because everyone has different assumptions about what AI will be capable of.

So with this post, to argue that AI x-risk is not tangible, for each AI development scenario (with increasingly strict assumptions) you would have to either:

  1. reject at least one of the listed assumptions (e.g. argue that computer chips are a limit on exponential intelligence increases)
  2. or argue that all proposed existential risks in that scenario are so implausible that even an AI wouldn't be able to make any of them work.

If you can't do either of those, you accept that AI is an x-risk. If you can, you move on to the next scenario with stricter assumptions. Eventually you reach the set of assumptions you agree with, and then you have to reject all proposed x-risks in that scenario to say that AI x-risk isn't real.
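To make the structure explicit, here's a minimal sketch of the procedure I'm imagining - all the names and data shapes are mine, just to illustrate the logic, not anything from the proposed post:

```python
from dataclasses import dataclass
from typing import Callable, List

# Minimal sketch of the scenario-walk described above. All names are illustrative.

@dataclass
class Scenario:
    assumptions: List[str]       # e.g. "AGI can improve itself quickly"
    proposed_xrisks: List[str]   # x-risk stories that hold under those assumptions

def must_accept_ai_xrisk(
    scenarios: List[Scenario],                   # ordered from loosest to strictest assumptions
    rejects_assumption: Callable[[str], bool],   # the reader's judgement calls
    rejects_xrisk: Callable[[str], bool],
) -> bool:
    """A reader escapes AI x-risk only by dismissing EVERY scenario, each time either
    rejecting one of its assumptions or rejecting all of its proposed x-risks."""
    for scenario in scenarios:
        dismissed = (
            any(rejects_assumption(a) for a in scenario.assumptions)
            or all(rejects_xrisk(r) for r in scenario.proposed_xrisks)
        )
        if not dismissed:
            return True   # this scenario survives the reader's objections
    return False          # every scenario was dismissed, so the reader rejects AI x-risk
```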

The post might also help with planning for different scenarios if it's more detailed than I'm anticipating. 
