Searching for life on Mars @ Imperial College London
Lead of the Space Generation Advisory Council, Cosmic Futures project.
Interested in: Space Governance, Great Power Conflict, Existential Risk, Cosmic Threats, Academia, International Policy
Chilling the f*** out is the path to utopia
If you'd like to chat about space governance or existential risk, please book a meeting!
Wanna know about space governance? Then book a meeting!! - I'll get an email and you'll make me smile because I love talking about space governance :D
Thanks, Jacob.
I really like this idea to get around the problem of liberty. Though, I'm not sure how rapid the response would have to be from others to someone initiating vacuum decay - could a 'bad actor' initiate vacuum decay in the time it takes for the system to send an alert and for a response to arrive? I think having a non-intrusive surveillance system would work in a world where near-instant communication between star systems is possible (e.g. wormholes or quantum coupling).
I'm borrowing the term "N-D lasers" from Charlie Stross in this post: https://www.antipope.org/charlie/blog-static/2015/04/on-the-great-filter-existentia.html
N-dimensional just refers to an arbitrarily powerful laser, potentially beyond our current understanding of physics. These lasers might be powerful enough to travel vast distances through interstellar space and destroy a star system. They would travel at the speed of light, so it'd be impossible to see them coming. Kurzgesagt made a great video on this:
Hi Birk. Thank you for your very in-depth response; I found it very interesting. That's pretty much how I imagined the governance system when I wrote the post. I actually described it like that originally, but I hated the implications for liberalism, so I took a step back and listed requirements instead (which didn't actually help).
The "points of no return" do seem quite contingent, and I'm always sceptical about the tractability of trying to prevent something from happening - usually my approach is: it's probably gonna happen, how do we prepare? But besides that, I'm going to look into more specific "points of no return" as there could be a needle hiding in the noodles somewhere. I feel like this is the kind of area where we could be missing something, e.g. the point of no return is really close, or there could be a tractable way to influence the implementation of that point of no return.
Definitely on base :D
I think a galactic civilisation needs to have absolute existential security, or a galactic x-risk will inevitably occur (i.e., it needs a coin that always lands on heads). If your galactic civilisation has survived for longer than you'd expect based on the cumulative chances, then you can be very confident you've achieved absolute existential security (you have that coin). But a galactic civ would have to know whether it has the coin that always lands heads, or a coin that lands heads 99.9999999% of the time. I'm not sure how that's possible.
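To make the coin analogy concrete (a rough Bayesian sketch, using the 99.9999999% figure above and treating each period of survival as one flip): after surviving $N$ flips, the evidence favouring the always-heads coin over the almost-always-heads coin is just the likelihood ratio
$$\frac{P(N \text{ heads} \mid p = 1)}{P(N \text{ heads} \mid p = 1 - 10^{-9})} = \frac{1}{(1 - 10^{-9})^{N}} \approx e^{N \cdot 10^{-9}},$$
which only becomes a meaningful update once $N$ is on the order of $10^9$. In other words, a civilisation can't tell the two coins apart from its survival record until it has already made it through an astronomical number of chances to fail.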
A lot of the reason for my disagreement stems from thinking that most galactic-scale disasters either don't actually serve as x-risks (like the von Neumann probe scenario), because they can be defended against, or they require some shaky premises about physics to come true.
I think each galactic x-risk on the list can probably be disregarded individually, but taken together, and given that we are extremely early in thinking about this, they present a very convincing case to me that at least one or two galactic x-risks are possible.
The biggest uncertainty here is how much acausal trade lets us substitute for the vast distances that make traditional causal governance impossible.
Really interesting point, and probably a key consideration for the existential security of a spacefaring civilisation. I'm not sure we can be confident enough in acausal trade to rely on it for our long-term existential security, though. I can't imagine human civilisation engaging in acausal trade if we expanded before the development of superintelligence. There are definitely some tricky questions to answer about what we should expect other spacefaring civilisations to do. I think there's also a good argument for expecting them to systematically eliminate other spacefaring civilisations rather than engage in acausal trade.
Awesome speculations. We're faced with such huge uncertainty and such huge stakes. I can try to draw a conclusion based on scenarios and probabilities, but I think the simplest argument for not spreading throughout the universe is that we have no idea what we're doing.
This might even apply to spreading throughout the Solar System. If I'm recalling correctly, Daniel Deudney argued that a self-sustaining colony on Mars is the point of no return for space expansion, as it would culturally diverge from Earth and its actions would be beyond our control.
I'm not sure I agree that the Anthropic Principle applies here. It would if ALL alien civilizations were guaranteed to be hostile and expansionist.
I'd be interested to hear why you think this. Based on the reasoning in my post, I think all it takes is one alien civilisation to emerge that would initiate a galactic x-risk - maybe because they accidentally create astronomical suffering and want to end it, they are hostile, they would prefer different physics for some reason, or they are just irresponsible.
Thanks for the in-depth comment. I agree with most of it.
Agreed, I hope this is the case. I think there are some futures where we send lots of ships out into interstellar space for some reason, or act too hastily (maybe a scenario where transformative AI speeds up technological development, but not so much our wisdom). Just one mission (or set of missions) capable of self-propagating to other star systems almost inevitably leads to a galactic civilisation in the end, and we'd have to catch up to it to ensure existential security, which would become challenging if it created von Neumann probes.
Yeah, this is my personal estimate based on that survey and its responses. I was particularly convinced by one respondent who put 100% probability on vacuum decay being possible to induce (conditional on the vacuum being metastable), since anything that's permitted by the laws of physics should be possible to induce with arbitrarily advanced technology (so, 50% overall, based on the chance that the vacuum is metastable).
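Spelling out the arithmetic behind that figure (a rough sketch, reading the overall 50% as a roughly 50% credence that the vacuum is metastable multiplied by the respondent's conditional 100%):
$$P(\text{inducible}) = P(\text{metastable}) \times P(\text{inducible} \mid \text{metastable}) \approx 0.5 \times 1 = 0.5.$$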