Hello! I'm Jordan and I'm working on space governance and longtermism. I'm interested in finding interventions in the space domain that could lead to a flourishing long-term future.
If you want to know more about longtermist space governance, I'm a good person to talk to :)
A thought I'm super sceptical of, probably very intractable, and that I haven't done any research on: There seem to be a lot of reasons to think we might be living in a simulation besides just Nick Bostrom's simulation argument, like:
If I were pushed into a corner, I might say the probability we are living in a simulation is something like 60%, where most of the evidence seems to point towards us being in a simulation. However, the doubt comes from the high probability that I'm just thinking about this all wrong - like, of course I can come up with a motivation for a simulation to explain any feature of the universe... it would be hard to find something that doesn't line up with an explanation that the simulators are just interested in that particular thing. But in any case, that's still a really high probability of everyone I love potentially not being sentient or even real (fingers crossed we're all in the simulation together). Also, being in a simulation would change our fundamental assumptions about the universe and life, and it would be really weird if that had no impact on moral decision-making.
But everyone I talk to seems to have a relaxed approach to it, like it's impossible to make any progress on this and that it couldn't possibly be decision-relevant. But really, how many people have worked on figuring it out with a longtermist or EA-mindset? Some reasons it might be decision-relevant:
Some questions I'd ask are:
Overall, this does sound nuts to me and it probably shouldn't go further than this quick take, but I do feel like there could be something here, and it's probably worth a bit more attention than I think it has gotten (like at least one person doing a proper research project on it). Lots of other stuff sounded crazy but now has significant work behind it and (arguably) great progress, like trying to help people billions of years in the future, working on problems associated with digital sentience, and addressing wild animal welfare. There could be something here and I'd be interested in hearing thoughts (especially a good counterargument to working on this so I don't have to think about it anymore) or learning about past efforts.
Yeah, agreed on that point. Folks at Forethought aren't necessarily thinking about what a near-optimal future should look like; they're thinking about how to get civilisation to a point where we can make the best possible decisions about what to do with the long-term future.
Yeah, lists exist for all the people working on space governance from a longtermist perspective, and they tend to list about 10-15 people. I'm like 90% sure I know of everyone working on longtermist space governance, and I'd estimate that there are the equivalent of ~3 people working full time on this. There's not as much undercover work required for space governance, but I don't like to share lists of names publicly without permission.
At the moment, the main hub for space governance is Forethought, and most people contact Fin Moorhouse to learn more as he's the author of the 80K problem profile on space governance and has been publishing related work with Forethought. From there, people tend to get the lay of the land, introductions are made, and newcomers get a good idea of what people are working on and where they might be able to contribute.
Thanks for this comment, very useful feedback.
In A&S's paper, they assume a 5-year construction time for solar captors, which is essentially the doubling time. That is actually extremely conservative, especially if we're considering post-AGI robotics. I imagine the construction time from asteroid material to solar captors might be on the order of days to weeks, but I definitely want to look into that. Great point. There might be other fundamental constraints, though. The rate-limiting factor is more likely to be a rare material required for something like onboard computers, or argon for ion thrusters, something like that.
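To make the sensitivity to that doubling time concrete, here's a minimal back-of-envelope sketch. To be clear, this is not A&S's actual model (theirs includes reinvestment feedback rather than a fixed doubling time), and the seed and target masses below are arbitrary placeholders I've picked purely for illustration:

```python
import math

def construction_time_years(doubling_time_yrs, seed_mass_kg, target_mass_kg):
    """Years for a self-replicating captor fleet to grow exponentially
    from seed_mass_kg to target_mass_kg at a fixed doubling time."""
    doublings = math.log2(target_mass_kg / seed_mass_kg)
    return doublings * doubling_time_yrs

# Placeholder numbers, not taken from A&S: a 100-tonne seed fleet
# growing to 1e20 kg of captors (a small fraction of Mercury's mass).
seed, target = 1e5, 1e20
for dt, label in [(5.0, "5-year doubling (A&S assumption)"),
                  (14 / 365, "2-week doubling (speculative post-AGI robotics)")]:
    print(f"{label}: ~{construction_time_years(dt, seed, target):.1f} years "
          f"over {math.log2(target / seed):.0f} doublings")
```

The point is just that total build time scales linearly with the doubling time, so collapsing it from years to weeks compresses the whole timeline by the same factor; the rare-material question is about what would stop that scaling.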
I think the comparison to economic might is interesting. But a weaker actor on Earth, trailing far behind the leading economic power, could initiate Dyson swarm construction and overtake the leader with either a huge investment or a long enough lead time. It's not so much that they would wait around, but that they would change their strategy from building economic might on Earth to power generation in space[1]. In the end, if it's the long-term future that has the most moral value, then the most important strategic outcome is who has control of that future. All the economic might of Earth cannot compete with a Dyson swarm[2], so the Dyson swarm's owner controls the future. Economic might on Earth should be seen as instrumental: it can provide an advantage in stage 2 competition, or deny others access to stage 2 competition by winning stage 1 on Earth.
I think one nation could get away with this if:
The Sun is 99.8% of the mass of the Solar System. All the energy is there.
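For a rough sense of scale, here's a minimal comparison using standard rounded figures for solar luminosity, the sunlight intercepted by Earth, and current world primary energy use (the exact values don't matter for the conclusion):

```python
# Standard rounded orders of magnitude:
SUN_LUMINOSITY_W = 3.8e26     # total power output of the Sun
EARTH_INTERCEPT_W = 1.7e17    # sunlight actually reaching Earth
WORLD_ENERGY_USE_W = 2e13     # ~20 TW of current human primary energy use

print(f"Full swarm vs. current civilisation: ~{SUN_LUMINOSITY_W / WORLD_ENERGY_USE_W:.0e}x")
print(f"Full swarm vs. all sunlight hitting Earth: ~{SUN_LUMINOSITY_W / EARTH_INTERCEPT_W:.0e}x")
```

Even capturing every photon that reaches Earth leaves you roughly nine orders of magnitude short of a complete swarm, which is why I treat Earth-bound economic might as instrumental rather than decisive.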
Aside from sheer power, I think a huge intelligence advantage on Earth could win out against a Dyson swarm. Like if I was controlling a Dyson swarm with AGI but there was an ASI on Earth, I would be scared. But I wouldn't be worried about Earth's economic might.
Disclaimer: I have also applied to Forethought and won't comment on the post directly due to competing interests.
On space governance, you assume 2 scenarios:
I agree that early space governance work is plausibly not that important in those scenarios, but in what percentage of futures do you see us reaching one of these extremes? Capabilities allowing for rapid technological progress can be achieved under various scenarios related to alignment and control that are not at the extremes:
And capabilities allowing for rapid technological progress can be developed independently of capabilities allowing for the great wisdom to solve and reshape all our space governance problems. This independence of capabilities could happen under any of those scenarios:
So, under many scenarios, I don't expect AGI to just solve everything and reshape all our work on space governance. But even if it does reshape the governance, some space-related lock-ins remain binding even with AGI:
In all these scenarios, early space industrialisation and early high-ground positions create durable asymmetries that AGI cannot trivially smooth over. AGI cannot coordinate global actors instantly. Some of these lock-ins occur before or during the emergence of transformative AI. Therefore, early space-governance work affects the post-AGI strategic landscape and cannot simply be postponed without loss.
The disagreement could then be over whether we reach AGI in a world where space industrialisation has already begun creating irreversible power asymmetries. If a large-scale asteroid-mining industry or significant industry on the Moon emerges before AGI, then a small group controlling this infrastructure could have a huge first-mover advantage: they could use that infrastructure, combined with rapid technological progress, to lock in their power forever or take control of the long-term future through the creation of a primitive Dyson swarm or advanced space-denial capabilities. So, if AI timelines are not as fast as many in this community think they are, and an intelligence explosion happens closer to 2060 than 2030, then space governance work right now is even more important.
Space governance is also totally not arbitrary and is an essential element of AGI preparedness. AGI will operate spacecraft, build infrastructure, and manage space-based sensors. Many catastrophic failure modes (post-AGI power grabs, orbital laser arrays, autonomous swarms, asteroid deflection misuse) require both AGI and space activity. If it turns out that conceptual breakthroughs don't come about and we need ridiculous amounts of energy/compute to train superintelligence, then space expansion is also a potential pathway to achieving superintelligence. Google is already working on Project Suncatcher to scale machine learning in space, and Elon Musk, whose SpaceX has launched 9,000 Starlink satellites into Earth orbit, has also discussed the value of solar-powered satellites for machine learning. All of this ongoing activity is linked to the development of AGI and locks in physical power imbalances post-AGI.
As I argued in my post yesterday, even without the close links between space governance and AGI, it isn't an arbitrary choice of problem. I think that if a global hegemony doesn't emerge soon after the development of ASI, then it will likely emerge in outer space through the use of AI or self-replication to create large-scale space infrastructure (allowing massive energy generation and access to interstellar space). So, under many scenarios related to the development of AI, competition and conflict will continue into outer space, where the winner could set the long-term trajectory of human civilisation or the ongoing conflict could squander the resources of the galaxy. This makes space governance more important than drought-resistant crops.
All you have to admit for space governance to be exceptionally important is that some of these scenarios where AGI initiates rapid technological progress but doesn't reshape all governance are fairly likely.
This is quite a large range, but yeah I get that this comes out of nowhere. The range I cite is based on a few things:
So yeah, good point, that Dyson swarm construction time is not well justified within the post, and the timeline I cite should be taken as just the subjective opinion of someone who's spent a decent amount of time researching it.
From my own experience and from what I've seen, I think it's common for new contributors to the forum to underestimate the amount of previous work that the discourse here builds on. And downvotes aren't meant to express disagreement with a post; they're supposed to be something like a quality assessment. So my guess, from a read of your downvoted post, is that the downvotes reflect the fact that the argument you're making has been made before on the forum and within the wider EA community, and you haven't engaged with that prior work.
Maybe search for stuff like "AI-enabled coups", "power grabs", and "gradual disempowerment".
Effective altruists should spend more time and money on global systemic change
I'm mostly worried about low tractability and, if it is tractable, a lack of ability to predict the final outcomes of advocating for a world government. Maybe the safe option is to pursue traditional methods of promoting international collaboration: treaties, information sharing, panel discussions, etc.
Thanks for this comment! I broadly agree with it all and it was very interesting to read. Thanks in particular for advancing my initial takes on governance (I'm far more comfortable discussing quantum physics than governance systems).
a) Preventing catastrophe seems much more important for advanced civilizations than I realized, and it's not enough for the universe to be defense-dominated.
b) Robustly good governance seems attainable? It may be possible to functionally 'lock-out' catastrophic-risk and tyranny-risk on approach to tech maturity and it seems conceivable (albeit challenging) to softly lock-in definitions of 'catastrophe' and 'tyranny' which can then be amended in future as cultures evolve and circumstances change.
Agreed on both. Locking stuff out seems possible, and then as knowledge advances (in terms of moral philosophy or fundamental physics) and new possibilities come to light and priorities change, the "governance system" could be updated from a centralised position, like a software update expanding at the speed of light. Then the main tradeoff is between ensuring no possibility of a galactic x-risk or s-risk you know about could ever happen, and being adaptable to changing knowledge and emerging risks.
At the scale of advanced civilizations, collapse/catastrophe for even a single star system seems unbearable.
I strongly agree with this. We (as in, humanity) are at a point where we can control what the long-term future in space will look like. We should not tolerate a mostly great future with some star systems falling into collapse or suffering; we are responsible for preventing that, and allowing it to happen at all is inconceivably terrible even if the EV calculation is positive. We're better than naive utilitarianism.
If we buy your argument here, Jordan, or my takeaways from Joe's talk, then we're like: ah man, we may need really strong space governance. Like excellent, robust space governance. But no, no! This is a tyranny risk.
There are ways to address the risks I outlined without a centralised government that might be prone to tyranny (echoing your "hand waving" section later):
Maybe if we find the ultimate moral good and are coordinated enough to spread that, then the universe will be homogeneous, so there is no need for governance to address unpredicted behaviour.
In particular it seems possible to forcibly couple the power to govern with goodness.
I think this is a crucial point. I'm hopeful of that. If it's possible to lock in that strong correlation, then does that ensure absolute existential security and no s-risks? I think it depends on the goodness. If the "goodness" is based on panbiotic ethics, then we have a universe full of suffering Darwinian biology. If the "goodness" is utilitarian, then the universe becomes full of happiness machines... maybe that's bad. Don't know. It seems that the goodness in your USA example is defined by Christian values, which maybe don't give us the best possible long-term future. Maybe I'm reading too far into your simple model (I find it conceptually very helpful though).
There's also the sort of hand-off or die roll wherein you cede/lose power to something and can't get it back unless so willed by the entity in question. I prefer my sketch of marching to decouple governmental power from competitiveness.
Yeah, I agree. But I think it's dependent on the way that society evolves. If we're able to have a long reflection (which I think is unlikely), then maybe we can build a good God more confidently. But your model sounds more realistic.
Thanks Tom, yeah the threat model for stage two is quite similar to the one in your post, where I'm expecting one actor to potentially outgrow the rest of the world by grabbing space resources. However, I do think there might be dynamics in space that feed into a first-mover advantage, like Fin's recent post about shutting off space access to other actors, or some way to get to resources first and defend them (not sure about this yet), or just initiating an industrial explosion in space before anyone else (which maybe pays off in the long term because Earth eventually reaches a limit or slows down in growth compared to Dyson swarm construction).
As for the threat model of stage 1, I don't have strong opinions on whether a decisive strategic advantage on Earth is more likely to be achieved with superexponential growth or conflict, though your post is very compelling in favour of the former.
I'm thinking about this sort of thing at the moment in terms of roughly what percentage of worlds have a decisive strategic advantage achieved on Earth vs in space, which informs how important space governance work is. I find the 3 stages of competition model useful for figuring that out. It's not clear to me that Earth obviously dominates, and I am open to stage 2 actually not mattering very much, but I want to map out strategies here.
I do already think that stage 3 doesn't matter very much, but I include it as a stage because I may be in the minority in believing this; e.g. Will and Fin imply that races to other star systems are important in "Preparing for an Intelligence Explosion", which I think is an opinion based on works by Anders Sandberg and Toby Ord.