This is my first post, so all criticism is welcome. I will assume that the reader already agrees with utilitarianism. I will argue that if the risks from existential threats cannot be meaningfully reduced (and I do not believe they can), then space accelerationism, or "Spacism" as I will call it, is a viable candidate for the most effective way to be altruistic.

I will begin with a couple of assumptions.

  1. Civilization will not end
  2. No science that breaks current physics is discovered (faster-than-light travel, infinite energy, or the like)

With these two conditions met, I would say it is safe to assume that humanity (or whatever takes our place) will colonize the galaxy and the Local Group. Simply put, given enough time, Earth has practically no reason not to launch a single von Neumann probe that would eventually settle the reachable universe for us.

I will now do some math to estimate, to an order of magnitude, how many souls could exist on a cosmic scale. I will use very conservative estimates wherever applicable.

There are around 100 billion stars in the Milky Way alone, and over a trillion more will come within our reach after Andromeda collides with us and the Local Group merges. I will round this down to a trillion.

We do not currently know how many planets the average star has; it is likely more than one, but I will use an estimate of one planet each. We also do not know the average size of a planet, so I will use our smallest planet, Mercury, for the calculations. Most planets are not habitable, but that is largely irrelevant: a much more efficient use of the resources would be to convert them into habitats rather than living on the surface.

I am not aware of any serious work calculating how many habitats a Mercury-sized planet could produce, so I used some volumetric calculations and obscenely inefficient resource usage. If we turned Mercury into O'Neill cylinders 20 km across with a 10 km diameter interior (5 km thick walls), we would have around 15 times the surface area of Earth to use.
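To make this checkable, here is a rough back-of-the-envelope sketch of the volumetric estimate. The constants for Mercury's volume and Earth's surface area are standard reference values I have supplied, not figures from the post, and the cylinder dimensions are the ones described above.

```python
import math

# Assumed reference values (not from the post).
MERCURY_VOLUME_KM3 = 6.1e10   # Mercury's volume, ~6.08e10 km^3
EARTH_SURFACE_KM2 = 5.1e8     # Earth's surface area, ~5.10e8 km^2

# O'Neill cylinder as described above: 20 km outer diameter, 10 km interior
# diameter, so the walls are 5 km thick. Work per 1 km of cylinder length.
outer_radius = 10.0  # km
inner_radius = 5.0   # km

# Material used per km of length: the annular wall cross-section times 1 km.
material_per_km = math.pi * (outer_radius**2 - inner_radius**2)  # ~235.6 km^3

# Living area per km of length: the interior circumference times 1 km.
living_area_per_km = 2 * math.pi * inner_radius  # ~31.4 km^2

# Material cost per km^2 of living space (~7.5 km^3, as noted later in the post).
material_per_km2 = material_per_km / living_area_per_km

# Total living area from one Mercury, expressed in Earth-surfaces.
total_living_area = MERCURY_VOLUME_KM3 / material_per_km2
earth_equivalents = total_living_area / EARTH_SURFACE_KM2

print(f"material per km^2 of living space: {material_per_km2:.1f} km^3")
print(f"Earth-surface equivalents per Mercury: {earth_equivalents:.1f}")  # ~15.9
```

Running this gives roughly 7.5 km³ of material per km² of living space and about 16 Earths' worth of surface area per Mercury-sized planet, consistent with the 15x figure above.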

Our final assumption is that Earth as it is now is our peak layout, so 15 Earths' worth of surface area can hold 15 times Earth's current population.

Now let's multiply:

  - People per Earth's worth of surface area: 7,500,000,000
  - Earths' worth of surface area per star: 15
  - Stars: 1,000,000,000,000

Multiply this out and you get 112,500,000,000,000,000,000,000 people who could live off our resources. If we can speed up the settling of every star by one second, we gain as many extra person-seconds of life as there are people. Converted to years, speeding up the start of galactic colonization by one second rewards us with roughly 3,500,000,000,000,000 years of human life.
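For transparency, here is the same multiplication as a small sketch; the only value I have added is the conversion factor of roughly 3.156 × 10^7 seconds per year.

```python
# Multiplying out the estimate above.
people_per_earth_area = 7.5e9     # current Earth population, used as the density assumption
earth_areas_per_star = 15         # from the Mercury-to-habitats estimate
stars = 1e12                      # Milky Way plus the rest of the Local Group, rounded down

total_people = people_per_earth_area * earth_areas_per_star * stars
print(f"{total_people:.4e} people")  # 1.1250e+23

# One extra second of settled time for every one of those people,
# expressed as person-years of life (~3.156e7 seconds per year).
seconds_per_year = 3.156e7
extra_person_years = total_people / seconds_per_year
print(f"{extra_person_years:.2e} person-years per second saved")  # ~3.56e+15
```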

As I have mentioned, I have used very conservative estimates wherever I could, so the actual number is likely many orders of magnitude higher. Most planets we know of are much larger than Mercury. We will likely be able to create a habitat design that requires less than 7.5 km³ of material to build 1 km² of living space. We will likely be able to lay out living space more efficiently than Earth, which is mostly unusable water, desert, and mountains. We may be able to modify humans to require fewer resources and have a higher moral weight. It may even be possible to harvest matter from stars, and future humans will use the extra time to prepare for the heat death of the universe (further increasing their time's utility). However, adding a few more zeros is not likely to convince anyone who is not already convinced; the number may as well be infinity.

Comments

Welcome to the forum!

Have you read Bostrom's Astronomical Waste? He does a very similar estimate there. https://www.nickbostrom.com/astronomical/waste.html

I'd be keen to hear more about why you think it's not possible to meaningfully reduce existential risk.

I have not seen that, but I will check it out.

As for existential threats, my view comes down to a few reasons; I will make a more detailed post about it later. First, I believe very few things are existential threats to humanity itself. Humans are extremely resilient and live in every nook and cranny on Earth. Even total nuclear war would have plenty of survivors. As far as I can see, only an asteroid or aliens could wipe us out unexpectedly. AI could wipe out humanity, but I believe it would be a voluntary extinction in that case: future humans may believe AI has qualia and is much more efficient at creating utility than biological life. I cannot imagine future humans being so stupid as to have AI connected to the internet and a robot army able to be hijacked by said AI at the same time.

I do believe there is an existential threat to civilization, but it is not present yet, and we will be capable of self-sustaining colonies off Earth by the time it arises (meaning that space acceleration would itself be a form of existential threat reduction). Large portions of Africa, and smaller portions of the Americas and Asia, are not at a civilizational level that would make a collapse possible, but they will likely cross that threshold this century. If there is a global civilizational collapse, I do not think civilization would ever return. However, there are far too many unknowns as to how to avoid such a collapse meaningfully. Want to prevent a civilization-ending nuclear war? You could try to bolster the power of the weaker side to force a cold war. Or maybe you want to make the sides more lopsided so intimidation will be enough. But since we do not know which strategy is more effective, and they call for opposite actions, there is no way to know whether you would be increasing existential threats or not.

Lastly, most existential threat reduction is political by nature. Politics is extremely unpredictable, and extremely hard to influence even if you know what you are doing. Politics has incredibly strong driving forces behind it (nationalism, desperation and fear, corruption, etc.), and these driving forces can easily drown out philosophy and the idea of long-term altruism. People want to win before they do good, and largely believe they must win to do the most good.

TLDR: I believe most "existential threats" are either not existential or not valid threats; those that do exist have no knowable way to be minimized; and the political nature of most forms of existential threat reduction makes them nearly impossible to influence in the name of long-term altruism.

Just because something is difficult doesn't mean it isn't worth trying to do, or at least trying to learn more about so you have some sense of what to do. Calling something "unknowable" -- when the penalty for not knowing is "civilization might end with unknown probability" -- is a claim that should be challenged vociferously, because if it turns out to be wrong in any aspect, that's very important for us to know.

"I cannot imagine future humans being so stupid as to have AI connected to the internet and a robot army able to be hijacked by said AI at the same time."

I'd recommend reading more about how people worried about AI conceive of the risk; I've heard zero people in all of EA say that this scenario is what worries them. There are many places you could start: Stuart Russell's "Human Compatible" is a good book, but there's also the free Wait But Why series on superintelligence (plus Luke Muehlhauser's blog post correcting some errors in that series).

There are many good reasons to think that AI risk may be fairly low (this is an ongoing debate in EA), but before you say one side is wrong, you have to understand what they really believe.

You may like to see this post. I agree in theory, but I don't think that current space programs are very good at accelerating long-run colonization.

https://forum.effectivealtruism.org/posts/xxcroGWRieSQjCw2N/an-informal-review-of-space-exploration 

Some more prior art on Earth vs. off-world "lifeboats". See also section 4.2 here for a model of mining Mercury (for solar panels, not habitats).
