
Audio version is here

Summary:

  • In a series of posts starting with this one, I'm going to argue that the 21st century could see our civilization develop technologies allowing rapid expansion throughout our currently-empty galaxy. And thus, that this century could determine the entire future of the galaxy for tens of billions of years, or more.
  • This view seems "wild": we should be doing a double take at any view that we live in such a special time. I illustrate this with a timeline of the galaxy. (On a personal level, this "wildness" is probably the single biggest reason I was skeptical for many years of the arguments presented in this series. Such claims about the significance of the times we live in seem "wild" enough to be suspicious.)
  • But I don't think it's really possible to hold a non-"wild" view on this topic. I discuss alternatives to my view: a "conservative" view that thinks the technologies I'm describing are possible, but will take much longer than I think, and a "skeptical" view that thinks galaxy-scale expansion will never happen. Each of these views seems "wild" in its own way.
  • Ultimately, as hinted at by the Fermi paradox, it seems that our species is simply in a wild situation.

Before I continue, I should say that I don't think humanity (or some digital descendant of humanity) expanding throughout the galaxy would necessarily be a good thing - especially if this prevents other life forms from ever emerging. I think it's quite hard to have a confident view on whether this would be good or bad. I'd like to keep the focus on the idea that our situation is "wild." I am not advocating excitement or glee at the prospect of expanding throughout the galaxy. I am advocating seriousness about the enormous potential stakes.

My view

This is the first in a series of pieces about the hypothesis that we live in the most important century for humanity.

In this series, I'm going to argue that there's a good chance of a productivity explosion by 2100, which could quickly lead to what one might call a "technologically mature"[1] civilization. That would mean that:

  • We'd be able to start sending spacecraft throughout the galaxy and beyond.
  • These spacecraft could mine materials, build robots and computers, and construct very robust, long-lasting settlements on other planets, harnessing solar power from stars and supporting huge numbers of people (and/or our "digital descendants").
    • See Eternity in Six Hours for a fascinating and short, though technical, discussion of what this might require.
    • I'll also argue in a future piece that there is a chance of "value lock-in" here: whoever is running the process of space expansion might be able to determine what sorts of people are in charge of the settlements and what sorts of societal values they have, in a way that is stable for many billions of years.[2] If that ends up happening, you might think of the story of our galaxy[3] like this. I've marked major milestones along the way from "no life" to "intelligent life that builds its own computers and travels through space."

Thanks to Ludwig Schubert for the visualization. Many dates are highly approximate and/or judgment-prone and/or just pulled from Wikipedia (sources here), but plausible changes wouldn't change the big picture. The ~1.4 billion years to complete space expansion is based on the distance to the outer edge of the Milky Way, divided by the speed of a fast existing human-made spaceship (details in spreadsheet just linked); IMO this is likely to be a massive overestimate of how long it takes to expand throughout the whole galaxy. See footnote for why I didn't use a logarithmic axis.[4]
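If you want to sanity-check that figure, here is a minimal back-of-the-envelope sketch; the distance and probe speed below are rough illustrative assumptions of mine, not necessarily the exact inputs in the spreadsheet linked above.

```python
# Back-of-the-envelope version of the ~1.4 billion year figure: distance to the
# galaxy's outer edge divided by the speed of an existing fast spacecraft.
# The two inputs below are rough illustrative assumptions, not necessarily the
# exact values used in the linked spreadsheet.

LIGHT_YEAR_KM = 9.461e12      # kilometres in one light-year
SECONDS_PER_YEAR = 3.156e7    # seconds in one year

distance_ly = 75_000          # assumed distance from the Sun to the galaxy's outer edge
probe_speed_km_s = 17         # roughly Voyager 1's speed relative to the Sun

travel_time_years = distance_ly * LIGHT_YEAR_KM / probe_speed_km_s / SECONDS_PER_YEAR
print(f"{travel_time_years / 1e9:.2f} billion years")   # ~1.3 billion years, same order as above
```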

That's crazy! According to me, there's a decent chance that we live at the very beginning of the tiny sliver of time during which the galaxy goes from nearly lifeless to largely populated. That out of a staggering number of persons who will ever exist, we're among the first. And that out of hundreds of billions of stars in our galaxy, ours will produce the beings that fill it.

I know what you're thinking: "The odds that we could live in such a significant time seem infinitesimal; the odds that Holden is having delusions of grandeur (on behalf of all of Earth, but still) seem far higher."[5]

But:

The "conservative" view

Let's say you agree with me about where humanity could eventually be headed - that we will eventually have the technology to create robust, stable settlements throughout our galaxy and beyond. But you think it will take far longer than I'm saying.

A key part of my view (which I'll write about more later) is that within this century, we could develop advanced enough AI to start a productivity explosion. Say you don't believe that.

  • You think I'm underrating the fundamental limits of AI systems to date.
  • You think we will need an enormous number of new scientific breakthroughs to build AIs that truly reason as effectively as humans.
  • And even once we do, expanding throughout the galaxy will be a longer road still.

You don't think any of this is happening this century - you think, instead, that it will take something like 500 years. That's 5-10x the time that has passed since we started building computers. It's more time than has passed since Isaac Newton made the first credible attempt at laws of physics. It's about as much time as has passed since the very start of the Scientific Revolution.

Actually, no, let's go even more conservative. You think our economic and scientific progress will stagnate. Today's civilizations will crumble, and many more civilizations will fall and rise. Sure, we'll eventually get the ability to expand throughout the galaxy. But it will take 100,000 years. That's 10x the amount of time that has passed since human civilization began in the Levant.

Here's your version of the timeline:

The difference between your timeline and mine isn't even a pixel, so it doesn't show up on the chart. In the scheme of things, this "conservative" view and my view are the same.

It's true that the "conservative" view doesn't have the same urgency for our generation in particular. But it still places us among a tiny proportion of people in an incredibly significant time period. And it still raises questions of whether the things we do to make the world better - even if they only have a tiny flow-through to the world 100,000 years from now - could be amplified to a galactic-historical-outlier degree.

The skeptical view

The "skeptical view" would essentially be that humanity (or some descendant of humanity, including a digital one) will never spread throughout the galaxy. There are many reasons it might not:

  • Maybe something about space travel - and/or setting up mining robots, solar panels, etc. on other planets - is effectively impossible such that even another 100,000 years of human civilization won't reach that point.[6]
  • Or perhaps for some reason, it will be technologically feasible, but it won't happen (because nobody wants to do it, because those who don't want to block those who do, etc.)
  • Maybe it's possible to expand throughout the galaxy, but not possible to maintain a presence on many planets for billions of years, for some reason.
  • Maybe humanity is destined to destroy itself before it reaches this stage.
    • But note that if the way we destroy ourselves is via misaligned AI,[7] it would be possible for AI to build its own technology and spread throughout the galaxy, which still seems in line with the spirit of the above sections. In fact, it highlights that how we handle AI this century could have ramifications for many billions of years. So humanity would have to go extinct in some way that leaves no other intelligent life (or intelligent machines) behind.
  • Maybe an extraterrestrial species will spread throughout the galaxy before we do (or around the same time).
    • However, note that this doesn't seem to have happened in ~13.77 billion years so far since the universe began, and according to the above sections, there's only about 1.5 billion years left for it to happen before we spread throughout the galaxy.
  • Maybe some extraterrestrial species already effectively has spread throughout our galaxy, and for some reason we just don't see them. Maybe they are hiding their presence deliberately, for one reason or another, while being ready to stop us from spreading too far.
    • This would imply that they are choosing not to mine energy from any of the stars we can see, at least not in a way that we could see it. That would, in turn, imply that they're abstaining from mining a very large amount of energy that they could use to do whatever it is they want to do,[8] including defend themselves against species like ours.
  • Maybe this is all a dream. Or a simulation.
  • Maybe something else I'm not thinking of.

That's a fair number of possibilities, though many seem quite "wild" in their own way. Collectively, I'd say they add up to more than 50% probability ... but I would feel very weird claiming they're collectively overwhelmingly likely.

Ultimately, it's very hard for me to see a case against thinking something like this is at least reasonably likely: "We will eventually create robust, stable settlements throughout our galaxy and beyond." It seems like saying "no way" to that statement would itself require "wild" confidence in something about the limits of technology, and/or long-run choices people will make, and/or the inevitability of human extinction, and/or something about aliens or simulations.

I imagine this claim will be intuitive to many readers, but not all. Defending it in depth is not on my agenda at the moment, but I'll rethink that if I get enough demand.

Why all possible views are wild: the Fermi paradox

I'm claiming that it would be "wild" to think we're basically assured of never spreading throughout the galaxy, but also that it's "wild" to think that we have a decent chance of spreading throughout the galaxy.

In other words, I'm calling every possible belief on this topic "wild." That's because I think we're in a wild situation.

Here are some alternative situations we could have found ourselves in, that I wouldn't consider so wild:

  • We could live in a mostly-populated galaxy, whether by our species or by a number of extraterrestrial species. We would be in some densely populated region of space, surrounded by populated planets. Perhaps we would read up on the history of our civilization. We would know (from history and from a lack of empty stars) that we weren't unusually early life-forms with unusual opportunities ahead.
  • We could live in a world where the kind of technologies I've been discussing didn't seem like they'd ever be possible. We wouldn't have any hope of doing space travel, or successfully studying our own brains or building our own computers. Perhaps we could somehow detect life on other planets, but if we did, we'd see them having an equal lack of that sort of technology.

But space expansion seems feasible, and our galaxy is empty. These two things seem in tension. A similar tension - the question of why we see no signs of extraterrestrials, despite the galaxy having so many possible stars they could emerge from - is often discussed under the heading of the Fermi Paradox.

Wikipedia has a list of possible resolutions of the Fermi paradox. Many correspond to the skeptical view possibilities I list above. Some seem less relevant to this piece. (For example, there are various reasons extraterrestrials might be present but not detected. But I think any world in which extraterrestrials don't prevent our species from galaxy-scale expansion ends up "wild," even if the extraterrestrials are there.)

My current sense is that the best analysis of the Fermi Paradox available today favors the explanation that intelligent life is extremely rare: something about the appearance of life in the first place, or the evolution of brains, is so unlikely that it hasn't happened in many (or any) other parts of the galaxy.[9]

That would imply that the hardest, most unlikely steps on the road to galaxy-scale expansion are the steps our species has already taken. And that, in turn, implies that we live in a strange time: extremely early in the history of an extremely unusual star.

If we started finding signs of intelligent life elsewhere in the galaxy, I'd consider that a big update away from my current "wild" view. It would imply that whatever has stopped other species from galaxy-wide expansion will also stop us.

This pale blue dot could be an awfully big deal

Describing Earth as a tiny dot in a photo from space, Ann Druyan and Carl Sagan wrote:

The Earth is a very small stage in a vast cosmic arena. Think of the rivers of blood spilled by all those generals and emperors so that, in glory and triumph, they could become the momentary masters of a fraction of a dot ... Our posturings, our imagined self-importance, the delusion that we have some privileged position in the Universe, are challenged by this point of pale light ... It has been said that astronomy is a humbling and character-building experience. There is perhaps no better demonstration of the folly of human conceits than this distant image of our tiny world.

This is a somewhat common sentiment - that when you pull back and think of our lives in the context of billions of years and billions of stars, you see how insignificant all the things we care about today really are.

But here I'm making the opposite point.

It looks for all the world as though our "tiny dot" has a real shot at being the origin of a galaxy-scale civilization. It seems absurd, even delusional to believe in this possibility. But given our observations, it seems equally strange to dismiss it.

And if that's right, the choices made in the next 100,000 years - or even this century - could determine whether that galaxy-scale civilization comes to exist, and what values it has, across billions of stars and billions of years to come.

So when I look up at the vast expanse of space, I don't think to myself, "Ah, in the end none of this matters." I think: "Well, some of what we do probably doesn't matter. But some of what we do might matter more than anything ever will again. ...It would be really good if we could keep our eye on the ball. ...[gulp]"

This work is licensed under a Creative Commons Attribution 4.0 International License.


  1. or Kardashev Type III. ↩︎

  2. If we are able to create mind uploads, or detailed computer simulations of people that are as conscious as we are, it could be possible to put them in virtual environments that automatically reset, or otherwise "correct" the environment, whenever the society would otherwise change in certain ways (for example, if a certain religion became dominant or lost dominance). This could give the designers of these "virtual environments" the ability to "lock in" particular religions, rulers, etc. I'll discuss this more in a future piece. ↩︎

  3. I've focused on the "galaxy" somewhat arbitrarily. Spreading throughout all of the accessible universe would take a lot longer than spreading throughout the galaxy, and until we do it's still imaginable that some species from outside our galaxy will disrupt the "stable galaxy-scale civilization," but I think accounting for this correctly would add a fair amount of complexity without changing the big picture. I may address that in some future piece, though. ↩︎

  4. A logarithmic version doesn't look any less weird, because the distances between the "middle" milestones are tiny compared to both the stretches of time before and after these milestones. More fundamentally, I'm talking about how remarkable it is to be in the most important [small number] of years out of [big number] of years - that's best displayed using a linear axis. It's often the case that weird-looking charts look more reasonable with logarithmic axes, but in this case I think the chart looks weird because the situation is weird. Probably the least weird-looking version of this chart would have the x-axis be something like the logged distance from the year 2100, but that would be a heck of a premise for a chart - it would basically bake in my argument that this appears to be a very special time period. ↩︎

  5. This is exactly the kind of thought that kept me skeptical for many years of the arguments I'll be laying out in the rest of this series about the potential impacts, and timing, of advanced technologies. Grappling directly with how "wild" our situation seems to ~undeniably be has been key for me. ↩︎

  6. Spreading throughout the galaxy would certainly be harder if nothing like mind uploading (which I'll discuss in a future piece, and which is part of why I think future space settlements could have "value lock-in" as discussed above) can ever be done. I would find a view that "mind uploading is impossible" to be "wild" in its own way, because it implies that human brains are so special that there is simply no way, ever, to digitally replicate what they're doing. (Thanks to David Roodman for this point.) ↩︎

  7. That is, advanced AI that pursues objectives of its own, which aren't compatible with human existence. I'll be writing more about this idea. Existing discussions of it include the books Superintelligence, Human Compatible, Life 3.0, and The Alignment Problem. The shortest, most accessible presentation I know of is The case for taking AI seriously as a threat to humanity (Vox article by Kelsey Piper). This report on existential risk from power-seeking AI, by Open Philanthropy's Joe Carlsmith, lays out in detail which premises one would have to believe in order to take this problem seriously. ↩︎

  8. Thanks to Carl Shulman for this point. ↩︎

  9. See https://arxiv.org/pdf/1806.02404.pdf ↩︎

Comments (47)



Crazyism about a topic is the view that something crazy must be among the core truths about that topic. Crazyism can be justified when we have good reason to believe that one among several crazy views must be true but where the balance of evidence supports none of the candidates strongly over the others

Eric Schwitzgebel, Crazyism

This is fantastic.

This doesn't take away from your main point, but it would be some definite amount less wild if we won't start exploring space for 100k years, right? Depending on how much less wild that would be, I could imagine it being enough to convince someone of a conservative view.

Some possible futures do feel relatively more "wild” to me, too, even if all of them are wild to a significant degree. If we suppose that wildness is actually pretty epistemically relevant (I’m not sure it is), then it could still matter a lot if some future is 10x wilder than another.

For example, take a prediction like this:

Humanity will build self-replicating robots and shoot them out into space at close to the speed of light; as they expand outward, they will construct giant spherical structures around all of the galaxy’s stars to extract tremendous volumes of energy; this energy will be used to power octillions of digital minds with unfathomable experiences; this process will start in the next thirty years, by which point we’ll already have transcended our bodies to reside on computers as brain emulation software.

A prediction like “none of the above happens; humanity hangs around and then dies out sometime in the next million years” definitely also feels wild in its own way. So does the prediction “all of the above happens, starting a few hundred years from now.” But both of these predictions still feel much less wild than the first one.

I suppose whether they actually are much less “wild” depends on one’s metric of wildness. I’m not sure how to think about that metric, though. If wildness is epistemically relevant, then presumably some forms of wildness are more epistemically relevant than others.

To say a bit more here, on the epistemic relevance of wildness:

I take it that one of the main purposes of this post is to push back against "fishiness arguments," like the argument that Will makes in "Are We Living at the Hinge of History?"

The basic idea, of course, is that it’s a priori very unlikely that any given person would find themselves living at the hinge of history (and correctly recognise this). Due to the fallibility of human reasoning and due to various possible sources of bias, however, it’s not as unlikely that a given person would mistakenly conclude that they live at the HoH. Therefore, if someone comes to believe that they probably live at the HoH, we should think there’s a sizeable chance they’ve simply made a mistake.

As this line of argument is expressed in the post:

I know what you're thinking: "The odds that we could live in such a significant time seem infinitesimal; the odds that Holden is having delusions of grandeur (on behalf of all of Earth, but still) seem far higher."

The three critical probabilities here are:

  • Pr(Someone makes an epistemic mistake when thinking about their place in history)
  • Pr(Someone believes they live at the HoH|They haven’t made an epistemic mistake)
  • Pr(Someone believes they live at the HoH|They’ve made an epistemic mistake)

The first describes the robustness of our reasoning. The second describes the prior probability that we would live at the HoH (and be able to recognise this fact if reasoning well). The third describes the level of bias in our reasoning, toward the HoH hypothesis, when we make mistakes.
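To make the structure explicit, here is one way (on one reading, not the only possible formalisation) these three probabilities combine via Bayes' rule into the quantity the fishiness argument cares about:

```latex
\Pr(\text{no mistake} \mid \text{believes HoH})
  = \frac{\Pr(\text{believes HoH} \mid \text{no mistake}) \,\bigl(1 - \Pr(\text{mistake})\bigr)}
         {\Pr(\text{believes HoH} \mid \text{no mistake}) \,\bigl(1 - \Pr(\text{mistake})\bigr)
          + \Pr(\text{believes HoH} \mid \text{mistake}) \,\Pr(\text{mistake})}
```

If the first factor in the numerator is tiny (a very low prior on correctly believing one lives at the HoH) while the bias term in the denominator is not, the posterior stays low even for someone who sincerely believes they live at the HoH.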

I agree that all possible futures are “wild,” in some sense, but I don’t think this point necessarily bears much on the magnitudes of any of these probabilities.

For example, it would be sort of “wild” if long-distance space travel turns out to be impossible and our solar system turns out to be the only solar system to ever harbour life. It would also be “wild” if long-distance space travel starts to happen 100,000 years from now. But — at least at a glance — I don’t see how this wildness should inform our estimates for the three key probabilities.

One possible argument here, focusing on the bias factor, is something like: “We shouldn’t expect intellectuals to be significantly biased toward the conclusion that they live at the HoH, because the HoH Hypothesis isn’t substantially more appealing, salient, etc., than other beliefs they could have about the future.”

But I don’t think this argument would be right. For example: I think the hypothesis “the HoH will happen within my lifetime” and the hypothesis “the HoH will happen between 100,000 and 200,000 years from now” are pretty psychologically different.

To sum up: At least on a first pass, I don't see why the point "all possible futures are wild" undermines the fishiness argument raised at the top of the post.

We were previously comparing two hypotheses:

  1. HoH-argument is mistaken
  2. Living at HoH

Now we're comparing three:

  1. "Wild times"-argument is mistaken
  2. Living at a wild time, but HoH-argument is mistaken
  3. Living at HoH

"Wild time" is almost as unlikely as HoH. Holden is trying to suggest it's comparably intuitively wild, and it has pretty similar anthropic / "base rate" force.

So if your arguments look solid, "All futures are wild" makes hypothesis 2 look kind of lame/improbable---it has to posit a flaw in an argument, and also that you are living at a wildly improbable time. Meanwhile, hypothesis 1 merely has to posit a flaw in an argument, and hypothesis 3 merely has to posit HoH (which is only somewhat more to swallow than a wild time).

So now if you are looking for errors, you probably want to focus on errors in the argument that we are living at a "wild time." Realistically, I think you probably need to reject the possibility that the stars are real and that it is possible for humanity to spread to them. In particular, it's not too helpful to e.g. be skeptical of some claim about AI timelines or about our ability to influence society's trajectory.

This is kind of philosophically muddled because (I think) most participants in this discussion already accept a simulation-like argument that "Most observers like us are mistaken about whether it will be possible for them to colonize the stars." If you set aside the simulation-style arguments, then I think the "all futures are wild" correction is more intuitively compelling.

(I think if you tell people "Yes, our good skeptical epistemology allows us to be pretty confident that the stars don't exist" they will have a very different reaction than if you tell them "Our good skeptical epistemology tells us that we aren't the most influential people ever.")

Am I right in thinking, Paul, that your argument here is very similar to Buck's in this post? https://forum.effectivealtruism.org/posts/j8afBEAa7Xb2R9AZN/thoughts-on-whether-we-re-living-at-the-most-influential.

Basically you're saying that if we already know things are pretty wild (In Buck's version: that we're early humans) it's a much less fishy step from there to very wild ('we're at HoH') than it would be if we didn't know things were pretty wild already.

Thanks for the clarification! I still feel a bit fuzzy on this line of thought, but hopefully understand a bit better now.

At least on my read, the post seems to discuss a couple different forms of wildness: let’s call them “temporal wildness” (we currently live at an unusually notable time) and “structural wildness” (the world is intuitively wild; the human trajectory is intuitively wild).[1]

I think I still don’t see the relevance of “structural wildness,” for evaluating fishiness arguments. As a silly example: Quantum mechanics is pretty intuitively wild, but the fact that we live in a world where QM is true doesn’t seem to substantially undermine fishiness arguments.

I think I do see, though, how claims about temporal wildness might be relevant. I wonder if this kind of argument feels approximately right to you (or to Holden):

Step 1: A priori, it’s unlikely that we would live even within 10000 years of the most consequential century in human history. However, despite this low prior, we have obviously strong reasons to think it’s at least plausible that we live this close to the HoH. Therefore, let’s say, a reasonable person should assign at least a 20% credence to the (wild) hypothesis: “The HoH will happen within the next 10000 years.”

Step 2: If we suppose that the HoH will happen with the next 10000 years, then a reasonable conditional credence that this century is the HoH should probably be something like 1/100. Therefore, it seems, our ‘new prior’ that this century is the HoH should be at least .2*.01 = .002. This is substantially higher than (e.g.) the more non-informative prior that Will's paper starts with.
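In equation form, just restating the two steps above:

```latex
\Pr(\text{HoH this century})
  \;\geq\; \Pr(\text{HoH within 10{,}000 years}) \times \Pr(\text{this century} \mid \text{HoH within 10{,}000 years})
  \;=\; 0.2 \times 0.01 \;=\; 0.002
```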

Fishiness arguments can obviously still be applied to the hypothesis presented in Step 1, in the usual way. But maybe the difference, here, is that the standard arguments/evidence that lend credibility to the more conservative hypothesis “The HoH will happen within the next 10000” are just pretty obviously robust — which makes it easier to overcome a low prior. Then, once we’ve established the plausibility of the more conservative hypothesis, we can sort of back-chain and use it to bump up our prior in the Strong HoH Hypothesis.


    1. I suppose it also evokes an epistemic notion of wildness, when it describes certain confidence levels as “wild,” but I take it that “wild” here is mostly just a way of saying “irrational”? ↩︎

Ben, that sounds right to me. I also agree with what Paul said. And my intent was to talk about what you call temporal wildness, not what you call structural wildness.

I agree with both you and Arden that there is a certain sense in which the "conservative" view seems significantly less "wild" than my view, and that a reasonable person could find the "conservative" view significantly more attractive for this reason. But I still want to highlight that it's an extremely "wild" view in the scheme of things, and I think we shouldn't impose an inordinate burden of proof on updating from that view to mine.

The three critical probabilities here are Pr(Someone makes an epistemic mistake when thinking about their place in history), Pr(Someone believes they live at the HoH|They haven’t made an epistemic mistake), and Pr(Someone believes they live at the HoH|They’ve made an epistemic mistake).

I think the more decision relevant probabilities involve "Someone believes they should act as if they live at the HoH" rather than "Someone believes they live at the HoH". Our actions may be much less important if 'this is all a dream/simulation' (for example). We should make our decisions in the way we wish everyone-similar-to-us-across-the-multiverse make their decisions.

As an analogy, suppose Alice finds herself getting elected as the president of the US. Let's imagine there are citizens in the US. So Alice reasons that it's way more likely that she is delusional than she actually being the president of the US. Should she act as if she is the president of the US anyway, or rather spend her time trying to regain her grip on reality? The citizens want everyone in her situation to choose the former. It is critical to have a functioning president. And it does not matter if there are many delusional citizens who act as if they are the president. Their "mistake" does not matter. What matters is how the real president acts.

This paper - https://grabbyaliens.com/ - is an interesting deeper dive into how humanity seems early on a cosmological timescale, and a mathematical model of expansionist civilizations expanding to fill the universe.  Many common themes with this piece - particularly regarding the Fermi Paradox - plus some more to think about.

Thanks for the post.

You give a gloss definition of "wild":

we should be doing a double take at any view that we live in such a special time

Could you say a bit more on this? I can think of many different reasons one might do a double take—my impression is that you're thinking of just a few of them, but I'm not sure exactly which.

I'm not sure I can totally spell it out - a lot of this piece is about the raw intuition that "something is weird here."

One Bayesian-ish interpretation is given in the post: "The odds that we could live in such a significant time seem infinitesimal; the odds that Holden is having delusions of grandeur (on behalf of all of Earth, but still) seem far higher." In other words, there is something "suspicious" about a view that implies that we are in an unusually important position - it's the kind of view that seems (by default) more likely to be generated by wishful thinking, ego, etc. than by dispassionate consideration of the facts.

There's also an intuition along the lines of "If we're really in such a special position, I'd think it would be remarked upon more; I'm suspicious of claims that something really important is going on that isn't generally getting much attention."

I ultimately think we should bite these bullets (that we actually are in the kind of special position that wishful thinking might falsely conclude we're in, and that there actually is something very important going on that isn't getting commensurate attention). I think some people imagine they can avoid biting these bullets by e.g. asserting long timelines to transformative AI; this piece aims to argue that doesn't work.

I agree that intuition is certainly an important piece of the puzzle.

A lot of this makes me think of not only Nietzsche but Jean Baudrillard, as well as Jose Ortega when he speaks of the fearlessness scientists and philosophers need in the coming times, and the idea of the "masses".

We must be just as careful of works of media such as Anne Frank's Frankenstein as we are Brave New World.

Also, for me, the "stable galactic civilisation" doesn't quite do the wildness there justice.

At that point, the consciousness might live lives that are as beyond our imagination as the life of an archaeologist is to a cow.

To horrendously misquote CS Lewis: “ We are… like an ignorant child who wants to go on making mud pies in a slum because he cannot imagine what is meant by the offer of a holiday at the sea. We are far too easily pleased.”

I have just so little conception of what the life of the average stable galactic citizen might be like.

I guess the point that gets me a bit is "is a stable galactic civilisation run by a single misaligned AI wild in the sense the author means?"

Pro wild:

  • building megastructures
  • galactic AI is pretty wild

Anti wild

  • Internal life of this AI might be trivial
  • Processes might be tremendously repetitive - if you have no aims, is turning all matter and energy into paperclips hard?

I guess for me the jury is still out, but that case does feel different to me, at least some times.

I think it's wild if we're living in the century (or even the 100,000 years) that will produce a misaligned AI whose values come to fill the galaxy for billions of years. That would just be quite a remarkable, high-leverage (due to the opportunity to avoid misalignment, or at least have some impact on what the values end up being) time period to be living in.

I like this theme a lot! 

In looking at longest-term scenarios, I suspect there might be useful structure&constraints available if we take seriously the idea that consciousness is a likely optimization target of sufficiently intelligent civilizations. I offered the following on Robin Hanson's blog:

Premise 1: Eventually, civilizations progress until they can engage in megascale engineering: Dyson spheres, etc.

Premise 2: Consciousness is the home of value: Disneyland with no children is valueless. 
Premise 2.1: Over the long term we should expect at least some civilizations to fall into the attractor of treating consciousness as their intrinsic optimization target.

Premise 3: There will be convergence that some qualia are intrinsically valuable, and what sorts of qualia are such.

Conjecture: A key piece of evidence for discerning the presence of advanced alien civilizations will be megascale objects which optimize for the production of intrinsically valuable qualia.

--

Essentially: I think formal consciousness research could generate a new heuristic for both how to parse cosmological data for intelligent civilizations, and what longest-term future humanity may choose for itself.

Physicalism seems plausible, and the formulation of physicalism I most believe in (dual-aspect monism) has physics and phenomenology as two sides of the same coin. As Tegmark notes, "humans ... aren't the optimal solution to any well-defined physics problem." Similarly, humans aren't the optimal solution to any well-defined phenomenological problem.

I can't say I know for sure we'll settle on filling the universe with such an "optimal solution", nor would I advocate anything at this point, but if we're looking for starting threads for how to conceptualize the longest-term optimization targets of humanity, a little consciousness research might go a long way.

More: 

https://opentheory.net/2019/09/whats-out-there/

https://opentheory.net/2019/06/taking-monism-seriously/

https://opentheory.net/2019/02/simulation-argument/

I mostly agree with your assessments.

On the skeptical side, I think the most likely way(s) space colonization doesn't happen are that costs to do it would be too high for people to ever be able to afford or want to do it on a large scale (given opportunity costs), or at least before we go extinct for some other reason. Furthermore, if there's little interest or active opposition to allowing AIs (or artificial sentience) to colonize space on their own, then costs may increase significantly to feed biological humans and ensure there's enough oxygen for them, although it could still be more AIs than humans going.

I don't think assigning probability > 50% to these possibilities together is unreasonable, nor is assigning probabilities < 50%. If you forced me to choose a single number, I'd probably choose something close to 50-50 on whether large scale space colonization happens at all, because of how uncertain I am now. Something like only a 10% chance on each side would be about the limit for what I would consider for decision-making (except to illustrate) if I wanted to provide a range of cost-effectiveness estimates for any intervention related to this. I'm not sure I'd say anything outside this 10-90% range is unreasonable, but just outside what I'd consider worth entertaining for myself. I would want to see a really strong argument to entertain something as extreme as 1% on either side.

This is a great post. However, according to your graph, first tool use occurred 3.3 million years ago. A quick Google suggests Homo sapiens have been around for approximately 300,000 years. So apply Fermi's paradox to a smaller scope - ask why we only see civilization within the narrow window of the last 10,000 years out of 300,000 years despite Homo sapiens being in their current modern form for so long?

Here's a graph to help visualize the last 50,000 years https://www.dandebat.dk/images/1579p.jpg

Hopefully it's obvious that something changed coincident with the advent of civilization - a very stable climate regime called the Holocene. It's not that we only developed agricultural technology 10,000 years ago - it's that the climate settled into a uniquely stable period that allowed for it.

That period is rapidly drawing to a close. This is no longer possible:

"Today's civilizations will crumble, and many more civilizations will fall and rise."

Civilization depends on agriculture - it has never existed without it. Agriculture depends on a stable climate - it has never existed without it.

We're not leaving the planet. The likelihood of another Holocene occurring within the next million years, if ever, is extremely low. Here's another graph demonstrating how unique the Holocene is along with some pretty impressive technological feats by humans prior to experiencing a stable climate:

https://www.ecologyandsociety.org/vol14/iss2/art32/figure1.jpg

And here's the 400,000 year view - we don't live on a stable planet:

https://i.redd.it/4g72pltxx8k11.png

The skeptical view of Fermi's paradox holds - you get one shot at creating a sustainable civilization but the likelihood that entropy will swamp the required complexity is extremely high. 

Agree, I think our near term would be better spent figuring out how to control our climate rather than building warp drives. It would not take too many trip-ups and crop failures to send us back to smaller groups hunting and gathering, and the loss of Miami and Shanghai et al. will be an additional burden we face. Also, let's not forget that if we are sent backwards it will be tough to replay the rise of the industrial world now that the surface hydrocarbons have been exhausted; discovering oil with a pickaxe and shovel only happens once. I'm all for our wild future, but I think it's best to end these types of predictions with the CYA a la Carl Sagan: "if we do not destroy ourselves".

Maybe some extraterrestrial species already effectively has spread throughout our galaxy, and for some reason we just don't see them.

Could it be that we don't see extraterrestrial species spread throughout our galaxy because civilizations spread at near-lightspeed?

I'm not sure I follow this. I think if there were extraterrestrials who were going to stop us from spreading, we'd likely see signs of them (e.g., mining the stars for energy, setting up settlements), regardless of what speed they traveled while moving between stars.

regardless of what speed they traveled while moving between stars

Adding to my other reply to your other comment I just made, let me just clarify that the model I'm working with is the "fast colonization" model from 25:20 of this Stuart Armstrong FHI talk, in which von Neumann probes are sent directly from their origin solar system to each other galaxy, rather than hopping from galaxy to galaxy (as in the "slow colonization" model used by Sagan/Newman/Fogg/Hanson according to Stuart's slide).

So if >0.99c probes are possible, then I think the hypothesis I described is at least plausible, since civilizations indeed wouldn't see other expanding civilizations until those civilizations reached them.

To clarify, I am pointing out that if extraterrestrials exist that are mining stars for energy and doing other large-scale things that we'd expect to be visible from other solar systems or galaxies, and if those extraterrestrials are >X light-years away from us and only started doing those large-scale things <X years ago, then we would not expect to see them because the light from their civilization would not yet have had time to reach us.

So the speed of expansion of their civilization isn't a necessary aspect of why we can't see them.

However, if the nature of our universe is such that extraterrestrials are likely to have arisen elsewhere in our galaxy (meaning <100,000 ly from us), then what's the explanation for why they arose in the last <100,000 years and not in the billions of years before that? That would seem improbable a priori.

One (partial) explanation for that coincidence is if we hypothesize that the nature of our universe is such that any civilization that arises and reaches a point of doing large-scale things that would be visible from many light-years away also expands at near the speed of light beginning as soon as it starts having those large-scale effects. If we further assume that such expansion reaching our solar system before now would have prevented us from existing today (e.g. by extinguishing life on Earth and replacing it with something else), then this serves as a (partial) explanation for the above coincidence by introducing an observation selection effect where we only exist in the first place because no other extraterrestrials have arisen within X ly of us in the last X years.

Note that I called this ("intelligence expands at (near) light speed once it starts having effects that would be visible from light years away") hypothesis a "partial" explanation above (for lack of a better word) to note that while it could explain why it's not surprising that we don't see signs of extraterrestrials mining stars (even conditional on them existing), it is also a hypothesis that we find ourselves in a very rare world (simulation possibilities aside)--one in which intelligence arose more than once in our vicinity, but at almost exactly the same time (e.g. 13.79995 and 13.8 billion years after the big bang if some other civilization in our galaxy started expanding 50,000 years ago), which a priori is unlikely.

I'm not sure I'm fully following, but I think the "almost exactly the same time" point is key (and I was getting at something similar with "However, note that this doesn't seem to have happened in ~13.77 billion years so far since the universe began, and according to the above sections, there's only about 1.5 billion years left for it to happen before we spread throughout the galaxy"). The other thing is that I'm not sure the "observation selection effect" does much to make this less "wild": anthropically, it seems much more likely that we'd be in a later-in-time, higher-population civilization than an early-in-time, low-population one.

The other thing is that I'm not sure the "observation selection effect" does much to make this less "wild": anthropically, it seems much more likely that we'd be in a later-in-time, higher-population civilization than an early-in-time, low-population one.

That's a good point: my hypothesis doesn't help to make reality seem any less wild.

I like your style. It’s concise yet narratival.

This makes me think of David Deutsch’s point that far from being powerless, creative minds are the only thing that can one day turn an empty bit of space into any structure permitted by physics (with enough hydrogen in the original space).

I guess I think it’s also quite wild that we are the only thing in the way of this wild future. If we nuke ourselves into oblivion etc etc this process might be delayed by millions of years.

OK, thanks for all that, perhaps I will wade through some of it.  Full disclosure, I'm having trouble getting past "our currently-empty galaxy". Seriously? or perhaps I misunderstood? Peace, galaxy brother. 

I found this article enthralling. But I have a critique:

So humanity would have to go extinct in some way that leaves no other intelligent life (or intelligent machines) behind.

A few people I know think this is not a very "wild" outcome. Earth could suffer a disaster that wipes out both humanity and the digital infrastructure needed to sustain advanced AI. I think this is a distinct possibility because humanity seems resilient whereas IT infrastructure is especially brittle - it depends on electricity and communications systems of some sort.

To put some numbers on this:

  • In The Precipice, Toby Ord estimates that total existential risk is 1/6 in the next 100 years, and x-risk from AI is 1/10. So the total x-risk not from AI is about 1/15 in the next century. This means that such a disaster (one in which humans and AI both go extinct) is likely to happen once every 1500 years (see the quick arithmetic below).
  • Given that humanity goes extinct, another intelligent species emerging on Earth and restarting civilization seems really unlikely. I'd put it at once every 100,000 years (a scientific wild-ass guess).
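Sketching that arithmetic explicitly; the inputs are Ord's estimates as quoted above, and treating the per-century probability as a constant rate is my own simplification:

```python
# Quick version of the arithmetic above. Inputs are Ord's estimates as quoted;
# converting a per-century probability into "once every N years" treats the
# risk as a constant rate, which is a simplification.

p_total_per_century = 1 / 6          # total existential risk, next 100 years
p_ai_per_century = 1 / 10            # existential risk from AI, next 100 years
p_non_ai_per_century = p_total_per_century - p_ai_per_century   # ~1/15

return_period_years = 100 / p_non_ai_per_century
print(f"non-AI x-risk per century: ~1/{round(1 / p_non_ai_per_century)}")   # ~1/15
print(f"expected roughly once every {round(return_period_years)} years")    # ~1500
```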

Another intuition that may explain people's faith in the "skeptical view": Species come and go on Earth all the time. Humans are just another species - and, at that, are "disrupting" the "natural order" of Earth's biosphere, and will eventually go extinct too.

If humanity simply goes extinct without reaching meaningful space expansion, I agree that that outcome would not be particularly wild.

However, I would find it wild to think this is definitely (or even "overwhelmingly likely") where things are heading. (While I also find it wild to think there's a decent chance that we will reach galaxy scale.)

I agree with that. I think humanity (as a cultural community, not the species) will most likely have the ability to expand across the Solar System this century, and will most likely have settled other star systems by a billion years from now, when Earth is expected to become uninhabitable.

Do people's wildness intuitions change when we think about human lives or life-years, instead of calendar years?

About 7 billion of the ~115 billion humans who have ever lived are alive today. Given today's higher life expectancies, about 15% of all experience so far has been experienced by people who are alive right now.

So the idea that the reference class "humans alive today" is special among pre-expansion humans doesn't feel that crazy. (There's a related variant of the doomsday argument -- if we expect population to grow rapidly until space expansion, many generations might reasonably expect themselves to be at or near the cusp of something big.)

Of course, we'd still be very special relative to all living beings, assuming big and long-lasting post-expansion populations.

I think your last comment is the key point for me - what's wild is how early we are, compared to the full galaxy population across time.

I'm really put off by the "currently-empty galaxy" claim that comes right at the beginning of the piece. How would the author know? We've only sent two probes outside our own solar system, and they haven't even reached another star yet. I don't consider it a good sign that there's a massive unwarranted assumption sitting at the very beginning of the piece. That's why I can't be bothered to read further.

I'm not sure what publication this was in, but the claim seems to be supported here: https://arxiv.org/pdf/1806.02404.pdf.

I feel like in order to make it to the true interplanetary and intergalactic periods we will not only need tech such as AI but perhaps a complete biological makeover, human engineering on the level of complete optimization with intergalactic space travel in mind. A man that is not only more selfless and hard to bore, but a man that can breathe less air, drink less water—have a more intimate faith and understanding that existence craves itself, that it will crawl out of the nothingness of the cave as many times as it takes to bring itself into the light.

Would it be possible to release the audio narrations on Apple Podcasts too? I would personally be much more likely to dig into them if I can easily access it when going out for a walk or something.

Working on that!

(It's on Apple Podcasts now, under Cold Takes Audio.)

I found it a bit baffling why people are afraid of a super AI takeover. If they can do everything humans can do, why not let them take over the burden of exploring the galaxy?

There are lots of places you can read about this. Two of my favorite "starter" posts are:

Not all actions humans are capable of doing are good. 

On this theme, I was struck by the 80,000 hours podcast with Tom Moynihan, which discussed the widespread past belief in the 'principle of plenitude': "Whatever can happen will happen", with the implication that the current period can't be special. In a broad sense (given humanity's/earth's position), all such beliefs were wrong. But it struck me that several of the earliest believers in plenitude were especially wrong - just think about how influential Plato and Aristotle have been!

The timelines do a great job of visualising how colonisation would be completed quickly on a cosmic timescale.

There was also a memorable visualisation in Scientific American depicting how space colonies grow exponentially to fill the galaxy:  Crawford, Ian (2000) Where are they? Maybe we are alone in the galaxy after all, Scientific American, July.

The time it takes to colonise the galaxy depends on the speed of the colony ships and the time it takes for new colonies to create colony ships of their own.

The remarkable thing is that the home planet only needs to send out two successful colony expeditions to start the colonisation wave. That's it. Just two ships to colonise the galaxy. One of the most high impact projects one can think of.
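As a toy illustration of that dependence, here is a minimal sketch; every parameter (star count, hop distance, ship speed, setup time) is an assumption of mine for illustration, not a figure from the Crawford article.

```python
# Toy model of the colonisation wave: the colony count doubles each generation,
# but the total time is set by how fast the front advances from star to star.
# Every parameter here is an illustrative assumption, not a figure from the
# Crawford article.

import math

stars_in_galaxy = 2e11        # order-of-magnitude star count for the Milky Way
galaxy_diameter_ly = 100_000  # rough diameter of the Milky Way's disk
hop_distance_ly = 10          # assumed distance between neighbouring colonised stars
ship_speed_c = 0.01           # assumed ship speed as a fraction of light speed
setup_time_years = 500        # assumed time for a new colony to build its own ships

# Doublings needed before the number of colonies matches the number of stars:
doublings = math.log2(stars_in_galaxy)                       # ~37.5

# Time for the colonisation front to cross the galaxy, hopping star to star:
hops = galaxy_diameter_ly / hop_distance_ly
years_per_hop = hop_distance_ly / ship_speed_c + setup_time_years
front_crossing_years = hops * years_per_hop

print(f"doublings needed: {doublings:.1f}")
print(f"front crossing time: {front_crossing_years / 1e6:.1f} million years")  # ~15
```

Under these made-up numbers the exponential growth runs out of stars after roughly 38 doublings, and the limiting factor is the ~15 million years the front needs to cross the galaxy, which is consistent with the point above that colonisation completes quickly on a cosmic timescale.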

Thanks to EA for your educative content.

This education will surely not leave us the same.
