There are two common models of space colonization people sometimes allude to, neither of which I think is particularly likely.
Model 2 (“normal colonization”) is that space colonization will look something like colonization on Earth, e.g. the way the first humans expanded across the Polynesian islands. Your boat (rover/ship/probe) hops to one island (planet), you build up a civilization, and then you send your probes onward to the next few nearby planets, perhaps saving up a bunch of resources once you've colonized the nearby star systems (e.g. your galaxy) and need to send a bigger ship to more distant stars. So it looks like either orderly civilizational growth or an evolutionary process.
I don't think this model is really likely because von Neumann probes will be really cheap relative to the carrying capacity of star systems. So I don't think the intuitive "slow waves of colonization" model makes a lot of sense on a galactic scale.
I don’t think my view here is particularly controversial. My impression is that while the first model is common in science fiction, nobody in the futurism/x-risk/etc field really believes it.
Model 2 (“mad dash”) is that you race ahead as soon as you reach relativistic speeds. So as soon as your science and industry have advanced enough for your probes to reach appreciable fractions of c, you start blasting out von Neumann probes to the far reaches of the affectable universe.
I think this model is more plausible, but still unlikely. A small temporal delay is worth it to develop more advanced spacefaring technology.
My guess is that even if all you care about is maximizing space colonization, it still makes sense to delay some time before you launch your first "serious" interstellar space probe, rather than do it as soon as possible[1].
Whether you can reach the furthest galaxies is determined by something like[2]:
total time to reach a galaxy = delay + distance/speed
So you want to delay and keep researching until the marginal speed gain from additional R&D time is lower than the marginal cost of the delay[3].
I don't have a sense of how long the optimal delay is, but intuitively it feels more like decades or centuries, maybe even slightly longer, than months or years. The edge of the theoretically reachable universe is 16-18 billion light-years away, so a 100-year delay is worth it if it buys a speed increase of just ~1/100-millionth of c[4].
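The tradeoff above can be sketched numerically. In this toy "wait calculation", every number (the starting speed, the speed gain per century of R&D, the speed cap) is an invented assumption for illustration, not an estimate from the post:

```python
# Toy wait calculation: total time to reach a distant galaxy is
#   delay + distance / speed(delay),
# where probe speed improves with extra R&D time. All parameters
# below are illustrative assumptions.

DISTANCE_LY = 16e9  # rough distance to the edge of the affectable universe

def speed(delay_years, v0=0.5, gain_per_century=0.05, v_max=0.99):
    """Probe speed as a fraction of c after `delay_years` of extra R&D (assumed curve)."""
    return min(v0 + gain_per_century * delay_years / 100, v_max)

def arrival_time(delay_years):
    """Years until the probe arrives: wait at home, then travel."""
    return delay_years + DISTANCE_LY / speed(delay_years)

# Sweep candidate delays: some waiting beats launching immediately,
# but waiting past the point of diminishing returns just adds time.
best_delay = min(range(0, 5001, 100), key=arrival_time)
```

Under these made-up numbers the optimum is about a millennium of R&D before launch; the qualitative point is just that because the trip is billions of years long, even tiny speed gains justify very long delays.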
For energy/resource reasons you might want to expand to nearby star systems first and send the fastest possible probes from there; note again that the delay before sending your first probe is at worst a constant additive cost. A possible exception is accelerating R&D via other star systems, e.g. because you need multiple star systems' worth of compute to do the R&D well. But this is trickier than it looks: the lightspeed communication barrier means sending information is slow, so you give up a lot of latency to bring more compute to bear. Still, if you want a supercomputer bigger than your home system's resources allow, you might capture a nearby star system and turn it into your core R&D department, though that takes a while to build out too.
Here are a few models of space colonization that I think are more likely:
I’m neither an astrophysicist nor in any other way a “real” space expert and I’ve spent less than a day thinking about the relevant dynamics, so let me know if you think I’m wrong or you have additional thoughts! Very happy to be corrected. :)
[1] Modulo other reasons for going faster, like worries about single-system x-risk, stagnation, meme wars, etc. There are also other reasons to go slower, for example worries about interstellar x-risks / a vulnerable universe, wanting more value certainty and fear of value drift, being scared of aliens, etc.
[2] Plus relativistic effects and other cosmological effects that I don't understand. I never studied relativity, but I'd be surprised if they change the OOM calculus.
[3] where we predict additional research time yields diminishing returns relative to acting on current knowledge
[4] See also earlier work by Kennedy: his 2006 'wait calculation' formalizes a version of this tradeoff for nearby stars and arrives at centuries-scale optimal delays, though his model doesn't consider the intergalactic case and makes additional assumptions about transportation speeds that I'm unsure about.
Know Your Meme says it started off as video game jargon; my impression is that it's pretty common online outside of that.
PSA: regression to the mean/mean reversion is a statistical artifact, not a causal mechanism.
So mean regression says that children of tall parents are likely to be shorter than their parents, but it also says parents of tall children are likely to be shorter than their children.
Put in a different way, mean regression goes in both directions.
This is well enough understood here in principle, but imo enough people get it wrong in practice that the PSA is worthwhile nonetheless.
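The symmetry is easy to see in a simulation. Here each "height" is a shared component plus independent noise on each side; all the numbers are made up for illustration:

```python
import random
random.seed(0)

# Simulated parent/child heights: a shared component plus independent
# noise on each side. Numbers are invented for illustration.
pairs = []
for _ in range(100_000):
    shared = random.gauss(170, 6)          # shared (e.g. genetic) component, cm
    parent = shared + random.gauss(0, 5)   # parent = shared + parent's own luck
    child = shared + random.gauss(0, 5)    # child  = shared + child's own luck
    pairs.append((parent, child))

mean = lambda xs: sum(xs) / len(xs)

# Condition on tall parents: their children are shorter on average...
tall_p = [(p, c) for p, c in pairs if p > 185]
assert mean([c for _, c in tall_p]) < mean([p for p, _ in tall_p])

# ...and condition on tall children: their parents are shorter on average.
tall_c = [(p, c) for p, c in pairs if c > 185]
assert mean([p for p, _ in tall_c]) < mean([c for _, c in tall_c])
```

Nothing causal is happening in either direction; conditioning on an extreme value of one noisy variable just selects for lucky noise that the other variable doesn't share.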
I think something a lot of people miss about the “short-term chartist” position (these trends have continued until time t, so I expect them to continue to time t+1), applied to an exponential that’s actually a sigmoid, is that if you keep holding it, you’ll eventually be wrong exactly once.
Whereas the “short-term chartist hater” position (these trends always break, so I predict a break at time t+1), applied to an exponential that’s actually a sigmoid, will, if you keep holding it, eventually be correct exactly once.
Now of course most chartists (myself included) want to be able to make stronger claims than just t+1, and people in general would love to know more about the world than just these trends. And if you're really good at analysis and wise and careful and lucky you might be able to time the kink in the sigmoid and successfully be wrong 0 times, which is for sure a huge improvement over being wrong once! But this is very hard.
And people who ignore trends as a baseline are missing an important piece of information, and people who completely reject these trends are essentially insane.
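The scoring asymmetry can be made concrete with a toy model: a trend that holds until some (unknown-in-advance) break step, with the chartist predicting "continues" every step and the hater predicting "breaks" every step, both stopping once the break is observed. The break-step setup is purely illustrative:

```python
import random
random.seed(1)

def score(break_step, horizon):
    """Count wrong predictions for chartist ("continues") and hater ("breaks")."""
    chartist_wrong = hater_wrong = 0
    for t in range(1, horizon + 1):
        if t > break_step:
            break  # the break has been observed; the question is settled
        trend_continues = (t < break_step)
        chartist_wrong += (not trend_continues)  # wrong only at the break itself
        hater_wrong += trend_continues           # wrong every step until the break
    return chartist_wrong, hater_wrong

# Whenever the break comes, the chartist is wrong exactly once,
# while the hater is wrong at every step before it.
for _ in range(100):
    b = random.randint(2, 50)
    cw, hw = score(b, horizon=100)
    assert cw == 1
    assert hw == b - 1
```

The longer the trend runs before its kink, the worse the hater's record gets, while the chartist's single miss stays fixed.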
Also seems a bit misleading to count something like "one afternoon in Vietnam" or "first day at a new job" as a single data point when it's hundreds of them bundled together?
From an information-theoretic perspective, people almost never count a "single data point" as strictly one bit, so whether you count a single float in a database, a whole row in a structured database, or a whole conversation, we're really just negotiating price.
I think the "alien seeing a car" example makes the case somewhat clearer. If you already have a deep model of cars (or even a shallow one), seeing another Ford Focus tells you relatively little, but an alien coming across one would get many bits out of it, perhaps more than a human spending an afternoon in Vietnam.
EDIT: I noticed that in my examples I primed Claude a little, and when unprimed Claude does not reliably (or usually) get to the answer. However, the Claude 4.x models are still notable for how little handholding they need on this class of conceptual errors: the Geminis often take something like five hints where Claude usually gets it with one. And my impression was that the Claude 3.x models were kinda hopeless (they often didn't get it even with short explanations from me, and when they did, I wasn't confident they actually got it vs. just wanted to agree).
So I agree that humanity might just choose not to reach the stars. It seems unlikely to me that nobody (or nobody with sufficient resources) would want to do this post-AGI, but it's possible humanity as a whole prevents other people from expanding (eg worries about building independent power centers that might harm the safety of Earth, or spoilt negotiations, or more idiosyncratic factors).
This is not the most likely existential risk imo, but certainly one to be aware of.
That said, the 1960s-70s moon landings were a large net resource loss: they cost ~half a percent of GDP (!) annually for multiple years and got little in return beyond a few innovations and one-upping the Soviets. Seems like a pretty different story!