One theory is that we are early in the universe.
See, for example, Robin Hanson's grabby aliens hypothesis, which goes into why we would expect the universe to look empty from our perspective.
Yeah, and it must be noted that the grabby aliens idea works best if you consider specifically unaligned AGIs, which I expect would be the most dedicated possible expanders. Other civilizations with more sophisticated values might have plenty of reasons to not grow quite that relentlessly and brutally.
Why haven't we even seen evidence of power-seeking AIs expanding in the universe? Any answer to that should also explain why they haven't arrived here.
The easiest potential answer is that we see what we should expect to see, even if there were many earlier civilizations, because civilizations are so spread out through space. If that’s true, you just need to explain why they would be so rare. But we really can't be sure if any of the potential explanations are true.
Are we (naively) early?
This is a question of whether more independent births of civilizations lie in the future than have already occurred, and it's difficult to actually address. The universe will last for trillions of trillions of years; the integral of even a very low rate of planet formation over that time would vastly outnumber the planets that have already formed. A huge amount of gas is still gravitationally bound to galaxies, waiting to fuel star and planet formation. Somewhere around 92% of the total baryonic mass that can form stars is still in the galactic halos. That could potentially mean 92% of Earth-like planets have yet to form. It's at least an upper bound.
This is where that 92% comes from:
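For what it's worth, the arithmetic behind that upper bound is just one subtraction. A quick sketch, taking the ~92% halo figure at face value and assuming Earth-like planet formation simply tracks star-forming baryonic mass (both assumptions are doing a lot of work here):

```python
# Back-of-envelope version of the "naively early" argument. The 0.92 is the
# halo-gas fraction quoted above, treated as given; assuming planet formation
# tracks that mass is the other load-bearing assumption.

frac_still_in_halos = 0.92                      # baryons yet to form stars
frac_planets_formed = 1 - frac_still_in_halos   # planets that already exist

# If civilizations arise independently and uniformly per Earth-like planet,
# the chance of a given civilization appearing this early is just that fraction.
print(f"Earth-like planets already formed: {frac_planets_formed:.0%}")  # -> 8%
```

So under those assumptions, only ~8% of the Earth-like planets that will ever exist have formed so far, which is the whole "we look early" puzzle in one number.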
It does seem like we are early if the formation of Earth-like planets is a good proxy for formation of civilizations, and if the emergence of each civilization is an independent event. That’s the “naive” part.
We don’t want to know how many planets, or even Earth-like planets, will form in the future. We want to know how many planets capable of birthing civilizations will form in the future. If we don’t like the statistical implications of simply being early:
What might make the emergence of different civilizations not independent? How could one civilization reduce the number produced in the future? What prevents those 92% of planets yet to form from satisfying the condition “capable of birthing civilizations”?
Grabby aliens are an option. If every early civilization inevitably leads to an expanding, power-seeking AI, then that would prevent future civilizations from forming within its sphere of influence. That expansion would be slower than light, so we should see evidence of it before one hits us. Now the question could be “How late among the initial civilizations are we?” If we were very late, we might expect to actually see some evidence of grabby aliens out in the universe.
Great sentence from that paper:
"if the Milky Way today contained another civilization, it is likely that Earth would be at least the ten billionth planet to host a civilization in the observable universe, which would eventually contain at least a hundred billion civilizations."
The solar system formed after more than 50% of other Earth-like planets in both the Milky Way and the observable universe, so "Where are the grabby aliens/power-seeking AIs?" is a very valid question. Even a slow rate of grabby alien expansion doesn't explain why we haven't seen anything. Given how late we are, civilizations would have to be quite rare, or somehow all form very close together in time. The Milky Way is only 100,000 LY across, so a near-light-speed expansion starting anywhere in the galaxy much more than 100,000 years ago would already have reached us. That's the upper limit for how long ago grabby alien expansion could have started if we aren't alone in the galaxy.
Bottom-left shows the formation of the solar system ⊙ outside the 50% contour.
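That light-cone constraint is a one-line calculation. A sketch (the 0.5c figure below is an illustrative assumption, not something from the paper):

```python
# Travel time for a grabby expansion front moving at some fraction of light
# speed across a distance given in light-years. At near light speed, travel
# time in years equals distance in light-years, which is why the galaxy's
# ~100,000 LY diameter caps how long ago an in-galaxy expansion could have
# begun without already reaching us.

def years_until_arrival(distance_ly: float, speed_frac_c: float) -> float:
    """Years for a front moving at speed_frac_c * c to cover distance_ly."""
    return distance_ly / speed_frac_c

print(years_until_arrival(100_000, 1.0))  # far side of the galaxy at c
print(years_until_arrival(100_000, 0.5))  # a slower front could have started earlier
```

Note the direction of the bound: a slower front takes longer to arrive, so it could have started further back; near-light-speed expansion gives the tightest limit.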
Every statement above can be followed by 10 “unless”es, some of them more likely than what I said here.
The error bars are huge on everything.
I was assuming all civilizations create ever-expanding AIs, and not really differentiating between that and grabby aliens.
I definitely don’t want to give the impression this is rigorous or exhaustive. That paper is from 2015. A lot of work has gone into thinking about these things since. It’s probably outdated. I just wanted to use a couple numbers from it. It goes way more in-depth.
I tried to avoid committing to any specific anthropic reasoning by saying "if we don't like the statistical implications of simply being early."
David Kipping is a good source for more rigorous statistical analysis and constraining.
Life is rare.
Things are very far apart in space.
The universe is pretty young relative to evolutionary timelines, which seem to require third-generation stars for the right mix of heavy elements.
I think the bigger question is why haven't we found other species elsewhere in the universe.
Then I see the question about whether they'll kill us or not as a different one.
Because it wouldn't be a very intelligent move for the AGI. It'd be way easier for an AGI to set its reward function to infinity by manipulating its own circuitry than it would be to warp the universe to its precise specifications. https://www.academia.edu/22359393/Utility_function_security_in_artificially_intelligent_agents
Same reason we haven't been destroyed by a nuclear apocalypse yet: if we had, we wouldn't be here talking about it.
As for the question "why haven't we encountered a power-seeking AGI from elsewhere in the universe who didn't destroy us", I don't know.
I'm not sure your answer is very helpful. You act like OP's question isn't meaningful, but I think it is. If you want, interpret it as "Why are we still here?"
One can answer that we haven't been destroyed by a nuclear apocalypse because of safeguards or game-theoretic considerations, for example. Just as one can answer why we haven't been destroyed by power-seeking AI with explanations such as that life is very rare, so AI hasn't been created yet. I'm not saying those are the correct answers, just that providing a useful answer seems possible.
Seems like this question just reduces to the normal Fermi paradox right? "Power-seeking AGI" isn't adding any additional bits to that question.
I think it removes the "Hostile AGI is the Great Filter" scenario, which I recall seeing a few times but it doesn't make much sense to begin with.
Agreed, that seems quite unlikely
We don't really know why we haven't met other intelligent life yet. There are a bunch of hypotheses: https://itsonlychemo.wordpress.com/2020/01/13/a-list-of-solutions-to-the-fermi-paradox/