I think it is almost always assumed that superintelligent artificial intelligence (SAI) disempowering humans would be bad, but are we confident about that? Is this an under-discussed crucial consideration?
Most people (including me) would prefer the extinction of a random species to that of humans. I suppose this is mostly due to a desire for self-preservation, but can also be justified on altruistic grounds if humans have a greater ability to shape the future for the better. However, a priori, would it be reasonable to assume that more intelligent agents would do better than humans, at least under moral realism? If not, can one be confident that humans would do better than other species?
From the point of view of the universe, I believe one should strive to align SAI with impartial value, not human value. It is unclear to me how much these differ, but one should beware of surprising and suspicious convergence.
In any case, I do not think this shift in focus means humanity should accelerate AI progress (as proposed by effective accelerationism?). Intuitively, aligning SAI with impartial value is a harder problem, and therefore needs even more time to be solved.
I just don't think total sum utilitarianism maps well onto the kind of intuitions I'd like a functional moral system to match. I think that, ideally, a good aggregation rule for utility should not be vulnerable to being gamed via utility monsters. I lean more towards average utility as a good index, though that too has its flaws and I'm not entirely happy with it. I've written a (very tongue-in-cheek) post about it on Less Wrong.
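As a toy illustration of the utility-monster worry (the numbers here are mine and purely hypothetical): compare world A, with 100 people at utility 9 each (total 900, average 9), against world B, with one "utility monster" at utility 1,000 plus 200 people at utility 0 (total 1,000, average roughly 5). Total-sum utilitarianism ranks B above A; average utility ranks A above B. But averaging has its own failure mode: a world with a single person at utility 10 scores higher on average than a world with a million people at utility 9.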
Sure. So that actually backs my point that it's all relative to sentient subjects. There is no fundamental "real morality", though there are real facts about the conscious experience of sentient beings. But trade-offs between these experiences aren't obvious and can't be settled empirically.
But more importantly, killing people violates their own very strong preference not to be killed. That holds for an ASI too.
I mean, ok, one can construct these hypothetical scenarios, but the one you suggested wasn't about preventing deaths but about ensuring the existence of more lives in the future. And those are very different things.
But obviously, if you count future beings too - as you are - then it becomes inevitable that this approach does justify genocide. Take the very real example of the natives of the Americas. By this logic, the exact same logic you used as an example of why an ASI could be justified in genociding us, the colonists were justified in genociding the natives. After all, the natives lived at far lower population densities than the land could support with advanced agricultural techniques, and they lived hunter-gatherer or at best Bronze Age style lives, far less rich in pleasures and enjoyments than a modern one. So killing a few million of them to eventually allow for over 100 million modern Americans to make full use of the land would have been a good thing.
See the problem with this logic? As long as you have better technology and precommit to high population densities, you can justify all sorts of brutal colonization efforts as a net good, if not a maximal one. And that's horribly broken logic. It's the same logic an ASI would follow if it killed everyone on Earth just so it could colonize the galaxy. If you think it's disgusting when applied to humans, well, the same standard ought to apply to an ASI.