I think it is almost always assumed that superintelligent artificial intelligence (SAI) disempowering humans would be bad, but are we confident about that? Is this an under-discussed crucial consideration?
Most people (including me) would prefer the extinction of a random species to that of humans. I suppose this is mostly due to a desire for self-preservation, but it can also be justified on altruistic grounds if humans have a greater ability to shape the future for the better. However, a priori, would it be reasonable to assume that more intelligent agents would do better than humans, at least under moral realism? If not, can one be confident that humans would do better than other species?
From the point of view of the universe, I believe one should strive to align SAI with impartial value, not human value. It is unclear to me how much these differ, but one should beware of surprising and suspicious convergence.
In any case, I do not think this shift in focus means humanity should accelerate AI progress (as proposed by effective accelerationism?). Intuitively, aligning SAI with impartial value is a harder problem than aligning it with human value, and therefore needs even more time to be solved.
I think the reason for those intuitions is that (reasonably enough!) we can't imagine there being 10^100 people without there also being a story behind that situation. A world in which, e.g., some kind of entity breeds humans on purpose to then torture them, leading to those insane numbers, does indeed sound absolutely hellish! But the badness of it is due to the context; a world in which there exists only one person, and that person is being horribly tortured, is also extremely upsetting and sad, just in a different way and for different reasons (and all paths to there are also very disturbing; but we'll maybe think "at least everyone else just died without suffering as much", so it feels less horrible than the 10^100-humans torture world).
But my intuition about the situation on its own is more along the lines of: imagine you know you're going to be born into this world. Would you like your odds? And in both the "one tortured human" and the "10^100 tortured humans" worlds, your odds would be exactly the same: a 100% chance of being tortured.
But all of these are just abstract thought experiments. In any realistic situation, torture worlds don't just happen - there is a story leading to them, and for any kind of torture world, that story is godawful. So in practice the two things can't be separated. I think it's fairly correct to say that in all realistic scenarios the 10^100 world will in practice be worse, or have a worse past, though both worlds would be awful.