I think it is almost always assumed that superintelligent artificial intelligence (SAI) disempowering humans would be bad, but are we confident about that? Is this an under-discussed crucial consideration?
Most people (including me) would prefer the extinction of a random species to that of humans. I suppose this is mostly due to a desire for self-preservation, but it can also be justified on altruistic grounds if humans have a greater ability to shape the future for the better. However, a priori, would it be reasonable to assume that more intelligent agents would do better than humans, at least under moral realism? If not, can one be confident that humans would do better than other species?
From the point of view of the universe, I believe one should strive to align SAI with impartial value, not human value. It is unclear to me how much these differ, but one should beware of surprising and suspicious convergence.
In any case, I do not think this shift in focus means humanity should accelerate AI progress (as effective accelerationism seems to propose). Intuitively, aligning SAI with impartial value is a harder problem than aligning it with human value, and therefore needs even more time to be solved.
In practice, I think smaller beings will produce welfare more efficiently than larger ones. For example, I guess bees produce welfare 5,000 times as efficiently as humans. To the extent that the most efficient way of producing welfare involves only one or a few beings, they would have to consume the energy of many galaxies. So it would be more like the universe itself being a single being or organism, which is arguably more compelling than what the term "utility monster" suggests.
How about this scenario? A terrorist is about to release a highly infectious virus which will infect all humans on Earth. The virus makes people permanently infertile, thus effectively leading to human extinction, but it also makes them completely lose their desire to have children and have much better self-assessed lives. Would it make sense to kill the terrorist? Killing the terrorist would worsen the lives of all humans alive, but it would also prevent human extinction.
Yes, it can justify genocide, although I am sceptical it would in practice. Genocide involves suffering, and suffering is bad, so I assume there would be a better option to maximise impartial welfare. For example, SAI could arguably persuade humans that their extinction was for the better, or simply move everyone into a simulation without anyone noticing, and then shut down the simulation in a way that causes no suffering in the process.
I agree one can justify the replacement of beings who produce welfare less efficiently by beings who produce welfare more efficiently. For example, replacing rocks with humans is fine, and so might be replacing humans with digital minds. However, the replacement process itself should maximise welfare, and I am very sceptical that "brutal colonization efforts" would be the most efficient way for SAI to perform the replacement.