I think it is almost always assumed that superintelligent artificial intelligence (SAI) disempowering humans would be bad, but are we confident about that? Is this an under-discussed crucial consideration?
Most people (including me) would prefer the extinction of a random species to that of humans. I suppose this is mostly due to a desire for self-preservation, but can also be justified on altruistic grounds if humans have a greater ability to shape the future for the better. However, a priori, would it be reasonable to assume that more intelligent agents would do better than humans, at least under moral realism? If not, can one be confident that humans would do better than other species?
From the point of view of the universe, I believe one should strive to align SAI with impartial value, not human value. It is unclear to me how much these differ, but one should beware of surprising and suspicious convergence.
In any case, I do not think this shift in focus means humanity should accelerate AI progress (as proposed by effective accelerationism?). Intuitively, aligning SAI with impartial value is a harder problem than aligning it with human value, and therefore needs even more time to solve.
Thanks for elaborating! As a meta point, since I see my comment above has been downvoted (-6 karma excluding my upvote), it would be helpful for me to understand:
I want to contribute to a better world. If my comments are not helpful, I would like to improve them, or, if making them helpful would take too long, give more consideration to not commenting.
I am not sure I understand your point. Are you suggesting we should not maximise impartial welfare because this principle might imply that humans would be a small fraction of the overall number of beings?
Whether suffering is good or bad depends on the situation, including the person assessing it.
Suffering is not always bad, nor always good. However, do you think adding suffering to the world, holding everything else equal, can be bad? For example, imagine you have 2 scenarios:
I think the vast majority of people would prefer B.
The idea sounds bad to me too! The reason is that, in the real world, killing rarely brings about good outcomes. I am strongly against violence and killing people.
Committing genocide against a population is almost always a bad idea, but I do not think one should reject it in all cases. Would you agree that killing a terrorist to prevent 1 billion human deaths would be good? If so, would you agree that killing N terrorists to prevent N^1000 billion human deaths would also be good? In my mind, if the upside is sufficiently large, killing a large number of people could be justified. You might get the sense from what I am saying that I have a low bar for this upside, but I actually have quite a high bar for thinking that killing is good. I have commented that:
I very much agree it makes sense to be sceptical about arguments for killing lots of people in practice. For the AI extinction case, I would also be worried about the AI developers (humans or other AIs) pushing arguments in favour of causing human extinction instead of pursuing a better option.
Total utilitarianism only says one should maximise welfare. It does not say killing weaker beings is a useful heuristic for maximising welfare. My own view is that killing weaker beings is a terrible heuristic for maximising welfare (e.g. it may favour factory-farming, which I think is pretty bad).
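To make this concrete, here is one minimal way to write the claim down (my own illustrative notation, not a quote from anyone): the total utilitarian objective is just a sum of welfare over all beings, with no term that privileges any particular means, such as killing weaker beings.

$$\max_{a \in A} \; \sum_{i \in \text{beings}} w_i(a)$$

where $w_i(a)$ is the welfare of being $i$ if action $a$ is taken. Whether any given killing increases or decreases this sum is an empirical question about the consequences of $a$, not something the formula itself settles.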