If the AI has moral status, one could argue that its ability to replicate and create new digital minds with positive experiences carries very high moral value. And this value might be sufficiently high that it would be worth sacrificing humans for.
An AI can be seen as having moral status even if it isn't perfectly aligned with humans. Some animals likely have moral status. My neighbour Jeff has moral status. Anything with values similar to, but not the same as, mine (or humans' in general) seems to have moral status. Yet if Jeff and I had to decide the future of humanity via totalitarian control, there's a non-trivial probability we would get into a fight over it. So we're not aligned in the face of such power, despite regarding each other as having moral status.
I'm not personally espousing this view, but I wonder whether it has been discussed before.