Not really? Given my personal impression of the difficulty of the alignment problem, I would consider humanity very lucky if AGI managed to follow any set of human-defined values at all.
Also, it seems that most downsides of totalitarian regimes ultimately boil down to a lower quality of life among citizens. (For instance, a government that suppresses dissent is bad. But dissent is only valuable in that it may lead to reforms of the government, which may lead to improved lives for citizens.) Strong AI, if truly aligned with a government's aims, would probab...
That only holds if the totalitarian leaders' interests are mostly aligned with the people's interests, which is likely in the short term but a big IF in the long run. I worry that AGI will create more obedient citizens, because its decisions are almost unchallengeable, and that the legitimacy borrowed from AGI will encourage rulers to take bolder actions. For instance, China and Russia both have aging leaders who might serve until their deaths, and we already know from history that old dictators tend to become more paranoid.