EAs and longtermists rightly place a lot of emphasis on aligning powerful AI systems with human values. But I have always wondered: if superhuman AI starts doing the bidding of some subset of humans, what is the governance equilibrium? When considering this question, two sub-questions stand out to me.
1) Does transformational AI most likely result in the end of democracy? After all, if all useful work is done by AI, most of the leverage held by the average person disappears. They can no longer strike, and protests or revolts may be entirely futile in the face of drone-based weapons and crowd control systems.
2) Is a unipolar or multipolar world more likely? The most powerful AI systems might be developed by American tech companies, the Chinese government, or some other actor. If a multipolar world is possible, how high is the risk of war? In general, it seems that when a state feels it is facing an existential threat, its willingness to use WMDs or take other drastic measures increases. If, hypothetically, a victory by a Chinese state AI system would imply complete and indefinite subjugation of the American state AI system, and vice versa, wouldn't the risk of conflict be extraordinarily high?
I'm curious to hear what the EA community has been thinking about these topics, and whether anyone has tried to estimate the likelihood of different governance outcomes in a world with aligned AI.
Some people think that, with a super-powerful AI running the world, there would be no need for traditional government. The AI can simply make all the important decisions to optimize human welfare.
This is similar to the Marxist idea of the "withering away of the state". Once perfect Communism has been achieved, there will be no more need for government.
https://en.wikipedia.org/wiki/Withering_away_of_the_state
In practice, under Stalinism the state didn't really wither away. Instead, Stalin gained personal control over this new organization, the Communist Party, and used it to reinforce his own dictatorship and bend the nation to his will.
If we have transformational superhuman AI, the risk of war seems quite high. But an AI powerful enough to turn the whole world into paper clips could win a war immediately, without bloodshed. Or with lots of bloodshed, if that's what it wanted.
One possible outcome of superhuman AI is a global dictatorship. Whoever controls the superhuman AI controls the world, right? The CEO of the AI company that wins the race aligns the AI to themselves and makes themselves into an immortal god-king. At first they are benevolent. Over time it becomes impossible for the god-king to retain their humanity, as they become less and less like any normal human. The sun sets on the humanist era.
But this is turning into a science fiction story. In practice a "superhuman AI" probably won't be all-powerful like this, there will be many details of what it can and can't do that I can't predict. Or maybe the state will just wither away!
I'm not saying that this is the only option, but since the 1800s we have let the market choose which ideas thrive, i.e. which services or products get rewarded.
The hard problem with forecasting the future of AI is that we have no preexisting model for it: once an AGI is released into the world, we cannot foresee where it will lead us. This is a black swan event in the sense Nassim Taleb described, something world-altering whose transformative effects we cannot yet predict.
These are hard questions, which is why alignment should be achieved first, so that we need not worry about how AGI will act and respond in the real world, or about who controls the governance of its code base and infrastructure.