EAs and longtermists rightly place a lot of emphasis on aligning powerful AI systems with human values. But I have always wondered: if superhuman AI starts doing the bidding of some subset of humans, what is the governance equilibrium? When considering this question, two sub-questions stand out to me.

1) Does transformational AI most likely result in the end of democracy? After all, if all useful work is done by AI, most of the leverage held by the average person disappears. They can no longer strike, and protests or revolts may be entirely futile in the face of drone-based weapons and crowd control systems.

2) Is a unipolar or multipolar world more likely? The most powerful AI systems might be developed by American tech companies, the Chinese government, or some other actor. If a multipolar world is possible, how high is the risk of war? In general, it seems that if a state feels like it is facing an existential threat, its willingness to use WMDs or take other drastic measures increases. If, hypothetically, a victory by a Chinese state AI system would imply complete and indefinite subjugation of the American state AI system, and vice versa, wouldn't the risk of conflict be extraordinarily high?

I'm curious to hear what the EA community has been thinking about these topics, and whether anyone has tried to estimate the likelihood of different governance outcomes in a world with aligned AI. 

Comments

Some people think that, with a super-powerful AI running the world, there would be no need for traditional government. The AI can simply make all the important decisions to optimize human welfare.

This is similar to the Marxist idea of the "withering away of the state". Once perfect Communism has been achieved, there will be no more need for government.

https://en.wikipedia.org/wiki/Withering_away_of_the_state

In practice, the state didn't really wither away under Stalinism. It was more like Stalin gained personal control over this new organization, the Communist Party, and used it to reinforce his own dictatorship and bend the nation to his will.

If we have transformational superhuman AI, the risk of war seems quite high. But an AI powerful enough to turn the whole world into paper clips could win a war immediately, without bloodshed. Or with lots of bloodshed, if that's what it wanted.

One possible outcome of superhuman AI is a global dictatorship. Whoever controls the superhuman AI controls the world, right? The CEO of the AI company that wins the race aligns the AI to themselves and makes themselves into an immortal god-king. At first they are benevolent. Over time it becomes impossible for the god-king to retain their humanity, as they become less and less like any normal human. The sun sets on the humanist era.

But this is turning into a science fiction story. In practice, a "superhuman AI" probably won't be all-powerful like this; there will be many details of what it can and can't do that I can't predict. Or maybe the state will just wither away!

I'm not saying that this is the only option, but since the 1800s we have let the market choose which ideas thrive - which services or products get rewarded.

The hard problem with forecasting the future of AI is that we have no preexisting model for it: once an AGI is released into the world, we cannot see where it will lead us. This would be a black swan event in Nassim Taleb's sense - something world-altering whose transformative effects on the future we cannot know in advance.

These are hard questions, which is why alignment should be achieved first - so that we don't have to worry about how an AGI will act and respond in the real world, or about who controls the governance of its code base and infrastructure.
