Naive question: I see many EAs talking about non-extinction X-risks such as the alleged dangers of 'value lock-in' or the imposition of a 'global permanent totalitarian state'. Most recently I came across Will MacAskill mentioning these as plausible risks in the new book 'What We Owe the Future'.

As an evolutionary psychologist, I'm deeply puzzled by the idea that any biologically reproducing species could ever be subject to a 'permanent' socio-cultural condition of the sort that's posited. On an evolutionary time scale, 'permanent' doesn't just mean 'a few centuries of oppression'. It would mean 'zero change in the biological foundations of the species being oppressed -- including no increased ability to resist or subvert oppression -- across tens of thousands of generations'. 

As long as humans or post-humans are reproducing in any way that involves mutation, recombination, and selection (either with standard DNA or post-DNA genome-analogs such as digital recipes for AGIs), Darwinian evolution will churn along. Any traits that yield reproductive advantages in the 'global totalitarian state' will spread, changing the gene pool, and changing the psychology that the 'global totalitarians' would need to manage. 
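
To put a rough number on how fast that churn operates, here's a minimal sketch using the standard haploid selection recurrence (the 0.1% starting frequency and 1% fitness advantage are arbitrary illustrative assumptions, not estimates about any real trait):

```python
# Toy model: how quickly selection spreads a trait with a modest
# reproductive advantage. Haploid recurrence: p' = p(1 + s) / (1 + p*s).
p = 0.001   # starting trait frequency (0.1% -- an illustrative assumption)
s = 0.01    # relative reproductive advantage (1% -- also an assumption)
for gen in range(1, 2001):
    p = p * (1 + s) / (1 + p * s)
    if gen % 400 == 0:
        print(f"generation {gen:4d}: frequency {p:.4f}")
```

Even under these mild assumptions, the trait goes from one-in-a-thousand to near-fixation within roughly 1,500 generations -- a blink relative to the 'permanent' time scale being posited.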

Unless the global totalitarians are artificial entities such as AIs that are somehow immune to any significant evolution or learning in their own right, the elites running the totalitarian state would also be subject to biological evolution. Their heritable values, preferences, and priorities would gradually drift and shift over thousands of generations. Any given dictator might want their family dynasty to retain power forever. But Mendelian segregation, bad mate choices, regression to the mean, and genetic drift almost always disrupt those grand plans within a few generations.
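
To see how quickly regression to the mean undoes a dynasty, here's a minimal sketch using the breeder's equation; the heritability of 0.5 and the founder's +3 SD trait are illustrative assumptions, and the model assumes heirs mate at random with the general population:

```python
# Toy model: expected trait of dynastic heirs under the breeder's equation
# (offspring deviation = h2 * midparent deviation).
h2 = 0.5          # narrow-sense heritability (an illustrative assumption)
deviation = 3.0   # founder's trait, in SD units above the population mean
for gen in range(1, 6):
    # Each heir's mate is drawn from the general population (expected
    # deviation 0), so the midparent deviation halves each generation.
    deviation = h2 * (deviation + 0.0) / 2
    print(f"generation {gen}: expected deviation {deviation:+.2f} SD")
```

By the third generation, the expected heir is statistically indistinguishable from the general population -- the quantitative version of why grand dynastic plans rarely survive.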

So, can someone please point me to any readings that outline a plausible way whereby humans could be subject to any kind of 'global totalitarian oppressive system' across a time scale of more than a hundred generations?


I wouldn't want to put much trust in evolution-based arguments when talking about the long-term future of human civilization, because technology seems so unpredictable, and might offer many ways to stay ahead of any problems thrown up by the slow process of biological evolution:

  • Maybe a totalitarian world government arises, and then performs advanced genetic engineering to make humans much more docile and willing to submit to authority.  Climbing out of that hole could take a long time; maybe the new equilibrium would even be self-reinforcing somehow.
  • Maybe surveillance and mind-reading technologies make it incredibly easy for an oppressive totalitarian system to maintain itself against revolt, so it doesn't really matter how the humans' biology is slowly changing over the millennia -- as long as pro-authoritarian forces stay decidedly ahead of anti-authoritarian forces, the balance will always tend towards lock-in.
  • Over the past several hundred million years, multicellular organisms have successfully managed to maintain control over their individual cells (which might otherwise try to "rebel" against the host body by becoming cancerous).  It's not like being multicellular has gotten any harder as time has gone on; if anything it's probably gotten easier.  In the same way, society might develop better and better mechanisms for policing and suppressing internal dissent over time.
  • Maybe the totalitarian world government goes on a mass sterilization campaign (perhaps putting sterilizing chemicals into the environment), and creates new children via artificial wombs or by giving select citizens a treatment to reverse the sterilization process.  This would obviously throw a wrench into the way that demographic selection effects operate today.
  • Maybe humanity develops a cure for aging, or people upload their minds into computers, in a way that makes them immortal.  Instead of a world government that has to worry about transitions of power and a changing populace over centuries, maybe everyone is unified under a single dictator who can consolidate and wield power indefinitely.  (Consider the many dictators -- Stalin, Mao, Robert Mugabe, Kim Il-sung -- who ruled uncontested right up to the moment of their deaths.)

Jackson -- thanks for the interesting examples. Have you written anything more detailed about any of these, or know anyone who has?

Some of these sound technically feasible within a few decades or centuries, but most raise the issue -- what motivation would the powerful people/AIs/whatever running society have for doing any of these things? Some of them sound pointlessly sadistic, costly, and unaligned with the powerful beings' interests. (For example, why perpetuate a species of docile post-human submissives, instead of just automating whatever one wants done? Why keep copies of everyone's uploaded consciousness, if they're not smart and empowered enough to actually do anything useful?)

I'd love to see some serious game-theory analysis of these kinds of scenarios -- e.g., which kinds of powerful elite behavior (in perpetuating a 'global totalitarian state') would actually make rational sense across millennia, versus which are more like Black Mirror dystopian fantasies that don't serve anyone's long-term interests?

As far as I know, there is really not much EA thought about this idea of "stable totalitarianism", which is odd considering that it is often brought up right when people are introducing the fundamental logic of "longtermist" EA, as you mentioned.  The EA Forum just has a couple of oddball articles, including this one brainstorming how we might try to screen out mean-spirited people to prevent them from rising to power, this section of a post on Brain-Computer Interfaces on how there is obvious totalitarian potential if you can read the minds of your subjects or directly wire reward/punishment into their brains, this essay by Bryan Caplan, and a couple of articles about protecting democracy (although these are more near-term-oriented)...  compared to the usual thoroughness that EA brings to the table, it's pretty lame!

Maybe there are other related subcultures beyond EA, where the idea of stable totalitarianism has been given more thought?  Crypto people are pretty libertarian/paranoid, so maybe they have good takes on this stuff?  Dunno...

One related area where people (including myself) have written a bit more is the "vulnerable world hypothesis" -- situations where you might actually need global totalitarianism in order for humanity to control an incredibly dangerous technology. 

fwiw I think the conventional political science literature, and most historians, would tell you the idea is really out there

Don't historians often write about how the totalitarian governments of the 20th century were enabled by various new technologies?  (I.e., radio and newspapers for propagating ideology, advances in bureaucratic administration that helped nations keep tabs on millions of individual citizens, etc.  People are always mentioning how IBM made the punch-card machines that the Nazis used to help organize the Holocaust.)  I don't think that stable totalitarianism is very plausible with modern-day technology.  But new technology is being developed all the time -- the fear is that, just as the 20th century made "totalitarianism" possible for the first time, the balance of new technology might shift in a way that favors centralization of government power even more strongly.

Political commentators often mention that China has developed a lot of innovative high-tech methods for controlling its Uighur population: AI-based facial tracking and gait analysis to identify people's movements around the city, social credit scores to lock them out of opportunities, forced sterilization to reduce birthrates, etc.  Obviously China's innovations aren't good enough that they'll be able to outcompete the free world and attain perfect global hegemony, or anything like that!  But technology is unpredictable; future surveillance tech might give much bigger advantages to authoritarian systems.

Jackson -- thanks for your comment. 

I agree that historically, new technologies often allow new forms of political control (but also new forms of political resistance and rebellion). We're seeing this with social media and algorithmic 'bubble formation' that increases polarization.

Your last paragraph identifies what I think is the latent fear among many EAs: when they talk about a 'permanent global totalitarian state', I think they're often implicitly extrapolating from the current Chinese state, and imagining it augmented by much stronger AI. Trouble is, I think these fears are often (but not always) based on some pretty serious misunderstandings of China, and its history, government, economy, culture, and ethos. 

By most objective standards, I think the CCP over the last 100 years has actually been more adaptable, dynamic, and flexible in its approach to policy changes than most 'liberal democracies' have been -- with diverse approaches ranging from Mao's centralized economic control to Mao's Cultural Revolution to Deng's economic liberalization to Hu's humble meritocracy to Xi's re-assertive nationalism. Decade by decade, China's policies change quite dramatically, even as the CCP remains in power. By contrast, Western 'liberal democracies' tend to be run by the same deep state bureaucrats and legislatively gridlocked duopolies that rarely deviate from a post-WWII centrist status quo.  Anyway, I think EAs interested in whether 'China + AI' provides a credible model for a 'permanent totalitarian state' could often benefit from learning a bit more about Chinese history over the last century. (Recommended podcasts: 'China Talk' and 'China History Podcast').

This post itself sounds very misinformed about CCP history over the past hundred years. 

Yes, the CCP changes, but not its underlying logic of unlimited power, and all the dangers associated with it.

Yes, it adapts to its external environment in order to survive, but the domestic costs of doing so cannot be lightly overlooked -- including some of the worst famines humanity has ever seen, political purges, mass shootings of teenage students, mass imprisonment, forced labour camps, and so on.

There is a tendency among some China watchers, in their eagerness to 'educate' the West about China, to adopt too quickly the official narrative and history of the CCP. In doing so, they create a dangerous alliance, often out of ignorance more than willingness. Only when one breaks free of CCP official propaganda can one truly begin to see China as it is (and the propaganda can seem terribly enticing: hundreds of millions of people literally lifted out of poverty by the Mother Party, a nation rising on the global stage, developing modern technology, etc.). And I'm beginning to come to the view that the moral instincts of ignorant people reacting to phenomena in China are often more laudable than those of 'experts', who claim to know the subtleties but in effect are finding hopeless justifications for a morally bankrupt system. I'd recommend reading not Western China watchers but well-respected (and often suppressed) Chinese scholars, such as Gao Hua, Qin Hui, and Shen Zhihua, to name a few.

My rough sense of the argument is "AI is immune to all evolution mechanisms so it can stay the same forever, so an AI-governed totalitarian state can be permanent."

AI domination is not the only situation described in this argument, though: it also considers human domination that is aided by AI. In this scenario, your argument about drift in the elite class makes sense.

Thanks, that makes sense.

Although I'm still puzzled by the idea that any highly capable AI would be immune to evolutionary mechanisms. Any system capable of re-engineering itself -- including its values, preferences, motivations, and priorities -- in fairly general and adaptive ways would be subject to significant change over thousands of years.

The idea that one could program a 'master utility function' into an AGI that says 'oppress all humans forever, except favored elites in category X or family Y', and expect that utility function to stay static over millennia, seems very dubious.

Does it seem dubious to you because the world is just too chaotic? How would you describe the reasons for your feeling about this?

My intuition here is that whenever there are long-term conflicts of interest in any evolutionary system (e.g. predators vs. prey, parasites vs. hosts, males vs. females, parents vs. offspring), we almost always see a coevolutionary arms race of adaptation and counter-adaptation.

Any 'global totalitarian' AI with a fixed utility function that's not aligned with the beings it's oppressing, exploiting, or otherwise harming will be vulnerable to counter-adaptations among those beings. If they're biological beings at all, with any semblance of heredity, variation, and differential survival and reproduction, they will be under strong selection to find exploits, vulnerabilities, and countermeasures against the AI. Sooner or later, they will stumble upon some tricks that erode the 'totalitarian control'. If the AI can't counter-adapt, its power will start to wane, and its 'totalitarian control' will start to slip -- like a cheetah that can't adapt to new gazelle escape tactics, or a virus that can't adapt to an immune system.

That's my intuition, anyway. Could easily be wrong. But I'd love to see some writings that address the coevolutionary arms race issue.
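
To make that intuition concrete, here's a deliberately crude toy simulation (every number is an arbitrary assumption, and 'evasion' stands in for whatever heritable countermeasures the oppressed beings might evolve): a controller with a locked-in detection policy faces a population whose evasion trait mutates and is selected each generation.

```python
import random

GENS, POP, MUT_SD = 300, 1000, 0.02

# Static controller with a locked-in policy: it catches an individual with
# probability (1 - trait), where 'trait' is a heritable evasion ability in
# [0, 1]. The policy never updates.
def escapes(trait):
    return random.random() < trait

population = [random.uniform(0.0, 0.1) for _ in range(POP)]  # initially easy to catch

for gen in range(GENS + 1):
    if gen % 50 == 0:
        print(f"gen {gen:3d}: mean evasion {sum(population) / POP:.2f}")
    # Individuals that escape detection reproduce, with small mutations.
    survivors = [t for t in population if escapes(t)] or random.sample(population, 10)
    population = [min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, MUT_SD)))
                  for _ in range(POP)]
```

In runs of this toy, mean evasion climbs steadily until the fixed policy catches almost no one. Obviously nothing about the real world is this simple; it just illustrates the erosion dynamic.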

Thanks for sharing! 

A counterpoint I thought of: just as there's no consequential coevolution happening between humans and other mammalian species (we change too fast for them to keep up), perhaps AI could grow on such fast time scales that humans can't hope to coevolve with it in any meaningful way.

Maybe. But it seems like we have to pick one: either

(1) Powerful AI tries to impose global permanent totalitarian oppression based on its own stable, locked-in values, preferences, and priorities... which would make it static and brittle, and a sitting duck for coevolution by any beings it's exploiting, 

or

(2) Powerful AI tries to impose oppression based on its own nimble, adaptive, changeable values, preferences, and priorities... which could co-evolve faster than any beings it's exploiting, but which would mean it's no longer 'permanent' in terms of the goals and nature of its totalitarian oppression.
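
Horn (2) can be illustrated by tweaking the toy model from my earlier comment (again, all numbers are arbitrary assumptions): let the controller re-tune its detection threshold every generation to track the population.

```python
import random

GENS, POP, MUT_SD = 300, 1000, 0.02

population = [random.uniform(0.0, 0.1) for _ in range(POP)]
policy = 0.2  # detection threshold -- now revisable each generation

for gen in range(GENS + 1):
    if gen % 50 == 0:
        print(f"gen {gen:3d}: mean evasion {sum(population) / POP:.2f}, "
              f"policy {policy:.2f}")
    # Adaptive controller: re-tune the threshold to sit above the population.
    policy = max(population) + 0.01
    # Anyone below the threshold is caught; a small baseline fraction
    # escapes regardless of trait.
    survivors = [t for t in population if t > policy or random.random() < 0.05]
    survivors = survivors or random.sample(population, 10)
    population = [min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, MUT_SD)))
                  for _ in range(POP)]
```

Control never erodes in this version -- but notice that the policy the controller enforces keeps changing to track its subjects, which is exactly why I'd hesitate to call the result 'permanent' in any interesting sense.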

Thanks for writing this and your responses in the comments. I find your argument -- that a stable, permanent totalitarian state is biologically and empirically unlikely, and perhaps not even a coherent or realistic concept -- really interesting and quite persuasive.

Thanks; I appreciate it. Will try to develop this into a longer, more coherent, and better-referenced argument at some point.

Maybe one could argue via the second-species argument/gorilla problem (Russell in Human Compatible)? Seems plausible to me that we're currently imposing a totalitarian global state on many factory-farmed animals -- and we probably could do this permanently.

Lennart -- thanks for the link. I understand the analogy.

The question is: would our totalitarian global state of factory farming actually be stable and permanent, in the sense of lasting thousands of generations?

Seems like we raise animals for meat, and they suffer. If we enjoy faster technological progress than the farm animals, we'll eventually invent ways to grow their meat without having to raise them at all. Their suffering isn't causally relevant to meat production; it's a negative by-product. 

So any AI capable of imposing a global totalitarian state in order to exploit our labor (or whatever it's getting from us), should be able to find more efficient alternatives to raising humans at all. Like if the Machines in the Matrix movies found a better way to produce energy (e.g. fusion?) than keeping humans around as 'batteries' locked in a totalitarian virtual reality.

In which case we face a true extinction risk, not a non-extinction totalitarian lock-in risk.
