Will enhanced government control of populations' behaviors and ideologies become one of AI's biggest medium-term safety risks?
For example, China seems determined to gain a decisive lead in AI research by 2030, according to the new plan released this summer by its State Council:
https://www.newamerica.org/documents/1959/translation-fulltext-8.1.17.pdf
One of China's key proposed applications is promoting 'social stability' and automated 'social governance' through comprehensive monitoring of public spaces (through large-scale networks of sensors for face recognition, voice recognition, movement patterns, etc.) and social media spaces (through large-scale monitoring of online activity). This would allow improved 'anti-terrorism' protection, but also much easier automated monitoring and suppression of dissident people and ideas.

Over the longer term, inverse reinforcement learning could allow AI systems to learn to model the current preferences and likely media reactions of populations, allowing new AI propaganda systems to pre-test ideological messaging with much more accuracy, shaping government 'talking points', policy rationales, and ads to be much more persuasive (see the toy sketch below). Likewise, the big US, UK, and EU media conglomerates could weaponize AI ideological engineering systems to shape more effective messaging in their TV, movies, news, books, magazines, music, and websites -- insofar as they have any ideologies to promote. (I think it's become pretty clear that they do.)

As people spend more time with augmented reality systems, AI systems might automatically attach visual labels to certain ideas as 'hate speech' or certain people as 'hate groups', allowing mass automated social ostracism of dissident opinions. As people spend more time in virtual reality environments during education, work, and leisure, AI ideological control might become even more intensive, resulting in most citizens spending most of their time in an almost total disconnect from reality. Applications of AI ideological control in mass children's education seem especially horrifying.
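To make the pre-testing step concrete, here's a minimal sketch of my own (not drawn from any known system; the messages, reaction labels, and model choice are all invented). It uses a plain off-the-shelf text classifier as a much simpler stand-in for the inverse-reinforcement-learning idea: train on past audience reactions, then rank candidate talking points before release.

```python
# Toy illustration only: a hypothetical "message pre-testing" loop.
# A model trained on past audience reactions scores candidate talking
# points before release.  All data below is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical history: past messages and whether the target audience
# reacted favourably (1) or unfavourably (0).
past_messages = [
    "new policy keeps families safe",
    "new policy expands surveillance powers",
    "stability requires unity",
    "critics raise privacy concerns",
]
audience_reaction = [1, 0, 1, 0]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(past_messages)
model = LogisticRegression().fit(X, audience_reaction)

# Pre-test candidate talking points: release only those predicted to land well.
candidates = [
    "unity and stability keep families safe",
    "the policy expands monitoring of critics",
]
scores = model.predict_proba(vectorizer.transform(candidates))[:, 1]
for message, p in sorted(zip(candidates, scores), key=lambda t: -t[1]):
    print(f"{p:.2f}  {message}")
```

Nothing here is sophisticated -- that's the point. With real reaction data at scale (polls, engagement metrics, focus groups), even a crude approach like this could rank messages much faster than manual testing.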
Compared to other AI applications, suppressing 'wrong-think' and promoting 'right-think' seems relatively easy. It requires nowhere near AGI. Data mining companies such as YouTube, Facebook, and Twitter are already using semi-automatic methods to suppress, censor, and demonetize dissident political opinions. And governments have strong incentives to implement such programs quickly and secretly, without any public oversight (which would undermine their utility by empowering dissidents to develop counter-strategies). Near-term AI ideological control systems don't even have to be as safe as autonomous vehicles, since their accidents, false positives, and value misalignments would be invisible to the public, hidden deep within the national security state.
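As a concrete illustration of how little capability this takes, here's a toy sketch of my own (the thresholds, scores, and action names are invented, not a description of any platform's actual pipeline): 'semi-automatic' can be as simple as thresholding a classifier's score and routing borderline cases to human reviewers.

```python
# Toy sketch of a "semi-automatic" moderation pipeline: threshold a
# model's policy-violation score and route borderline cases to humans.
# Thresholds, scores, and action names are invented placeholders.
from dataclasses import dataclass

AUTO_ACTION_THRESHOLD = 0.90   # above this: act automatically
REVIEW_THRESHOLD = 0.60        # above this: queue for human review

@dataclass
class Decision:
    item_id: str
    score: float
    action: str

def route(item_id: str, violation_score: float) -> Decision:
    """Map a classifier's 'policy violation' score to a moderation action."""
    if violation_score >= AUTO_ACTION_THRESHOLD:
        action = "auto_demonetize"
    elif violation_score >= REVIEW_THRESHOLD:
        action = "human_review"
    else:
        action = "no_action"
    return Decision(item_id, violation_score, action)

if __name__ == "__main__":
    for item, score in [("vid_a", 0.95), ("vid_b", 0.72), ("vid_c", 0.10)]:
        print(route(item, score))
```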
AI-enhanced ideological control of civilians by governments and by near-monopoly corporations might turn into '1984' on steroids. We might find ourselves in a 'thought bubble' that's very difficult to escape -- long before AGI becomes an issue.
This probably isn't an existential risk, but it could be a serious threat to human and animal welfare whenever governments and near-monopolies realize that their interests diverge from those of their citizens and non-human subjects. And it could increase other global catastrophic risks wherever citizen oversight could decrease risks from bioweapons, pandemics, nuclear weapons, other more capable AI systems, etc.
Has anyone written anything good on this problem of AI ideological engineering systems? I'd appreciate any refs, links, or comments.
(I posted a shorter version of this query on the 'AI Safety Discussion' group in Facebook.)
It does look like AI and deep learning will by default push toward greater surveillance and greater power for intelligence agencies. It could supercharge passive surveillance of online activity, enable prediction of future crime, and make lie detection reliable.
But here's the catch. Year on year, AI and synthetic biology become more powerful and accessible. On the Yudkowsky-Moore law of mad science: "Every 18 months, the minimum IQ necessary to destroy the world drops by one point." How could we possibly expect to be headed toward a stably secure civilization, given that the destructive power of technologies is increasing more quickly than we are really able to adapt our institutions and ourselves to deal with them? An obvious answer is that in a world where many can engineer a pandemic in their basement, we'll need greater online surveillance to flag when they're ordering a concerning combination of lab equipment, or to more sensitively detect homicidal motives.
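To make the lab-equipment flagging idea concrete, here's a toy sketch of my own (the equipment categories, the 90-day window, and the 'concerning combination' are invented placeholders): purchases that are individually innocuous trigger a flag only when they co-occur for one buyer within a time window.

```python
# Toy purchase-pattern flagging: individually innocuous orders trigger a
# flag only in combination within a time window.  Categories, window, and
# the flagged combination are invented for illustration.
from collections import defaultdict
from datetime import datetime, timedelta

CONCERNING_COMBO = {"dna_synthesizer", "bsl_filtration", "select_agent_reagent"}
WINDOW = timedelta(days=90)

purchases = defaultdict(list)  # buyer_id -> [(timestamp, category), ...]

def record(buyer_id: str, category: str, when: datetime) -> bool:
    """Record a purchase; return True if the buyer should be flagged."""
    purchases[buyer_id].append((when, category))
    recent = {cat for (t, cat) in purchases[buyer_id] if when - t <= WINDOW}
    return CONCERNING_COMBO.issubset(recent)

if __name__ == "__main__":
    start = datetime(2017, 9, 1)
    print(record("buyer_42", "dna_synthesizer", start))                           # False
    print(record("buyer_42", "bsl_filtration", start + timedelta(days=10)))       # False
    print(record("buyer_42", "select_agent_reagent", start + timedelta(days=20))) # True
```

A real system would presumably score probabilistically rather than apply hard rules, but the same surveillance trade-off applies either way.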
On this view, the issue of ideological engineering from governments that are not acting in service of their people is one we're just going to have to deal with...
Another thought is that there will be huge effects from AI (like the internet in general) that come from corporations rather than government. Interacting with apps aggressively tuned for profit (e.g. a supercharged version of the vision described in the Time Well Spent video - http://www.timewellspent.io/) could - I don't know - increase the docility of the populace or have some other wild effects.
The last chapter of Global Catastrophic Risks (Bostrom and Ćirković) covers global totalitarianism. Among other things, they mention how improved lie-detection technology, anti-aging research (to mitigate risks of regime change), and drugs to increase docility in the population could plausibly make a totalitarian system permanent and stable. Obviously an unfriendly AGI could easily do this as well.
The increasing docility could be a stealth existential-risk increaser, in that people would be less willing to challenge other people's ideas, and so slow or stop entirely the technological progress we need to save ourselves from supervolcanoes and other environmental threats.
I'm also worried about the related danger of AI persuasion technology being "democratically" deployed upon open societies (i.e., by anyone with an agenda, not necessarily just governments and big corporations), with the possible effect that, in the words of Paul Christiano, "we’ll live to see a world where it’s considered dicey for your browser to uncritically display sentences written by an untrusted party." This is arguably already true today for those especially vulnerable to conspiracy theories, but it will eventually affect more and more people as the technology improves. How will we solve our collective problems when the safety of discussion is degraded to such an extent?
This seems like a more specific case of a general problem with nearly all research on persuasion, marketing, and advocacy. Whenever you do research on how to change people's minds, you increase the chances of mind control. And yet, many EAs seem to do this: at least in the animal area, a lot of research pertains to how we advocate, research that can be used by industry as well as effective animal advocates. The AI case is definitely more extreme, but I think it depends on a resolution to this problem.
I resolve the problem in my own head (as someone who plans on doing such research in the future) with two views: the people most likely to use the evidence are the more evidence-based people (and I think there's some evidence of this in electoral politics), and the evidence will likely be more useful to EA types than to others (a study on how to make people more empathetic will probably help animal advocates, who are trying to increase empathy, more than industry, which wants to reduce it). These are fragile explanations, though, and one would think an AI would be completely evidence-based and would a priori have as much evidence available to it as those trying to resist it would have.
Also, this article on nationalizing tech companies to prevent unsafe AI may speak to this issue to some degree: https://www.theguardian.com/commentisfree/2017/aug/30/nationalise-google-facebook-amazon-data-monopoly-platform-public-interest
The same can be said for messages that come from non-government sources. Governments have always had an advantage in resources and laws, so they've always had the high ground in information warfare and propaganda, yet dissenting ideas still spread frequently. I don't see why the balance would shift.
Likewise, the same reasoning goes for small and independent media and activist groups.
Yeah, it is a problem, though I don't think I would classify it as AI safety. The real issue is one of control and competition. YouTube is effectively a monopoly and Facebook/Twitter are sort of a duopoly, and all of them are in the same Silicon Valley sphere with the same values and goals. Alternatives have little chance of success because of a combination of network effects and the 'Voat Phenomenon' (any alternative to the default platform will first attract the extreme types who were the first to be ostracized by the main platform, so that the alternative will forever have a repulsive core community and a tarnished reputation). I'm sure AI can be used as a weapon to either support or dismantle the strength of these institutions; it seems better to approach it from a general perspective rather than just an AI one.
I'd imagine there are several reasons this question hasn't received as much attention as AGI Safety, but the main reasons are that it's both much lower impact and (arguably) much less tractable. It's lower impact because, as you said, it's not an existential risk. It's less tractable because even if we could figure out a technical solution, there are strong vested interests against applying the solution (as contrasted to AGI Safety, where all vested interests would want the AI to be aligned).
I'd imagine this sort of tech would actually decrease the risk from bioweapons etc for the same reason that I'd imagine it would decrease terrorism generally, but I could be wrong.
Regarding the US in particular, I'm personally much less worried about the corporations pushing their preferred ideologies than about them using the tech to manipulate us into buying stuff and watching their media - companies tend to be much more focused on profits than on pushing ideologies.
I think this is part of the backdrop to my investigation into the normal computer control problem. People don't have control over their own computers, and the bad actors who do get control could be criminals, a malicious state, or AIs.