In nearly every discussion I've had about a potential delay or pause in AI research, someone has responded with the quip: "If we don't build AGI, then China will, which is an even worse possible world." This is taken at face value, and I've never seen it seriously challenged.

This does not seem obvious to me.

Given China's semiconductor supply chain issues, its historical lack of cutting-edge, innovative technology research, and the tremendous challenges powerful AI systems may pose to the governing party and its ideology, it seems highly uncertain that China would develop AGI in a world where Western orgs stopped developing improved LLMs.

I appreciate that people can point to multiple countries, including ones without impressive historical research credentials, that developed nuclear weapons independently.

Beyond this, can anyone point me to, or outline, arguments in favour of the idea that China is very likely to develop AGI+ even if Western orgs cease research in this field?

I don't have a strong view on this topic, but given that so many people assume it to be true, I would like to better understand the arguments in support of this claim.


DMMF - I also encounter this claim very often on social media: "If the US doesn't rush ahead towards AGI, China will, and then we lose." It's become one of the most common objections to slowing down AI research by US companies, and it is repeated ad nauseam by anti-AI-safety accelerationists.

I agree with you that it's not at all obvious that China would rush ahead with AI if the US slowed down. China's CCP leadership already seems pretty concerned with X risks and global catastrophic risks, e.g. climate change. Xi Jinping's concept of a 'community of common destiny' emphasizes humanity's shared vulnerability to runaway technological developments such as space-based weapons (and, perhaps, AI). Chinese science fiction movies (e.g. Shanghai Fortress, The Wandering Earth) routinely depict China saving the rest of humanity from X-risks after other nations have failed. I think China increasingly sees itself as the wise elder trying to keep impetuous, youthful, reckless America from messing everything up for everybody.

If China were more expansionist, imperialistic, and aggressive, I'd be more concerned that they would push ahead with AI development for military applications. Yes, they want to retake Taiwan, and they will, sooner or later. But they're not showing the kind of generalized western-Pacific expansionist ambitions that Japan showed in the 1930s. As long as the US doesn't meddle too much in the 'internal affairs of China' (which they see as including Taiwan), there's little need for a military arms race involving AI.

I worry that Americans tend to think and act as if we are the only people in the world who are capable of long-term thinking, X risk reduction, or appreciation of humanity's shared fate. As if either the US dominates the world with AI, or other nations such as China will develop dangerous AI without any concern for the consequences. The evidence so far suggests that China might actually be a better steward of our global safety than the US is being, at least in the domain of AI development.

> evidence so far suggests that China might actually be a better steward of our global safety than the US is being

Here's a thought experiment: if we lived in China, could we suggest on a Chinese forum that 'the US might actually be a better steward of our global safety than China is being, at least in the domain of AI development'?

Could we have a discussion that was honest, free, and open with no fear of censorship or consequences?

Where are all the public discussions in China about how the CCP needs to be more responsible in how it uses AI, how perhaps it sh...

I'm not a China expert, but I have some experience running classes and discussion forums in a Chinese university. In my experience, people in China feel considerably more freedom to express their views on a wide variety of issues than Westerners typically think they do. There is a short list of censored topics, centered on criticism of the CCP itself, Xi Jinping, Uyghurs, Tibet, and Taiwan. But I would bet that they have plenty of freedom to discuss AI X risks, alignment, and the geopolitical issues around AI. Kai-Fu Lee, the Beijing-based author of 'AI Superpowers' (2018), is a case in point: he is a huge tech celebrity in China who speaks frequently on college campuses there, despite being a vocal critic of some government tech policies.

Conversely, there are plenty of topics in the West, especially in American academia, that are de facto censored (through cancel culture). For example, it was much less trouble to teach about evolutionary psychology, behavior genetics, intelligence research, and even sex research in a Chinese university than in an American university.

Hard disagree on the reasoning behind why China might not pursue AGI. After almost a decade there, China seems to me no more concerned about global X-risks than any other country; it is simply quite profitable for the leadership to signal that they are. The Chinese government's signaling that it wishes for safe AGI is just a case of sour grapes: they are nowhere near deploying anything significant, so it is completely costless for them to claim restraint. In actuality, they have spent decades trying to build up their own semiconductor industry, unsuccessfully...

PS I'd encourage folks to read this excellent article on whether China is being over-hyped as an AI rival.

It all depends on what timeline you're considering. Can China do it in a year? Very likely not. Can it do it if left alone for a century? Probably yes. What time frame do you have in mind?

I have a few questions and a lot of things that give me pause:

  1. Even assuming that the pursuers come to know the risks - perhaps that AGI may ultimately betray its users - why would that diminish its appeal? Some % of people have always been drawn to the pursuit of power without much concern for the potential risks.
  2. Why would leaders in China view AGI they controlled as a threat to their power? It seems that artificial intelligence is a key part of how the Chinese government currently preserves its power internally, and it's not a stretch to see how artificial intelligence could help massively in external power projection, as well as in economic growth.
  3. Why assume Chinese incompetence in the area of AI? China invests a lot of money into AI, uses it in almost all areas of society, and aims for global leadership in this area by 2030. China also has a large pool of AI researchers and engineers, a lot of data, and few data protections for individuals. Assuming incompetence is not only unwise, it disregards genuine Chinese achievements, and in some cases it's prejudiced. Do you really want to say that China does not perform innovative technology research?
  4. If China is genuinely struggling (economically, technologically, etc.), why would leaders abandon the pursuit of AGI? I would have thought the opposite. History suggests that countries which see themselves as having a narrow window of opportunity to achieve victory are the most dangerous. And fuzzy assumptions of benevolence are unwise: Xi Jinping has told the Chinese military to prepare for war, while overseeing one of the fastest military expansions in history, and he has consolidated authority around himself.
  5. Given the potential risks associated with the development of AGI, what approach do you recommend for slowing down its pursuit: a unilateral approach where countries like the US and UK take the initiative, or a multilateral approach where countries like China are included and formal agreements (including verification arrangements) are established? How would you establish trust while also preventing authoritarian regimes from gaining AGI supremacy? The article you linked mentions a lot of "maybes" - maybe China would not gain supremacy - but to be honest, given the high stakes, Western policymakers would want much higher confidence.

Agreed. There's also an argument to be made that China doesn't want the widespread use of large language models to destabilise its censorship agenda. In fact, maybe China also doesn't want to hurtle towards the end of the world with reckless abandon just to say they won. Slowing down AI in the West might let them take a safer approach, since they would no longer be trying to keep up appearances in the AI race.