Late last week, the Biden administration announced a new set of regulations that make it illegal for US companies to export a range of AI-related products and services to China (Financial Times coverage here (paywalled); Reuters here). By all accounts this new export controls policy is a big deal for US-China competition and Chinese AI progress. Its far-reaching effects touch on a number of issues important to effective altruists, AI safety strategists, and folks interested in reducing global catastrophic risks. This is a quickly written post in which I summarize my understanding of the new regulations and their probable effects, then list some questions the announcement prompted regarding the effect on various global catastrophic risks.
The new export controls policy
If you pick a random article commenting on the new export controls policy, you’ll probably see some dramatic language. Reuters describes the new rules as an attempt to “hobble” China’s chip industry. CSIS’s Greg Allen writes that they seek to “[choke] off China’s access to the future of AI”. ChinaTalk’s Jordan Schneider thinks they “will reshape the global semiconductor industry and the future of the US-China relationship”.
The export controls seek to slow or stop Chinese companies from developing cutting-edge AI capabilities. They do this by making it illegal for US companies, US citizens, and foreign companies that use US products to sell AI-related hardware, software, and services to Chinese entities. This kind of policy is not new in itself: for years, the US has prohibited companies from selling AI tech to China for military use. The new regulations are notable, however, for being much broader than previous restrictions, no longer limited to military uses in particular.
Allen writes that the regulations target four “chokepoints” in the AI development supply chain. Restrictions have been placed on the export of high-end computer chips, chip design software, chip manufacturing equipment, and manufacturing equipment components. I am going to summarize Allen’s explanations very briefly. If this is relevant to your work I strongly recommend you go read his article, which is packed with interesting and important details. Schneider also has a digestible tweet thread explanation you can read here.
1. The best computer chips
The new policies will functionally end the sale of the “best” computer chips, i.e. any chip above a certain performance threshold, to China. The US had already blocked the sale of such chips to China’s military. However, Allen writes that this policy was ineffective due to military-civil fusion in China; the boundary between military and non-military organizations in China is purposefully and officially blurry. Unable to distinguish military from non-military uses, the US government has decided to just block the sale of high-end chips to China entirely.
2. The software used to design chips
Advanced chips are so complex that specialized software is required to design them. The new regulations also ban companies from providing this software to Chinese companies. This will make it harder for Chinese companies to design new chips that can compete with the high-end chips produced by, e.g., Nvidia and AMD, to which they have just lost access.
3. The equipment used to make semiconductors
Allen writes that Chinese companies could yet circumvent chokepoints (1) and (2) by designing and manufacturing chips using older programs and equipment. But chokepoint (3) seeks to stop this by banning the sale of chip manufacturing equipment that exceeds a certain performance threshold. US citizens are also no longer allowed to help repair or maintain such equipment in China. Allen writes that this is a “devastating blow” for Chinese chip manufacturers. Schneider says these restrictions are already “wreaking havoc” as US citizens in the semiconductor industry in China have stopped working for fear of violating them.
4. The components used to make new semiconductor manufacturing equipment
Finally, to slow potential efforts by China to nurture domestic production of semiconductor manufacturing equipment, a range of components critical for building these machines are now under export controls. Without them, Allen writes, China will be “starting from scratch” in building up this industry, with “seven decades” of achievements and experience to replicate; “an extremely tall mountain to climb.”
Questions I’m left with
One of the takeaways that Allen leaves his readers with is that “this policy signals that the Biden administration believes the hype about the transformative potential of AI and its national security implications is real.” That sentiment probably feels familiar to many readers of this forum. Given the obvious links to EA, I’m left with three kinds of questions. I think more work in this vein, drawing out the implications for both policymakers and EA researchers, could be valuable.
First, I have questions about AI capabilities, progress, and timelines. When, if ever, will China’s AI capacity catch up with the US’s? How does this line up with AI timelines? My impression is that the future trajectory of the AI capabilities of the leading organizations in China remains uncertain, despite the reach of the export controls. Key uncertainties here are the extent to which the US is able to bring allies on board to make this a truly global blockade, and how effective a response the Chinese government and companies can muster. Perhaps in the long-term, the effect of these controls will be to spur the development of a powerful and entirely decoupled AI industry within China. While the technical challenges are considerable, it’s unclear to me whether the drag on Chinese AI progress should be measured in years or decades.
Second, I have questions about the likelihood of conflict. The evidence on how strongly economic interdependence promotes peace is surprisingly mixed. But it’s certainly plausible to me that decoupling in critical sectors makes conflict more likely (at least at the margin) by lowering the costs that supply chain disruption would impose on each side in a conflict. On the other hand, there are also conditions under which this policy would lower the chance of conflict. This may be the case if (1) transition points, when one country is overtaken by another, are particularly dangerous, and (2) this policy delays that transition point for the US and China. It’s unclear to me how these drivers net out. And even if export controls do increase the probability of conflict, the policy’s net effect on total existential risk is still unclear.
Finally, I have questions about cooperation on other existential risks. To what extent do tension-raising actions in national security hamper efforts to cooperate in other important domains, such as on climate change, pandemic prevention, or space governance? I am not one to naively suggest “cooperation” as a panacea for risk reduction. In many cases, competition and policy diversification can make us more robust to risks at a civilizational scale. But there are other domains where cooperation will clearly be needed to protect global commons and solve coordination problems at the planetary level. Who is thinking about the cross-domain linkages here? What is the long-term outlook?
I'm sure the EAs whose work is affected by these new policies are already aware of them. But I was surprised by the strength of the new policies, their direct relevance to AI progress, and the lack of discussion of linkages to other domains.
Fair point, the answer is unclear and could change. The most important fact IMO is that two of the leading AGI companies, OpenAI and DeepMind (the latter owned by Alphabet), are explicitly concerned with x-risk and have invested seriously in safety. (Not as much as I’d like, but significant investments.) I’d rather those companies reach AGI than others who don’t care about safety. Both are American-owned and benefit, relative to Chinese companies, from US policy that slows China.
Second, while I don’t think Joe Biden himself thinks or cares about AI x-risk, I do think US policymakers are more likely than their Chinese counterparts to be convinced of the importance of AI x-risk. Most of the people arguing for AI risk are English-speaking, and I think they’re gaining some traction. Some evidence:
The Global Catastrophic Risk Management Act introduced by Senators Portman and Peters is clearly longtermist in motivation. From the act: “Not later than 1 year after the date of enactment of this Act, the President, with support from the committee, shall conduct and submit to Congress a detailed assessment of global catastrophic and existential risk.” Several press releases explicitly mentioned risks from advanced AI, though not the alignment problem. This seems indicative of longtermist and EA ideas gaining traction in DC.
https://www.congress.gov/bill/117th-congress/senate-bill/4488/text
https://www.hsgac.senate.gov/media/minority-media/portman-peters-introduce-bipartisan-bill-to-ensure-federal-government-is-prepared-for-catastrophic-risks-to-national-security-
The National Security Commission on AI, commissioned by Congress in 2018, did not include x-risk in its report, which is disappointing. That group, led by former Google CEO Eric Schmidt, has continued its policy advocacy as the Special Competitive Studies Project. They are evidently aware of x-risk concerns: they cited Holden Karnofsky’s writeup of the “most important century” hypothesis. Groups like these seem like they could be persuaded of the x-risk hypothesis, and could successfully advocate sensible policy to the US government.
https://www.scsp.ai/reports/mid-decade-challenges-for-national-competitiveness/preface/
Finally, there are think tanks that explicitly care about AI x-risk. My understanding is that CSET and CNAS are the two leaders, but the strong EA grantmaking system could easily spur more, and more successful, advocacy.
On the other hand, I’m unaware of a single major group in China that professes to care about x-risk from AI. I might not know even if such groups did exist, so if there’s any evidence I’d love to hear it. China does seem to have much stronger regulatory capacity, and would probably be better at implementing compute controls and other “pivotal acts”. But without a channel for communicating why it should do so, I’m skeptical that it will.