
Late last week, the Biden administration announced a new set of regulations that make it illegal for US companies to export a range of AI-related products and services to China (Financial Times coverage here (paywalled); Reuters here). By all accounts this new export controls policy is a big deal for US-China competition and Chinese AI progress. Its wide-reaching effects touch on a number of issues important to effective altruists, AI safety strategists, and folks interested in reducing global catastrophic risks. This is a quickly-written post in which I summarize my understanding of the new regulations and their probable effects, then list some questions the announcement prompted regarding the effect on various global catastrophic risks.

The new export controls policy

If you pick a random article commenting on the new export controls policy, you’ll probably see some dramatic language. Reuters describes the new rules as an attempt to “hobble” China’s chip industry. CSIS’s Greg Allen writes that they seek to “[choke] off China’s access to the future of AI”. ChinaTalk’s Jordan Schneider thinks they “will reshape the global semiconductor industry and the future of the US-China relationship”.

The export controls seek to slow or stop Chinese companies from developing cutting-edge AI capabilities. They do this by making it illegal for US companies, US citizens, and foreign companies that use US products to sell AI-related hardware, software, and services to Chinese entities. This kind of policy is not new in itself. For years, the US has prohibited companies from selling AI tech to China for military use. However, the new regulations are notable for being much broader than previous restrictions: they apply across the board, no longer targeting military uses in particular.

Allen writes that the regulations target four “chokepoints” in the AI development supply chain. Restrictions have been placed on the export of high-end computer chips, chip design software, chip manufacturing equipment, and manufacturing equipment components. I am going to summarize Allen’s explanations very briefly. If this is relevant to your work I strongly recommend you go read his article, which is packed with interesting and important details. Schneider also has a digestible tweet thread explanation you can read here.

1. The best computer chips

The new policies will functionally end the sale of the “best” computer chips, i.e. any chip above a certain performance threshold, to China. The US had already blocked the sale of such chips to China’s military. However, Allen writes that this policy was ineffective due to military-civil fusion in China; the boundary between military and non-military organizations in China is purposefully and officially blurry. Unable to distinguish military from non-military uses, the US government has decided to just block the sale of high-end chips to China entirely.

2. The software used to design chips

Advanced chips are so complex that specialized software is required to design them. The new regulations also ban companies from providing this software to Chinese companies. This will make it harder for Chinese companies to design new chips that can compete with the high-end chips produced by, e.g., Nvidia and AMD, to which they have just lost access.

3. The equipment used to make semiconductors

Allen writes that Chinese companies could yet circumvent chokepoints (1) and (2) by designing and manufacturing chips using older programs and equipment. But chokepoint (3) seeks to stop this by banning the sale of chip manufacturing equipment that exceeds a certain performance threshold. US citizens are also no longer allowed to help repair or maintain such equipment in China. Allen writes that this is a “devastating blow” for Chinese chip manufacturers. Schneider says these restrictions are already “wreaking havoc” as US citizens in the semiconductor industry in China have stopped working for fear of violating them. 

4. The components used to make new semiconductor manufacturing equipment

Finally, to slow potential efforts by China to nurture domestic production of semiconductor manufacturing equipment, a range of components critical for building these machines are now under export controls. Without them, Allen writes, China will be “starting from scratch” in building up this industry, with “seven decades” of achievements and experience to replicate; “an extremely tall mountain to climb.”

Questions I’m left with

One of the takeaways that Allen leaves his readers with is that “this policy signals that the Biden administration believes the hype about the transformative potential of AI and its national security implications is real.” That sentiment probably feels familiar to many readers of this forum. Given the obvious links to EA, I’m left with three kinds of questions. I think more work in this vein, drawing out the implications for both policymakers and EA researchers, could be valuable.

First, I have questions about AI capabilities, progress, and timelines. When, if ever, will China’s AI capacity catch up with the US’s? How does this line up with AI timelines? My impression is that the future trajectory of the AI capabilities of the leading organizations in China remains uncertain, despite the reach of the export controls. Key uncertainties here are the extent to which the US is able to bring allies on board to make this a truly global blockade, and how effective a response the Chinese government and companies can muster. Perhaps in the long-term, the effect of these controls will be to spur the development of a powerful and entirely decoupled AI industry within China. While the technical challenges are considerable, it’s unclear to me whether the drag on Chinese AI progress should be measured in years or decades.

Second, I have questions about the likelihood of conflict. The evidence on how strongly economic interdependence promotes peace is surprisingly mixed. But it’s certainly plausible to me that decoupling in critical sectors makes conflict more likely (at least at the margin) by lowering the costs of supply chain disruption. On the other hand, there are also conditions under which this policy would lower the chance of conflict. This may be the case if (1) transition points when one country is overtaken by another are particularly dangerous, and (2) this policy delays that transition point for the US and China. It’s unclear to me how these drivers net out. And if export controls do increase the probability of conflict, the policy’s net effect on total existential risk is also unclear.

Finally, I have questions about cooperation on other existential risks. To what extent do tension-raising actions in national security hamper efforts to cooperate in other important domains, such as on climate change, pandemic prevention, or space governance? I am not one to naively suggest “cooperation” as a panacea for risk reduction. In many cases, competition and policy diversification can make us more robust to risks at a civilizational scale. But there are other domains where cooperation will clearly be needed to protect global commons and solve coordination problems at the planetary level. Who is thinking about the cross-domain linkages here? What is the long-term outlook?

I'm sure the EAs whose work is affected by these new policies are already aware of them. But I was surprised by the strength of the new policies, their direct relevance to AI progress, and the lack of discussion of linkages to other domains.

Comments

I don't have good answers to your questions, but I just want to say that I'm impressed and surprised by the decisive and comprehensive nature of the new policies. It seems that someone or some group actually thought through what would be effective policies for achieving maximum impact on the Chinese AI and semiconductor industries, while minimizing collateral damage to the wider Chinese and global economies. This contrasts strongly with other recent US federal policy-making that I've observed, such as COVID, energy, and monetary policies. Pockets of competence seem to still exist within the US government.

Great writeup! I also wrote about the restrictions here, with some good discussion. A few thoughts:

  • I think this slows China's AI progress by a few years. Losing Nvidia GPUs alone is a serious hit to ML researchers in China. They are building their own alternatives, for example CodeGeeX is a GPT-sized language model trained entirely on Chinese GPUs. But this makes GPUs more scarce. 
  • It probably also reduces US influence over China and Chinese AI in the future. We're making them less reliant on us now, meaning we can't use GPUs as leverage to force safety standards or other kinds of cooperation in the future. 
    • I agree with your concern about cooperation on other existential risks. If we want to work together on climate change or banning research on dangerous pathogens, this hurts us. 
  • China more or less does not care about AI safety from existential risks. Therefore, slowing their timelines is good, but sacrificing US influence over China is bad. It's unclear how these two balance out. If you have longer timelines, you'd probably prioritize long-term influence. 
  • I think this definitely increases the chances of war with China, because it's explicitly designed to prepare for a possible war. It's Tom Cotton's strategy of economic decoupling to "Beat China".
    • From the standard US foreign policy viewpoint, I think this shift is well-warranted. Cooperation with China on trade has not given us the soft power we hoped it would. They're as anti-democratic as ever, still committing human rights abuses and arguably only growing more aggressive. It's time to move from the carrot to the stick. 
    • From an EA standpoint placing higher value on existential risk and lower value on typical US foreign policy interests, conflict with China definitely looks worse. The US stance looks like it would rather start World War III than allow China to become the top global superpower. I do believe that US democratic values are much better for the world than authoritarianism, and I'm scared of long-term authoritarianism, but solving it with global war doesn't help. 
    • This is analogous to the question of Ukraine: Do you support a democratic nation attacked by a despot at risk of nuclear war? Avoiding armageddon has to be the top priority, but over the years we've been able to pursue our other interests without spiraling into nuclear war. 
    • Important operationalization: Do we defend Taiwan with US troops? Biden says yes. Taiwan is very important to defend (not least for TSMC and its semiconductors), but I think it's probably better to lose Taiwan than raise the chances of nuclear war by 0.1%. 
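
To make the shape of that last tradeoff explicit, here is a rough back-of-envelope expected-value sketch. Every number in it is a made-up, purely illustrative assumption (harms are in arbitrary units), not an estimate of the actual probabilities or stakes:

```python
# Purely illustrative expected-value sketch of the Taiwan / nuclear-war tradeoff above.
# All figures are made-up assumptions, not estimates of real probabilities or harms.

P_WAR_DELTA = 0.001            # the assumed +0.1% increase in probability of nuclear war
HARM_NUCLEAR_WAR = 1_000_000   # assumed harm of a nuclear war, in arbitrary units
HARM_LOSING_TAIWAN = 500       # assumed harm of losing Taiwan, in the same arbitrary units

# Expected harm from accepting the extra war risk in order to defend Taiwan
expected_harm_from_extra_war_risk = P_WAR_DELTA * HARM_NUCLEAR_WAR  # = 1,000 units

print(f"Expected harm from +0.1% war risk: {expected_harm_from_extra_war_risk:,.0f}")
print(f"Harm from losing Taiwan:           {HARM_LOSING_TAIWAN:,.0f}")

# Under these made-up numbers, the extra war risk dominates, which is the shape of
# the argument above; with sufficiently different assumed harms the comparison flips.
```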

They are building their own alternatives, for example CodeGeeX is a GPT-sized language model trained entirely on Chinese GPUs.

It used Huawei Ascend 910 AI Processors, which were fabbed by TSMC, which will no longer be allowed to make such chips for China.

Ofer

China more or less does not care about AI safety from existential risks. Therefore, slowing their timelines is good, but sacrificing US influence over China is bad.

What evidence do you have that the Chinese government cares less about x-risks from AI than the current US government, let alone whatever government the US will have after 2024? If avoiding existential catastrophes from AI mostly depends on governments' ability to regulate AI companies, does the US government seem to you better positioned than the Chinese government to establish and enforce such regulations?

Fair point, the answer is unclear and could change. The most important fact IMO is that two of the leading AGI companies in the US, OpenAI and Deepmind, are explicitly concerned with x-risk and have invested seriously in safety. (Not as much as I’d like, but significant investments.) I’d rather those companies reach AGI than others who don’t care about safety. They’re US-based and benefit relative to Chinese companies from US policy that slows China.

Second, while I don’t think Joe Biden thinks or cares about AI x-risk, I do think US policymakers are more likely to be convinced of the importance of AI x-risk. Most of the people arguing for AI risk are English speaking, and I think they’re gaining some traction. Some evidence:

The Catastrophic Risk Management Act introduced by Senators Portman and Peters is clearly longtermist in motivation. From the act: “Not later than 1 year after the date of enactment of this Act, the President, with support from the committee, shall conduct and submit to Congress a detailed assessment of global catastrophic and existential risk.” Several press releases explicitly mentioned risks from advanced AI, though not the alignment problem. This seems indicative of longtermism and EAs gaining traction in DC.

https://www.congress.gov/bill/117th-congress/senate-bill/4488/text

https://www.hsgac.senate.gov/media/minority-media/portman-peters-introduce-bipartisan-bill-to-ensure-federal-government-is-prepared-for-catastrophic-risks-to-national-security-

The National Security Commission on AI commissioned by Congress in 2018 did not include x-risk in their report, which is disappointing. That group, led by Eric Schmidt, former CEO of Google, has continued their policy advocacy as the Special Competitive Studies Project. They are evidently aware of x-risk concerns, as they cited Holden Karnofsky's writeup of the most important century hypothesis. Groups like these seem like they could be persuaded of the x-risk hypothesis, and could successfully advocate sensible policy to the US government.

https://www.scsp.ai/reports/mid-decade-challenges-for-national-competitiveness/preface/

Finally, there are think tanks who explicitly care about AI x-risk. My understanding is that CSET and CNAS are the two leaders, but the strong EA grantmaking system could easily spur more and more successful advocacy.

On the other hand, I’m unaware of a single major group in China that professes to care about x-risk from AI. I might not know if they did exist, so if there’s any evidence I’d love to hear it. China does seem to have much stronger regulatory skills, and would probably be better at implementing compute controls and other “pivotal acts”. But without a channel to communicate why they should do so, I’m skeptical that they will.

Late response, but may still be of interest: some colleagues and I spent some time surveying the existing literature on China x AI issues and the resource list we produced includes a section on Key actors and their views on AI risks. In general, I'd recommend the Concordia AI Safety newsletter for regular news of Chinese actors commenting on AI safety (and, more or less directly, on related x-risks).

Ofer

On the other hand, I’m unaware of a single major group in China that professes to care about x-risk from AI. I might not know if they did exist, so if there’s any evidence I’d love to hear it.

There is a research institute in China called the Beijing Academy of Artificial Intelligence. In May 2019 they published a document called "The Beijing Artificial Intelligence Principles" that included the following:

Harmony and Cooperation: Cooperation should be actively developed to establish an interdisciplinary, cross-domain, cross-sectoral, cross-organizational, cross-regional, global and comprehensive AI governance ecosystem, so as to avoid malicious AI race, to share AI governance experience, and to jointly cope with the impact of AI with the philosophy of "Optimizing Symbiosis".

[...]

Long-term Planning: Continuous research on the potential risks of Augmented Intelligence, Artificial General Intelligence (AGI) and Superintelligence should be encouraged. Strategic designs should be considered to ensure that AI will always be beneficial to society and nature in the future.

(This is just something that I happened to stumble upon when it was published; there may be many people in China at relevant positions that take x-risks from AI seriously.)

Great podcast on it from Jordan Schneider, the document itself, and the press release

One of the takeaways that Allen leaves his readers with is that “this policy signals that the Biden administration believes the hype about the transformative potential of AI and its national security implications is real.” That sentiment probably feels familiar to many readers of this forum.

To be clear, it's good that this sentiment is possible; it is good that you mention it and consider it; and it is good that Allen mentions it and may believe it.


If Allen or you are trying to suggest that this action is even partially motivated by concern about "transformative AI" in the Holden sense (much less the full-on "FOOM" sense), this seems very unlikely and probably misleading.


Approximately everyone believes "AI is the future" in some sense. For example, we can easily think of dozens of private and public projects, pushed for by sophisticated management at top companies, that are "AI" or "ML" and that often turn out to be boondoggles. E.g., Zillow buying and flipping houses with algorithms.

These were claimed to be "transformative", but this is only in a limited business sense.

This is probably closer to the meaning of "transformative" being used.

I'm surprised to see that it hasn't been mentioned, but a lot is still up in the air with this one. Both countries have a long history of not fully following through on proposed restrictions, but of course that could end at any time and surprise anyone who thinks historical precedent precludes historically unprecedented events (in this case, major fully-enforced restrictions).

In this case, Samsung (Korea) and TSMC (Taiwan) have already reportedly circumvented the latest restrictions, although I don't know to what extent they'll be used as a loophole for Chinese firms to get most of the chips anyway, or if the 1-year period is just a buffer and they won't find some complicated way to extend it. 

People in this area are, by now, pretty accustomed to saying "it could still go either way".

Stephen - thanks for a helpful and insightful summary.

It seems like Biden's regulations will send a clear signal to China that (1) the US considers AI central to future geopolitical and military power, and (2) the US considers itself to be in an AI arms race against China.

If we really want China to double down on its AI plan for 2030, and to push for even stronger investment of talent, money, and attention into AI R&D, this seems like a great way to do it...

Epistemic caveat: I'm far from a China expert; I just taught some online classes for undergraduates at a Chinese university over the last couple of years. I will comment that many of the students considered geopolitical competition for AI to be pretty central to China's strategy, and learning about AI was treated as something of a patriotic duty... and a great career opportunity.

Not related to the topic: I doubt it is worth it to post hacks that are likely illegal (or even for EAs to use them) - the money saved seems likely to be orders of magnitude lower than the expected harm (non-EA people seeing the post and using it against us, EA people who might be upset seeing these, personal legal risks).

For onlookers, there was originally a link to a way to get around the FT paywall in the post. But I appreciate Fai's comment and have removed it.

I would actively appreciate a norm of linking to non-paywalled versions of articles. I don't think the legality concerns matter.

I got new upvotes on my above comment (even though its karma is still negative), which reminded me of it. I suddenly have a question that I genuinely want to know the answer to, and I do not wish to be offensive or sarcastic.

Question: Would people have voted (karma and agreement) differently if my comment happened a month later? (FTX collapse)

Also, at that time, quite a number of people were searching the EA Forum for evidence they could claim supports views like "EAs ignore laws and common sense morality", "EAs think that ends always justify means", etc. This means I could have made a wrong decision in leaving the above comment up for non-EAs to potentially see (if I could reasonably expect the voting results to seem to support illegal things).

And maybe, I should just delete this comment, now?

I didn't vote, but:

People may have also found the assertion that there is something "likely illegal" to be unsupported by your comment. I don't know how the previously-linked site worked, so offer no opinion on that. Furthermore, the use of these sites is common, so it is also reasonable to question the assumption that using it carried reputational risk. And the existence of any legal risk, especially to anyone who merely clicked on the link, seems highly questionable as a practical matter. These things exist on the open web, the publishing industry knows where they are, and it would be illogical / horrible optics / very cumbersome and expensive for publishers to go after individual users rather than the service providers. 

My view: This is largely noise at best in the long term, and extremely negative in the short term (though this could flip sign), and therefore bad for both the short- and long-term future. Here's why.

In the short term, the key question isn't "which values?" but more like "can we avoid misalignment towards a human at all?" Thus the values of the US vs China matter less than achieving alignment at all, and so this isn't important.

More generally, contra some EA folks, I think value lock-in isn't as likely as people think.

Long-term, their value differences don't matter, because they probably won't exist for a super long time.

Surely reducing the number of players, making it more likely that US entities develop AGI (who might be more or less careful, more or less competent, etc. than Chinese entities), and (perhaps) increasing conflict all matter for alignment? There are several factors here that push in opposite directions, and this comment is not an argument for why the sum is zero to negative.

I should actually change my comment to say that this would be extremely negative, as the US government seems to believe in TAI (in the Holden Karnofsky sense) and also wants it to happen, which at the current state of alignment would be extremely negative news for humanity's future, though this could flip sign to the positive end.

This is not an answer to your questions, but I can’t resist pointing out that this post/set of questions seems like a decent illustration of the use case of the project idea I describe here: https://forum.effectivealtruism.org/posts/9RCFq976d9YXBbZyq/research-reality-graphing-to-support-ai-policy-and-more

There’s of course no guarantee that something like the envisioned research graph would have the answers to your questions (it’s hard to predict what users will want, but I think graphs handle this user-uncertainty challenge better than traditional literature reviews) but it might help?