
The rate at which China is able to advance towards TAI is a crucial consideration for many policy questions. My current take is that, without significant political reforms – which seem very unlikely while Xi is alive (although considerably more likely after his death) – it's very unlikely that China will be able to mount a meaningful challenge to AI firms in the US and allied countries in the race for TAI. I don't think democratic reforms are required for China to be competitive with the US and its allies, but I do think rule of law reforms are likely to be required.

This first post will be me forecasting Chinese growth, on the theory that, if China reaches rich-country status, it's likely that it will be able to compete with the US and its allies for leadership in AI. I'll write a second post looking at Chinese AI efforts in particular.

The outside view

Most countries that become middle-income countries have, thus far, stayed at the middle-income level. Chinese per capita income is currently almost exactly at the world average.

The only countries (and territories) in the last 70 years that have gone from low-income to high-income status without oil wealth are South Korea, Taiwan, Singapore (which does have a substantial oil-refining industry), and Hong Kong, although it seems very likely that Malaysia will join that club in the near future.

The majority of countries have managed to emerge from low-income to middle-income status because doing so only requires getting a few things right. If you can urbanize your population, maintain basic rule by law so that firms have basic protection from violence, and achieve a high enough savings rate to accumulate physical capital, you can get to middle-income status through catch-up growth alone.

Catch-up growth is, conceptually, the reason why middle-income status – rather than a given level of GDP per capita – is the correct measure. With catch-up growth you can grow simply by accumulating physical capital and using standard technologies that have already been developed, like light manufacturing or civil engineering. Past this point, though, countries get rich by being able to develop and use technologies close to or at the frontier.

China has successfully accumulated the capital to utilize catch-up technologies, like steelmaking and light manufacturing. It has quite successfully urbanized its population, and now seems to have reached the Lewis turning point, where young people who leave their villages to find work in cities often fail to find it and have to stay in their villages, in much lower-productivity jobs.

Rates of democracy and rule of law give another outside view on Chinese growth prospects. Of the 53 rich countries and territories that aren't oil states or microstates, only two aren't democracies – Singapore and Hong Kong – and none lack rule by law, and all have low levels of corruption.

China currently lacks democracy, has high levels of corruption (although roughly normal levels for a middle-income country, as I perceive it), and has middling levels of rule by law.

An important part of countries getting to high-income status is new firms forming and competing to deploy and create ~frontier technologies and processes. This is harder than accumulating enough capital, and having low enough levels of violence and corruption, to build decent housing, supply reliable electricity and water, and have large numbers of workers do semi-skilled manual labour at scale. All of that can be done while elites earn large rents by establishing monopolies (or more generally accruing market power) from which they exclude non-elites.

The role that democracy plays in this story is that it's much harder in democracies for elites to rig markets in their favor and earn money by extracting rents rather than by doing useful economic activity. On a large scale, democracies with broad enfranchisement don't have institutions like slavery or serfdom, which are extremely effective institutions for elites to extract rents with. The parallel in contemporary China is the hukou system. The hukou system prevents individuals with rural hukou from accessing social services, and to a degree jobs, in urban areas (for instance, admission to elite Chinese universities is easier with urban hukou), and so retards rural-urban migration as well as reducing the labour-market power of workers with rural hukou in urban China.

Another of these political-economy problems for long-run economic growth is whether the way to get rich is by gaining access to state power or by creating new and useful products. In middle-income countries it's often the former – for instance, Carlos Slim Helú, the richest person in Mexico and for a brief period the world, got rich through access to state-granted monopolies in real estate and telecoms.

In rich democracies, on the other hand, the way to make lots of money is typically not to gain access to state power. Billionaires in the US typically make their money by creating new companies.

This is bad for economic growth because it means productive effort goes into zero-sum competition for access to state power rather than into creating new, useful goods and services. If this gets sufficiently bad, as in many low-income nations with large mineral wealth, it can be completely disastrous for growth because you get wars over natural resources.

China is currently somewhere between these extremes. Lots of people get rich in China by creating new, useful goods and services, but lots of people also get rich by winning favorable state contracts, or by getting the local security forces to beat up and threaten their business rivals.

More pernicious for long-run economic growth is actively preventing the reallocation of labor and capital to higher-productivity uses in order to forestall social unrest, and preventing business leaders from accumulating independent power bases by getting rich in new sectors. This is particularly pernicious because it doesn't merely cause static inefficiencies that reduce long-run growth prospects; it creates optimization pressure against growth. We see this in contemporary China with state-run firms not reducing output (and so not reducing their use of capital and labour) in order to prevent the social instability that layoffs from state-owned firms would bring. We also see business leaders sanctioned for, essentially, being extremely successful. The most famous example is the effective removal from public life of Jack Ma, the founder of Alibaba (a more conglomerate version of Amazon and one of the most successful Chinese firms).

All of these political-economy arguments should be taken with a pinch of salt – it's really hard to do good causal inference on these types of questions – but the base rates alone shouldn't be underestimated. Nor is it the case that China hasn't yet reached the point at which countries typically become more democratic and more governed by rule of law if they're going to get rich: both Singapore and Hong Kong were governed by law as low- and middle-income territories, and Taiwan and South Korea democratized at about half the per capita income level that China has today.

Broadly, China has moved further away from democratic and rule-of-law ideals under Xi. Xi has consolidated his personal control over the CCP by removing presidential term limits from the state constitution, and by moving policymaking to working groups rather than state or party organs over which he has less personal control. Opposition within the Standing Committee of the Politburo was removed following the party congress in 2022.

Formerly, there were two factions in the Politburo. Xi's faction was broadly more pro-market and grouped around leaders whose parents had been senior members of the CCP; many of the group were associated with Shanghai. The other faction, of which former leader Hu Jintao was part, was broadly more pro-redistribution and associated with the CCP's youth wing. The late premier Li Keqiang was the most senior member of this group during Xi's leadership. Following the 2022 party congress no one from this second faction was appointed to the Standing Committee, and Li himself died in 2023. I think this is evidence of Xi increasing his personal control, and of the end of the somewhat rule-governed collective leadership by party elites that existed under Jiang and Hu.

The anti-corruption police have been another tool Xi has used to entrench his personal power. The arrests of Bo Xilai and at least one other extremely senior official are the most famous cases, and were extremely norm-breaking both because of the men's seniority and because their families were targeted following their arrests.

After Xi's death – and he is quite old – I think there's a reasonable chance of significant political reforms, as there were after Mao. Following Mao, the latitude of debate over China's political trajectory was extremely broad – it extended to serious calls for free speech and democracy. Deng ultimately took a middle path between the most extreme authoritarians and the most extreme liberals. I don't think large political reforms can be ruled out following Xi's death. I expect there to be a large middle class and business community that will advocate for rule of law and potentially some democratic reforms. There's also the wildcard factor of the roughly 115 million Protestants, who have often suffered repression under the CCP and form a natural coordination mechanism. I don't really know how to model this.

Headwinds in the Chinese economy

There are some clear ways in which China is running out of easy sources of growth and will be forced to start growing by using and developing frontier technologies:

·      A relatively large percentage of the population has urbanized

·      There are reliable supplies of electricity and running water

·      Chinese firms are already good at making steel and concrete and do so in large quantities

·      There's no regular violence or unpredictable mass arrests

·      Literacy rates are relatively high

These are the relatively reliable growth sources for low- and middle-income countries, and China has either exhausted them or is getting close to exhausting them.

There are also some more pernicious headwinds.

·      Wages in China are rising above those of other countries that can do export-oriented light manufacturing, like Vietnam

·      China has a very rapidly aging population

High wages relative to competitors are particularly bad because they threaten a core part of the Chinese growth model. Export-oriented manufacturing is important not just because it's a large share of GDP, but also because it provides a mechanism for technology transfer and acts as a disciplining force on Chinese firms and elites, forcing them to innovate in order to compete on international markets rather than simply collect rents.

There are many problems that come with an aging population, but I don't want to write about them because it would involve reading more papers, and I don't want to.


 

Comments (2)



This is a very interesting post. I wonder if I could ask your opinion on two things:

  • How do you think China's expansive R&D espionage (whether PLA-centric or individual) will impact this, if at all? Does the ability to (relatively easily and legally) steal AI-related R&D from others affect this prediction at all?
     
  • Do you think China's internal AI law and policy has a positive or negative impact on their growth rates mentioned in your post? Eg Internet Information Service Algorithmic Recommendation Management Provisions or the Interim Measures for Generative AI Service Management etc? No worries if this isn't your area and you can't answer. It's not really mine either, hence the question.

Thank you for this post :)

Executive summary: China is unlikely to catch up to the US and its allies in developing transformative AI due to political and demographic constraints, though reforms after Xi could change this.

Key points:

  1. Most countries get stuck at middle income status, but China has already reached the world average GDP per capita.
  2. Lack of democracy and rule of law, high corruption, and demographic issues could prevent China from reaching high income status.
  3. Xi has consolidated power and moved away from political reforms needed for innovation.
  4. However, there could be openings for change after Xi's death.
  5. China is running out of easy sources of growth and needs to innovate at the technological frontier.
  6. But export manufacturing is threatened by rising wages, and China faces rapid population aging.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
