Summary
This is part two of a five-part series examining how political systems shape the development and deployment of military AI. This part covers China. Part one, covering the European Union, is available here. Parts three through five — the United States, Russia, and a comparative conclusion — will follow.
Introduction
The first part of this series showed what the "theory of democracy" looks like in its purest form: detailed rules, no enforcement mechanism, and a system that monitors its own actions more rigorously than those of its adversaries.
China is the opposite case — and in a precise sense. It also declares responsible use of AI, also supports human control in principle, and also participates constructively in UN discussions on autonomous weapons. The difference is structural: in the Chinese system, there is no independent actor capable of holding the state to its own declared commitments. The party sets the red lines. The party can move them. There is no appeal.
This part examines the doctrine of intelligentized warfare, the Military-Civil Fusion mechanism, the gap between China's declared UN position and its operational reality, and the central paradox of a military that may be the world's most technologically ambitious — and has not fought a war since 1979.
I. Doctrine and Institutional Architecture
China enters this comparative analysis as the actor with the most systematically constructed doctrine of military AI — and the least real-world validation of it. The doctrine of 智能化战争 (zhìnénghuà zhànzhēng, "intelligentized warfare") represents the third evolutionary stage of Chinese military thinking over three decades. The first stage, grounded in 机械化 (mechanization), dominated PLA modernization through the 1980s and 1990s. The second, 信息化 (informatization), drove the integration of digital technologies into command and control from the 2000s through the mid-2010s. The third stage, formally embedded in Chinese strategic documents from 2019 onward, shifts the focus to artificial intelligence as the primary instrument of "cognitive superiority" over adversaries: the ability to perceive more, decide faster, and act more precisely than any conceivable competitor. The doctrine's priorities are distributed across four functional directions: autonomous systems, particularly drone swarms; AI in command and control — decision-support systems for rapid OODA cycle traversal; real-time intelligence and data fusion; and cognitive warfare, meaning the use of AI for information operations, adversary perception management, and narrative control.
The institutional mechanism for implementing the doctrine is Military-Civil Fusion (MCF, 军民融合), formally enshrined as a state strategy in 2017. MCF is a system under which private companies, universities, and research institutes are obligated to participate in defence development, share technologies with the military, and embed defence requirements into civilian products. This is neither a market partnership in the Western sense nor a Soviet-style command economy — it is a third model: the state directs the resources of the civilian innovation sector into military channels without formally nationalising the companies themselves.
The key element of MCF is the concept of dual-use, interpreted in China in a fundamentally different way from the West. In the European framework, dual-use means that a civilian technology may have military applications — and this creates a regulatory problem, hence the exclusions in the AI Act. In the Chinese framework, dual-use means there is no distinction between civilian and military application from the outset: any development in AI, unmanned systems, image recognition, or signal processing is treated as potentially military from the moment of its creation. This eliminates the friction between civilian and defence sectors that is well-documented in the European case — but it also means that China's entire civilian AI sector is de facto part of the defence industrial base.
The financial scale of this system is substantial, though precise figures are difficult to verify given the structure of Chinese budget reporting. Pentagon estimates place Chinese military AI expenditure as comparable to American figures — around $1.5–2 billion annually in direct programmes, not counting the enormous investments channelled through MCF that technically appear as civilian spending. The aggregate state and private AI market in China reached $9.3 billion in investment in 2024. For comparison, the United States invested approximately $109 billion in private AI in 2024 — a gap of more than ten to one. Direct comparison is misleading, however: the Chinese system redirects state resources into the military domain through MCF in ways that do not appear in direct defence statistics.
The speed at which China is moving in this domain is partly explained by a structural pattern established since the era of Deng Xiaoping: the capacity to take foreign technologies, adapt them to Chinese conditions, and scale them with state support faster than the originators of those technologies can respond. In the 1980s this principle applied to manufacturing technologies, in the 1990s to telecommunications, in the 2000s to internet platforms. In the 2020s it applies to AI: Chinese companies have trained models extensively on open Western architectures — transformers, diffusion models — adapted them to Chinese linguistic contexts and the specific demands of defence tasks, and integrated them into military systems through MCF. DeepSeek became the clearest embodiment of this pattern: a model built on principles developed largely by Western researchers, but implemented with fundamentally different computational efficiency.
Behind this lies a philosophically distinct approach to AI application. Western laboratories — OpenAI, Anthropic, DeepMind — organise their existence around the ambition of creating artificial general intelligence (AGI): a system capable of solving any task a human can solve. This is both a commercial strategy and a research programme, and to some degree a worldview. China's military AI strategy is fundamentally utilitarian: not to create a universal intelligence, but to deploy specialised systems that solve specific operational tasks faster and more reliably than a human can. Target identification, satellite data processing, drone swarm management, decision support in the OODA cycle — each system is optimised for a specific function. This approach is faster to develop, cheaper to deploy, and less vulnerable to single-component failure. Its limitation is that specialised systems perform poorly in situations for which they were not designed — and real war is full of exactly those situations.
II. Declared Position and the Gap with Reality
On the floor of the United Nations and in international forums, China occupies a position that appears among the most constructive of the major military powers. In October 2025, at the 80th session of the UN General Assembly First Committee, the Chinese delegation again affirmed its support for negotiations toward a legally binding instrument on LAWS — "when conditions are ripe". In December 2025, China abstained on UN General Assembly Resolution 80/57 on lethal autonomous systems. Formally, China is the only major military power to declare support for a binding LAWS treaty — unlike the United States and Russia, which make no such declaration.
The key to understanding this position lies in what China actually considers "unacceptable" LAWS. In a working paper submitted to the CCW GGE in 2022 and reaffirmed through the 2024–2025 sessions, China proposed a five-criteria definition: a system is an unacceptable LAWS only if it is simultaneously (1) lethal, (2) autonomous, (3) incapable of being stopped once launched, (4) capable of killing indiscriminately, and (5) capable of autonomous learning. All five criteria must be satisfied simultaneously. Analysts at the Lieber Institute at West Point characterised this position directly: by insisting that a system must exhibit all five characteristics at once before it faces outright prohibition, China has drawn a remarkably narrow line for what counts as unacceptable. This cumulative threshold, unchanged in Beijing's contributions through the 2025 GGE sessions, effectively excludes a wide range of emerging autonomous capabilities, many of which Beijing is developing. In other words, a system with a high degree of autonomy, capable of selecting and engaging targets without meaningful human involvement, can easily fail to satisfy at least one of the five conditions — and therefore remain outside the prohibition.
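The cumulative logic of the definition can be made concrete in a few lines. This is an illustrative sketch only: the criterion names are paraphrases of the working paper, and the example system is hypothetical, not modelled on any specific Chinese platform.

```python
# Sketch of the conjunctive ("all five at once") threshold implied by the
# 2022 working paper. Criterion names are paraphrased from the source text;
# the example system below is hypothetical.

CRITERIA = ("lethal", "autonomous", "unstoppable_once_launched",
            "indiscriminate", "self_learning")

def is_unacceptable_laws(system: dict) -> bool:
    # Prohibited only if every one of the five criteria is met.
    return all(system.get(c, False) for c in CRITERIA)

# A hypothetical high-autonomy system: it selects and engages targets
# without meaningful human involvement, yet fails the cumulative test
# because it can be aborted, constrains its target class, and runs a
# frozen model with no in-field learning.
loitering_system = {
    "lethal": True,
    "autonomous": True,
    "unstoppable_once_launched": False,
    "indiscriminate": False,
    "self_learning": False,
}

print(is_unacceptable_laws(loitering_system))  # False: outside the prohibition
```

Under a disjunctive reading (any single criterion suffices) the same system would be prohibited; the entire diplomatic effect of the Chinese definition rests on the difference between `all` and `any`.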
Why does this gap not provoke a sharper international reaction? Several reasons operate simultaneously. First, the CCW GGE negotiating process is consensus-based, meaning any state has an effective veto and the process generally produces consensus at the lowest common denominator. Second, Western states — above all the United States — also have no interest in a rigorous binding LAWS treaty, which deprives them of the moral authority to criticise China's position. Third, the formulation "when conditions are ripe" allows China to demonstrate constructiveness in every diplomatic round without accepting any real obligations, since "conditions ripening" never occurs until "consensus on definitions" is reached — and consensus on definitions is itself blocked by China through the cumulative five-criteria definition. It is an elegant diplomatic construction: support the principle while making it practically inapplicable.
The structural reason for this gap is that in the Chinese system, red lines are set directly by the Communist Party — not by independent corporate actors or judicial institutions. In the American case, Anthropic could sue the Pentagon and win tactically. In the European case, EU courts can theoretically challenge the application of the AI Act. In China, there is no actor that could institutionally contest a party decision about what level of autonomy is "acceptable." This means red lines are movable in both directions: the party can tighten them when it needs to send a reassuring signal to the international community — and loosen them under operational necessity, without triggering public proceedings. The absence of an independent arbiter makes declared constraints structurally less durable than they appear from Geneva.
III. Corporate Architecture and State Control
The most fundamental structural difference between the Chinese model and the American or European one is the complete absence of independent corporate actors in the military AI domain. In the United States, Anthropic could publicly refuse the Pentagon, file a lawsuit, and — at least temporarily — win. This was possible because the company exists as a legal entity independent of the state, capable of opposing it with commercial interests and its own value commitments. In China, an analogous conflict is structurally impossible: companies operating in AI may have private shareholders and Hong Kong listings, but they cannot institutionally resist a request from the party or the PLA.
The level of state control over the largest private companies varies, but is always structurally ensured through several simultaneous mechanisms. Direct mechanisms include: mandatory state equity stakes or "golden shares" in strategic companies; party committees inside all corporations above a certain headcount threshold; and the 2017 National Intelligence Law, which obliges any Chinese organisation or citizen to support state intelligence activities on demand. Indirect mechanisms include licensing control, regulatory pressure, and the possibility of administrative proceedings against executives. The fate of Jack Ma is instructive: his public criticism of regulatory policy in late 2020 was followed by months out of public view, after which Ant Group lost its IPO and Alibaba received a record antitrust fine. No formal nationalisation was required — a clear demonstration of consequences was sufficient.
Direct and command-style control through MCF carries measurable advantages. Speed: the decision to integrate a specific technology into military systems requires no negotiation with an independent board of directors, no shareholder approval, no resolution of conflicts with corporate ethical declarations. Scale: the state can direct the resources of the entire technology sector toward a specific task — as happened with DeepSeek, whose development was accelerated in part by state pressure following American chip sanctions. Absence of leakage: technologies developed under MCF do not flow to competitors through open-access publications or staff departures to other countries with the same ease as in open ecosystems. The constraints are symmetrical. Speed without feedback produces systemic errors that are not corrected from below. Scale without competition reduces incentives for innovation beyond state-defined priorities. The absence of independent actors means the absence of an institutional "red team" capable of identifying system weaknesses.
DeepSeek deserves separate consideration not as a product but as a systemic phenomenon. The release of DeepSeek R1 in January 2025, with performance comparable to American frontier models at fundamentally lower computational cost, was a shock to the Western AI community no less significant than the Sputnik launch in 1957. For understanding Chinese military AI, what matters is not the model quality per se but three consequences of its emergence. First: DeepSeek demonstrates that sanctions on high-quality NVIDIA chips have not stopped Chinese AI development — they forced developers to find more efficient architectural solutions, which in some contexts is more advantageous than simply having more computing power. Second: the model runs on significantly more modest hardware, making it suitable for deployment at the edge of the network — on board drones, in field command posts, in systems that cannot rely on cloud infrastructure. Third, a Chinese research team used DeepSeek to reconstruct 10,000 potential battlefield situations in 48 seconds — a task that would traditionally take human commanders approximately 48 hours. This is not a metaphor; it describes a specific operational application of a language model in military planning.
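The scale of that claimed compression is worth making explicit. A one-line calculation, using only the figures quoted above:

```python
# Speedup implied by the reported figures: 48 hours of human staff work
# versus 48 seconds of model inference.
human_seconds = 48 * 3600   # 48 hours expressed in seconds
model_seconds = 48
speedup = human_seconds / model_seconds
print(speedup)  # 3600.0, i.e. a 3,600-fold compression of planning time
```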
The overall assessment of the symbiosis of military, corporation, and state in China reveals a paradox opposite to the European one. In the EU, the central problem is how to ensure that normative requirements are observed in the military domain in the absence of an enforcement mechanism. In China, the central problem is the reverse: how to ensure that a system in which the enforcement mechanism is absolute does not lose its capacity for adaptation and innovation under the pressure of that very enforcement. Party control ensures speed and resource consolidation. It also creates information filters that impede the upward transmission of negative signals — which in a military context means the risk of systematic overestimation of one's own capabilities by command structures deprived of reliable feedback.
IV. Drivers of Rapid Development and Structural Risks
The rapid development of Chinese military AI systems is explained by the cumulative effect of several simultaneously operating factors. A strong state role allows the setting of long-term technological priorities independent of electoral cycles or market conditions. A powerful economy — the world's second largest — provides a resource base comparable to the American one. Resource consolidation through MCF allows the entire civilian technological potential to be directed toward military needs without formal nationalisation. The subordination of AI companies eliminates the friction between corporate and state interests that is well-documented in the American case. Generous incentives for specialists — high salaries, state grants, housing programmes — partially offset the brain drain, though they do not eliminate it. The absence of rigorous legal regulation in the style of the AI Act removes barriers that slow development and deployment. And finally, the state's priority access to any data — including the enormous corpus of data on the population, user behaviour, and civilian infrastructure — creates training datasets of a scale unavailable to most other actors.
The chip independence problem deserves particular attention. American sanctions of 2022–2024 substantially restricted China's access to advanced NVIDIA semiconductors required for training frontier models. The response has been a two-track strategy. On one hand, Huawei is developing its own Ascend AI chips, which remain significantly below NVIDIA's performance but are gradually closing the gap. On the other hand — and here DeepSeek served as a structural argument — China is investing in algorithmic efficiency: creating models capable of performing comparable tasks with fewer parameters and lower computational costs. This does not eliminate semiconductor dependence entirely, but it reduces its operational acuity and creates an alternative development path not blockable by export restrictions.
Taiwan is the world's leading producer of advanced semiconductors — approximately 90% of chips at 7nm and below are manufactured by TSMC on the island. For China, this means that complete technological self-sufficiency in AI is impossible without resolving the Taiwan question — or without creating domestic production capacity comparable to TSMC, which requires at minimum a decade of intensive investment. For Western countries, it means that any scenario of Chinese military pressure on Taiwan affects not only the island's strategic sovereignty but also the global semiconductor supply chains on which the entire world's military and civilian AI sector depends. Taiwan in this sense is not merely a geopolitical flashpoint, but literally the physical infrastructure on which the AI arms race runs.
The success of the Chinese model — consolidated, deregulated, and state-managed — creates structural pressure on other actors. If MCF allows China to develop and deploy military AI systems faster than competitive market models with independent actors and normative constraints, this creates an incentive — conscious or not — for other states to move toward greater deregulation and state control. Hegseth's memorandum of 9 January 2026, with its "any lawful use" requirement and the elimination of "ideological tuning," can be interpreted in part as a response to this structural challenge: an attempt to reproduce some of the operational advantages of the Chinese model without formally replacing the market architecture. This is not convergence of systems, but it is pressure in the direction of convergence.
V. The Combat Experience Deficit and Strategic Choice
For all China's technological ambition, the structural vulnerability of the PLA is this: it has not fought a serious conflict since 1979, and effectively no one currently serving has real combat experience. This is not merely a statistical fact — it is a source of deep analytical uncertainty. The PLA's declared capabilities are based on exercises, simulations, and extrapolation from open sources. How closely they correspond to reality is unknown, and impossible to verify without an actual conflict.
The paradox is that China — despite having the most ambitious technical programme among the four actors — has less experience of conducting modern high-intensity warfare than any of the others. The United States has been through Afghanistan, Iraq, Syria, and now Iran. Russia has been fighting in Ukraine for four years. European states — the UK, France, Germany — have participated in NATO operations from Kosovo to Libya and Mali. The PLA observes all of this from the sidelines, accumulating analytical conclusions but not combat experience in the operational sense.
This raises a question for which the honest answer is an acknowledgement of the limits of knowledge: how effective are the other PLA branches in the conditions of modern war? The navy, the rocket forces, the air force — all have undergone large-scale modernisation and regularly demonstrate impressive technical capabilities in exercises. But the declared power of these branches has never been tested under the real pressure of combat opposition. Deng Xiaoping, explaining the necessity of the 1979 invasion of Vietnam, used precisely this argument in part: an army needs combat experience, and without it even well-equipped forces remain theoretically unverified. That war confirmed the thesis, though with the opposite sign to the one Deng intended: the PLA performed significantly worse than its declared capabilities suggested. There is no basis for believing this lesson has become obsolete.
This produces a closed loop. The longer China refrains from active military engagement, the longer the PLA's reputation as one of the world's strongest militaries is preserved: a reputation based on technical characteristics, budgets, and platform counts, not on combat results. At the same time, the gap accumulates relative to armies that actually fight: the United States in Iran, Russia in Ukraine, and the Europeans, at least indirectly, through the Ukrainian deployment of their systems. Not fighting preserves the reputation and avoids the costs. It also preserves fundamental uncertainty about actual capabilities.
The reasons for this restraint are not ideological but entirely pragmatic. The economic priority: China in the 2020s is above all a trading power with an economy deeply integrated into global supply chains. Military conflict creates risks of sanctions, trade disruption, and technological isolation — costs incommensurable with the potential gains of most conceivable operations. The diplomatic priority: China is actively building an image as a responsible mediating power — the 2023 Saudi-Iranian normalisation was the symbolic achievement of this strategy. Military intervention undermines that image. The reputational risk: the fear of becoming bogged down in a war on the model of Russia in Ukraine or the United States in Iran. The absence of institutionalised military engagement on the international stage: unlike the United States, China has no established system of allies ready to share military costs.
The consequence of all this is the necessity of adapting a military system by drawing largely on the experience of other countries' wars. Observing Ukraine through the Russian-Chinese channel, analysing American operations in Iran through MizarVision and the trilateral agreement — all of these are surrogates for direct combat experience. Valuable surrogates, but incomplete ones: another country's experience always carries the problem of applicability. What works in Ukrainian steppe conditions against Soviet-era doctrines and Western weaponry may work in fundamentally different ways in a Taiwan scenario with its maritime component, air-defence-saturated environment, and American forward presence.
VI. Compensatory Strategies
The primary attempt to resolve the conflict between the PLA's declared power and the real absence of combat experience is indirect support for Russia. According to Ukrainian intelligence, Chinese factories and companies supply Russia with hardware and AI software for adapting unmanned systems. In 2025, Russia used Chinese components to produce up to 2 million small tactical UAVs. For China, this is neither charity nor ideological solidarity — it is paid observation of how technologies function under real combat conditions. By supplying components and software, China receives data on how those components perform under load, what fails and why, and how the adversary adapts to specific technical solutions.
The trilateral strategic pact between China, Russia, and Iran, signed on 29 January 2026, formalises what was previously informal cooperation. The pact is not a mutual defence treaty — it provides diplomatic cover, intelligence cooperation, economic resilience, and technological support. For China, it creates a legitimate framework for receiving intelligence data on US operations in Iran without being a party to the conflict. This maximises the information gain while minimising direct costs.
Within this strategy, MizarVision has become the most publicly documented example of "observation through the private sector." The company, founded in 2021 and holding a Chinese national military standard certificate, uses AI to catalogue activity at American bases in the Middle East, track fleet movements, and locate aircraft and missile defence systems. Its data provided detailed coverage of the build-up of American forces ahead of Operation Epic Fury, including the transit of the aircraft carriers USS Gerald R. Ford and USS Abraham Lincoln. Formally the company is private, formally it receives no state assignments. In practice it performs a state intelligence task — collecting data on the real-world application of American military systems, which can subsequently be used to train Chinese targeting and planning systems. This is the model of "deniable observation": the state receives intelligence product while maintaining plausible deniability of direct involvement.
In parallel, China is betting on the proposition that technological superiority can compensate for the deficit of combat experience. The logic is clear: if a system can process data faster than a human, make decisions in milliseconds, and coordinate thousands of units in a swarm operation, an experienced commander is not needed — a good algorithm is sufficient. China has turned to AI as a substitute for direct combat experience, developing high-fidelity wargaming platforms, predictive modelling, and algorithm-driven war planning. The "War Skull" system in its second generation adapts modular strategies to different adversaries. The "Aiwu LLM+" AI system integrates language models and multimodal data analysis for support of command information systems.
The aggregate assessment of the compensatory strategies is as follows. All of them work at the level of analytical knowledge — China genuinely understands modern warfare significantly better than it did in 2019. None of them is the equivalent of direct combat experience in the operational sense. Simulations are limited by the quality of the data on which they are trained. Observing Ukraine provides lessons from a specific theatre of operations that may transfer poorly to a Taiwan scenario. The Iranian conflict shows the application of American AI systems against a state adversary with significantly weaker capabilities than the PLA. A fundamental uncertainty remains: how will systems tested only in simulations perform when they encounter a real adversary capable of unpredictable adaptation? China does not have an answer to this question. And that is precisely what makes its strategic position in military AI simultaneously ambitious and fundamentally unverified.
VII. Strategic Assessment
The Justice Mission 2025 exercises, conducted on 29–30 December 2025, illustrate the strategy of "overhanging threat" in its operational embodiment. The drills covered a larger zone around Taiwan than any of the six previous major exercises since 2022, and for the first time explicitly designated deterrence of external intervention as a publicly stated objective. They involved 130 aircraft sorties, 14 warships, live rocket launches into waters north and southwest of the island, and simulated blockades of the ports of Keelung and Kaohsiung. Crucially, the Pentagon received no advance warning of the exercises — and Trump publicly expressed no particular concern. This is itself part of the message: China demonstrates its capacity to create a naval crisis without escalating to a level requiring an American response.
The current model, in all likelihood, suits the Chinese government — and there are no signs it intends to depart from it. Continued build-up of military capabilities and AI technologies without real combat deployment creates maximum diplomatic leverage at minimum direct cost. High-technology defence is used as an instrument of pressure — on Taiwan, on South China Sea states, on Western partners in trade and technology negotiations. Preserving military non-engagement simultaneously protects the PLA's reputation from the test of reality.
This is the central strategic paradox of the Chinese model of military AI. Non-demonstration of real capabilities is simultaneously its greatest strategic advantage and its greatest structural weakness. The advantage: the adversary is forced to plan against the worst-case scenario without data to refute it. The PLA remains an "unknown quantity" — and in strategic planning, uncertainty functions as a threat multiplier. The weakness: that same uncertainty works in both directions. China itself does not know how its systems will hold under real pressure — and cannot find out without activating precisely the costs it is seeking to avoid. A system that has never been tested in combat may prove either significantly stronger or significantly weaker than it appears in exercises. Until the first real engagement, both possibilities are equally plausible.
Conclusion
The Chinese model is the most internally consistent of the four examined — and the most opaque. The doctrine is clear, the institutional architecture is coherent, the investment is substantial, the declared position is carefully calibrated. What is absent is any external verification mechanism, any independent actor capable of holding the system to its own stated commitments, and any real-world test of whether the capabilities are what they claim to be.
The declared support for a LAWS ban "when conditions are ripe" and the simultaneous development of systems that would not meet most proposed ban criteria illustrate the core dynamic: China has constructed a position that is diplomatically defensible, operationally unconstrained, and structurally durable for as long as it avoids a major conflict. Whether the Taiwan scenario, if it ever materialises, would validate or devastate that position is the question that the Chinese military itself cannot answer from its current vantage point.
The next part turns to the United States — the actor with the deepest operational AI integration, the most documented corporate-state conflicts over red lines, and the first large-scale combat test of AI targeting systems against a state adversary.
