In January 2026, the Bureau of Industry and Security revised its policy on exports of advanced AI chips to China. The new framework allows case-by-case license reviews for chips below defined performance thresholds, roughly at the level of NVIDIA's H200, with aggregate sales to China capped at 50% of equivalent U.S. domestic volumes and a proposed 25% export surcharge announced by the White House. The Council on Foreign Relations promptly called the policy “strategically incoherent and unenforceable.”
The CFR critique is right, but it doesn’t go far enough. Security analysts can spot the contradictions. Economists can explain why those contradictions were structurally inevitable, and what to do about them.
Most of the conversation around compute export controls is framed through a national security lens: which chips to restrict, which countries to cut off, how to keep adversaries from building frontier AI. These are real questions. But economists bring tools that are mostly absent from the debate: circumvention analysis, input-output modeling, game theory, and measurement design. Here are four lessons from trade policy that I think deserve more attention.
1. Circumvention incentives scale with rent
This is probably the most fundamental insight trade economists have to offer, and it keeps getting ignored in export control design: the larger the wedge a barrier creates between domestic and foreign prices, the greater the incentive to get around it.
This is not a bug to be patched with enforcement. It is a structural feature of any restriction that creates rents. A hard ban on a product with massive global demand does not eliminate that demand. It reroutes it through gray markets, transshipment hubs, and regulatory arbitrage.
History is full of examples. American Prohibition didn’t stop drinking; it created a massively profitable smuggling economy that enriched organized crime while depriving the government of tax revenue. The policy was designed around a moral objective with little thought given to the economic incentives it would unleash, and the result was predictable in hindsight.
The data on AI chips tells a similar story. An estimated 140,000 restricted chips were smuggled to China in 2024, equivalent to roughly 37,000 Blackwell GPUs. If the same diversion rate holds against the higher 2026 U.S. production volume of approximately 26 million H100-equivalent units, projected smuggling would reach 276,000 Blackwell equivalents, about 4% of total U.S. chip output. These are not marginal quantities.
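The arithmetic behind these projections is simple enough to reproduce. A minimal sketch, using only the estimates cited above; the Blackwell-to-H100 conversion factor is backed out from those figures, not an official benchmark:

```python
# Back-of-envelope reconstruction of the diversion arithmetic above.
# All inputs are the estimates cited in the text; the Blackwell-to-H100
# conversion factor is implied by those figures, not an official spec.

smuggled_2024 = 140_000          # restricted chips smuggled to China, 2024 (est.)
blackwell_equiv_2024 = 37_000    # the same chips expressed as Blackwell GPUs

# Implied conversion: roughly how many H100-class chips equal one Blackwell
h100_per_blackwell = smuggled_2024 / blackwell_equiv_2024   # ~3.8

production_2026 = 26_000_000     # projected U.S. output, H100-equivalent units
projected_blackwell_smuggled = 276_000   # projected 2026 smuggling, Blackwell units

# Share of total U.S. output diverted, holding the 2024 rate constant
diversion_rate = (projected_blackwell_smuggled * h100_per_blackwell) / production_2026

print(f"Implied H100s per Blackwell: {h100_per_blackwell:.2f}")
print(f"Implied diversion rate: {diversion_rate:.1%}")   # ~4%
```

The point of making the assumptions explicit is that the 4% figure stands or falls with the constancy of the diversion rate, which is exactly the parameter enforcement policy is trying to move.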
The January 2026 policy implicitly acknowledges this by converting a hard ban into a tariff-plus-cap mechanism. That instinct is defensible. Economists generally prefer price-based instruments over quantity restrictions precisely because they reduce the rent that drives circumvention. But a 25% tariff on a product with an effective demand premium of several hundred percent in restricted markets barely dents the arbitrage incentive. The direction is right. The magnitude is wrong.
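To see how little a 25% surcharge changes the calculus, consider a stylized arbitrage margin. The 300% premium below is an illustrative assumption standing in for the "several hundred percent" effective premium mentioned above:

```python
# Illustrative arbitrage margin under a price-based control.
# The 300% demand premium is an assumed stand-in for the
# "several hundred percent" premium in restricted markets.

us_price = 1.00            # normalized domestic chip price
restricted_premium = 3.00  # restricted-market buyers pay up to 4x (assumption)
tariff = 0.25              # the 25% export surcharge

restricted_price = us_price * (1 + restricted_premium)   # 4.00
legal_export_cost = us_price * (1 + tariff)              # 1.25

# Rent available to a smuggler who avoids the surcharge entirely
gray_market_rent = restricted_price - us_price           # 3.00
# Rent remaining even for a fully compliant exporter
residual_rent = restricted_price - legal_export_cost     # 2.75

print(f"Rent before tariff: {gray_market_rent:.2f}x the domestic price")
print(f"Rent after tariff:  {residual_rent:.2f}x the domestic price")
```

Under these assumptions the surcharge shaves the rent from 3.00x to 2.75x the domestic price: a rounding error relative to the incentive to circumvent.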
As a Korean native, I find the most striking illustration of this principle in an unexpected place: the spread of South Korean pop culture in North Korea. Despite the regime threatening execution for consuming K-pop and K-dramas, smuggled USB drives carrying Korean entertainment have become pervasive among young North Koreans. When the gap between what people want and what they are allowed to have is large enough, no enforcement regime, not even the threat of death, fully stops the flow. The incentive simply overwhelms the control.
In my own work analyzing interprovincial trade barriers in Canada, I’ve seen the same pattern at a different scale. Duplicative provincial regulations create compliance wedges that firms routinely circumvent through regulatory arbitrage rather than challenge through formal channels. For instance, construction firms operating across provincial borders will set up shell entities in each province rather than navigate mutual recognition processes, effectively routing around the barrier at a cost. Routed this way, interprovincial construction services largely escape the formal statistics.
The response is always more creative routing, not less trade. Enforcement costs grow nonlinearly with the size of the restriction, and controls that ignore this end up in an expensive arms race against the incentives they themselves create.
There’s a deeper point here about who actually benefits from these barriers. If American trade policy on chips truly served American interests, it would encourage open trade that directly profits U.S. chip designers like NVIDIA and AMD rather than creating rents captured by smugglers and intermediaries operating outside any regulatory framework. The current approach taxes American companies and subsidizes the very gray market actors it claims to be targeting.
2. Second-order effects are traceable, if you build the models
Chip export restrictions don’t just affect chip buyers. They ripple through the entire AI value chain. Cloud providers lose revenue and shift capacity plans. Model developers face compute cost increases that change training run economics. Application-layer companies adjust product roadmaps. End users in restricted markets look for alternatives. Each of these channels has its own elasticity, lag structure, and feedback loop.
Economists have a mature framework for tracing exactly these kinds of supply chain effects: input-output analysis. In national accounts, I-O tables map how a shock in one sector (say, a tariff on steel) propagates through purchasing relationships to affect output, employment, and prices in downstream sectors. The same logic applies to AI compute. A restriction on chip exports is an upstream supply shock whose downstream effects can, in principle, be modeled with the same tools we use to estimate GDP multipliers.
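The mechanics are worth making concrete. Below is a minimal sketch of the Leontief price model applied to a stylized three-sector compute value chain; the sector structure and every coefficient are illustrative assumptions, not estimates:

```python
import numpy as np

# Stylized Leontief price model of the AI compute value chain.
# Sectors: 0 = chips, 1 = cloud compute, 2 = AI applications.
# A[i, j] = dollars of input from sector i required per dollar of
# sector j's output. All coefficients are illustrative assumptions.
A = np.array([
    [0.05, 0.40, 0.00],   # chips are a major input to cloud
    [0.00, 0.05, 0.30],   # cloud is a major input to applications
    [0.00, 0.00, 0.05],
])

# Model a supply restriction as a cost shock: scarcity raises the
# unit cost of chip production by 50 cents per dollar of output.
cost_shock = np.array([0.50, 0.00, 0.00])

# Leontief price model: p = A'p + v  =>  dp = (I - A')^{-1} dv,
# tracing how the upstream cost shock passes through downstream
price_change = np.linalg.solve(np.eye(3) - A.T, cost_shock)

for name, dp in zip(["chips", "cloud", "applications"], price_change):
    print(f"{name:>12}: price up {dp:.1%}")
```

With these assumed coefficients, a chip cost shock raises cloud prices by roughly 22% and application-layer prices by roughly 7%: exactly the kind of pass-through estimate that should precede, not follow, a major control decision.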
At Statistics Canada, I built sector-level models to assess how infrastructure investments worth over $10 billion would transmit through the Canadian economy, tracing elasticities, pass-through effects, and macroeconomic multipliers across interconnected industries. The AI compute supply chain is structurally similar: concentrated upstream production (a handful of fab operators), distributed midstream allocation (cloud providers), and a diffuse downstream application layer.
Yet nobody is building the I-O equivalent for AI compute governance. Current policy debates rely on static capacity estimates without modeling how restrictions alter investment flows or substitution patterns across the value chain. Right now, Chinese AI chips trail the U.S. state of the art by roughly three to five years in manufacturing capability, and domestic chips are expected to power 30 to 40% of China’s AI compute by 2026, up from under 10% in 2024. The Institute for Progress estimates that without chip exports, the U.S. holds a 21-to-49x advantage in 2026-produced AI compute. Unrestricted H200 exports would shrink this to as little as 1.2x. That is an extraordinary compression of the compute gap from a single policy change. But even that is a first-order estimate. The second-order effects (on Huawei’s competitive trajectory, on cloud pricing in Southeast Asia, on the global distribution of AI development capacity) remain largely unquantified.
3. Unilateral controls face coordination failures
Export controls are, at their core, a multi-player game. The United States restricts chip sales to China. But unless allied nations (the Netherlands with ASML, Japan, South Korea with Samsung and SK Hynix, Taiwan with TSMC) impose equivalent restrictions, unilateral U.S. controls only incentivize defection.
This is a textbook coordination problem. The Nash equilibrium of an uncoordinated export control regime is weaker than what multilateral coordination could achieve. Each country individually benefits from selling to the restricted market while others bear the security cost. The result is a race to the policy floor.
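The structure is a standard prisoner's dilemma, and it can be checked mechanically. A minimal sketch with hypothetical payoffs (the numbers are assumptions chosen only to reflect the incentive ordering described above):

```python
from itertools import product

# A stylized two-player export-control game between allied exporters.
# Each country chooses to Restrict (R) or Sell (S) to the controlled
# market. Payoff values are illustrative assumptions encoding the
# incentive ordering in the text.
payoffs = {
    # (own_action, other_action): own payoff
    ("R", "R"): 3,   # coordinated controls: security benefit preserved
    ("S", "R"): 4,   # defect alone: capture sales AND free-ride on security
    ("R", "S"): 0,   # restrict alone: lose sales, security eroded anyway
    ("S", "S"): 1,   # race to the floor: sales competed away, no security
}

def best_response(other_action):
    return max(["R", "S"], key=lambda a: payoffs[(a, other_action)])

# A profile is a Nash equilibrium if each action is a best response
# to the other player's action.
equilibria = [
    (a, b) for a, b in product("RS", repeat=2)
    if a == best_response(b) and b == best_response(a)
]
print("Nash equilibria:", equilibria)   # only mutual selling survives
```

Under any payoffs with this ordering, mutual selling is the unique equilibrium even though mutual restriction leaves both players better off, which is precisely the race to the policy floor.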
Game theory tells us that sustaining cooperation in repeated games requires shared information and mechanisms for penalizing defection. The current U.S. approach has some bilateral coordination with the Netherlands and Japan through the October 2023 trilateral framework, but enforcement verification remains weak and the incentive structure for sustained cooperation is essentially non-existent. The January 2026 policy shift, allowing U.S. firms to sell H200s with a surcharge, arguably signals that Washington itself has begun defecting from its own restrictive equilibrium, undermining the coalition it spent years building.
Canada’s experience with interprovincial trade barriers illustrates the same dynamic at a smaller scale. Provinces maintain regulatory barriers that fragment the national market because each province benefits locally from its own regulations while bearing only a fraction of the aggregate cost. Resolving this requires not just awareness of the costs (which my work at the Privy Council Office quantified) but institutional mechanisms that shift the equilibrium: shared data infrastructure and a coherent mutual recognition agreement.
For compute governance, the equivalent would be multilateral coordination with shared monitoring infrastructure, harmonized thresholds, and credible escalation mechanisms. Without it, unilateral controls are unsustainable.
4. You cannot control what you cannot measure
Effective trade policy depends on measurement infrastructure. Customs agencies track goods flows with harmonized classification codes. Central banks monitor capital movements. Statistical agencies produce trade balances. This architecture was built over decades and is continuously refined. Without it, trade policy would be guesswork.
The AI compute supply chain currently lacks anything comparable. There is no standardized system for accounting for compute capacity flows across borders. End-use verification for exported chips is rudimentary. Cloud compute, which can be provisioned remotely and resold through nested intermediaries, is especially hard to track.
I faced an analogous challenge building Canada’s Interprovincial Trade Data Hub at the Privy Council Office. Interprovincial trade had been a recognized policy issue for decades, but the measurement infrastructure to actually quantify barriers, track trade flows, and present information visually did not exist. Building it required integrating data from many disparate sources and creating adoption mechanisms so that policymakers would actually use the data.
AI compute governance needs the same foundational investment. Before tightening controls, policymakers should be asking: do we have “compute accounts” analogous to national accounts? Can we track where compute capacity is deployed globally, who is purchasing it, and for what purposes? It is possible that some version of this exists in classified settings, but I am not aware of any public equivalent. Without this measurement infrastructure, export controls operate in a fog, and policymakers end up cycling between restrictions that are too broad (generating large circumvention rents) and too narrow (failing to address the actual risk).
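What would a compute account even record? A minimal sketch by analogy with national accounts; the schema, field names, and figures below are entirely hypothetical, since no such public standard exists today:

```python
from dataclasses import dataclass
from collections import defaultdict

# Hypothetical ledger entry for a "compute account", by analogy with
# national accounts. Schema, field names, and all figures are invented
# for illustration; no public standard like this exists today.

@dataclass
class ComputeFlow:
    exporter: str             # jurisdiction of origin
    importer: str             # jurisdiction of destination
    h100_equivalents: float   # standardized capacity unit
    channel: str              # "hardware" or "cloud"
    declared_end_use: str

def bilateral_balances(flows):
    """Aggregate flows into exporter->importer capacity totals:
    the compute analogue of a bilateral trade balance."""
    totals = defaultdict(float)
    for f in flows:
        totals[(f.exporter, f.importer)] += f.h100_equivalents
    return dict(totals)

flows = [
    ComputeFlow("US", "SG", 20_000, "hardware", "cloud datacenter"),
    ComputeFlow("US", "SG", 5_000, "cloud", "model training"),
    ComputeFlow("SG", "CN", 8_000, "cloud", "inference"),   # re-export risk
]
print(bilateral_balances(flows))
```

Even this toy ledger surfaces the policy-relevant pattern a customs-style system would catch: capacity flowing into an intermediary jurisdiction and partially flowing out again toward the restricted market.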
What economists can contribute
None of this is to say that national security considerations are secondary. They are not. But the current debate suffers from a disciplinary blind spot: security analysts define the objectives, lawyers design the regulatory instruments, and economists are largely absent from the room.
Trade policy economics suggests four concrete contributions:
- Design controls with circumvention cost curves in mind. Price-based instruments like tariffs and licensing fees are generally more efficient than hard bans when demand is inelastic and circumvention channels exist. Not all restrictions should be tariffs, but the choice should be informed by estimated circumvention elasticities rather than defaults.
- Build input-output models for the AI compute supply chain. Ex ante impact assessment should be standard for any significant export control, just as it is for trade agreements. The models exist. The data collection effort is what’s missing.
- Invest in multilateral coordination mechanisms. Unilateral controls are a dominated strategy in a multi-player game. An OECD or G7 compute governance coordination body with shared monitoring and harmonized thresholds would be more effective and more sustainable than the current patchwork.
- Develop standardized compute measurement infrastructure. You cannot govern what you cannot see. Compute accounts tracking capacity, flows, and end-use across jurisdictions are a prerequisite for effective policy, not a luxury to develop later.
The economics of compute export controls is not a niche subfield. It is the application of well-understood trade policy principles to a new and consequential domain. The tools exist. The question is whether policymakers will use them.
Tom Hwang is an Economist at the Privy Council Office, Government of Canada, where he works on trade policy and economic analysis for Cabinet-level decision-making. He previously built sector-level input-output models at Statistics Canada and studied game theory and industrial organization at the University of Arizona. The views expressed here are entirely his own and do not represent the Government of Canada.
