I'm a globally ranked top 20 forecaster. I believe that AI is not a normal technology. I'm working to help shape AI for global prosperity and human freedom. Previously, I was a data scientist with five years of industry experience.
See also "A Model Estimating the Value of Research Influencing Funders", which comes to a similar conclusion.
You might like "A Model Estimating the Value of Research Influencing Funders", which makes a similar point, but quantitatively.
Hi David - I work a lot on semiconductor/chip export policy, so I think it's very important to think carefully about the strategy here.
My biggest issue is that "short vs. long" timelines is not a binary. I agree that under longer timelines, say post-2035, China likely can catch up significantly on chip manufacturing. (That seems much less likely pre-2035.) But I think the logic of the controls matters a great deal for 2025-2035 timelines, and the controls still might create a meaningful strategic advantage post-2035.
Who has the chips still matters, since it determines whether a country has enough compute to train its own models, run any models at all, and provision cloud providers. You treat "differential adoption" and "who owns chips" as separate, but they're deeply interconnected: if you control the chip supply, you inherently influence adoption patterns. There would of course be diffusion of AI, but given chip controls it would be much more likely to come from the US, and potentially the AI would remain on US cloud infrastructure under US control.
Furthermore, if you grant that AI can accelerate AI development itself, a 2-3 year compute advantage could be decisive... and not just via "fast take-off recursive self-improvement" but even in mundane ways, where better AI leads to better chip design tools, better compiler optimization, better datacenter cooling systems, and better materials science for next-gen chips.
You're right that it is impossible to control 100% of the chips, but that's not the goal. The goal is to control enough of the chips enough of the time to create a structural advantage. Maintaining a 10-to-1 compute advantage for the US over China would mean that even if we had AI parity, we'd still have 10x more AI agents than China. And we'd likely have better AI per agent as well.
For example, take the Russian oil case you discuss: yes, there's significant leakage to India and China, and those controls aren't perfect, but Russia's realized prices have stayed roughly $15-20/barrel below Brent throughout 2024-2025, forcing Russia to accept steep discounts while burning cash on shadow-fleet operations and longer shipping routes.
And chips are much easier to control than oil right now. Currently, OpenAI can buy one million NVIDIA GB300s to power Stargate, but China and Russia can't even come close. Chinese chips are currently much weaker in both quantity and quality, and this will persist for a while: China lacks the relevant chipmaking equipment and likely will for some time -- the EUV technology that prints chips at nanometer scale took decades to develop and is arguably the most advanced technology ever made. You seem to be applying all-or-nothing thinking here, or assuming we can't possibly block enough chips to matter. But we have already significantly reduced China's compute stock, and you even have people like DeepSeek's CEO saying that chip controls are their biggest barrier. Chinese AI development would certainly look different if China could freely buy one million GB300s as well.
The key thing is that semiconductor manufacturing isn't a commodity market with fungible goods flowing to equilibrium. You're treating this as a standard economic problem where market forces inevitably equalize, and you assume largely frictionless markets - but neither assumption seems true. The chip supply chain is different: extreme manufacturing concentration, decades-long development cycles, and tacit knowledge that doesn't transfer easily. Additionally, network effects in AI development could create lock-in before economic pressure equalizes access. Moreover, American/Western AI and chip development isn't going to flow freely to China, because the US government would continue to stop that from happening as a matter of national security. Capital does flow, but this technology cannot flow quickly, freely, or easily.
It's also not easy to simply make up for a chip disadvantage with an energy advantage. It's very difficult to train frontier AI models on ancient hardware. DeepSeek has been trying hard all year to train its models on Huawei chips and still hasn't succeeded. It doesn't matter how cheap you make energy if chips remain a limiting factor. Arguably, TSMC's lead over SMIC has grown, not shrunk, over the past decade despite massive Chinese investment.
All told, I think China will be at a significant AI disadvantage over the next decade or more, and this is due to reasonably effective (albeit imperfect) chip controls. Ideally we would make the controls even stronger to press that advantage further (I have ideas on how), but that's a different conversation from the strategic wisdom of having the controls in the first place.
Congrats! I also thought it was great.
Sorry for the slightly off-topic question, but I noticed that EAG London 2025 talks have been uploaded to YouTube, while I didn't see any EAG Bay Area 2025 talks. Do you know when those will go up?
If you're considering a career in AI policy, now is an especially good time to start applying widely, as there's a lot of hiring going on right now. On my Substack I documented over a dozen different opportunities that I think are very promising.
Hi. Thanks for writing this. I find electoral reform to be a genuinely interesting cause area, and I appreciate the effort to apply EA frameworks to it. I have a few concerns with the framing and some factual details:
On neglectedness: The claim that this is "the most neglected intervention in EA" doesn't match the track record. The Center for Election Science has received over $2.4M from Open Philanthropy, $100K from EA Funds, and $40K+ from SFF. 80,000 Hours has a problem profile on voting reform calling it a "potential highest priority area" and did a full podcast episode with Aaron Hamlin. There's also a dedicated EA Forum topic page, and another post appeared today advocating for CES funding. This doesn't mean additional funding isn't warranted, but framing this as EA's "most neglected" area overstates the case.
On the theory of change: The post seems to conflate two distinct interventions: (1) running congressional candidates on electoral reform platforms to "prevent authoritarian consolidation" in 2026, and (2) actually implementing approval voting + top-four primaries long-term. The "$125M protects $400B" framing treats these as equivalent, but they're quite different propositions. The 20% success probability is asserted without justification, and it's unclear what "success" means... winning midterms? Passing state initiatives? Both?
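(To spell out my read of that framing - which I'm assuming is meant as a straight expected-value comparison using the post's own numbers:

$$ \mathbb{E}[\text{benefit}] \approx 0.20 \times \$400\text{B} = \$80\text{B}, \qquad \frac{\$80\text{B}}{\$125\text{M}} \approx 640:1 $$

That implied 640:1 ratio is doing nearly all the work, which is exactly why the unsupported 20% figure and the ambiguity about what "success" means matter so much.)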
I'd also find it helpful to understand the mechanism better: how does "50 congressional candidates running on electoral reform" prevent authoritarianism? Most swing voters care about bread-and-butter issues like affordability and healthcare - the claim that approval voting advocacy serves as an effective "organizing principle" would benefit from more evidence or argument.
Some factual corrections: The post advocates for approval voting but cites Alaska as evidence - Alaska uses ranked-choice voting, not approval voting. It's also worth noting that the Alaska repeal measure only barely failed (50.1% to 49.9%) despite being outspent 100:1, which cuts both ways on tractability. And while the post mentions that Fargo's approval voting was banned by the North Dakota legislature, this seems like an important cautionary tale about scaling that deserves more attention.
I appreciate you sharing this, but I'd encourage tightening up the claims and being more precise about which evidence supports which intervention and what each of your suggested interventions is actually meant to accomplish.