
By Robert Wiblin


Episode summary

People get very mad when government is reactive as opposed to proactive. I think government, especially in a democratic society, is sort of meant to be reactive. The people are supposed to have an impulse and then the government’s supposed to react to it.

— Dean Ball

Former White House staffer Dean Ball thinks it’s very likely some form of ‘superintelligence’ arrives in under 20 years. He thinks AI being used for bioweapon research is “a real threat model, obviously.” He worries about dangerous ‘power imbalances’ should AI companies reach “$50 trillion market caps.” And he believes the agricultural revolution probably worsened human health and wellbeing.

Given that, you might expect him to be pushing for AI regulation. Instead, he’s become one of the field’s most prominent and thoughtful regulation sceptics — recently co-authoring Trump’s AI Action Plan before moving on to the Foundation for American Innovation.

Dean argues that the wrong regulations, deployed too early, could freeze society into a brittle, suboptimal political and economic order. As he puts it, “my big concern is that we’ll lock ourselves in to some suboptimal dynamic and actually, in a Shakespearean fashion, bring about the world that we do not want.”

Dean’s fundamental concern is uncertainty: “We just don’t know enough yet about the shape of this technology, the ergonomics of it, the economics of it… You can’t govern the technology until you have a better sense of that.”

Premature regulation could lock us into addressing the wrong problem (focusing on rogue AI when the real issue is power concentration), using the wrong tools (using compute thresholds to regulate models when we should regulate companies instead), through the wrong institutions (bodies captured by AI interests), all while making it harder to build the actual solutions we’ll need (like open source alternatives or legal mechanisms newly enabled by AI).

But Dean is also a pragmatist: he opposed California’s AI regulatory bill SB 1047 in 2024, but — impressed by new capabilities shown by “reasoning models” — he supported its successor SB 53 in 2025.

As Dean sees it, many of the interventions that would help with catastrophic risks also happen to improve mundane AI safety, make products more reliable, and address present-day harms like AI-assisted suicide among teenagers. So rather than betting on a particular vision of the future, we should cross the river by feeling the stones and pursue “robust” interventions we’re unlikely to regret.

This episode was recorded on September 24, 2025.

Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Music: CORBIT
Coordination, transcripts, and web: Katy Moore

The interview in a nutshell

Dean Ball, AI policy analyst at the Foundation for American Innovation and lead writer on the White House’s AI Action Plan, takes seriously the possibility that superintelligence is coming soon — but thinks most current governance proposals are premature. He argues we should focus on layered technical defences and private governance mechanisms rather than heavy-handed regulation that could lock in suboptimal outcomes.

Superintelligence is likely coming, but “rogue AI” isn’t the main concern

Dean puts the probability of general superintelligence within 20 years at 80–90%, though he’s sceptical of the “Bostrominian” vision of godlike AI that can instantly do anything:

  • Integration into physical systems requires capital upgrade cycles, diffusion of technology, and infrastructure buildout that takes decades — overnight AI takeover scenarios don’t account for this
  • A power-seeking AI would likely create enormous economic value rather than stage a violent uprising, because that’s how the most successful “conquerors” of the modern era (corporations) actually operate

His bigger worries are around emergent social outcomes and erosion of control:

  • Just as the iPhone led to Uber and Trump’s presidency through hard-to-predict chains, AI could radically reshape society in ways nobody controls — without any model “going rogue”
  • AIs may become a “meta character” like “the market”: A force with incentives that can be modelled, whose profile is partially but not entirely written by humans
  • The technological balance that made capitalism work for workers might be upset, leaving humans as either rent-collectors (if lucky) or irrelevant
  • If AIs do all cognitive work better, humans might be left with gift-economy work: home healthcare, religious practice, sex work

Dean finds it “very troubling” that nobody has articulated an attractive, stable vision of what a good society looks like after superintelligence.

We should wait for the problem to take shape before regulating

Dean was against California’s SB 1047 but now supports its successor SB 53. His core principle: regulation invites path dependency, and premature rules could cause the very outcomes we’re trying to avoid.

Why wait:

  • Government is meant to be reactive, not proactive: Democratic societies respond to citizen impulses rather than anticipating them
  • We don’t know the shape of the technology yet: Will online learning mean brute-force context windows or real-time weight updates? These require completely different governance approaches
  • AGI will transform government itself: Why extend ageing institutional infrastructure when AGI might enable entirely new institutional designs?
  • The level of confidence needed for policy should be higher than for personal decisions: You can bet on the future privately, but imposing speculation on millions requires considerably more certainty

Exceptions where proactive action makes sense:

  • Well-defined threat models like post-quantum cryptography
  • Biosecurity and cybersecurity, where specific technical solutions exist
  • Transparency requirements and whistleblower protections — these help in almost any future

Better governance mechanisms than compute thresholds

Dean argues that training compute thresholds will age poorly. He proposes entity-based regulation instead:

  • Target company characteristics, not model size: For catastrophic risk, focus on R&D spending — it’s tax-deductible, so companies track it accurately and are inclined to overstate it
  • Think bank supervision, not pharmaceuticals: AI products change constantly; we need macroprudential supervision of business practices, not FDA-style product approvals
  • Private governance can work: Insurance, standards bodies, and regulatory markets (where private supervisory organisations are overseen by government) are promising models

On catastrophic misuse: layered defences, not perfect solutions

Dean advocates a defence-in-depth approach to biosecurity — multiple imperfect checks that compound:

  • AI-side safeguards: Usage monitoring, interpretability, steerability, rapid law enforcement contact
  • Synthesis-side screening: “Know your customer” (KYC), sequence screening for pathogenicity, usage pattern analysis
  • Biosurveillance: Metagenomic sequencing, far-UVC, indoor air quality
  • Medical countermeasures: Rapid vaccine manufacturing capability

“We don’t tend to develop 100% solutions to these sorts of problems”: the goal is to make the right thing easy for most people while accepting that determined bad actors may get through, then relying on downstream defences.

AI may hollow out the human role in the economy — even if GDP soars

Dean is not reassured by techno-optimists who say AI will simply boost productivity. He worries:

  • The balance between labour and capital — which underpinned modern liberal societies — may collapse
  • A tiny elite might collect rents while most meaningful cognitive work is automated
  • Many people could be pushed into a “gift economy” of care, religion, and sex work because AIs perform the cognitive tasks better
  • History shows tech revolutions can go badly: the agricultural revolution reduced wellbeing for centuries

Governance structures must be redesigned — not bolted onto the old state

Dean believes AGI will change the nature of government itself:

  • The US state could shrink toward its 18th-century form — political decisions at the top, automated technocracy underneath
  • Frontier AI oversight should look more like bank supervision — private verification bodies chartered and audited by government, issuing “matters requiring attention” rather than rule-by-fiat
  • Centralised AI regulators are dangerous because interest groups will capture them to protect jobs, not reduce catastrophic risk — as already seen in laws blocking AI mental health services, screenwriting rules, and copyright fights

Policy solutions should focus on low-cost, high-leverage “wins”

Dean argues for targeted, technical, and incremental improvements, noting that many problems (like the Raine v. OpenAI suicide case) require technical fixes similar to those needed for bioweapons risk:

  • Transparency and technical standards are “easy wins” that help in a huge number of possible futures, and include:
    • Mandated transparency to force labs to seriously consider and publicly document their risk mitigation efforts.
    • Model specifications that detail the model’s intended behavioural profile and adherence rates.
    • Usage monitoring and technical protocols (like “know your customer” for compute access) to restrict malicious actors.
  • Rethink regulatory targets: He co-proposed replacing compute thresholds with entity-based regulation (e.g., targeting companies based on their R&D expense), which is auditable and better aligns with the goal of governing enterprise risk.
  • Liability and courts: Tort liability provides a dynamic mechanism to respond to unpredicted harms, acting as a powerful incentive and a “brace for impact” technology where direct regulation is premature.
  • Private governance: He supports models like the “independent verification organisation,” borrowing concepts like “macroprudential supervision” and “Matters Requiring Attention” (MRAs) from bank regulation.

Policy will be driven by economic protection and political dynamics

Dean predicts that the future of AI governance will be driven less by long-term risk and more by near-term political and economic incentives:

  • AI will be overregulated because every industry (e.g., screenwriters, therapists, teachers’ unions) will use existing and new regulations to protect their “moats” and block job displacement.
  • The AI doom/pause movement may find a more natural political home among the anti-establishment populist right (MAGA) because they are already deeply distrustful of massive tech corporations and centralised power.
  • Dean’s work on the AI Action Plan aimed to put policy substance on the pro-AI White House agenda while threading the needle through the emerging political and economic complexities.

Highlights

Is there any way to get the US government to use AI sensibly?

Dean Ball: Something that I often did in the drafting of the Action Plan is, I didn’t say, “Write the Action Plan” or, “Come up with the policies for me” — but what I might ask for is like, “Give me a comprehensive menu of every statutory lever that I have at my disposal on issue X.” And they’re really good at that. And that’s the exact thing where I think a lot of people are a little too prideful: they think, “I’m the expert on the law” and blah, blah, blah, so they don’t want to do that as much. Maybe that also feels more like policymaking to them.

My view is that I think that there is still a very important human role in public administration, to be sure. But it can be a very real accelerant for the policymaking process in all kinds of ways. Some of that’s just going to be generational, because there will be people who are just stubborn and who are like, “No, I’m not going to let a machine do that. For whatever reason I’ve demarcated this type of thing as being part of my status. And removing that is part of my status.” But there’s other things that are not. And maybe weirdly, because of the jagged frontier concept, the models are actually worse at the things that the person deems lower status than they are at the things the person deems higher status. I think that’s something we’ve seen.

You are seeing adoption of AI in government, for sure. You are. It’s slow. There’s a lot of things that make it hard that are unnecessary, and there are things that make it hard that are necessary.

Rob Wiblin: Like what?

Dean Ball: Well, on the unnecessary side of things, there’s lots of data privacy types of rules that make things hard. There’s things like FOIA. This is a good example of too much transparency: the Freedom of Information Act, and every state government has a freedom of information law as well. These are laws that basically are about what is considered a record that a member of the public can request.

Rob Wiblin: People need to be able to ask questions of ChatGPT without it going into the public necessarily, the same way that they need to be able to have conversations without it all being recorded and published.

Dean Ball: Yes, exactly. But that’s not the way the law considers it. And this is one of the very significant dangers of all types of laws that deal with information. Data privacy laws are the same way. If you start categorising information in particular ways… I mean, information I think is fundamental to the universe. So in a way it’s like you’re regulating some of the most fundamental things in the world. So it’s very dangerous.

Lots of other sorts of things can make this harder, and it can also vary by agency. Procurement is also a problem. Literally even just the way you account for your spending on AI can be a problem, because say you have a fixed IT budget. But a lot of federal agencies under the Trump administration have reduced their headcount. You can argue about too much or too little or whatever, but the fact is they’ve reduced their headcount, so they have actually pools of congressionally appropriated money that could be spent on AI to replace some of that human labour. But the AI is in the IT budget, and you can’t shift between them.

So these are the kinds of things that you run into. And this is going to be true in every firm, but government is just particularly inflexible. So I think government probably will be a laggard in adoption, and I think probably, from a median voter theorem perspective, government will be a laggard not because of democratic impulse, but because it’s just really hard to adopt new technology in government.

But also I think that’s probably what the American people want: for government to be slow. So it ends up working out, like, OK fine, American people getting what they want. I don’t think the American people are all that wise in this particular case, but whatever, it’s not my decision. You know, maybe they’re smarter than me, who knows? Masses often are smarter than individuals. Or aggregates, I should say.

But anyway, I think there are lots of interesting things that government already is doing, and I think that’ll increase over time. In terms of getting advice from them, it’s interesting: young staffers I know universally do this, and it’s often very invisible for the exact FOIA reasons I mentioned.

Rob Wiblin: Maybe you just do it on your phone.

Dean Ball: Yeah, they’re just doing it on their personal accounts. They’re just dealing with the inefficiency by using it through their personal accounts.

But they’re probably doing things I wouldn’t do. Like they’re drafting statutes, and you can pair-program statutes with language models now. You get the exact same ideas; it’s where coding was a couple of years ago. There’s often boilerplate in statutes and it’s like, yep, just write that clause. Just do that. Contracts, same thing. And the same with legal drafting.

The more imaginative stuff, you can have it do a first pass, but there’s going to be a lot of flaws. And if you just literally say, “Write a statute,” it will mess up. The code will not compile, you know? I know people who do that for executive orders and statutes and things like this in both administrations. You probably shouldn’t do that, but it can accelerate a lot of your policymaking work for sure.

Government is best suited to dealing with catastrophic risks

Dean Ball: You know, working at the White House was a real visceral lesson for me in this notion. I heard Ezra Klein once describe government as a grand enterprise in risk management. Specifically, I think the risks that government is best positioned to address are these catastrophic ones. Or at least the tail risks. You can definitely believe that a catastrophic event with very low probability of happening is not going to be efficiently addressed through market solutions and insurance and liability and things like this.

We might build a lot of the technical infrastructure we need to address it through insurance and liability and markets, but the actual… Maybe there’s some additional bit of incentive you need. It’s hard to know, but you can definitely make that case. It’s been true in the past.

But very often the conversations that we would have — without saying anything specific about conversations about these types of things in government — very often the structure of the conversation would be like, “We all know there are no 100% solutions. There are a series of 95% solutions.” And you also have to triage to a certain extent. You just have to be practical. Really the question comes down to not so much how much can you mitigate, but how much can you live with — like what kind of risks are we just willing to silently tolerate, assuming we do X, Y, and Z?

I think what you’re saying for the open source models, to get to a point where we have open source development practices where you can totally — and please — compete and do your open source model and try a novel business. You know, I struggle to imagine what the business model is of training a $10 billion model and then giving the weights away for free, but maybe there is one. I’d be very open to that idea. I hope so, in some sense. But do all that.

But there’s also a set of development practices that don’t really affect your performance that much, that don’t entail that much additional cost, and that are the basic things you need to do to be a good citizen — and most people will just do that.

And if you actually are trying to make an open source model that causes a bioweapon, well, we can’t fundamentally stop you from doing that, and investing in the ability to stop you from doing that — to actually stop you, to be sure that we can stop you — that is the thing that causes massive government intrusion into private life. So we’re not going to do that. We’re just going to make it easy for most people to do the right thing.

We’re going to understand that there are going to be some people who do the wrong thing because they’re malicious actors. And we’re going to solve that, A, through relatively normal means: we spend a tonne of money in this country on intelligence collection, and we’re going to continue to do that, and we’re going to use AI to do it and we’re going to do a really good job.

And we’ll also do things like, something else I worked on in government was the Nucleic Acid Synthesis Framework, updating that from the Biden era to make it so that we can more robustly enforce provisions that require the people who would actually synthesise nucleic acids, the companies that would do that, to engage in basically again KYC screening practices, screen the sequences, and make sure that it’s not obvious that someone’s making a virus.

Is this a perfect solution? No, because you can evade KYC systems, and also because you can split up your order for the genome you want to make. If we say that we’ll screen everything above 50 nucleotides, then you can place orders of length 49 in a bunch of different places, and then you can stitch them all together. Yes, OK. And then after that we’ll have biosurveillance. You know what I mean? You just kind of do this and that’s how it actually all works.

And we don’t tend to develop 100% solutions to these sorts of problems, but…

Rob Wiblin: But we’ve made it so far.

Dean Ball: We’ve made it so far. Yeah.

Regulation invites path dependency

Dean Ball: I will say one thing: regulation invites path dependency.

So let’s just take the example of open source AI. Very plausibly, a way to mitigate the potential loss of control — or not even loss of control, but power imbalances that could exist between what we now think of as the AI companies, and maybe we’ll think of it just as the AIs in the future, or maybe we’ll continue to think of it as companies. I think we’ll probably continue to think of it as companies versus humans — you know, if OpenAI has like a $50 trillion market cap, that is a really big problem for us. You can even see examples of this in some countries today, like Korea. In Korea, like 30 families own the companies that are responsible for like 60% of GDP or something like that. It’s crazy. The chaebols.

But if we have open source systems, and the ability to make these kinds of things is widely dispersed, then I think you do actually mitigate against some of these power imbalances in a quite significant way.

So part of the reason that I originally got into this field was to make a robust defence of open source because I worried about precisely this. In my public writing on the topic, I tended to talk more about how it’s better for diffusion, it’s better for innovation — and all that stuff is also true — because I was trying to make arguments in the like locally optimal discursive environment, right?

Rob Wiblin: Say things that make sense to people.

Dean Ball: Yeah, say things that make sense to people at that time. But in terms of what was animating for me, it does have to do with this power stuff in the long term. I think that there is a world in which regulation is actually quite harmful to open source.

Or maybe not. Maybe open source is actually terrible. I don’t know. I’m willing to entertain that idea too. Maybe we actually really don’t want open source. I think we just don’t know enough yet about the shape of this technology, the ergonomics of it, the economics of it. I think we don’t know: will these companies sell a commodified product, and will they be more like utilities, or will they be more like Standard Oil? It’s very hard to know.

So I think you can’t govern the technology until you have a better sense of that. You can do small things now, but you can’t take big ambitious steps yet, because we just don’t know. And if we try, my big concern is that we’ll lock ourselves into some suboptimal dynamic and actually, in a Shakespearean fashion, bring about the world that we do not want.

The military may not adopt AI that fast

Dean Ball: Will we have automated military systems? Absolutely we will, but in some sense, maintaining human oversight over those things seems like the most tractable area of potential international collaboration. It’s already been in most of the international statements and agreements that have been made: they will always have some sort of callout to responsible adoption of AI in the military.

I think right now our problems are so much the opposite that I worry more about the opposite set of issues: where the military is slow to adopt AI, because the collaboration between the frontier AI companies and the military is insufficiently deep. So we end up kind of trying to bang consumer chatbots, essentially consumer agents, into military applications. And I’m not sure that’s quite the right way to do it.

And then of course, there are problems that a company like Anduril is trying to solve. The control actuation — data sensing and stuff like this from all these different pieces of hardware in the military — they’re all made by different vendors, and they don’t all talk to each other. So you do need some kind of common operating system layer. This is exactly what a company like Anduril exists to solve.

It’s called the Lattice OS, that’s the thing that’s supposed to interconnect all this stuff. And it can be from different people; different people can plug into it. It’s like a platform.

Even if that’s successful, I think we’re talking 15, 20 years until everything is hooked up that way, maybe more. You know, look at the current capital upgrade cycles on things like nuclear submarines: we have nuclear submarines that are not set to retire until the 2050s.

Rob Wiblin: I mean, we could hand over control of them, of the decision making, potentially.

Dean Ball: We could. And I would say if we did that, and it was not overseen in the right way, I think that would be an unwise move. And I do just sort of doubt that’s where we will go.

Rob Wiblin: Because people will anticipate that this is a foolish thing to do.

Dean Ball: Yeah, probably partially that. Partially we will want to maintain at least the illusion of human control over organisations. That will probably end up being valuable. I think also, in general in the AI world, I think AI forecasting tends to go to these sort of asymptotic extremes — where you think about, “What if we completely automated absolutely everything, and it was completely controlled by AI?” And it’s like, yeah, but it’s not quite like that, you know? It’s somewhat more complex.

And I think modelling the risks that you would encounter along the way is probably the more realistic exercise: those are the risks we actually face, as opposed to “we’re giving control of the government to the AIs.”

Rob Wiblin: Projecting out to the end state, and trying to figure out what you would do then.

Dean Ball: Yeah. That just feels like the kind of thing that no political leader would be incented to do, at least not in a democratic society. An autocracy is perhaps a bit harder to model.

Rob Wiblin: I think if you wanted to use AI in order to seize power and remain in power, then potentially having AI in the military and having that AI just follow your instructions is a pretty attractive path to go down.

Dean Ball: Yeah. And there’s a world in which they really are far superior military planners. But the question that feels to me fundamentally human is: Do we engage in this conflict in the first place? And if we do, what’s our plan to eventually deescalate? What’s our desired end state? That feels like a human set of decisions that can get made through a political process. And then what you actually kind of don’t want to be in the political process is all these other things about what exactly do we do? Where do we position this stuff? What kind of equipment do we need to procure?

When procurement becomes part of the political process, you often end up with quite suboptimal outcomes, as we no doubt observe today. Actually, my hopeful view for where AI ends up in government: there was a time, when this country was first founded, when the federal government had 1,000 employees. It was smaller than OpenAI, and that was for all of it: it was just the Eastern seaboard of America, really, at the time. But still, that’s doing customs enforcement — remember, we made all of our revenue off of tariffs at that time.

It almost feels to me like, not necessarily 1,000 people, but that the ideal end state for government, if it’s anything like the traditional nation-state post-AGI, is actually much smaller and much more like the 18th century. Because the difference is the 18th century was not burdened with a massive technocracy. What we will do is automate the massive technocracy and then just have political decisions get made at the top, and just use government for doing politics and making decisions that are more traditionally political — as opposed to using the political process and politicians to solve all the deeply technocratic things with which the federal government concerns itself today.

Dean’s “two wolves” of AI scepticism and optimism

Dean Ball: Within me there are two wolves.

I think when a lot of people say AI is overwhelmingly likely to go well, you could basically reduce that statement down to “AI is overwhelmingly likely to increase gross domestic product.” And on this, I concur: yes, I do think that is true. I think gross domestic product will be much higher in 2040 — much, much higher than it is today.

But GDP going up is not necessarily correlated with wellbeing. It’s been a decent way of thinking about human wellbeing going up for most of the history of capitalism, because there was this balance between essentially capital and labour. There was a harmony, and that harmony was technologically contingent, and the nation-state kind of exists in the same technologically contingent harmony.

But I think that AI has the potential to upset that balance. Arguably, that balance has already been upset by things like the internet, software, globalisation, things like this, and even financial services in some sense. And if that is the case — that that balance gets upset much more than it currently is — then it’s not obvious to me that sort of by default human wellbeing and agency is still preserved in this world.

Even if, again, it’s not like we lost control per se to AIs, but it’s like the AIs do the vast majority of the cognitive labour with which people in elite cities like Washington concern themselves every day today. And instead it’s like all that stuff gets done by the AIs, and there’s some people in the government at the top who are having weird political fights with one another. And there’s people that run various companies, the AI companies, the capitalists. And then there’s various chokepoints that get owned for different reasons, like maybe you still need to have a human lawyer to appear in court.

So there’s some very wealthy humans who are essentially just rent-seekers, collecting rents of various kinds. And then for the vast majority of us, the work that we can do for one another starts to resemble more of a gift economy, where we’re doing things for one another. The broad category of that, maybe it’s like home healthcare, maybe it’s religious things, religious practice, or maybe it’s sex work.

So there’s these dark futures you can envision where the actual human scope of things narrows considerably, because the AIs just do all the new cognitive tasks that we invented for ourselves, the AIs do them better, and all the new kinds of cognitive tasks that AI itself will invent, the AI also does better. So we are stuck either collecting our rents if we’re lucky, or not if we’re unlucky.
