By Zershaaneh Qureshi
Episode summary
We have this very complex, very subtle system of checks and balances currently on how much power can get concentrated… I think it’s not clear that these checks and balances, which have evolved over many centuries to work at human speeds and human capability levels, will just naturally pour over to AI speeds and AI capability levels. … What if you can just completely circumvent all of this stuff once you have vast AI workforces working really fast for you? — Rose Hadshar
The most important political question in the age of advanced AI might not be who wins elections. It might be whether elections continue to matter at all.
That’s the view of Rose Hadshar, researcher at Forethought, who believes we could see extreme, AI-enabled power concentration without a coup or dramatic ‘end of democracy’ moment.
She foresees something more insidious: an elite group with access to such powerful AI capabilities that the normal mechanisms for checking elite power — law, elections, public pressure, the threat of strikes — cease to have much effect. Those mechanisms could continue to exist on paper, but become ineffectual in a world where humans are no longer needed to execute even the largest-scale projects.
Almost nobody wants this to happen — but we may find ourselves unable to prevent it.
If AI disrupts our ability to make sense of things, will we even notice power getting severely concentrated, or be able to resist it? Once AI can substitute for human labour across the economy, what leverage will citizens have over those in power? And what does all of this imply for the institutions we’re relying on to prevent the worst outcomes?
Rose has answers, and they’re not all reassuring.
But she’s also hopeful we can make society more robust against these dynamics. We’ve got literally centuries of thinking about checks and balances to draw on. And there are some interventions she’s excited about — like building sophisticated AI tools for making sense of the world, or ensuring multiple branches of government have access to the best AI systems.
Rose discusses all of this, and more, with host Zershaaneh Qureshi in today’s episode.
This episode was recorded on December 18, 2025.
Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Music: CORBIT
Coordination, transcripts, and web: Nick Stockton and Katy Moore
The interview in a nutshell

Rose Hadshar, a researcher at Forethought, warns that AI progress could lead to extreme power concentration — a state where a small group of elites gains de facto control over global political and economic decisions. While many focus on deliberate grabs for power, like AI-enabled military coups, Rose argues that more gradual dynamics, like AI-driven erosion of our ability to understand the world (“epistemics”) and rising economic inequality, could be just as dangerous. She thinks the most promising interventions are cross-cutting ones that strengthen societal epistemics and checks and balances.

Three dynamics — not just power grabs — could concentrate power

Rose identifies three interacting pathways that could strip power from the public and vest it in a small group of AI leaders or state actors. Her central concern is that these dynamics compound: economic disempowerment weakens resistance, epistemic disruption prevents people from noticing, and power grabs become easier in the resulting vacuum.

AI could give small groups the productive power of millions

While previous revolutions (like the Industrial Revolution) also concentrated wealth, Rose argues AI is unprecedented for several reasons.

We might not be able to prevent extreme power concentration — even though almost nobody wants it

Rose acknowledges the strongest objection to her view: almost everyone, including some current elites and people at non-leading AI companies, will lose out if power is concentrated in the hands of a tiny group of people. But she offers several reasons why coordination could fail.

The most cross-cutting interventions focus on epistemics

Rose is deliberately uncertain about prioritisation — the space is too poorly mapped. But she currently leans toward cross-cutting interventions that strengthen societal epistemics and checks and balances.
Highlights
Three dynamics that could reshape political power in the AI era
Zershaaneh Qureshi: Right. OK, so there are at least three kinds of dynamics that seem worrying: one is active power grabs, one is the way that the distribution of wealth gets reshaped and these other economic factors, and one is this sort of broad category of things you’re calling “epistemics” — our ability to make sense of things and predict what might happen next and react appropriately and so forth. And these things interact in complex ways, and it seems most likely to you that there’ll be some degree of all of these things going on in worlds where power gets extremely concentrated.
I think it would be helpful if you could make this a bit more vivid by telling a story where you do get some combination of these dynamics, and they lead to really extreme concentration of power.
Rose Hadshar: Yeah, I’ll give it a go. So one illustrative scenario of the sort of thing I’m worried about is: AI progress gets faster and faster. Some actors — maybe a mixture of AI companies, chip supply chain companies, the governments that house those companies — start to become much more capable than everybody else as their AI gets better.
I think a good way of thinking about this is… because it sounds maybe like they’re better at making nuclear power stations or some other technology. I think it’s really not like that. A good intuition pump for this is imagining that AI systems are roughly equivalent to human workers. These companies might be fielding millions or tens of millions or hundreds of millions of very efficient human workers that are working 24/7. So we can think of this as some companies growing 10x or 100x in size, while other companies stay the same because they don’t have access to AI systems like that.
So you suddenly, relatively quickly anyway, end up with some organisations that are much more capable than others, and are able to analyse things much more deeply, make better sense of the changes that are going on, make better strategic moves. Those AI-powered organisations are in competition with each other. Some of them are commercial companies, some of them are nation-states. And today, they’re trying to gain an advantage. I expect that to continue, and here I’m imagining that they’re going to be leveraging their AI systems to do this.
I’m not centrally imagining this is via building robot armies. I’m more imagining things like poisoning public opinion against your opponents; making strategic deals with other actors, where you give them things that they want in exchange for them letting you carry on with your plans; this sort of thing. So I’m imagining a kind of softer power approach.
And I think there are various things that could happen from this point. One thing is that you might end up with the number of organisations getting more concentrated as the organisations that have better AI systems are able to increasingly leverage that advantage to gain more control over what happens in the world. More de facto control, at least over the questions that will begin to be the most important questions: What AI systems are built? How are they deployed? Who gets access to the benefits?
Another thing that you might see happening is: within organisations you might see similar dynamics playing out, where these big AI workforces are able to be controlled by a very small number of people. Maybe you get lots of layoffs, and human employees are replaced by AI systems that are intensely loyal to a small group of humans at the top of the company. Maybe you have internal power struggles that result in some factions gaining power over other factions. Either way, you could end up with the organisations becoming very concentrated internally as well.
So there’s kind of a spectrum here in terms of exactly how far this goes. In the more power-grabby scenarios — where you do get internal power grabs within organisations, and maybe where there’s a bigger gap in capabilities between the leading AI-powered organisations — you might end up with really just a handful of people de facto controlling the political decisions that matter in the world. In less extreme scenarios, you might end up with a fairly broad set of organisations becoming the new powers that be, and things like democratic elections no longer mattering very much in determining what’s going to happen next.
How labour automation erodes human leverage
Rose Hadshar: This would be way more extreme than anything we’ve seen in the past, so it’s definitely a prediction of something very different happening. I also think that if I knew that current systems would not advance at all, and we would stay at this level of AI capabilities forever, I wouldn’t be very concerned about extreme power concentration. I’d be concerned about moderate power concentration: I think we can already see today there are some companies that have a lot of influence, but I think it would be within the same kind of range as we’ve seen historically, rather than something unprecedented.
On why I think AI is going to be different, a core thing here is I’m expecting AI will eventually be able to automate most or all human work in a way that no other technology has been able to do.
A key thing here is it’s going to massively reduce the leverage of the average human person. Right now, humans all have labour that they can sell, and this can’t be taken away from them. I mean, they can be locked up, but it’s a nonseizable, valuable good that humans have. I’m imagining a future where in fact anything economically valuable can be done via AI systems without any humans involved. And I think this is the main generator for the thought that maybe power will become very concentrated, because maybe most people won’t have anything valuable to hold out as leverage.
To get a bit more specific about how that might work, I think one important feature of AI is that very small groups of humans might be able to use AI systems to do vast amounts of work without cooperation from many other humans.
Right now, if you want to launch a coup or if you want to run Amazon or if you want to invade a country, you need to have lots and lots of other humans who are willing to cooperate with you on that, and that puts a limit on how crazy your project can be. I’m expecting this limit to go away, such that a small group of individuals could do things that most people would regard as really abhorrent, but they can do them very successfully using AI systems that can be programmed to follow their particular interests.
Another way of looking at this “AI automating all human labour” thing is: it will make it much easier to turn money into labour than it is today, so then we might get a much stronger feedback loop between wealth and power than we see today.
Today we already see it to some extent. But if you think about human labour: imagine that you’re a billionaire and you want to get more people to do things. It’s hard to get humans to do arbitrary work. People have particular skills or they don’t want to do certain things. You have to fit them in in the right places, and that’s kind of complicated. It’s hard to get more than a certain number of people to work efficiently together. You know, organisations tend to max out at a certain size — beyond that, it’s no longer efficient to add more people.
Also, at the very limit, if you think about the whole economy, it’s quite slow to make more humans. So there are these natural limits on what you can do if you’re trying to turn money into human labour.
None of these limits apply to AI. AI will do any kind of work. Some systems might be very general and have skills in all sorts of things, or you might end up with specialised systems. But I expect their coordination will be much easier than human coordination. That means that you won’t have such limits on the number of workers that you have, you’ll be able to get much bigger AI workforces collaborating effectively together.
And the main thing here really is that you can just, at the flick of a button, make more copies, just with more money, just with more compute. And sure, you might run into compute constraints also, and then that will be slower. But I’m still expecting there’s going to be a much faster feedback loop between money and the ability to get things done than there is in the current world.
How AI filters our reality
Zershaaneh Qureshi: I find it really hard to understand why, if people know that information is coming from an AI, they wouldn’t just be immediately sceptical about what it’s saying about people who are powerful or something.
Rose Hadshar: One thing is you could say the same about previous information technologies: why wouldn’t people immediately dismiss anything they read on the internet? And people in fact read a lot of stuff on the internet that they do believe, because they’ve developed habits around what’s trustable and believable on the internet.
I do think that you’re right that in extremely bad worlds for epistemics, actually maybe we’re OK, because maybe people don’t use AI systems at all, so it can be sufficiently bad that we avoid some of the worst outcomes. But I’m worried that it won’t be so egregiously bad, and then it’ll be hard for people to coordinate to not use them.
Also you were thinking a bunch about the general public, and I think that there are probably narrower groups that are more important checks on power concentration than the public as a whole. I do think the public as a whole is important — for example, through elections, but also other routes. But I think maybe we want to be thinking more about things like: How do journalists get their information in 10 or 20 years’ time? How does the Supreme Court analyse information in 10 or 20 years’ time?
And for me, a big thing that makes me expect people will be using these systems is that I think that the pace of change will have gotten so fast, and the pace of information content generation will be so fast, that it won’t actually be possible to keep up at all unless you’re using AI systems in some clever way to filter what you should be paying attention to. So I’m kind of expecting people are going to be forced by necessity into using AI systems.
Zershaaneh Qureshi: Yeah, this is becoming more vivid for me, especially in a world where humans are really clinging on to their last chances at being economically relevant. So if some people who are still in the job market are managing to get by in the job market because they’re utilising AI a lot to get smarter and know more things and do their work more effectively, you could really imagine the pressure to be like, “If I want to stay employed, continue having political relevance and stuff, I really need to adopt these technologies.” So yeah, with the other pressures that we’re imagining, I think this becomes more compelling to me. …
I somewhat feel more worried about the stories that involve bad actors deliberately doing something compared to the ones that are just structural, because in the bad actor case, you’ve got an adversary that’s deliberately trying to make things happen a certain way. Do you think that’s a fair instinct to have?
Rose Hadshar: I definitely think it’s a reasonable instinct to have. I feel a bit confused about it. Part of my confusion here is: are there some kinds of structural force that it makes sense to model as adversaries, even though they’re not actually agentic? And then what even is an agent? And so on. So I started to get confused at some kind of conceptual level about what the distinction really is.
But I think if I had to pick right now, I would agree with you that it’s more worrying in worlds where powerful, strategic actors are trying their best to get here. And honestly, I think that’s probably the world that we’re going to end up in.
Zershaaneh Qureshi: There’s a lot to gain.
Rose Hadshar: There’s a lot to gain, and we see the powerful acting strategically today. And you can tell stories of like, “But people are going to realise just how terrible this is, and everyone’s going to suddenly start being sensible” — but this doesn’t seem to have happened previously, so I’m kind of not betting on it.
How democracy could persist in name but not substance
Zershaaneh Qureshi: Suppose that in a country like the US, you begin to see this really extreme situation where there are very few people — and mostly just AIs — who are economically relevant, and everyone else isn’t really contributing to revenue and so forth.
In a place like that, unless you fully veer into “this is now no longer a democracy” — there’s been some kind of power grab and someone’s taken over, some kind of autocratic control — I find it really hard to understand, if it is just this sort of systemic thing rather than this malicious actor taking control, why anyone would be willing to roll back the political power that was so hard won.
I think it’s one thing in a place where people don’t currently have that power, and I think it’s another thing when you’ve had all these groups who spent decades campaigning for being politically enfranchised and really contributed to a huge value shift in society: does it not just become totally outrageous to roll those things back just because there are structural or systemic forces kind of pushing you in that direction?
Rose Hadshar: Yeah, I like the push. I agree with you that I’m not really expecting, in countries like the States for example, the vote will be rolled back. But let me paint a picture of the sort of thing I am imagining could happen in a country like the United States.
There are still elections. The nominations for candidates for those elections are decided by small groups of people within political parties. Those are increasingly automated. They’re automated by biased AI systems that are run by big tech companies. So the choice of candidates becomes candidates who approve of what the big tech companies are doing.
Meanwhile, the things that those candidates are offering to people are basically just versions of better state handouts. Nobody has a job. Everything comes down to what the state will offer you. The candidate who offers people the nicest material sops wins. And people are happy to go along with this, because they’re getting rich.
But what’s happening behind the scenes is that a small group of oligarchs are deciding what the US’s foreign policy should be, deciding what should be invested in and what shouldn’t be, shaping the information environment, shaping people’s ideological commitments and the sorts of things they think are good — for example, what will people’s views be on whether AI deserves rights or not, and how much can that be shaped by biased systems and by political policies that have been captured by those companies?
That’s the sort of thing I’m imagining.
Why AI-powered tyranny will be tough to topple
Zershaaneh Qureshi: OK, so even if we do buy that this very extreme concentration of power from AI does happen at some point, you might still not be convinced that it’s as big a deal as your article might suggest, in that you might question how long-lasting that state of affairs would be.
It does seem that there have been lots of cases throughout human history where things like technologies or shifts in the economy have enabled a group to take control to an undesirable degree. But previous instances of this — totalitarian regimes like the USSR and Nazi Germany — have all fallen eventually. There are various reasons why it’s quite hard to retain your power for that long, not least because most people don’t want that to be the case.
How likely does it seem that the power concentration enabled by AI in these scenarios would not be reversible, would have exceptionally long-term effects?
Rose Hadshar: I like your bringing in historical examples, and maybe it’s useful to think about a few ways that AI could make it different to how it’s been before.
So if you think about things like the USSR or Nazi Germany, part of why these regimes fell is because there were rival regimes. In the case of Nazi Germany, this is very clear: they were just conquered by other powers. In the case of the USSR it was less direct, but still there’s some sense in which they were in visible competition with other powers that ran things differently, and that in the end outcompeted them.
So one concern that you might have with very extreme power concentration is, if we get to the point where there really is just one global hegemon, there won’t be any force for competition anymore. There aren’t other powers that could invade them or that could have a better social structure such that they get outcompeted economically later, so you might end up with stasis because of that. That’s one thing that could be different if power concentration becomes very extreme.
Another thing that could be different, even if you don’t get to a global hegemon, is that AI might enable us to make binding commitments in a way that we haven’t been able to do before, where you can get AI systems to enforce deals in perpetuity. It’s not clear that this will be technologically possible, but it might be. And if it is, even if there are still multiple powers, they might end up making some kind of deal to divide the resources in perpetuity, such that power-concentrated states end up with control for the rest of time over some slice of the universe in a way that seems very worrying.
How to intervene when so many solutions have a catch
Rose Hadshar: So some interventions that you might have this worry about: the alignment audits thing that I mentioned, where you audit models to check whether they’re secretly loyal, might help prevent companies from staging coups, but also might weaken companies as a check on government power. So you could imagine well-intentioned companies that put in secret loyalties such that they can deactivate systems if the government is trying to misuse them to seize power — and then if you prevent secret loyalties, you prevent this route to checking the power of an abusive government.
On the flip side, you can say the same kind of story about measures like government oversight over labs. Maybe this is helpful for reducing the risk of lab coups, but it also makes it more likely that government actors can abuse their position.
So I think there’s some real worry there about shifting probability mass around. You also see this kind of element with other interventions though. So you could imagine that some AI tools for improving epistemics are quite dual use: maybe you develop a tool which helps people coordinate, and that also helps the people who want to stage a coup to coordinate. So there are worries like this at lots of different levels.
Zershaaneh Qureshi: Yeah, OK. So what I’m hearing is that, of the space of options for how to intervene, quite a few of them end up in what I would call a “who watches the watchmen?” situation — where we can maybe prevent one group, like AI companies, from becoming disproportionately powerful, but we do that by making government, or people who set standards, or something like that more powerful.
I guess the worry that I have there then is: how much leverage do we actually have to stop power getting concentrated, full stop, versus just shifting who ends up with the most power?
Rose Hadshar: Yeah. Maybe if I were thinking about it in terms of leverage, it would make me think about the things that you were mentioning about the electorate and about manual work and maybe human wages will go up in the short term and so on. There is a broad base of leverage in the current world where lots of people have a small amount of power. And, particularly if AI tools make it easy for lots of people to coordinate, it might be the case that the majority becomes much more powerful than it has been in the past and is able to coordinate much more nimbly and quickly to push for its own interests.
So I don’t think it’s hopeless, but I imagine that this is a bit how framing the US Constitution felt or something — like, “I want to balance this, so I’ll balance it with this. But then how do I keep that in check?” And at some point you stop adding more checks, and you think, “OK, this is a balanced system.”
Zershaaneh Qureshi: Yeah, OK. So what if we did end up in a situation where there is just a tradeoff that we really struggle to avoid — between either we let companies do whatever they like, or we let the government have really extreme control over more stuff than we would have hoped they’d have — is there an answer for who we should trust more? Like who we pick in this awkward tradeoff?
Rose Hadshar: I don’t think there’s a robust answer. People have very different intuitions here. If you ask somebody, “Who would you rather was dictator of the world: the US government or Google?” I think you will get some people who will say definitely the US government and other people who will say definitely Google for different reasons.
I think that the kind of Google camp is imagining that governments are extremely inefficient and badly incentivised, and Google would do a much more competent job of being world dictator than the US government. And I think the people who are favouring the US government are more tracking things like representation and there being checks on the power of governments that are more robust than the checks on companies.
