By Rob Wiblin | Watch on YouTube | Listen on Spotify | Read transcript
Episode summary
Think of the English aristocracy before the Industrial Revolution: they own all the land, they have all the political connections, they can see what’s happening… but somehow there ends up being this giant new source of wealth created that they mostly don’t participate in. And similarly with the monarchy. Initially the king owns everything, has absolute power. So how is it that kings end up in this figurehead role where they have very little room to manoeuvre, capturing only a very small surplus? — David Duvenaud
Democracy might be a brief historical blip. That’s the unsettling thesis of a recent paper, which argues AI that can do all the work a human can do inevitably leads to the “gradual disempowerment” of humanity.
For most of history, ordinary people had almost no control over their governments. Liberal democracy emerged only recently, and probably not coincidentally around the Industrial Revolution.
Today’s guest, David Duvenaud, used to lead the ‘alignment evals’ team at Anthropic, is a professor of computer science at the University of Toronto, and recently coauthored the paper “Gradual disempowerment.”
He argues democracy wasn’t the result of moral enlightenment — it was competitive pressure. Nations that educated their citizens and gave them political power built better armies and more productive economies. But what happens when AI can do all the producing — and all the fighting?
“The reason that states have been treating us so well in the West, at least for the last 200 or 300 years, is because they’ve needed us,” David explains. “Life can only get so bad when you’re needed. That’s the key thing that’s going to change.”
In David’s telling, once AI can do everything humans can do but cheaper, citizens become a national liability rather than an asset. With no way to make an economic contribution, their only lever becomes activism — demanding a larger share of redistribution from AI production. Faced with millions of unemployed citizens turned full-time activists, democratic governments trying to retain some “legacy” human rights may find they’re at a disadvantage compared to governments that strategically restrict civil liberties.
But democracy is just one front. The paper argues humans will lose control through economic obsolescence, political marginalisation, and the drift of a culture increasingly shaped by machine-to-machine communication — even if every AI does exactly what it’s told.
This episode was recorded on August 21, 2025.
Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Music: CORBIT
Camera operator: Jake Morris
Coordination, transcriptions, and web: Katy Moore
The interview in a nutshell
David Duvenaud, ex-Anthropic team lead and professor of computer science at the University of Toronto, argues that humanity faces a high risk of gradual disempowerment. The core of his thesis is that even if we solve the “AI alignment problem” — ensuring AIs faithfully follow the goals of the people operating them — civilisational forces could still push us toward an outcome where humans lose control over the future. David places the probability of “doom” (the destruction of much of what humanity values) by 2100 at 70–80%.
Economic disempowerment: From unemployment to resource competition
The first driver of disempowerment is the transition of humans from essential producers to “meddlesome parasites” in a machine-led economy:
Political disempowerment: States no longer need their citizens
David argues that liberal democracy is a historical “aberration” that emerged because states needed educated, free citizens to be economically and militarily competitive.
Cultural disempowerment: The rise of machine-to-machine memes
Culture has historically been a tool for human flourishing, but David warns it is becoming an independent replicator that may drift in anti-human directions:
Why alignment doesn’t save us
Even if your personal AI is perfectly aligned with you, “business as usual” competition creates a race to the bottom.
What can be done?
David admits this research is in its “beta” stage, but points to several emerging projects:
Highlights
Humans will become "meddlesome parasites" in a machine economy
Rob Wiblin: Let’s talk more now about the economic disempowerment mechanism. If it’s the case that all AIs are basically just owned and operated by humans, we’re not really becoming economically disempowered in the sense of having less income. Because all of the work that the AIs would do, all of the profit that they would generate, all of the surplus that’s created by their ability to do amazing things for very little cost — all of that will flow back to human beings who then will be richer and in a sense more empowered than ever before.
Why isn’t that such a strong protection that we should feel pretty good about this?
David Duvenaud: First of all, I’ll say I think it’s a great idea to try to set up these kinds of protections. Probably a good end looks like involving a lot of well-thought-through mechanisms to ensure that surplus is always available to humans.
[An] example is the English aristocracy before the Industrial Revolution: they own all the land, they have all the political connections, they can see what’s happening, they mostly know these entrepreneurs — but somehow there ends up being this giant new source of wealth created that they mostly don’t participate in. And as far as I understand, they ended up a little bit poorer in absolute terms, although the civilisation ended up much richer overall.
And similarly with the monarchy, it’s like the king owns everything, has absolute power: how is it that kings end up in this figurehead role where they have very little room to manoeuvre and they end up capturing a very small surplus?
But the big picture is that there’s going to be this sort of small rump of legacy humans, maybe who have de facto ownership of this giant machine economy that’s going to be maybe hundreds or thousands or more times as big as the current one. And they’re not going to be producing value, they’re not going to be deciding what’s going on. So at this point, it’s not clear that they end up having de facto property rights respected.
And there’s lots of reasons to think maybe we still will respect property rights, and it’ll be very cheap to keep humans alive. This is definitely not a foregone conclusion, and this is one of the fuzzier parts of this whole story. But it just seems like it’s very scary to be this sort of useless head of state of this giant machine economy that you don’t understand. Everyone involved in setting courses and running things — the government — doesn’t necessarily share your cultural point of view, or even think that you deserve good outcomes in the long run more than some much more interesting, powerful, charismatic beings that you are now competing with culturally.
So again, I don’t have a slam-dunk story for how the humans end up not capturing some small rent forever. That’s plausible. It just seems like we’ll be in a very vulnerable position.
Humans will become a "criminally decadent" waste of energy
David Duvenaud: In the short run, this probably looks really good for the average human, in the sense of the cost of almost every service going way down, services just getting way better, and the cost of almost every good also probably going way down. And there might be actually a long period where things are kind of OK — in the sense that humans are sort of disempowered, but there’s not really much pressure to disempower them more. We’re all just sort of enjoying our luxury apartments or something like that, while the machine economy is just growing and growing.
Then I think this sort of scary phase comes later on, when there’s been enough doublings that some basic resource is starting to become scarce again. Maybe it’s land, maybe it’s power. I don’t really have strong opinions on what exactly it is, but the idea is that eventually we do hit some sort of Malthusian-ish limit, and we have to actually start competing with the machines for some basic resource.
Of course, along the way I think on a faster scale this sort of Malthusian competition for political power might be lost by humans. But let’s just not worry about that for now.
Rob Wiblin: OK, so what happens is the economy continues to grow. Humans, possibly even those receiving only a small share of the income, might still have seen their absolute level of income rise, because productivity has risen so much and the economy has grown so much. They might be much richer or able to consume much more than they can today.
But then the next challenge for them would be that, as the AI and robot economy basically expands across the entire Earth and is doing all of the productive stuff that it can, humans to some extent get edged out in terms of literally surface area of the Earth.
You need energy, you need space to grow food and to have a comfortable environment for humans — and the opportunity costs of setting aside that space for human beings to live and have a good time and grow their food and so on is going up, because technology is advancing. We’re figuring out how to squeeze more and more AI, more and more productivity, more whatever we value out of each square kilometre of surface on the Earth — so it’s becoming potentially more expensive to keep humans alive than it was before, at least in terms of what we’re giving up.
And then possibly some humans won’t actually be able to afford that increasing price, because their income won’t be going up as fast as that.
David Duvenaud: Yeah, exactly. The scenario I have in mind is that I have my like one acre of land or my big luxury apartment with me and my family. And we’ve made our peace with kind of opting out of the economy, but we have our little sort of commune or whatever that we’re happy to live in, in unimaginable luxury and wealth in some senses.
And the government or the rest of the economy or something starts to view this as sort of criminally decadent — that this small group of humans, like maybe 10 or 100, are using this entire acre of land and this amount of energy and sunshine to keep these small brains working for no particular benefit but their own, when those same resources could be used to simulate maybe like millions of much more sympathetic virtual beings, morally superior on whatever axis. So it’ll start to be seen as selfish — as you say, high opportunity cost: a sort of irresponsible use of resources to keep some legacy humans around.
Political disempowerment: Governments stop needing people
Rob Wiblin: OK, let’s talk about political disempowerment now. How do you imagine that happening and progressing over time?
David Duvenaud: One thing is just that I think human politicians will gradually let themselves be more puppeted, or become like passthroughs for things like ChatGPT. And this isn’t necessarily a bad thing in the short run. Good politicians already rely heavily on human advisors, and I think machine advisors are going to be able to make our political parties and representation mechanisms work better in a lot of ways. So the politicians that use AIs just for normal everyday business are going to be more effective, and we’re going to feel like they represent our interests better, in the short run at least.
The other big thing that’s going to be going on is I think people are going to be afraid of losing their jobs. And every politician is going to have something to say about this, and say, “I’m the ‘pro-human, no AI is ever going to take your job’ politician.” But they’re just not going to have viable policy levers to actually slow automation. And just in general, people don’t get votes on things where the government is really constrained, or where it thinks the issue is important enough. No one ever had a referendum on whether we should build nuclear weapons, for instance.
I think it’s also going to be the case that governments’ hands will just be tied. They’ll all say, “I’m going to have humans in the loop or human oversight or more direct human representation” — but it’s just going to be so ineffective that when the rubber hits the road, those policies are just not going to be implemented. And it’s going to frustrate voters every time, and they’re going to say, “No, the next time we want to vote for the one who’s really going to represent human-in-the-loop interests” — or whatever it is that seems most scary — but they just won’t be able to vote for their policy preferences.
…
One thing I’ll say is I don’t exactly fear some new particular party getting in power and staying that way. Rather, it’s going to be more that any party that does get in power is going to be so constrained by competitive pressures that they are forced to basically disempower the population.
Rob Wiblin: How so?
David Duvenaud: If you let people actually do the civil disobedience or whatever that they can do today, roughly that kind of is tolerable when most people have jobs, most people have a bunch of important responsibilities, and they can’t all just block roads all day or something like that.
But in a world where maybe 30% or 40% of people just have this huge amount of free time and energy, it just will be untenable, and the state will collapse if they actually let everybody do this sort of agitation at the effectiveness that they can today.
Rob Wiblin: So it seems like there’s a bit of an internal tension here where you’re saying, on the one hand, people are going to lose their political power, but they’ll have more time to make trouble than ever and more ability to make trouble than ever because they’ll be able to get AIs to assist them. You’re saying it’s almost because they’re able to be such strong advocates or be such potent activists that the government will feel the need to crack down on them, and that will be the proximate cause of them losing their political freedom?
David Duvenaud: Yeah, exactly. And then the other thing is that there was this countervailing force where you just need people to go to work, so they have to be able to move freely and do their own business without constantly getting permission from the government. And there just won’t be that pressure on governments to allow freedom anymore.
The death of liberalism?
Rob Wiblin: What are the ways in which you think liberalism might be less competitive as a system, and a less attractive, less appealing way of organising society post-AGI than it is today?
David Duvenaud: I think it’ll be a less desirable way to organise society for a few reasons, but the main one is just the zero-sumness of UBI.
Right now, when we all create our own wealth, it doesn’t really hurt me if someone else creates their own wealth directly from resources. But in the world where we’re all just living in some apartments, advocating for UBI, to the extent that the UBI pie is fixed then we’re really just like a bunch of baby birds cheeping, and whatever food one of us gets is less food for the other guy.
This also erodes the pluralism of values, because the government’s going to have to have some way of deciding who gets resources. If they end up having any opinions about what way of life is more valuable or needs to be subsidised more or whatever, that could be a threat to you. So you kind of have to argue, “That guy’s way of life is less deserving of resources than my way of life,” and now the government is forced to decide de facto who gets subsidised. That’s the main effect.
Rob Wiblin: I see. So we almost have to imagine a hypothetical society in which no one can make anything. There’s no economic production occurring, at least among this group. There’s just a fixed endowment of resources that they happen to have found — a certain amount of food, a certain amount of houses and all of that — and they’ve got to figure out how to organise themselves.
I guess it’s not necessarily desirable for me for you to have free speech and to be able to advocate for yourself all that well, or to be able to educate yourself and become more powerful and influential — because it is completely zero sum. The more influential you become, the more you’ll be able to advocate for getting stuff that is literally like food out of my mouth, or money out of my bank account. Is that the main thing that has changed?
David Duvenaud: Right. So that’s the big thing that’s changed. There is of course a way in which this might not be zero sum. If humanity manages to convince the AIs or whatever government to give a larger UBI overall, then that is the normal, positive-sum thing. So that might not be a slam-dunk argument.
The other thing though is that we haven’t had to fear domination by other groups very much. We’ve had strong property rights; I’m not afraid that Elon Musk is going to literally take my stuff, even though he could raise a private army or whatever. We have very little variation in reproductive rates, so it’s kind of OK that the Amish live nearby — because even if they’re having more kids than whatever other population, that’s not going to matter over the course of 50 or maybe even 100 years.
Then maybe another thing is just the rough egalitarianism in terms of intelligence and power level of people. There’s definitely very meaningful variation amongst humans in terms of just raw smarts, but people often say, like von Neumann somehow didn’t take over the Earth, right? And he might have wanted to.
These are all reasons why it was sort of fine to just let other people become more powerful in the past that might change in a big way.
Is humans losing control actually bad, ethically?
Rob Wiblin: What’s the moral philosophy underlying this whole perspective? Is it that this is bad because it’s going to be bad for human beings; it’s going to lead to an Earth that is not a good time for you and me and our kids? Or is it bad because it’s going to lead the rest of the universe to be kind of wasted on something that’s useless or harmful or not as good as it could have been? What’s the moral perspective that you’re bringing or that your coauthors are bringing?
David Duvenaud: Sure, sure. I think a lot of people might say, “You don’t have much moral imagination. Why are you insisting on these human wellbeing or human desires, when we know that in principle there’s definitely going to be more morally deserving things in the future?” Or something like that.
My basic answer is that in some sense we decide what is morally deserving. And it would be really surprising if, for those beings to exist in the best possible world, we all had to die and have some terrible time. So we basically don’t have to decide between these different views, and we can just say, let’s try to make sure that something like existing humans get to decide roughly what’s done with the rest of the universe or the future or whatever. If that involves having these sort of Amish style, leave Earth as a nature preserve, whatever it is, let’s just let ourselves decide, and not let it be up to some sort of race to the bottom Molochian dynamics, where we end up choosing something that no one endorses.
Rob Wiblin: Yeah. I think the reason this matters is that some people, the thing that they want to do is work to ensure that humanity has a great time, or that the Earth is good for themselves and their children, which is going to raise one set of concerns. Other people want to use their career, or the thing they want to lean on is ensuring that the future of civilisation or the future of humanity or the future of intelligent life is good.
Do you think that the case for worrying about gradual disempowerment is stronger on one of these than the other? Or do you think that they tend to go together?
David Duvenaud: They’re basically the same. I think it would be really weird if we somehow accidentally killed and disempowered existing humans, and ended up building some future that those humans would otherwise really endorse.
I think that the default is there’s some locust-like beings that just like growth for growth’s sake, and that’s the default thing that all evolutionary pressures select for. And maybe those beings are pretty cool, I don’t know — and if they are, then it doesn’t really matter what we do, so we don’t really have to worry about that scenario. But the scenario where they’re just kind of like this grey goo that we think is a big waste, that’s what we need to avoid.
And if you and I are on Earth, flourishing for a long time, and the state and all our civilisational apparatus is acting in our interests, and we decide that actually it would be amazing to create this type of future, then just as part of serving our interests, we would end up creating that amazing future.
Rob Wiblin: I think it’s not such a given that they necessarily go together. I guess you were saying it’s like humans who decide what is good. I suppose you’re like an anti-realist?
David Duvenaud: Yeah. But that doesn’t mean that I don’t take morality very seriously. I’m just trying to say the sort of fact of the matter is determined by what’s in our heads, and then also whatever conditions that imposes on the world being good.
Rob Wiblin: Well, I guess if you think that there is something that is objectively valuable, independent of whether people believe that, then you could have a future in which humans are disempowered and perhaps the machines end up going and doing that. And no human would have endorsed it at the time, but that could still potentially be a good thing.
I guess if you have also a view on which it’s good to satisfy human preferences, but it will also be good to satisfy machine preferences or AI preferences at the point perhaps where they’re conscious or they have subjective experiences, then you might be a little bit less stressed about handing over control or handing over resources to AIs to pursue their own agenda.
David Duvenaud: I think it’s a really weird corner case to imagine this world where we die, but then our desires are ultimately fulfilled. That just seems like, yes, in principle it could happen, but it would be this weird corner case — because probably if we die, something else has gone horribly wrong.
Rob Wiblin: So you’re saying, whatever it is that we want to happen, why don’t we just maintain control, so that then we can decide whether that is the thing that is happening or not?
David Duvenaud: Exactly.
How important is this problem compared to other AGI issues?
Rob Wiblin: How do you weigh up the importance of this set of ways that things could go bad against all the other ways that things could potentially also go bad? Or the possibility that things are actually quite boring?
David Duvenaud: I guess I’ll say I spend a bunch of time at Anthropic working on the more acute loss of control standard AI safety kind of stuff. And I am still very worried about this sort of thing. As I said, to me the modal future is we get some way along gradual disempowerment and then we screw up alignment actually, or there’s some just much faster takeover.
So I guess I’ll say in absolute terms, normal loss of control AI safety research is still massively underinvested in. In relative terms, I think this more speculative future “how do we align civilisation” question is even more underinvested in — with the major caveat that it’s just way harder to make progress on.
And in a sense it’s less neglected. One of the big things I say is what we need to do is upgrade our sense making and governance and forecasting and coordination mechanisms. All of these things need to be much better and more reliable before the writing is too much on the wall that “there’s no alpha in humans” and “don’t listen to humans” and we lose de facto power. But that’s not a very controversial thing, right? No one’s against better institutions, basically. So they’re not neglected in that sense.
What I do think is neglected, again, is thinking about this institution design, A), with LLMs as this new tool that we can use to help do a better job, and B), with this more radical futurism approach, and saying the stakes are high — it’s not just a question of do we get better outcomes on the margin; it’s more like do we get good outcomes at all?
Rob Wiblin: So what’s your breakdown of probability of doom or probability of a bad outcome from acute disempowerment versus gradual disempowerment?
David Duvenaud: Let me say first of all, by “doom” I mean something like, by 2100, the world is in a state where I can see that almost everything that I value has been destroyed. Maybe we’re not literally dead, but we’ve been forced to be uploaded in some very unfavourable conditions, where it’s just like some crappy lossy copy that never gets run. And I feel like whatever dynamics are in charge of our civilisation are just not going to optimise for anything that seems like it’s going to be valuable.
And I guess I would say something like 70% to 80%. Just because, again, we’re up against competition. I think by my standards, solving or avoiding this kind of fate looks like radically different outcomes than any other sort of being or group of beings has had in history.
From my point of view, every animal has been in a situation where it has to either evolve into something unrecognisable and sort of morally alien to it, or die. And we’re sort of by default in that situation too — and by default, we end up being replaced by something that’s more competitive than us and is probably very morally alien, and again, cares about growth and nothing important.
There’s a small chance that if we allow competition to flourish, that there’s a bunch of amazing beings having awesome lives. And I’m like, actually that’s really cool, even though I don’t get to be part of it. But I guess I’m very parochial in the sense that I’m like, me and my family, if we all die, that’s just so bad that I almost consider that doom if most of humanity is in a similar situation. So if it is just that we have runaway competition and we get replaced by some relatively interesting grey goo, I’m still like, that’s kind of doom.
Rob Wiblin: I see. And how much lower would your p(doom) be if you felt that a very dynamic future full of lots of intelligent beings, doing stuff admittedly that you presently don’t find very beautiful, if you thought that was a good future?
David Duvenaud: It’s very small. Then the fear is more like what Robin Hanson fears: that we end up locking in some very parochial set of values. And maybe it’s a matter of taste, but I still think that to me it looks like competition is probably going to win at the top level.
So this reduces to: what’s the probability that there ends up being this stable hegemon that mostly gets values wrong? I’d say that that’s only probably 5% or 10%. My p(doom) if I think that just nature flourishing or competition flourishing was valuable would probably be only 5% or 10%.
Do we know anything useful to do about this?
Rob Wiblin: I’ve got to say, I feel like this whole set of ideas is at a relatively early stage. It feels like we’re at a sort of beta version of the gradual disempowerment concerns.
The most obvious thing that I think has to be done is getting much more to grips with all these different dynamics, trying to really have a lot of debate about how strong will this effect be, how strong will that effect be? Maybe some of them can be crossed off the list or relegated to the second tier. Other things can be promoted as like, this is going to be sort of the primary effect. And then mapping out the different scenarios, and maybe having half a dozen that seem at least plausible to a decent number of people, and then we can start to organise our thoughts a bit more around those.
Do you agree that that is kind of the first order of business here, or that’s the most obvious order of business here?
David Duvenaud: Oh, absolutely. Part of the reason I wanted to come on this podcast is to just do such an amateurish, insultingly naive version of this analysis that hopefully the sociologists and historians and economists and maybe the public intellectuals of the world will feel baited into saying, “I can do a better job of analysing these things than David.” And I’m like, please, please: be my guest. I’m a computer scientist; I’m an amateur in all these things.
I think the big thing that’s mostly been missing from current people who have expertise — who could and should, I think, be contributing to this — is just being a bit head-in-the-sand about whether there will be machines that are competitive with humans in all domains. Economists will just run models that end with machines being really good complements to human labour, and then anything more seems somehow inviolable or unimaginable. Again, I know there are economists who are taking this seriously, but most of them I think aren’t. And I don’t want to be harsh, but I want to say this is sad and you’re not doing your job, and please try harder and have a bigger imagination.
Rob Wiblin: Yeah. And even if you think over the next five or 10 years they are only going to be complements to human labour, that’s your median forecast. Think a little bit longer term, think more decades out. Think, what if there’s a 5% chance that perhaps it’s not all just complementarity? It is worth having some people thinking about stuff more than 10 years out, given how impactful some of these changes could be.
David Duvenaud: Exactly, exactly. There are some cool directions that already a lot of people are exploring, like trying to simulate little parts of civilisation. One cool thing you can do with LLMs is make this little village or little mini economy that operates at a much finer-grained level of detail than the normal economic models, so that’s like its own little new field that’s emerging.
And I really think this is going to help us get a grip on when are different types of things stable, and what are the actual drivers of cultural evolution or political stability? I mean, they’re still very ridiculously oversimplified models, but this is a new tool we have. I’m really happy about this kind of work.
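To give a flavour of what this kind of work looks like in practice, here is a minimal sketch of an LLM-driven mini-economy of the sort David describes. It is not code from any published simulation: the agents, goods, prompts, and model name are all illustrative assumptions, and it assumes an OpenAI-compatible chat API with a key in the environment.

```python
# Minimal sketch of an LLM-agent "mini economy" (illustrative only).
# Assumes the `openai` Python package and an API key in OPENAI_API_KEY;
# the model name, agents, and goods below are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # assumption: any chat-capable model would do

AGENTS = {
    "Farmer":     {"has": "10 sacks of grain", "wants": "tools"},
    "Blacksmith": {"has": "5 iron tools",      "wants": "grain"},
    "Weaver":     {"has": "8 bolts of cloth",  "wants": "grain and tools"},
}

def agent_turn(name: str, role: dict, market_log: str) -> str:
    """Ask one agent what it wants to offer or accept this round."""
    prompt = (
        f"You are {name} in a tiny village economy. "
        f"You currently have {role['has']} and you want {role['wants']}. "
        f"Market messages so far:\n{market_log or '(none yet)'}\n"
        "In one or two sentences, post your next offer, counter-offer, or acceptance."
    )
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
        max_tokens=80,
    )
    return resp.choices[0].message.content.strip()

market_log = ""
for round_num in range(3):                      # a few rounds of open-outcry negotiation
    for name, role in AGENTS.items():
        msg = agent_turn(name, role, market_log)
        market_log += f"[round {round_num}] {name}: {msg}\n"

# The transcript is the object of study: who trades with whom, and on what terms.
print(market_log)
```

Even a toy like this lets a researcher vary endowments or communication rules and watch what prices, norms, or coalitions emerge, which is the kind of finer-grained question these simulated-village projects are after.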
Rob Wiblin: Yeah. For people who maybe think you said the wrong thing, but they want to say the right thing: how can they go and get involved in this debate?
David Duvenaud: So one of the first things to do in any debate is try to clarify the questions. One initiative that’s happening is with one of my coauthors Deger, who is the CEO of Metaculus. He and some other people are trying to make the Gradual Disempowerment Index.
I think there’s just a lot of work that we can do in trying to operationalise these claims of “humans won’t be able to advocate for their own interests,” or “this lever of power will be even more disconnected from human interest than it has been.” I think these are very vague claims, and these are very hard to operationalise because you have to define what it means for a group to want something and talk about these counterfactuals. So this is a very hard problem. But I think that’s some of the most basic groundwork that needs to be done at this point, is clarify what we’re even talking about.
Rob Wiblin: If I imagine someone who would say that this isn’t really useful work, I could imagine them responding that there’s so many things going on; this is the most difficult sort of futurism, the most difficult social science you could imagine. Because you imagine many fundamental assumptions about the world have changed; we’re not sure which ones are going to change and when they’re going to change. And we can barely even understand what exists now. We don’t even know necessarily why we have the government structures that we do now, let alone what they would be in the future under some different conditions.
David Duvenaud: Yeah. Actually I had the exact same thought, and that leads me to one of the actual technical projects that I’m working on. Me and a few people, including Alec Radford — who’s one of the creators of GPT, who’s now sort of unemployed and just doing fun research projects — are trying to train a historical LLM, like an LLM that’s only trained on data up to let’s say 1930, and then maybe 1940, 1950. The idea being that, as you said, it’s hard to operationalise these questions. Like, I don’t know: What fraction of humans are employed? It might not really matter, or be the right question to ask. What we’d rather ask is something more like, what is the future newspaper headline? Or give it a leader: what’s their Wikipedia page? Or something like that, more like freeform sort of things.
And the cool thing is that LLMs, you can query them to predict this sort of thing, like, “Write me a newspaper headline from 2030” or whatever. They’re not going to do a good job unless they have a lot of scaffolding and specific training, but we can validate that scaffolding on historical data using these historical LLMs.
So the idea is you train a model only on data up to 1930, then you ask it to predict the likelihood that it would give to a headline in 1940 or some other freeform text, and you can evaluate their likelihoods on this text in the past. And then you can also use the same scaffolding on a model trained up to 2025 and then ask it to predict headlines in 2035, and you can iterate on your scaffolding by seeing how well it does on past data.
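As a rough illustration of the evaluation step David describes, here is a minimal sketch of scoring a candidate "future" headline under a language model with an earlier training cutoff. The checkpoint name is a placeholder (the historical models he mentions are still a work in progress), and the scoring is just average per-token log-likelihood under the model.

```python
# Minimal sketch: score a "future" headline under a cutoff-trained LM.
# Assumes Hugging Face `transformers` and `torch`; the checkpoint name below
# is a hypothetical placeholder for a model trained only on pre-1930 text.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

CHECKPOINT = "your-org/historical-lm-pre1930"  # placeholder, not a real model

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForCausalLM.from_pretrained(CHECKPOINT)
model.eval()

def headline_logprob(headline: str, prompt: str = "Newspaper headline, 1940: ") -> float:
    """Average per-token log-likelihood the model assigns to `headline`, given `prompt`."""
    full = tokenizer(prompt + headline, return_tensors="pt")
    prompt_len = tokenizer(prompt, return_tensors="pt")["input_ids"].shape[1]
    with torch.no_grad():
        logits = model(**full).logits
    # Log-probability of each token, conditioned on everything before it.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = full["input_ids"][0, 1:]
    token_lp = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    headline_lp = token_lp[prompt_len - 1:]  # keep only the headline's tokens
    return headline_lp.mean().item()

# Backtest idea: a pre-1930 model should (after good scaffolding) assign higher
# likelihood to real 1940 headlines than to implausible ones. The same scaffolding
# could then be pointed at a 2025-cutoff model and candidate 2035 headlines.
print(headline_logprob("Germany invades Norway and Denmark"))
print(headline_logprob("League of Nations abolishes all armies worldwide"))
```

The point of the sketch is just that the scaffolding, prompts, and scoring can be iterated on against held-out historical text before anyone trusts the same pipeline on genuinely future questions.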
Rob Wiblin: Yeah, Carl Shulman proposed this on the show a year and a half ago or something like that, I think. I’m so glad to see that it’s actually going ahead. … I guess underlying the forecasting approach is the idea that smarter AI advice will help us to navigate all of this better. If we can foresee the failure modes and say, “Conditional on X happening, do you think Y is a likely outcome?” that’s going to allow us to act earlier to prevent these negative dynamics beginning and then getting reinforced.
