This is a crosspost for Crises reveal centralisation by Stefan Schubert, published on 3 May 2023.
An important question for people focused on AI risk, and indeed for anyone trying to influence the world, is: how centralised is power? Are there dominant actors that wield most of the power, or is it more equally distributed?
We can ask this question on two levels:
On the national level, how powerful is the central power—the government—relative to smaller actors, like private companies, nonprofits, and individual people?
On the global level, how powerful are the most powerful countries—in particular, the United States—relative to smaller countries?
I think there are some common heuristics that lead people to think that power is more decentralised than it is, on both of these levels.
One of these heuristics is what we can call “extrapolation from normalcy”:
Extrapolation from normalcy: the view that an actor seeming to have power here and now (in relatively normal times) is a good proxy for it having power tout court.
It’s often propped up by a related assumption about the epistemology of power:
Naive behaviourism about power (naive behaviourism, for short): the view that there is a direct correspondence between an actor’s power and the official and easily observable actions it takes.
In other words, if an actor is powerful, then that will be reflected by official and easily observable actions, like widely publicised company investments or official government policies.
Extrapolation from normalcy plus naive behaviourism suggests that the distribution of power is relatively decentralised on the national level. In normal times, companies are pursuing many projects that have consequential social effects (e.g. the Internet and its many applications). While these projects are subject to government regulation to some extent, private companies normally retain a lot of leeway (depending on what they want to do). This suggests (more so, the more you believe in naive behaviourism) that companies have quite a lot of power relative to governments in normal times. And extrapolation from normalcy implies that this isn't just true in normal times, but holds true more generally.
Similarly, extrapolation from normalcy plus naive behaviourism suggests that power is relatively decentralised on the global level, where we compare the relative power of different countries. There are nearly 200 independent countries in the world, and most of them make a lot of official decisions without overt foreign interference. While it's true that invasions do occur, they are relatively rare (the Russian invasion of Ukraine notwithstanding). Thus, naive behaviourism implies that power is decentralised under normal times, whereas extrapolation from normalcy extends that inference beyond normal times.
But in my view, the world is more centralised than these heuristics suggest. The easiest way to see that is to look at crises. During World War II, much of the economy was put under centralised control one way or another in many countries. Similarly, during Covid, many governments drastically curtailed individual liberties and companies’ economic activities (rightly or wrongly). And countries that want to acquire nuclear weapons (which can cause crises and wars) have found that they have less room to manoeuvre than the heuristics under discussion suggest. Accordingly, the US and other powerful nations have been able to reduce nuclear proliferation substantially (even though they’ve not been able to stop it entirely).
It is true that smaller actors have a substantial amount of freedom to shape their own destiny under normal times, and that’s an important fact. But still, who makes what official decisions under normal times is not a good proxy for power. On the national level, you’re typically not allowed to act independently when national security is perceived to be threatened. And on the global level, smaller nations are typically not allowed to act independently if global security, or the security of the most powerful nations, is perceived to be threatened. Thus, if you look beyond surface appearances, you realise that the world is more centralised than it seems.
Indeed, more sophisticated analyses of power reveal that that’s true even under normal times. As Steven Lukes argued convincingly in his classic Power: A Radical View, naive behaviourism is not a good way of thinking about power. Actors often refrain from taking certain actions because of veiled threats from more powerful actors—an exercise of power that naive behaviourism fails to classify as such. In other cases, the more powerful actors don’t have to do anything at all—the less powerful will simply know (e.g. based on general knowledge of global politics) that it would be unwise to challenge them in certain ways. Both on the national and the global level, different actors try to second-guess each other all the time. They often reason “no, we won’t do this thing, because then that other actor will do that thing, and that’ll upset this third powerful actor”. (Sometimes these chains of reasoning can be long and intricate.) This means that official and easily observable decisions will not mirror power structures in a straightforward way. Instead, the underlying power structure is often more centralised than surface appearances suggest.
In his Thinking, Fast and Slow, Daniel Kahneman argues that we intuitively think that “what you see is all there is” (WYSIATI). We overemphasise the importance of what’s easily observable, and fail to properly consider factors that are harder to observe. That might be an underlying reason that people employ extrapolation from normalcy and naive behaviourism. It’s easy to observe what’s going on here and now, and it’s easy to observe official decisions. Therefore, people might overemphasise what they say about power in general.
The President of the United States is very powerful, actually.
Relatedly, I think sleepwalk bias/the younger sibling fallacy plays a role: “the failure to see others as intentional and maximising agents”, who predict others’ behaviour and act accordingly. To understand power, we have to consider the fact that we’re looking at sophisticated actors who engage in complex reasoning. They’re thinking several steps ahead, trying to model each other. But we often fail to take that into account, tacitly assuming that people are implausibly myopic. And if people had been myopic, then they wouldn’t have refrained from taking action for fear of retaliation from more powerful actors. As a result, such hard-to-observe manifestations of power would have been rarer, and easy-to-observe behaviour would have been a better proxy for underlying power structures. As such, sleepwalk bias supports naive behaviourism.
Moreover, sleepwalk bias suggests that powerful actors won’t take proper action even when their security is threatened—that they’ll sleepwalk into disaster. That means (besides a greater risk of a catastrophic outcome) that the distribution of power will remain relatively decentralised even in a crisis. Thus, sleepwalk bias supports extrapolation from normalcy as well. But in my view, these assumptions are not right, since powerful actors do tend to take action to prevent disaster, which in turn typically increases their effective power.
Ideological epiphenomenalism, which says that much talked-about ideology actually has little causal influence on events, may also contribute to extrapolation from normalcy. In my view, ideology is a key reason that smaller actors have a fair amount of leeway under normal times. It’s largely because of (liberal) ideology that individuals have a fair amount of liberty to pursue their own projects without government interference under normal times. Likewise, it’s at least in part because of ideology that smaller countries usually don’t have to do exactly what more powerful countries tell them (cf. the Westphalian system).
However, if you deny that ideology has these effects, then you have to come up with some other explanation for why powerful actors don't constrain weaker actors more. The most obvious candidate is that they simply lack the requisite power. If that's your view, then you might also think that those powerful actors won't be able to bend weaker actors to their will even in a crisis. But I think that's wrong: they often can, and the reason that they don't do so under normal times is largely ideology. Under extreme circumstances, they let liberal and Westphalian values be trumped by security considerations. (Though those considerations, too, are typically infused with ideology; e.g. regarding a government's responsibilities to its people.)
***
If this analysis is correct, it has some implications for how to think about AI risk. As long as AI systems are relatively weak, we are living in relatively normal times, and consequently, companies are by and large left to their own devices. Similarly, powerful countries don't interfere with AI developments elsewhere. If extrapolation from normalcy were right, then such hands-off policies would continue under more extreme circumstances as well. But I don't think that's right. If the US government saw AI progress initiated by a domestic company or a foreign actor as a security threat, it wouldn't sit idly by. At that moment, the true power structure would reveal itself.
This isn’t to say that there will be such a moment. There are several reasons why there might not—including technical reasons I won’t cover here—but one that’s of special interest to me is that weaker actors are of course aware of the possibility of an intervention from the US government. Therefore, they will, in turn, likely not just sleepwalk into such an intervention, but will try to prevent it in some way; e.g. by reducing the threat the US government might perceive, and/or by maintaining a close dialogue. As discussed, such dynamics can get pretty complex, making predictions difficult. But regardless of whether push ever comes to shove, the power of the US government will likely influence these dynamics in a big way. Whether overt or covert, power is more centralised than it may seem.
The Dark Forest Problem implies that people centralizing power might face strong incentives to hide, act through proxies, and/or disguise their centralized power as decentralized power. The question is to what extent high-power systems are dark forests vs. the usual quid-pro-quo networks and stable factions.
Changing technology and applications for power, starting in the 1960s, implies that factions would not be stable and that iterative trust would be less reliable, and therefore that a dark forest system was more likely to emerge.