This is a crosspost of "Crises reveal centralisation" by Stefan Schubert, published on 3 May 2023.

An important question for people focused on AI risk, and indeed for anyone trying to influence the world, is: how centralised is power? Are there dominant actors that wield most of the power, or is it more equally distributed?

We can ask this question on two levels:

On the national level, how powerful is the central power—the government—relative to smaller actors, like private companies, nonprofits, and individual people? 

On the global level, how powerful are the most powerful countries—in particular, the United States—relative to smaller countries?

I think there are some common heuristics that lead people to think that power is more decentralised than it is, on both of these levels.

One of these heuristics is what we can call “extrapolation from normalcy”: 

Extrapolation from normalcy: the view that an actor seeming to have power here and now (in relatively normal times) is a good proxy for it having power tout court.

It’s often propped up by a related assumption about the epistemology of power: 

Naive behaviourism about power (naive behaviourism, for short): the view that there is a direct correspondence between an actor’s power and the official and easily observable actions it takes.

In other words, if an actor is powerful, then that will be reflected by official and easily observable actions, like widely publicised company investments or official government policies.

Extrapolation from normalcy plus naive behaviourism suggests that the distribution of power is relatively decentralised on the national level. In normal times, companies are pursuing many projects that have consequential social effects (e.g. the Internet and its many applications). While these projects are subject to government regulation to some extent, private companies normally retain a lot of leeway (depending on what they want to do). This suggests (the more so, the more you believe in naive behaviourism) that companies have quite a lot of power relative to governments in normal times. And extrapolation from normalcy implies that this isn't just true in normal times, but holds true more generally.

Similarly, extrapolation from normalcy plus naive behaviourism suggests that power is relatively decentralised on the global level, where we compare the relative power of different countries. There are nearly 200 independent countries in the world, and most of them make a lot of official decisions without overt foreign interference. While it's true that invasions do occur, they are relatively rare (the Russian invasion of Ukraine notwithstanding). Thus, naive behaviourism implies that power is decentralised in normal times, whereas extrapolation from normalcy extends that inference beyond normal times.

But in my view, the world is more centralised than these heuristics suggest. The easiest way to see that is to look at crises. During World War II, much of the economy was put under centralised control one way or another in many countries. Similarly, during Covid, many governments drastically curtailed individual liberties and companies’ economic activities (rightly or wrongly). And countries that want to acquire nuclear weapons (which can cause crises and wars) have found that they have less room to manoeuvre than the heuristics under discussion suggest. Accordingly, the US and other powerful nations have been able to reduce nuclear proliferation substantially (even though they’ve not been able to stop it entirely).

It is true that smaller actors have a substantial amount of freedom to shape their own destiny in normal times, and that's an important fact. But still, who makes what official decisions in normal times is not a good proxy for power. On the national level, you're typically not allowed to act independently when national security is perceived to be threatened. And on the global level, smaller nations are typically not allowed to act independently if global security, or the security of the most powerful nations, is perceived to be threatened. Thus, if you look beyond surface appearances, you realise that the world is more centralised than it seems.

Indeed, more sophisticated analyses of power reveal that that's true even in normal times. As Steven Lukes argued convincingly in his classic Power: A Radical View, naive behaviourism is not a good way of thinking about power. Actors often refrain from taking certain actions because of veiled threats from more powerful actors—an exercise of power that naive behaviourism fails to classify as such. In other cases, the more powerful actors don't have to do anything at all—the less powerful will simply know (e.g. based on general knowledge of global politics) that it would be unwise to challenge them in certain ways. Both on the national and the global level, different actors try to second-guess each other all the time. They often reason "no, we won't do this thing, because then that other actor will do that thing, and that'll upset this third powerful actor". (Sometimes these chains of reasoning can be long and intricate.) This means that official and easily observable decisions will not mirror power structures in a straightforward way. Instead, the underlying power structure is often more centralised than surface appearances suggest.

In his Thinking, Fast and Slow, Daniel Kahneman argues that we intuitively think that "what you see is all there is" (WYSIATI). We overemphasise the importance of what's easily observable, and fail to properly consider factors that are harder to observe. That might be an underlying reason that people employ extrapolation from normalcy and naive behaviourism. It's easy to observe what's going on here and now, and it's easy to observe official decisions. Therefore, people might overemphasise what those easily observable facts say about power in general.

 

[Image caption: The President of the United States is very powerful, actually.]

Relatedly, I think sleepwalk bias/the younger sibling fallacy plays a role: “the failure to see others as intentional and maximising agents”, who predict others’ behaviour and act accordingly. To understand power, we have to consider the fact that we’re looking at sophisticated actors who engage in complex reasoning. They’re thinking several steps ahead, trying to model each other. But we often fail to take that into account, tacitly assuming that people are implausibly myopic. And if people had been myopic, then they wouldn’t have refrained from taking action for fear of retaliation from more powerful actors. As a result, such hard-to-observe manifestations of power would have been rarer, and easy-to-observe behaviour would have been a better proxy for underlying power structures. As such, sleepwalk bias supports naive behaviourism.

Moreover, sleepwalk bias suggests that powerful actors won’t take proper action even when their security is threatened—that they’ll sleepwalk into disaster. That means (besides a greater risk of a catastrophic outcome) that the distribution of power will remain relatively decentralised even in a crisis. Thus, sleepwalk bias supports extrapolation from normalcy as well. But in my view, these assumptions are not right, since powerful actors do tend to take action to prevent disaster, which in turn typically increases their effective power.

Ideological epiphenomenalism, which says that much talked-about ideology actually has little causal influence on events, may also contribute to extrapolation from normalcy. In my view, ideology is a key reason that smaller actors have a fair amount of leeway in normal times. It's largely because of (liberal) ideology that individuals have a fair amount of liberty to pursue their own projects without government interference in normal times. Likewise, it's at least in part because of ideology that smaller countries usually don't have to do exactly what more powerful countries tell them (cf. the Westphalian system).

However, if you deny that ideology has these effects, then you have to come up with some other explanation for why powerful actors don't constrain weaker actors more. The most obvious candidate is that they simply lack the requisite power. If that's your view, then you might also think that those powerful actors won't be able to bend weaker actors to their will even in a crisis. But I think that's wrong: they often can, and the reason that they don't do so in normal times is largely ideology. Under extreme circumstances, they let liberal and Westphalian values be trumped by security considerations. (Though security considerations, too, are typically infused with ideology, e.g. regarding a government's responsibilities to its people.)

***

If this analysis is correct, it has some implications for how to think about AI risk. As long as AI systems are relatively weak, we are living in relatively normal times, and consequently, companies are by and large left to their own devices. Similarly, powerful countries don't interfere with AI developments elsewhere. If extrapolation from normalcy were right, then such hands-off policies would continue under more extreme circumstances as well. But I don't think that's right. If the US government saw AI progress initiated by a domestic company or a foreign actor as a security threat, it wouldn't sit idly by. At that moment, the true power structure would reveal itself.

This isn’t to say that there will be such a moment. There are several reasons why there might not—including technical reasons I won’t cover here—but one that’s of special interest to me is that weaker actors are of course aware of the possibility of an intervention from the US government. Therefore, they will, in turn, likely not just sleepwalk into such an intervention, but will try to prevent it in some way; e.g. by reducing the threat the US government might perceive, and/or by maintaining a close dialogue. As discussed, such dynamics can get pretty complex, making predictions difficult. But regardless of whether push ever comes to shove, the power of the US government will likely influence these dynamics in a big way. Whether overt or covert, power is more centralised than it may seem.

Comments (2)



Executive summary: Crises reveal that power is more centralized than it appears during normal times, both within nations and globally, which has implications for thinking about AI risk.

Key points:

  1. "Extrapolation from normalcy" and "naive behaviourism" heuristics lead people to think power is more decentralized than it actually is.
  2. In crises, governments exert more centralized control nationally, and powerful countries constrain smaller countries globally.
  3. Even in normal times, less powerful actors often refrain from challenging more powerful ones due to implicit threats or power dynamics.
  4. Ideology, such as liberal values, is a key reason smaller actors have leeway under normal circumstances.
  5. If AI progress by companies or foreign actors is seen as a security threat, the US government would likely intervene, revealing the true centralized power structure.

 

 

This comment was auto-generated by the EA Forum Team.

The Dark Forest Problem implies that people centralizing power might face strong incentives to hide, act through proxies, and/or disguise their centralized power as decentralized power. The question is to what extent high-power systems are dark forests vs. the usual quid-pro-quo networks and stable factions.

Changing technology and its applications for power, starting in the 1960s, imply that factions would not be stable and that iterative trust would be less reliable, and therefore that a dark forest system was more likely to emerge.
