Richard Ngo recently wrote:

Instead of analyzing whether AI takeoff will be “fast” or “slow”, I now prefer to think about the spectrum from concentrated takeoff (within one organization in one country) to distributed takeoff (involving many organizations and countries).

I agree that this element of AI takeoff is highly important. The degree of AI concentration, or conversely the extent of AI diffusion, is crucial for predicting how AI technologies will unfold and what specific risks they might entail. Understanding this variable therefore matters not only for forecasting AI's trajectory but also for informing the policies that should govern its development.

However, I believe that discussions of AI concentration often lack the clarity needed to tell what a speaker actually means by "concentration." This vagueness can lead to confusion and miscommunication. To address it, I want to propose three distinct dimensions of AI concentration, which I believe are often conflated but should be treated separately in any serious discussion of the topic.

These dimensions are:

  1. The concentration of AI development itself. Here, the question is: to what extent is the development of cutting-edge AI models dominated by a small number of actors? If a handful of companies or governments are responsible for the majority of significant AI innovations, or for the majority of state-of-the-art models, then AI development is highly concentrated. This can be quantitatively assessed by looking at the market share, or the share of technical contributions, of the top AI developers, whether measured in compute resources, revenue, or research breakthroughs (a simple way to compute such measures is sketched after this list). Conversely, if many organizations are actively contributing to AI advancements, then development is more diffuse.
  2. The concentration of AI service providers. Even if AI development is monopolized by a few key players, this would not necessarily mean that AI services are similarly concentrated. The developers of AI models might license their technologies to numerous companies, which in turn host these models on their own servers and make them accessible to a broad range of users. In this scenario, while the models originate from a small number of firms, the provisioning of AI services is decentralized, with many independent providers offering access to AIs that ultimately derive from a few concentrated sources.
  3. The concentration of control over AI services. This focuses on who has the power to direct AI systems and determine what tasks they perform. At one extreme, control could be highly centralized: for instance, all major AI systems might obey the direct orders of a single government or individual, such as the U.S. president, sitting squarely within their chain of command. At the other extreme, control could be highly decentralized, with billions of individual users able to dictate how AI systems are deployed, whether by renting AI capabilities for specific tasks or by interacting directly with service providers, e.g. through an API or a chat interface. This end of the spectrum could also be realized if there were billions of distinct AI agents, each autonomously pursuing its own separate objectives.
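
To make the measurement point in the first dimension concrete, here is a minimal Python sketch of two standard concentration metrics, the Herfindahl-Hirschman Index (HHI) and a top-k share, which could be applied to compute, revenue, or research output along any of these axes. The share figures below are hypothetical placeholders, not real market data.

```python
# Minimal sketch of two standard concentration metrics.
# The input shares are hypothetical, not real market data.

def hhi(shares):
    """Herfindahl-Hirschman Index on fractional shares summing to 1.
    Ranges from 1/n (perfectly diffuse) to 1.0 (a single actor)."""
    return sum(s ** 2 for s in shares)

def top_k_share(shares, k=3):
    """Fraction of the total held by the k largest actors."""
    return sum(sorted(shares, reverse=True)[:k])

# Hypothetical compute shares among frontier-model developers (dimension 1).
dev_shares = [0.35, 0.30, 0.20, 0.10, 0.05]

print(f"HHI: {hhi(dev_shares):.3f}")                  # 0.265
print(f"Top-3 share: {top_k_share(dev_shares):.0%}")  # 85%
```

For reference, U.S. antitrust guidelines treat an HHI above 0.25 (2,500 on the conventional 0-10,000 scale) as highly concentrated, so the hypothetical figures above would describe a concentrated development landscape.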

These three axes—development, service provisioning, and control—are conceptually distinct, and each can vary independently of the others. For example, AI development might be concentrated in the hands of a few large organizations, but those organizations could still distribute their models widely, allowing for a more decentralized ecosystem of service providers. Alternatively, AI services could be concentrated under a small number of providers while users retain considerable autonomy over how those services are used and how AIs are fine-tuned to match individual preferences, yielding a decentralized form of control.

Based on the current state of AI, I believe we are witnessing clear indications that AI development is becoming concentrated, with a small number of actors leading the way in producing the most advanced models. There is also moderately strong evidence that the provisioning of AI services is consolidating, as larger players build out vast AI infrastructures. However, when it comes to control, the picture seems to be more diffuse, with a large number of users likely retaining substantial power over how AI systems are applied through their usage and demand for the technology.

These distinctions matter a great deal. Without clearly distinguishing between these different dimensions of AI concentration, we risk talking past one another. For instance, one person might argue that AI is highly concentrated (because a few firms dominate development), while another might claim that AI is highly decentralized (because billions of users ultimately have control over how the technology is used). Both could be correct, yet their conclusions might seem contradictory because they are referring to different axes of concentration.

This distinction between different forms of AI concentration is especially important when considering the risks of AI misalignment. A common concern is that all powerful AIs in the world could effectively "merge" into a single, unified agent acting in concert toward a singular goal. In that scenario, if the collective AI entity were misaligned with human values, it might have strong incentives to violently seize control of the world or orchestrate a coup, posing a dire existential threat to humanity. This vision of risk implicitly assumes a high degree of concentration in the third sense: control over AI systems is centralized and tightly unified under one entity, namely the AI agent itself.

However, this outcome becomes less plausible if AI is concentrated only in the first or second sense, meaning that development or service provisioning is controlled by a small number of organizations while control over what AIs actually do remains decentralized. If numerous actors, such as individual users, retain the ability to direct AI systems toward different tasks or goals, including through fine-tuning, the risk of all AIs aligning under a single objective diminishes. In this more decentralized control structure, even if a few organizations dominate AI development or service infrastructure, the risk of a unified, misaligned super-agent more powerful than the entire rest of the world combined becomes significantly less pressing.

Therefore, in discussions about the future of AI, it is crucial to be precise about which dimension of concentration we are referring to. Terms like "concentrated AI takeoff" and "distributed AI takeoff" are too ambiguous on their own to pick out the highly important, policy-relevant features of the situation we find ourselves in. To mitigate this, I suggest we adopt clearer language that differentiates between concentration in development, in service provisioning, and in control, so that we can have more meaningful conversations about the trajectory of AI.

