You are viewing revision 1.6.0, last edited by Pablo

An AI race is a competition between rival teams to first develop advanced artificial intelligence.


The expression "AI race" may be used in a somewhat narrower sense to describe a competition to attain military superiority via AI. The expressions AI arms race,[1] arms race for AI,[2] and military AI arms race[3] are sometimes employed to refer to this specific type of AI race.

AI races and technological races

An AI race is an instance of the broader phenomenon of a technology race, characterized by a "winner-take-all" structure in which the team that first develops the technology captures all (or most) of its benefits. This structure can arise from various feedback loops that magnify a leader's advantage. In the case of AI, these benefits are generally believed to be very large, perhaps sufficient to confer a decisive strategic advantage on the winning team.

Significance of AI races

AI races are significant primarily because of their effects on AI risk: a team can plausibly improve its chances of winning the race by relaxing safety precautions, and the payoffs from winning the race are great enough to provide strong incentives for that relaxation. In addition, a race that unfolds between national governments—rather than between private firms—could increase global instability and make great power conflicts more probable.

A model of AI races

Stuart Armstrong, Nick Bostrom and Carl Shulman have developed a model of AI races.[4]  (Although the model is focused on artificial intelligence, it is applicable to any technology where the first team to develop it gets a disproportionate share of its benefits and each team can speed up its development by relaxing the safety precautions needed to reduce the dangers associated with the technology.)

The model involves n different teams racing to be the first to build AI. Each team has a given AI-building capability c, as well as a chosen AI safety level s ranging from 0 (no precautions) to 1 (maximum precautions). The team for which c − s is highest wins the race, and the probability of AI disaster is 1 − s, where s is the winning team's safety level.

Utility is normalized so that, for each team, 0 utility corresponds to an AI disaster and 1 corresponds to winning the AI race. In addition, each team has a degree of enmity e towards the other teams, ranging from 0 to 1, such that it gets utility 1 – e if another team wins the race. The model assumes a constant value of e for all teams.

Each team's capability is drawn randomly from a uniform distribution ranging over the interval [0, μ], for a single given μ, with lower values representing lower capability.
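The setup above can be expressed as a short simulation. This is a minimal sketch, not the authors' own code: the function name is illustrative, and it assumes the winner is the team with the highest c − s.

```python
import random

def simulate_race(safety_levels, mu, enmity, rng=random):
    """Simulate one AI race round in the style of the Armstrong-Bostrom-Shulman model.

    safety_levels: each team's chosen safety s in [0, 1].
    mu: upper bound of the uniform capability distribution.
    enmity: e in [0, 1], assumed constant across teams.
    Returns the realized utility of each team.
    """
    n = len(safety_levels)
    # Capabilities are drawn uniformly from [0, mu].
    capabilities = [rng.uniform(0, mu) for _ in range(n)]
    # Assumption: the team with the highest c - s wins the race.
    scores = [c - s for c, s in zip(capabilities, safety_levels)]
    winner = max(range(n), key=lambda i: scores[i])
    # The winner's safety level determines the chance of disaster.
    if rng.random() < 1 - safety_levels[winner]:
        return [0.0] * n  # AI disaster: utility 0 for every team
    # Otherwise: utility 1 for the winner, 1 - e for each loser.
    return [1.0 if i == winner else 1 - enmity for i in range(n)]
```

For instance, if every team chooses maximum safety (s = 1), a disaster never occurs and the losers each receive utility 1 − e; if every team chooses s = 0, the winning team's AI is guaranteed to cause a disaster.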

From this model, a number of implications follow:

  • As μ increases, capability becomes increasingly important relative to safety in determining the outcome of the race, and teams become correspondingly less inclined to skimp on safety precautions. Conversely, lower values of μ are associated with fewer precautions; at the limiting case of μ = 0, teams will take no precautions at all.
  • As enmity increases, the cost to each team of losing the race increases, and so teams become more inclined to skimp on safety precautions. But whereas the relative importance of capability is largely determined by technology, and is as such mostly intractable, there are various interventions reasonably expected to decrease enmity, such as "building trust between nations and groups, sharing technologies or discoveries, merging into joint projects or agreeing to common aims."[5]
  • A less intuitive finding of the model concerns how capability and enmity relate to scenarios involving (1) no information; (2) private information (each team knows its own capability); and (3) public information (each team knows the capability of every team). No information is always safer than either private or public information. But while public information can decrease risk, relative to private information, when both capability and enmity are low, the reverse is the case for sufficiently high levels of capability or enmity.
  • Another surprising finding concerns the impact of the number of teams under different informational scenarios. When there is either no information or public information, risk strictly increases with the number of teams. But although this effect is also observed for private information when capability is low, as capability grows the effect eventually reverses.
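The first implication can be made concrete with a small Monte Carlo estimate. The sketch below is illustrative (the function name and the two-team setup are not from the source), and it again assumes the winner is the team with the highest c − s.

```python
import random

def win_prob_when_skimping(mu, trials=100_000, seed=0):
    """Estimate the probability that a team taking no precautions (s = 0)
    out-races a single rival taking maximum precautions (s = 1), given
    capabilities drawn uniformly from [0, mu] and winner = highest c - s."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        skimper = rng.uniform(0, mu) - 0.0   # score of the team with s = 0
        cautious = rng.uniform(0, mu) - 1.0  # score of the team with s = 1
        wins += skimper > cautious
    return wins / trials
```

For small μ (at most 1), the capability gap can never outweigh the safety gap, so the skimping team wins every time; as μ grows, capability differences dominate and the skimper's advantage shrinks toward a coin flip, so there is little to gain from cutting precautions.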

AI races and information hazards

AI races are sometimes cited as an example of an information hazard, i.e. a risk arising from the spread of true information. In fact, a number of different hazards are associated with AI races. One is the risk identified by Armstrong, Bostrom and Shulman: moving from a situation of no information to one of either private or public information increases risk. Another, more subtle information hazard concerns the sharing of information about the model itself: widespread awareness that no information is safer might encourage teams to adopt a culture of secrecy, which could in turn impede the building of trust among rival teams.[6] More generally, the framing of AI development as a winner-take-all race in discussions by public leaders and intellectuals may itself be hazardous, insofar as it is likely to hinder cooperation and exacerbate conflict.[7][8][9]

