An AI race is a competition between rival teams to first develop advanced artificial intelligence.
The expression "AI race" may be used in a somewhat narrower sense to describe a competition to attain military superiority via AI. The expressions AI arms race, arms race for AI, and military AI arms race are sometimes employed to refer to this specific type of AI race.
An AI race is an example of the broader phenomenon of a technology race, characterized by a "winner-take-all" structure in which the team that first develops the technology captures all (or most) of its benefits. This could happen because of various types of feedback loops that magnify the associated benefits. In the case of AI, these benefits are generally believed to be very large, perhaps sufficient to confer a decisive strategic advantage on the winning team.
AI races are significant primarily because of their effects on AI risk: a team can plausibly improve its chances of winning the race by relaxing safety precautions, and the payoffs from winning the race are great enough to provide strong incentives for that relaxation. In addition, a race that unfolds between national governments—rather than between private firms—could increase global instability and make great-power conflicts more probable.
Stuart Armstrong, Nick Bostrom and Carl Shulman have developed a model of AI races. (Although the model is focused on artificial intelligence, it is applicable to any technology where the first team to develop it gets a disproportionate share of its benefits and each team can speed up its development by relaxing the safety precautions needed to reduce the dangers associated with the technology.)
The model involves n different teams racing to be the first to build AI. Each team has a given AI-building capability c and chooses an AI safety level s, ranging from 0 (no precautions) to 1 (maximum precautions). The team for which c – s is highest wins the race, and the probability of AI disaster is 1 – s, where s is the winning team's safety level.
Utility is normalized so that, for each team, 0 utility corresponds to an AI disaster and 1 corresponds to winning the AI race. In addition, each team has a degree of enmity e towards the other teams, ranging from 0 to 1, such that it gets utility 1 – e if another team wins the race. The model assumes a constant value of e for all teams.
Each team's capability is drawn randomly from a uniform distribution over the interval [0, μ], where μ is a single parameter common to all teams and lower values of c represent lower capability. Since safety choices range only over [0, 1], μ measures how much capability matters relative to safety in determining the winner: when μ is large, differences in capability typically dwarf any advantage a team can gain by skimping on safety.
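The core mechanics of the model are easy to simulate. The sketch below is a minimal illustration in Python, not code from Armstrong, Bostrom and Shulman: it takes the teams' safety levels as given inputs (the paper derives them as equilibrium choices, which the sketch omits), draws capabilities, determines the winner, and realizes the disaster probability and utilities exactly as defined above. The function name and parameters are hypothetical.

```python
import random

def simulate_race(safety, mu, enmity, rng=random):
    """Simulate one race among len(safety) teams.

    safety -- list of chosen safety levels s_i in [0, 1]
    mu     -- upper bound of the uniform capability distribution
    enmity -- enmity e in [0, 1], assumed equal for all teams

    Returns (winner index, disaster flag, list of realized utilities).
    """
    n = len(safety)
    # Each team's capability is drawn uniformly from [0, mu].
    capability = [rng.uniform(0, mu) for _ in range(n)]
    # The team with the highest c - s wins the race.
    winner = max(range(n), key=lambda i: capability[i] - safety[i])
    # The probability of AI disaster is 1 - s for the winning team.
    disaster = rng.random() < 1 - safety[winner]
    if disaster:
        utilities = [0.0] * n           # disaster: utility 0 for everyone
    else:
        utilities = [1.0 - enmity] * n  # losers get utility 1 - e
        utilities[winner] = 1.0         # the winner gets utility 1
    return winner, disaster, utilities

# Example: three teams with differing safety choices.
winner, disaster, utils = simulate_race([1.0, 0.8, 0.5], mu=2.0, enmity=0.5)
print(winner, disaster, utils)
```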
From this model, a number of implications follow:

- Risk decreases as capability becomes more important than safety in determining the winner (that is, as μ grows): when capability differences dominate, skimping on safety rarely changes who wins, so teams gain little by cutting precautions.
- Risk increases with the degree of enmity e: the worse the outcome of losing, the more safety a team is willing to sacrifice in order to win.
- Risk increases with the number of competing teams n, at least when teams have information about one another's capabilities.
- Risk increases with the amount of information available: teams are safest when they know neither their own capabilities nor those of their rivals, and moving to either private or public information increases the danger.
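One way to see why μ matters is to estimate, by simulation, how much a single team gains in win probability by cutting safety while its rivals keep maximum precautions. The sketch below is a hypothetical illustration under the model's assumptions, not an analysis from the paper; `win_probability` and its parameters are invented for the example.

```python
import random

def win_probability(s_own, n_rivals, mu, trials=100_000, rng=random):
    """Monte Carlo estimate of P(win) for a team choosing safety s_own
    against n_rivals teams that all choose maximum safety s = 1."""
    wins = 0
    for _ in range(trials):
        own_score = rng.uniform(0, mu) - s_own           # c - s for our team
        rival_best = max(rng.uniform(0, mu) - 1.0        # rivals' best c - s
                         for _ in range(n_rivals))
        if own_score > rival_best:
            wins += 1
    return wins / trials

for mu in (0.5, 1.0, 5.0):
    p_safe = win_probability(1.0, n_rivals=2, mu=mu)
    p_risky = win_probability(0.0, n_rivals=2, mu=mu)
    print(f"mu={mu}: P(win | s=1) = {p_safe:.2f}, P(win | s=0) = {p_risky:.2f}")
```

With μ = 0.5, a team that abandons all precautions beats fully cautious rivals with certainty; with μ = 5, the same sacrifice raises its chances only partially above the symmetric baseline. This is why a large μ weakens the incentive to skimp on safety.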
AI races are sometimes cited as an example of an information hazard, i.e. a risk arising from the spread of true information. There are, in fact, several distinct hazards associated with AI races. One is the risk identified by Armstrong, Bostrom and Shulman: moving from a situation of no information to one of either private or public information increases risk. Another, more subtle information hazard concerns the sharing of the model's own findings: widespread awareness that no information is the safest condition might encourage teams to adopt a culture of secrecy, which could impede the building of trust among rival teams. More generally, the framing of AI development as a winner-take-all race by public leaders and intellectuals may itself be considered hazardous, insofar as it is likely to hinder cooperation and exacerbate conflict.