AI race

  • As μ increases, capability becomes increasingly important relative to safety in determining the outcome of the race, and teams become correspondingly less inclined to skimp on safety precautions. Conversely, lower values of μ are associated with fewer precautions; in the limiting case of μ = 0, teams will take no precautions at all (see the sketch after this list).
  • As enmity increases, the cost to each team of losing the race increases, and so teams become more inclined to skimp on safety precautions. But whereas the relative importance of capability is largely determined by technology, and is therefore mostly intractable, there are various interventions reasonably expected to decrease enmity, such as "building trust between nations and groups, sharing technologies or discoveries, merging into joint projects or agreeing to common aims."[5]
  • A less intuitive finding of the model concerns how capability and enmity relate to scenarios involving (1) no information; (2) private information (each team knows its own capability); and (3) public information (each team knows the capability of every team). No information is always safer than either private or public information. But while public information decreases risk relative to private information when both capability and enmity are low, the reverse holds for sufficiently high levels of capability or enmity.
  • Another surprising finding concerns the impact of the number of teams under different informational scenarios. When there is either no information or public information, risk strictly increases with the number of teams. But although this effect is also observed for private information when capability is low, as capability grows the effect eventually reverses.
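To make the first bullet's trade-off concrete, here is a minimal Monte Carlo sketch of a stylized race in this spirit. The functional forms are illustrative assumptions, not the paper's exact specification: capabilities are drawn uniformly from [0, 1], a team's score is μ·c + (1 − s) where s is its safety level, the highest score wins, and catastrophe risk equals 1 − s of the winner.

```python
import random

def simulate_race(mu, safety_levels, trials=100_000, seed=0):
    """Monte Carlo sketch of a stylized AI race.

    Illustrative assumptions (not the paper's exact specification):
      - each team's capability c_i is drawn uniformly from [0, 1];
      - a team's score is mu * c_i + (1 - s_i), so skimping on safety
        (low s_i) trades off against capability weighted by mu;
      - the team with the highest score wins, and the catastrophe
        probability in that trial is 1 - s_winner.

    Returns (win frequency per team, estimated catastrophe probability).
    """
    rng = random.Random(seed)
    n = len(safety_levels)
    wins = [0] * n
    disaster = 0.0
    for _ in range(trials):
        scores = [mu * rng.random() + (1 - s) for s in safety_levels]
        winner = max(range(n), key=scores.__getitem__)
        wins[winner] += 1
        disaster += 1 - safety_levels[winner]
    return [w / trials for w in wins], disaster / trials

# A fully cautious team (s = 1) racing a fully reckless one (s = 0):
# when mu = 0, capability is irrelevant and the reckless team always
# wins; as mu grows, skimping buys a smaller edge, so caution costs
# less in win probability and overall risk falls.
for mu in (0.0, 2.0, 10.0):
    win_freqs, risk = simulate_race(mu, safety_levels=[1.0, 0.0])
    print(f"mu={mu:>4}: cautious team wins {win_freqs[0]:.2f} of races, "
          f"catastrophe risk {risk:.2f}")
```

This toy simulation only illustrates the mechanics of how μ weights capability against safety skimping; it does not reproduce the paper's equilibrium analysis, under which the information-structure and team-number results in the later bullets are derived.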

AI races are sometimes cited as an example of an information hazard, i.e. a risk arising from the spread of true information. There are, in fact, a number of different hazards associated with AI races. One such hazard is the risk identified by Armstrong, Bostrom and Shulman: moving from a situation of no information to one of either private or public information increases risk. Another, more subtle information hazard concerns the sharing of information about the model itself: widespread awareness that no information is safer might encourage teams to adopt a culture of secrecy, which might impede the building of trust among rival teams.[6] More generally, the framing of AI development as involving a winner-take-all dynamic in discussions by public leaders and intellectuals may itself be considered hazardous, insofar as it is likely to hinder cooperation and exacerbate conflict.[7][8][9]