In this post, we (Nicholas Emery-Xu, Andrew Park, and Robert Trager) summarize the results of our paper modeling the role of information in risky technology races. The current version can be accessed here.

1 Is revealing technological capability beneficial?

Imagine that, in the not-too-distant future, actors are developing a new technology that gives the winner a discontinuous advantage, economic or geopolitical, over their rivals. Because of this advantage, actors are tempted to cut corners on safety and focus on capabilities research, producing a safety-performance tradeoff.[1] In such a scenario, is it better if actors know each other’s capability levels? On the one hand, public knowledge of capabilities prevents actors from mistakenly believing they are behind and entering a dangerous arms race. On the other hand, if actors learn they are close in capability, they may engage in a dangerous race to the bottom, cutting corners on safety to gain a small edge. In many cases, however, we do not observe actors engaging in such a race. Top AI labs, for example, regularly publish results on capabilities but continue to invest significantly in safety research, defying the predictions in Armstrong et al. (2016).[2] What’s going on? In our paper, we find that when the technological development path is even moderately uncertain, even actors who know they are close in capability are unwilling to engage in a dangerous race to the bottom, because the expected returns to doing so aren’t worth the risk of an accident.

We build on the model in Armstrong et al. (2016)[2] (see “AI race” for a useful description) to study a technology race with a safety-performance tradeoff in which capabilities investments are unknown, privately known, or publicly known. However, we argue that their results depend on two unrealistic assumptions: that the actor with higher capability investment wins the race for sure and that only the winner’s actions contribute to the risk of an accident. In our work, we relax these assumptions.

2 Decisiveness

First, we argue that actors are unlikely to be certain about how additional units of capability investment translate into success in the race. In a race for a novel technology, there is likely to be randomness in the development process. For example, while Enrico Fermi was the first to use graphite as a neutron moderator, the lagging Soviets had independently made the same discovery but failed to apply it to their nuclear program.[3] We capture this randomness with a decisiveness parameter. If the parameter is zero, each actor is equally likely to win the race, regardless of effort. As it increases, the more capable actor is increasingly likely to win. Thus, in more decisive races, each additional unit of capability investment is more likely to bring success.
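The summary does not give the paper’s exact functional form, but a decisiveness parameter of this kind is often modeled with a Tullock-style contest success function. The sketch below is purely illustrative and assumes that form; the names and the formula are our assumptions, not the paper’s code:

```python
def win_probability(c_i: float, c_j: float, d: float) -> float:
    """Probability that actor i wins a race against actor j, given
    capability investments c_i, c_j > 0 and decisiveness d >= 0.

    d = 0: a pure coin flip, regardless of effort.
    As d grows, the more capable actor wins with near certainty.
    """
    return c_i**d / (c_i**d + c_j**d)

# At d = 0, investment is irrelevant: each actor wins half the time.
print(win_probability(3.0, 1.0, d=0.0))   # 0.5

# As decisiveness rises, the leader's edge compounds.
print(win_probability(3.0, 1.0, d=1.0))   # 0.75
print(win_probability(3.0, 1.0, d=10.0))  # ~0.99998
```

Under a form like this, the expected return to an extra unit of capability investment grows with d, which is one way to see why risky corner-cutting only pays off in highly decisive races.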

What are the effects? At low levels of decisiveness, the randomness outweighs the benefits of taking risks, so the race is perfectly safe under all information conditions. As decisiveness increases, risk almost always increases, and the private-information case is initially the most dangerous; only at high levels of decisiveness does the public-information case become the most dangerous. In other words, there is no information hazard unless the race is highly decisive. For less decisive races, it is better to reveal capabilities than to keep them private.

3 Safety provision

The other assumption we challenge is that only the winner of the race contributes to safety. Instead, we add a safety-sharing parameter so that overall safety is a weighted sum of both actors’ contributions. We might imagine, for example, that a lagging actor could conduct tests, either in preparation for the mature technology or at a lower level of technological competence in response to the leader. We find that increasing the proportion of risk contributed by the losing actor produces two effects. First, a moral hazard effect increases risk: just as with the provision of public goods like reducing carbon emissions, the less actors benefit from their own safety provision, the more they shirk. Second, less capable actors increase their safety provision, because their choices now matter even if they lose the race. We find that the moral hazard effect is stronger, so reducing the winner’s share of safety provision increases expected disaster risk.
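To make the weighting concrete, here is a minimal sketch of how a safety-sharing parameter could enter the disaster probability. The linear form and the names are our illustrative assumptions, not the paper’s specification:

```python
def disaster_probability(s_winner: float, s_loser: float, theta: float) -> float:
    """Disaster probability when overall safety is a weighted sum of the
    winner's and loser's safety levels (each in [0, 1]).

    theta is the winner's share of safety provision: theta = 1 recovers
    the case in which only the winner's safety matters, as in Armstrong
    et al. (2016); lower theta shifts weight onto the losing actor.
    """
    overall_safety = theta * s_winner + (1 - theta) * s_loser
    return 1.0 - overall_safety

# When theta = 1, only the winner's choice matters.
print(disaster_probability(0.9, 0.1, theta=1.0))  # ~0.1

# With shared provision, a careless loser drags overall safety down.
print(disaster_probability(0.9, 0.1, theta=0.5))  # ~0.5
```

In this toy version, the moral hazard effect corresponds to the winner’s marginal benefit from its own safety falling from 1 to theta, while the loser’s growing weight (1 − theta) is what induces laggards to invest more.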

4 Conclusion

As a result of this work, we present three tentative conclusions. First, when deciding whether to incentivize actors to reveal capabilities, it is important to know how uncertain they are about technological progress. At least in the early, uncertain stages of development, public knowledge of capabilities seems to be beneficial. Second, if losers of the race can cause a disaster, policymakers should shift their focus from low-capability actors to high-capability actors, providing the latter with incentives to reduce shirking, for example by sharing safety knowledge with the leader. Third, we caution against updating too strongly on any specific model (including our own!) and instead urge readers to reason through or empirically test its modeling assumptions in the context of real-world races.

  1. ^

    Robert Trager, Paolo Bova, Nicholas Emery-Xu, Eoghan Stafford, and Allan Dafoe, "Welfare Implications of Safety-Performance Tradeoffs in AI Safety Research," Working paper, August 2022.

  2. ^

    Stuart Armstrong, Nick Bostrom, and Carl Shulman, "Racing to the precipice: a model of artificial intelligence development," AI & Society, 31(2):201–206, May 2016, http://link.springer.com/10.1007/s00146-015-0590-y.

  3. ^

    Toby Ord, "Lessons from the development of the atomic bomb," Working paper.



Fantastic summary, Nicholas, Andrew, and Robert. I'm looking forward to reading the paper.

A few quick thoughts on the summary:

  1. It's reassuring to hear that information hazards are unlikely for lower values of the decisiveness parameter. One relevant follow-up question is how AI developers might form an opinion on what value the decisiveness parameter takes. Is this something we can hope to influence?
  2. It's not quite as reassuring to hear that framing AI Safety as a group effort might discourage safety investments due to moral hazard. I do find your proposal to share safety knowledge with the leader to be promising. We might also want policymakers to have some way to ensure that those sharing this safety knowledge were well compensated. Doing so might give a preemptive motive for companies to invest in safety, as they might be able to sell it to the leader if they fall behind in the race.
  3. I really like that you caution against updating on the basis of a model alone. It encourages me to think about how we might empirically test these claims concerning moral hazard and decisiveness.

Thanks for your thoughts!

  1. Good question! I'll need to think more about this, but my initial impression is that regular surveying of developers about AI progress could help by quantifying their level of uncertainty over the arrival rate of particular milestones, which is likely correlated with how they believe expected capabilities investments map onto progress.
  2. That seems right, though it likely depends upon how substitutable safety research is across firms.