How does existentially dangerous technology get adopted and then locked in? I’d like to draw the attention of Forum readers to my new paper in the European Journal of International Relations, ‘Macrosecuritisation Failure and Technological Lock-in: Lessons from the History of the Bomb’. Building on arguments Nathan Sears developed before his tragic death, it argues that nuclear weapons had two characteristics that explain why they were built and why they are still with us today. These two characteristics are also likely to apply to climate engineering, and possibly to artificial intelligence.
First, at the time nuclear weapons were invented, people disagreed about whether they presented an existential threat, either in the sense of endangering the future of all humanity, or of requiring extraordinary measures to address them. Leading scientists and even many politicians called for the international control of atomic energy, or even world government. But Harry Truman was only half persuaded, and Stalin seems never even to have considered it. Nuclear weapons thus triggered the Collingridge dilemma: at the time of their invention, there was no consensus on the threat the new technology posed; by the time this was widely recognised, social arrangements had formed around the technology that made it costly and difficult to eliminate.
Second, nuclear weapons are widely perceived to reduce the frequency of a serious problem—major war—at the cost of much greater damage when it finally materialises. If we try to make nuclear deterrence work forever, we will fail. But the probability that it will fail soon may be fairly low. Nuclear deterrence thus involves the ‘tragedy of the uncommons’: low-frequency, high-impact threats tend to get less attention than they should, even when their consequences could be apocalyptic. One reason is that low-frequency catastrophic threats are not very salient. Few people are still alive who have personally experienced nuclear war. But another is that they shift much of the burden of risk to future generations.
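To see why the per-period risk can feel tolerable even when the long-run risk is not, a back-of-the-envelope illustration may help (the specific numbers here are mine, not the paper’s): if the annual probability of deterrence failure is p, then the probability of at least one failure within n years is

$$P(\text{at least one failure in } n \text{ years}) = 1 - (1 - p)^n.$$

With p at 1% per year, that is roughly 10% over a decade, 63% over a century, and 95% over three centuries. Any single decade carries only a modest share of the danger, which helps explain why it rarely feels urgent, even though over the span of several generations failure becomes close to certain.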
Considering the interests of all generations—both present and future—nuclear weapons are a bad bargain: sooner or later we will use them, and on our present trajectory, eventually in large numbers. Every generation after such a war will have to live with the consequences. But present-day citizens and their leaders capture most of nuclear weapons’ benefits, while externalising most of the expected costs to other countries, other species, and future generations. For them, nuclear deterrence might be a good self-interested gamble. Worse yet, the same cost-benefit calculus could apply to each successive generation, right up to the point at which nuclear war arrives.
In fact, it doesn’t appear that mass publics are consciously calculating in this cynical way. A fair amount of survey evidence, both from the Cold War and more recently, suggests that many Americans and Russians believe that nuclear war has a sizable chance of killing them. However, most don’t cite it as one of the country’s biggest problems. Why? Two key answers seem to be, as Howard Schuman and his colleagues wrote in 1986, that, in their view, ‘nuclear war is something to worry about for the distant future, [whereas] unemployment/inflation/budget cuts is an important problem here and now’, and ‘that the problem is one that the respondent can do nothing about’. Politicians—particularly those with a limited term in office—appear to take the same view. When there’s no simple solution to a threat, and it’s unlikely to materialise soon, it is easy to put off addressing it—and the same incentive applies at each time point.
That nuclear risk has become locked in this way is bad enough. Unfortunately, the same logic seems likely to apply to other low-frequency, high-impact risks. Suppose we start spraying sulphates into the upper atmosphere to reduce global heating. As with nuclear deterrence, it’s questionable that we could keep this up forever: eventually something—say, a nuclear war or a devastating pandemic—would likely interrupt it. When the spraying finally stopped, the temperature could soar. But this might be sufficiently improbable in any given year to encourage governments to go on pumping greenhouse gases into the atmosphere. As in the case of nuclear weapons, we would have converted a serious problem—major war, global heating—into a rarer but far more catastrophic, potentially existential, risk.
Nuclear war and geoengineering termination shock are what Bostrom calls ‘state risks’—the longer we remain in the vulnerable state, the more likely they are to materialise—whereas superintelligent AI is often seen as a transition risk: if we successfully align it, we should be out of the woods. However, it’s not clear that this is the case. First—as Simon Friederich and Leonard Dung argue in an article I didn’t read till after I published the paper—AI alignment may not be a discrete all-or-nothing process, but rather a messy, extended affair, less like curing polio than curing all disease. Second, even if we do solve it, it might not stay solved.
Over an extended period, the risk of misaligned AI might be high. But in each time period it might seem fairly low. As Yudkowsky and Soares put it in their new book, ‘Imagine that every competing AI company is climbing a ladder in the dark. At every rung but the top one, they get five times as much money. ... But if anyone reaches the top rung, the ladder explodes and kills everyone. Also, nobody knows where the ladder ends.’ Under these circumstances, it might always be tempting to climb another rung, at least for actors who discount the future. That is, after all, what we’ve done with nuclear weapons.
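To make the ladder intuition concrete, here is a minimal sketch in Python (the numbers and the payoff structure are illustrative assumptions of mine, not taken from Yudkowsky and Soares or from my paper) of why each additional rung can look worthwhile to an actor who discounts the future and bears only part of the catastrophe’s cost, even as the cumulative chance of disaster keeps growing:

```python
# Illustrative numbers only: why "one more rung" can look attractive each
# period, even though the cumulative odds of disaster keep growing.

def climb_looks_worthwhile(rung, reward=1.0, growth=5.0, hazard=0.02,
                           private_loss=100.0, discount=0.9):
    """Myopic expected-value test for climbing one more rung.

    hazard       -- assumed chance that this rung turns out to be the top one
    private_loss -- the share of the catastrophe's cost this actor itself bears
    discount     -- how much the actor values the next period relative to now
    """
    gain = reward * (growth ** rung)            # payoff if the rung is safe
    expected = (1 - hazard) * gain - hazard * private_loss
    return discount * expected > 0

def chance_no_disaster(n_rungs, hazard=0.02):
    """Probability that none of the first n_rungs climbs hits the top."""
    return (1 - hazard) ** n_rungs

if __name__ == "__main__":
    for rung in (1, 5, 20, 50, 100):
        print(f"rung {rung:3d}: worth climbing? {climb_looks_worthwhile(rung)}, "
              f"chance of no disaster so far: {chance_no_disaster(rung):.2f}")
```

With these assumptions every rung passes the myopic test, while the chance of having avoided disaster falls from 98% after one rung to about 13% after a hundred. That is the same structure as the nuclear case: each step looks acceptable to the actor taking it, and the accumulated risk lands elsewhere.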
My article develops these arguments at much greater length. I’d be very interested to know what EA Forum readers think of them.
