
Summary: There are historical precedents where bans or crushing regulations stop the progress of technology in one industry, while progress in the rest of society continues. This is a plausible future for AI.

Epistemic Status: My intuitions here strongly disagree with those of many other people. I hope to explain my intuitions and provide enough historical evidence to make them at least plausible.

 

Introduction: An Intuition Pump

Suppose you told someone in 1978 that no new nuclear power plants would be built in the US until 2023.[1] This would probably be very surprising. Nuclear power was supposed to be the power of the future.[2] The Nuclear Regulatory Commission had only been created 3 years earlier.

Given this information, someone in 1978 might predict that something terrible was about to happen. Maybe a nuclear war between the USA and USSR that destroys America’s industrial capacity. Maybe economic collapse due to overpopulation or global warming. Maybe an Orwellian police state in time for 1984, or a World Authority designed to regulate nuclear weapons that got out of hand.[3]

None of this happened. Instead, the Nuclear Regulatory Commission ratcheted up regulation[4] until building new nuclear power plants became uneconomical. These regulations only applied to the USA, but they seem to have significantly impacted nuclear power research globally. Countries that are building new nuclear power plants are still using designs that were developed before 1970.[5]

Regulation on nuclear power probably did slow US economic growth over the next 45 years compared to the counterfactual.[6] But the past 45 years have hardly been catastrophic. Economic growth and innovation did continue, driven by other industries.

 

Consequences of Stopping AGI?

Some people involved in the debate about slowing or pausing AI seem to think that successfully stopping AI progress over the long term would likely lead to death or dystopia:

Either we figure out how to make AGI go well or we wait for the asteroid to hit.

 - Sam Altman[7]

 

If we don’t get AI, I think there’s a 50%+ chance in the next 100 years we end up dead or careening towards Venezuela.

 - Scott Alexander[8]

 

I think we should be quite worried that the global government needed to enforce such a ban would greatly increase the risk of permanent tyranny, itself an existential catastrophe.

 - Nora Belrose[9]

 

It seems likely that we would need to create a worldwide police state, as otherwise [an indefinite AI pause] would fail in the long run.

 - Matthew Barnett[10]

It feels to me like this is the same sort of mistake that our hypothetical person from 1978 made. It might seem like AI will be an extremely important thing in the future, and so something dramatic would have to happen in order to prevent it. I think that we should put more probability on the boring future where regulation stifles this one field, while the rest of society continues as it had before.

This seems like an important disagreement. If you think that our descendants’ lives will be pretty good, and getting better if not unimaginably quickly, then stopping AI progress might be worth it for them. If you think that our descendants’ future will be “short and grim,”[8] then they might be less of a consideration when deciding whether to take this risk now.

 

Specific Concerns

Scott Alexander mentions several specific concerns that cause him to be pessimistic about a future without AI progress. Each of them seems like a real problem that people now and in the future should be trying to solve. We should be improving biosecurity,[11] promoting economic growth,[12] spreading democracy & freedom,[13] and giving people hope in future generations. But none of these seems even close to having a 50% chance of causing death or dystopia in the next 100 years. Avoiding them would not be worth accepting the existential risk from AI, which Scott Alexander estimates at a ~20% chance of causing human extinction.
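To make the quantitative disagreement concrete, here is a minimal sketch of the naive expected-risk comparison, in Python. The 20% and 50% figures are Scott Alexander's, quoted above; the 5% figure for the "boring future" view is an assumed number for illustration only, not a sourced estimate:

```python
# Naive expected-risk comparison between pausing and not pausing AI.
# The "boring future" probability is an assumption for illustration.

p_ai_doom = 0.20                 # Scott Alexander: ~20% chance AI destroys the world
p_no_ai_doom_pessimist = 0.50    # his "50%+" chance of death/dystopia without AI
p_no_ai_doom_boring = 0.05       # assumed figure for the "boring future" view

def prefer_pause(p_ai: float, p_no_ai: float) -> bool:
    """On a naive comparison, pausing looks better whenever the
    no-AI catastrophe risk is below the AI catastrophe risk."""
    return p_no_ai < p_ai

print(prefer_pause(p_ai_doom, p_no_ai_doom_pessimist))  # False: pausing looks worse
print(prefer_pause(p_ai_doom, p_no_ai_doom_boring))     # True: pausing looks better
```

On this toy comparison, the whole disagreement reduces to which no-AI probability you believe, which is why the historical base rate for non-AI futures matters so much.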

Both Nora Belrose and Matthew Barnett are concerned that a global police state would be needed to enforce a long-term ban on AI progress. This position does not seem uncommon in the AI safety community. The concerns are that research might shift to locations with fewer regulations, and that algorithmic progress will make AGI possible on a personal computer. The only way to avoid AGI would then be a massive expansion of global government power.

 

Other Historical Examples

I do not think that these concerns have been realized with other technologies. 

Regulations in one industry do not stop progress in all other industries. People in the Bay Area likely underestimate the importance of emerging technologies other than AI, or software more generally, because information technology is disproportionately important in the local economy.[14] I would similarly expect that people living in Detroit in 1950 would underestimate the importance of emerging technologies other than cars. Lots of progress is still possible without AI. Two emerging technologies I am particularly excited about are fusion and space colonization.

Regulations in one country can stop progress in a single industry. Progress stopping in a particular industry is not that uncommon.[15] Most innovation in a particular industry is done in one or a few cities. These clusters of innovation are difficult to build and maintain, so if one is crushed by regulation, it typically does not just move to another country. On a broader scale, some countries are much more innovative than others. In most industries, including heavily regulated ones, the USA is clearly more innovative than (most of) Europe or East Asia, which are much more innovative than the rest of the world. A lot has to go right: a high standard of living, an educated populace, the rule of law, the possibility of future profit, available capital, and a culture that encourages innovation. Countries which flout international regulations or norms typically do not attract innovation. Once a technology exists, it is much easier for other countries to copy it. The designs and skills needed already exist, and the benefits of the technology are clear. Regulation to prevent innovation is much easier than regulation to prevent proliferation.

I have previously investigated some Resisted Technological Temptations,[16] or technologies where a long-term pause has been achieved through our current institutions:

  • Nuclear power, discussed above.[17]
  • Geoengineering is not explicitly illegal, but opposition from scientists and activists has prevented even research from being done.[18]
  • Vaccine development is heavily regulated in Western countries. During the recent pandemic, both Russia and China relaxed some of these restrictions and approved vaccines before the West. The resulting vaccines were less effective, because the best medical research is still located in the West. In particular, human challenge trials have been regulated into almost non-existence.[19]
  • Nuclear weapons are sometimes mentioned as a technology where regulation has failed to prevent their spread. I think the evidence on this example is mixed. There are about 10 countries which have nuclear weapons (far fewer than the number which could), but only 2 developed them independently: the USA and France. Cutting-edge research has not moved to countries which are not party to the Non-Proliferation Treaty. India, Pakistan, and North Korea seem to have capabilities similar to those of the USA or USSR in the 1940s and 50s.[20] Testing bans have probably also helped keep nuclear weapons from becoming increasingly powerful: the most powerful bomb ever was detonated in 1961.
  • Biological weapons have some of the strongest treaties and taboos against their development or use,[21] and no country openly has a biological weapons program. A problem with this example is that the USSR signed treaties against developing biological weapons - and then continued developing them.
  • Various nuclear technologies, like atomic gardening, using nuclear explosions in construction, or Project Orion, have been proposed but not developed.
  • Cloning the most effective soldiers has never been done.
  • Genetic modification of humans has been done by one researcher in China, before he was arrested.
  • It is unclear whether colonialism counts as a technology. The Ming dynasty’s decision to stop their Treasure Fleets in the early 1400s delayed colonialism globally by about 50 years and may have impacted China’s developmental trajectory for centuries.
  • Bell Labs invented or discovered the transistor, charged-coupled device, photovoltaic cell, information theory, Unix, C, and the cosmic microwave background radiation between 1945 and 1980.[22] After the Bell System was broken up by antitrust laws in 1982, the research community there fragmented. It seems plausible that some technologies were never invented because this unusually prolific center of innovation was destroyed.

Most technologies are not banned, nor has their progress been stifled by regulation. Most technologies are also not as scary as AI: I have a hard time imagining how solar panels or ballpoint pens could constitute an x-risk. Scary-sounding technologies, like weapons of mass destruction or some kinds of medical research, often do face bans or regulations that make their development no longer worth it, and these bans sometimes work.

I don’t want to say that effective bans on scary-sounding technologies happen by default. When they do work, they are the result of a concerted effort. But enacting a ban on a new, potentially dangerous technology without disrupting the rest of society seems very doable.

 

Maybe AI Will Be Different

While this post is mostly about historical precedents of other technologies being stopped, it seems worth saying a few words on AI in particular. There are several reasons why AI might be different from other technologies: 

  1. AI research is easier to do remotely than other emerging technologies. 
  2. Once an AI system is created, it can be transmitted easily, as software.
  3. Simple economic models suggest that powerful AI would be extremely economically advantageous to whoever adopts it.

All technologies are different. Some differences make regulation easier or harder, but none of these seems so significant as to make regulation impossible:

  1. Laws can be enforced based on where the research is done or where the researchers live. The USA in particular has an expansive view of where its law applies.[23]
  2. Most proposed regulations focus on the hardware required to train powerful AI.
  3. Policymakers do not know this. They know that someone is telling them this. They definitely do not know that they will get the promised economic benefits of AGI on the timescales they care about if they support this particular project. These promises are not easily distinguishable from other technologies’ hype.[24]

There are also ways in which regulating AI is easier.

At multiple stages of the supply chain, only one or a few companies in the world are capable of cutting-edge work. Only a few actors need to coordinate for regulation to be effective.

Current leading AI models require a lot of compute, which is capital intensive and easy to keep track of. This might change with enough improvements in algorithmic efficiency. But we should expect algorithmic progress to slow dramatically in response to a long-term pause on AI, as capital and talent move to other industries.

Lots of substances and items are regulated, and the details of this regulation vary widely based on what it is and what the government is trying to avoid.[25] Regulating GPUs will have some unique challenges, but does not seem impossible under our current institutions.

 

How Long of a Pause?

Most of the historical evidence is for global pauses that have lasted about 50 years. This is useful evidence for discussing a 100 year pause. If “long term” means 1,000 years, then there is much less historical evidence. Matthew Barnett has argued that a regulatory ratchet within existing institutions might accomplish a 50 year pause in AI research, but something more dramatic would be needed for a 1,000 year pause.

I am skeptical that a global police state would be easier to maintain than more normal regulations for 1,000 years. My model for how to sustain institutions on this time scale is:

  1. Build an institution that lasts for a generation.
  2. Convince the rising generation that this institution is a good thing to maintain.

If you fail at (2), then it does not matter what institution was built. If not even the elite believe that the police state is a good thing, then it will not maintain itself.[26] An institution which has less hard power, but is better at getting people to believe in it, is more likely to last 1,000 years.

 

Conclusion

Building AGI is an extremely uncertain endeavor. It might lead to Our Glorious Future. It might lead to human extinction. It might not even be possible. If we decide to not try to build AGI, the future seems much less uncertain. Society will continue to be clearly not optimal, but also far from dystopian. Making scientific, technological, economic, social, and political progress will continue to be hard, but people will continue to do it. We can continue to hope for at least marginal improvements for our children, and they for their children, long into the future.

It should not be surprising if a scary-sounding technology faces a regulatory ratchet that slows and then stops all progress in that field. This is not death or dystopia - it’s normal.

 

Thanks to Aaron Scher, Matthew Barnett, Rose Hadshar, Harlan Stewart, and Rick Korzekwa for useful discussion on this topic.

Preview image by Theen Moy: https://www.flickr.com/photos/theenmoy/8003177753.

  1. ^

    This is not quite fair because the date range extends from the start of construction of one plant (Shearon Harris) to the end of construction of a different plant (Vogtle Unit 3). Vogtle Unit 3 started construction in 2013. There is also a nuclear power plant (Watts Bar Unit 2) that started construction in 1973 and was completed in 2016.

  2. ^

    In 1973, the Atomic Energy Commission projected that 55.8% of the USA’s electricity would come from nuclear power by 2000, which was lower than it had previously projected. This did not happen: nuclear power has accounted for about 20% of the USA’s electricity since the late 1980s.

    Anthony Ripley. A.E.C. Lowers Estimate Of Atom Power Growth. New York Times. (1973) https://www.nytimes.com/1973/03/08/archives/aec-lowers-estimate-of-atom-power-growth.html.

  3. ^

    Some prominent people, including Bertrand Russell, were advocating the creation of a World Authority to prevent the existential risk from nuclear weapons:

    A much more desirable way of securing world peace would be by a voluntary agreement among nations to pool their armed forces and submit to an agreed International Authority. This may seem, at present, a distant and Utopian prospect, but there are practical politicians who think otherwise. A World Authority, if it is to fulfill its function, must have a legislature and an executive and irresistible military power. All nations would have to agree to reduce national armed forces to the level necessary for internal police action. No nation should be allowed to retain nuclear weapons or any other means of wholesale destruction. … In a world where separate nations were disarmed, the military forces of the World Authority would not need to be very large and would not constitute an onerous burden upon the various constituent nations.

    Bertrand Russell. Has Man A Future? (1961) Quoted from Global Governance Forum. (Accessed October 17, 2023) https://globalgovernanceforum.org/visionary/bertrand-russell/.

  4. ^

    Mark R. Lee. The Regulatory Ratchet: Why Regulation Begets Regulation. University of Cincinnati Law Review 87.3. (2019) https://scholarship.law.uc.edu/cgi/viewcontent.cgi?article=1286&context=uclr.

  5. ^

    For example, one “new” design for a nuclear power plant is a molten salt reactor. One currently exists: TMSR-LF1, an experimental reactor producing 2 MW of thermal power in northwestern China. The design is based on the molten salt reactor experiment (MSRE) which produced 7 MW of thermal power at Oak Ridge National Lab in the USA from 1965-1969. 

    Similarly, China has a small modular reactor which began power production in 2021, HTR-PM. It is a pebble-bed reactor, based on a demonstration reactor in Germany (AVR), which ran from 1967-1988. 

    All other nuclear power plants use reactor types that are even older.

  6. ^

    I have previously estimated the direct value foregone by the prohibitively high costs of nuclear power in the USA. I also expect there to have been additional indirect value as a result of having less expensive electricity.

    Resisted Technological Temptation: Nuclear Power. AI Impacts Wiki. (Accessed October 18, 2023) https://wiki.aiimpacts.org/responses_to_ai/technological_inevitability/incentivized_technologies_not_pursued/nuclear_power.

  7. ^
  8. ^

    The entire quote is:

    Second, if we never get AI, I expect the future to be short and grim. Most likely we kill ourselves with synthetic biology. If not, some combination of technological and economic stagnation, rising totalitarianism + illiberalism + mobocracy, fertility collapse and dysgenics will impoverish the world and accelerate its decaying institutional quality. I don’t spend much time worrying about any of these, because I think they’ll take a few generations to reach crisis level, and I expect technology to flip the gameboard well before then. But if we ban all gameboard-flipping technologies (the only other one I know is genetic enhancement, which is even more bannable), then we do end up with bioweapon catastrophe or social collapse. I’ve said before I think there’s a ~20% chance of AI destroying the world. But if we don’t get AI, I think there’s a 50%+ chance in the next 100 years we end up dead or careening towards Venezuela. That doesn’t mean I have to support AI accelerationism because 20% is smaller than 50%. Short, carefully-tailored pauses could improve the chance of AI going well by a lot, without increasing the risk of social collapse too much. But it’s something on my mind.

    Scott Alexander. Pause for Thought: The AI Pause Debate. Astral Codex Ten. (2023) https://www.astralcodexten.com/p/pause-for-thought-the-ai-pause-debate.

  9. ^

    Nora Belrose. AI Pause Will Likely Backfire. EA Forum. (2023) https://forum.effectivealtruism.org/s/vw6tX5SyvTwMeSxJk/p/JYEAL8g7ArqGoTaX6.

  10. ^

    The entire quote is:

    Note that I am not saying AI pause advocates necessarily directly advocate for a global police state. Instead, I am arguing that in order to sustain an indefinite pause for sufficiently long, it seems likely that we would need to create a worldwide police state, as otherwise the pause would fail in the long run. One can choose to “bite the bullet” and advocate a global police state in response to these arguments, but I’m not implying that’s the only option for AI pause advocates.

    One reason to bite the bullet and advocate a global police state to pause AI indefinitely is that even if you think a global police state is bad, you could think that a global AI catastrophe is worse. I actually agree with this assessment in the case where an AI catastrophe is clearly imminent.

    However, while I am not dogmatically opposed to the creation of a global police state, I still have a heuristic against pushing for one, and think that strong evidence is generally required to override this heuristic. I do not think the arguments for an AI catastrophe have so far met this threshold. The primary existing arguments for the catastrophe thesis appear abstract and divorced from any firm empirical evidence about the behavior of real AI systems.

    Matthew Barnett. The possibility of an indefinite AI pause. EA Forum. (2023) https://forum.effectivealtruism.org/s/vw6tX5SyvTwMeSxJk/p/k6K3iktCLCTHRMJsY.

  11. ^

    Toby Ord estimates the biosecurity x-risk over the next century to be about 1/30 in The Precipice. The biosecurity community seems to have been more successful at fighting x-risk than the AI safety community. There are already extensive regulations in the countries where most research is done and major international treaties against developing biological weapons. If you think that AI is more dangerous than synthetic biology, then it does not make sense to advance AI in order to improve biosecurity. It is not even clear whether increasingly powerful AI would make biosecurity better or worse.

    For comparison, Toby Ord estimates the x-risk from asteroid impacts over the next century to be about 1/1,000,000. I interpret Sam Altman’s stated concern about asteroids as a proxy for all other existential risk. Otherwise, his risk estimates seem off by many orders of magnitude.

  12. ^

    I do not think that we have run out of human-achievable economic, technological, or scientific progress. The median person will likely be much wealthier in 100 years than today, even without AGI.

  13. ^

    Political and social trends in most countries over the last decade don’t seem good. Political and social trends in most countries over the last century seem wonderful. We should look at both when predicting the next century.

  14. ^

    What fraction of US GDP would you predict is in the information sector? The information sector includes both information technology and traditional media.

    5.5%

    https://www.bls.gov/emp/tables/output-by-major-industry-sector.htm.

  15. ^

    Examples of Progress for a Particular Technology Stopping. AI Impacts Wiki. (Accessed October 19, 2023) https://wiki.aiimpacts.org/ai_timelines/examples_of_progress_for_a_particular_technology_stopping.

  16. ^
  17. ^

    Resisted Technological Temptation: Nuclear Power. AI Impacts Wiki. (Accessed October 18, 2023) https://wiki.aiimpacts.org/responses_to_ai/technological_inevitability/incentivized_technologies_not_pursued/nuclear_power.

  18. ^

    Resisted Technological Temptation: Geoengineering. AI Impacts Wiki. (Accessed October 18, 2023) https://wiki.aiimpacts.org/responses_to_ai/technological_inevitability/incentivized_technologies_not_pursued/geoengineering.

  19. ^

    Resisted Technological Temptation: Vaccine Challenge Trials. AI Impacts Wiki. (Accessed October 18, 2023) https://wiki.aiimpacts.org/responses_to_ai/technological_inevitability/incentivized_technologies_not_pursued/vaccine_challenge_trials.

  20. ^

    I do not know what Israel’s nuclear program is like, or how much of it is the result of technology transfer from the US as opposed to indigenous innovation.

  21. ^

    Offensive biological weapons use is banned by the Geneva Protocol (1925) and development, production, acquisition, transfer, stockpiling & use of biological weapons is banned by the Biological Weapons Convention (1972). In addition to the treaties, biological weapons seem to have a significant taboo against their use.

    Michelle Bentley. The Biological Weapons Taboo. War on the Rocks. (2023) https://warontherocks.com/2023/10/the-biological-weapons-taboo/.

  22. ^

    Iulia Georgescu. Bringing back the golden days of Bell Labs. Nature Reviews Physics 4. (2022) p. 76-78. https://www.nature.com/articles/s42254-022-00426-6.

  23. ^

    For example, Sam Bankman-Fried is being tried in a US federal court, despite having moved himself and his business to The Bahamas.

    Another example involves the US Justice Department having FIFA officials from various countries arrested in Switzerland for corruption. “United States law allows for extradition and prosecution of foreign nationals under a number of statutes … In this case, she said, FIFA officials used the American banking system as part of their scheme.”

    Stephanie Clifford and Matt Apuzzo. After Indicting 14 Soccer Officials, U.S. Vows to End Graft in FIFA. New York Times. (2015) https://www.nytimes.com/2015/05/28/sports/soccer/fifa-officials-arrested-on-corruption-charges-blatter-isnt-among-them.html.

  24. ^

    For example, Project Excalibur promised to neutralize the threat of Soviet nuclear weapons by destroying dozens of ICBMs (with hundreds of warheads) as they launched. It ended up being infeasible.

  25. ^

    Examples of Regulated Things. AI Impacts Wiki. (Accessed October 19, 2023) https://wiki.aiimpacts.org/responses_to_ai/examples_of_regulated_things.

  26. ^

    This is my oversimplified model of what happened to the USSR.


Comments

Thank you for writing this post!

I think it is really important to stay flexible in the mind and to not tie ourselves into race dynamics prematurely. I hope that reasonable voices such as yours can broaden the discourse and maybe even open up doors that were only closed in our minds but never truly locked.

Great post, Jeffrey! I had been having thoughts along these lines, so I am glad there is now a post I can point to!

In my mind, a long pause should also be conditional on safety levels, i.e. the pause is not just for the sake of pausing. However, I would say such safety levels should be quite high, because non-AI risks are manageable without AI, and are often exaggerated (although I also believe risk from AI is exaggerated).

I don't think world dystopia is entirely necessary, but a successful long stop for AI (the ~30+ years it'll probably take) is probably going to require knocking over a couple of countries that refuse to play ball. It seems fairly hard to keep even small countries from setting up datacentres and chip factories except by threatening or using military force.

To be clear, I think that's worth it. Heck, nuclear war would be worth it if necessary, although I'm not sure it will be - the PRC in particular I rate as >50% either a) agreeing to a stop, and/or b) getting destroyed in non-AI-related nuclear war in the next few years.