
This post is the second part of my summary of The Precipice, by Toby Ord. The last post was about natural sources of extinction risk and the limited danger they pose over the coming century. This post covers the risks we impose on ourselves. The next post will tie everything together with an overview of the risk landscape, and the final post will explore our place in the story of humanity and the importance of reducing existential risk.

We saw in the last post that our 2,000-century track record allows us to estimate that the risk of extinction from natural disasters must be very low. What about our track record with anthropogenic (human-caused) risks? It has been less than three centuries since the Industrial Revolution and less than a century since the invention of nuclear weapons, so our track record is too short to rule out even a 50% risk per century. Rather than relying on this track record, we need to look at the details of each risk.
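
To see just how little a short track record tells us, here is a quick back-of-the-envelope calculation in Python (my own illustration, not from the book): surviving 75 years is unsurprising even if the true risk were 50% per century, whereas 2,000 centuries of survival is hard to square with anything but a tiny natural risk.

```python
# Probability of surviving n centuries if extinction risk is p per century.
def survival_probability(p_per_century: float, centuries: float) -> float:
    return (1 - p_per_century) ** centuries

# Anthropogenic record: under a century since nuclear weapons (~75 years).
print(survival_probability(0.5, 0.75))    # ~0.59: survival is likely even at 50% risk per century

# Natural record: roughly 2,000 centuries of Homo sapiens.
print(survival_probability(0.001, 2000))  # ~0.14: even 0.1% per century would make our survival unlikely
```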

Nuclear weapons

At 3 a.m. one morning in 1979, four independent US command centres saw many incoming nuclear warheads. They had only minutes to respond before the bulk of their own missiles would be destroyed by the incoming strike. When they checked the raw data from the early-warning systems, they realised that there was no attack: a realistic training simulation of a Soviet strike had accidentally been fed into the live system (Brezhnev, 1979; Gates, 2011; Schlosser, 2013).

Cold War tensions brought us astonishingly close to nuclear war on more than 32 occasions (US Department of Defense, 1981). What would happen if a nuclear war occurred? The worst-case scenario is an all-out nuclear exchange between two heavily armed countries, such as the US and Russia. This would kill tens or even hundreds of millions of people in the cities hit by the bombs. Radioactive dust would be blown outward, spreading deadly radiation. Smoke from the burning cities would darken the skies, block out the sun, and cool the earth.

This would be unlikely to result in extinction. Most people living outside of major cities in the countries that were bombed would survive the initial blast; the blasts wouldn’t produce enough radioactive dust to make the entire earth inhospitable. The worst effects would be from the darkening of the sky and the ensuing nuclear winter. Our best models suggest that the growing season might be too short for most crops, in most places, for five years (Robock, Oman & Stenchikov, 2007). Billions of people would be at risk of starvation, but humanity would likely survive by growing less efficient crops, building greenhouses, fishing, and perhaps even farming algae.[1]

Climate change

Carbon dioxide, together with water vapour and methane, creates a kind of gaseous blanket around Earth. This is essential for life (without it, Earth would be a frozen wasteland). Since the Industrial Revolution, we have burned fossil fuels and rapidly increased the amount of carbon dioxide in the atmosphere from about 280 parts per million to 412 parts per million in 2019 (Lindsey, 2018; National Oceanic and Atmospheric Administration, 2019). Unless we significantly reduce emissions, this will quickly warm the planet. While this probably won’t be the end of humanity, there is substantial uncertainty about how much we might emit and what effect it will have.

The Intergovernmental Panel on Climate Change (2014) estimates that a fourfold increase from preindustrial carbon dioxide levels has a two-thirds chance of warming Earth by between 1.5 and 9 degrees Celsius (and therefore a one-third chance of warming falling outside that range). This is before considering feedback loops, such as increased bushfires, which release additional carbon, or melting ice, which releases trapped greenhouse gases. This leaves us with a substantial chance of very significant warming. Such warming would be an unprecedented disaster, and reason enough to cut emissions, but even 20 degrees of warming would leave many coastal areas habitable all year round.
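
To make the quoted interval a little more concrete, here is a rough sketch (my own illustration; the lognormal shape is an assumption, not a figure from the book or the IPCC) that fits a distribution to the stated central two-thirds interval of 1.5–9°C and reads off the implied tail probabilities.

```python
import math
from statistics import NormalDist

# Assume warming W (in degrees C) is lognormal: ln(W) ~ Normal(mu, sigma).
# Fit mu and sigma so the central two-thirds interval of W is [1.5, 9],
# i.e. the 1/6 and 5/6 quantiles of ln(W) are ln(1.5) and ln(9).
z = NormalDist().inv_cdf(5 / 6)                 # ~0.97
mu = (math.log(1.5) + math.log(9)) / 2          # midpoint in log space
sigma = (math.log(9) - math.log(1.5)) / (2 * z)

warming = NormalDist(mu, sigma)
print(1 - warming.cdf(math.log(9)))    # 1/6 chance above 9 degrees, by construction
print(1 - warming.cdf(math.log(13)))   # implied chance of warming beyond 13 degrees (~9%)
```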

One particularly bad feedback loop involves increased temperatures evaporating water in the oceans, creating a denser blanket of water vapour around Earth and accelerating warming.[2] Though current research suggests such an effect would not be strong enough to entirely evaporate the ocean, and probably won’t happen at all, we cannot rule it out. While this is the only known mechanism for climate change to cause the extinction of humanity, there may be unknown mechanisms, and given that Earth has never seen such a rapid period of warming, we have substantial uncertainty about the eventual effects.

Environmental damage

The world’s population grew ever faster between 1800 and 1968. Seeing this, Paul Ehrlich predicted that in the coming decades this growing population would become unsustainable and there would be “an utter breakdown of the capacity of the planet to support humanity.”[3]

Because of improvements in agriculture and slower population growth, this breakdown has not yet happened, and the global population is now expected to peak at around 11 billion. That is still far more people than Earth has ever supported, and as material wealth grows, the per-person strain on the environment rises beyond anything seen before. Does this present an existential risk?

Resource depletion is unlikely to present any real risk to our potential.[4] A bigger threat is biodiversity loss. Some suggest we are witnessing the next mass extinction. While it is difficult to compare current extinction rates with the fossil record, the current loss appears to be much smaller in scale (about 1% of species, versus at least 75% in past mass extinctions) but 10 to 100 times faster. From the perspective of humanity’s survival, the most important thing is that ecosystems do not break down so far that they stop providing vital services such as purifying water, providing energy and resources, improving our soil, and creating breathable air. These risks are not well understood, and if we continue to put enormous pressure on our environment this century, this may produce large, currently unforeseen risks.

Emerging pandemic risk

From 1347 to 1353, between one-quarter and one-half of Europeans were killed by plague (Ziegler, 1969). After World War I, the 1918 flu (also known as Spanish flu) spread to six continents, infected over a third of the world’s population, and killed more people than the war (Taubenberger & Morens, 2006). Neither of these events was devastating enough to end humanity, or even collapse civilisation, and we would likely recover from a pandemic on a similar scale. We can also infer from the fossil record that, like other natural risks, the risk from a natural pandemic must be incredibly low.[5] However, improvements in biotechnology mean that we face a substantial new risk from engineered pandemics.

One risk comes from well-intentioned scientists trying to study viruses. Although most of this research poses no danger to humanity, a few experiments involve trying to give viruses new abilities — for instance, making them more deadly or transmissible.[6] While these experiments take place in the highest-security labs, there have been multiple leaks of deadly pathogens such as smallpox (1971 and 1978) and anthrax (1979 and 2015). This is particularly worrying because these labs lack transparency and we very likely do not know about all of the leaks.

There is also the threat of misuse. Fifteen countries are known to have developed bioweapons programs at some point in the last century. The largest was in the Soviet Union, with a dozen labs employing 9,000 scientists to weaponise diseases like plague and smallpox.[7] That we have seen few deaths from bioweapons so far, compared with natural pandemics, is not as reassuring as it first appears. Deaths from war follow a power-law distribution: most wars kill comparatively few people, while a handful of very large wars account for most of the deaths. If biological risks follow a similar distribution, the overall risk could be high despite the few deaths so far.
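
A small simulation shows why a quiet record is little comfort under a power law. This is a sketch of my own (the Pareto distribution and the exponent are illustrative assumptions, not estimates from the book): when event sizes are this heavy-tailed, the largest handful of events accounts for a disproportionate share, often the majority, of the total.

```python
import random

random.seed(0)

# Draw 10,000 hypothetical event sizes from a Pareto distribution.
# Smaller alpha means a heavier tail; alpha = 1.1 is purely illustrative.
alpha = 1.1
events = sorted((random.paretovariate(alpha) for _ in range(10_000)), reverse=True)

top_1_percent = sum(events[:100])  # the 100 largest events
print(f"Top 1% of events account for {top_1_percent / sum(events):.0%} of the total")
```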

Biotechnology is also increasingly democratised. It took 13 years (from 1990 to 2003) and over $500 million to produce the first full sequence of the human genome. By 2019 it cost less than $1,000 and took less than an hour. While this trend will bring fantastic applications that improve our lives, over the coming century it will also give more and more people access to dangerous pathogens.
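
To get a rough sense of that pace (my own arithmetic, using the figures above): a fall from about $500 million to under $1,000 over 16 years implies the cost of sequencing halved roughly every ten months.

```python
import math

# Approximate cost of sequencing a human genome (figures from the post).
cost_2003, cost_2019 = 500e6, 1_000
years = 2019 - 2003

factor = cost_2003 / cost_2019            # ~500,000-fold cheaper
halving_time = years / math.log2(factor)  # years per halving of cost
print(f"{factor:,.0f}x cheaper; cost halved every {halving_time:.2f} years")
```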

There are clear efforts to reduce these risks, but more is needed. For instance, the Biological Weapons Convention of 1972 is monitored by just four employees, with a budget smaller than that of an average McDonald’s restaurant.[8] And while many companies that synthesise DNA are careful to ensure that pathogens don’t fall into the wrong hands, perhaps only 80% of their orders are screened for sequences from dangerous pathogens (DiEuliis, Carter & Gronvall, 2017).

Artificial intelligence

Early AI systems came to dominate tasks that were thought to require uniquely human intelligence, such as chess, yet progress was slow and faltering on seemingly simple tasks, such as telling a dog from a cat. Now AI can do many of these tasks too, in some cases recognising faces better than a human can.[9] Experts even find it plausible that we could invent a fully general artificial intelligence this century: in a 2016 survey of researchers who had published at NeurIPS and ICML, the average respondent gave this a 50% chance by 2061 (Grace et al., 2018).

Chimpanzees are not going to decide the fate of the world, or even the fate of chimps. Instead, we get to decide, because we are the most intelligent and technologically advanced species. In the absence of other evidence, we should expect that losing our position as the most intelligent species would be a big deal (and perhaps not one that will favour humans).

Importantly, ensuring that AI is aligned with human values appears to be a difficult and unsolved problem. Our current methods tend either to have a human specify a reward function and then train a neural network to act in ways that earn greater rewards, or to have an AI observe human choices and infer a reward function from them. But humanity’s values are too complex and subtle to write down as a simple formula, and we do not know how to guide an AI system to learn them.[10]
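
A toy sketch of the problem (my own illustration, not from the book): an agent that greedily maximises a hand-written proxy reward can end up far from what the designer actually valued.

```python
# Each option scores on two dimensions the designer cares about, but the
# hand-written proxy reward only captures one of them.
options = {
    "cautious": {"speed": 2, "safety": 9},
    "reckless": {"speed": 9, "safety": 1},
    "balanced": {"speed": 6, "safety": 6},
}

def proxy_reward(features):   # what we managed to write down
    return features["speed"]

def true_value(features):     # what we actually wanted (never shown to the agent)
    return features["speed"] + 2 * features["safety"]

chosen = max(options, key=lambda name: proxy_reward(options[name]))
wanted = max(options, key=lambda name: true_value(options[name]))
print(f"agent picks {chosen!r}; we would have wanted {wanted!r}")
```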

Initially, we may have the option simply to turn an AI system off. But over time, such systems are likely to become resistant to this: to maximise its reward, a system must survive, and must thwart our attempts to bring its reward function in line with human values. Ultimately, an AI system is incentivised to take control of resources and shape the world, wresting control from humans. And since humans would predictably interfere with these goals, it would also be incentivised to hide its true goals until it is powerful enough to resist our attempts to stop it.

Contrary to Hollywood blockbusters, AI would not need robots in order to gain control. The most powerful figures of history were not the strongest; Hitler, Stalin, and Genghis Khan used words to convince millions to fight their battles. AI could well do the same. Even if many humans are left alive, this could permanently destroy humanity’s potential — and thus be an existential catastrophe.

Dystopian scenarios

We could lose humanity’s potential by letting the world become locked into a permanent state of little value. The most obvious scenario is permanent authoritarian rule, made possible by advances in technology for detecting and eliminating dissent.[11] Even if there were a slim chance of eventual recovery, such an event would destroy most of our potential. There is little such risk in the near future, but as technology improves, this may change.

The next post in this series will tie everything together with an overview of the risk landscape, including quantitative estimates of the risk from nuclear war, climate change, pandemics, and artificial intelligence. 

Sources

Biological Weapons Convention Implementation Support Unit (2019). Biological Weapons Convention—Budgetary and Financial Matters.

Diane DiEuliis, Sarah R. Carter, and Gigi Kwik Gronvall (2017). Options for Synthetic DNA Order Screening, Revisited. mSphere 2/4.

Leonid Brezhnev (1979). Brezhnev Message to President on Nuclear False Alarm, Diplomatic Cable (No. 1979STATE295771) from Sec State (D.C.) to Moscow American Embassy. National Security Archive, United States Department of State.

Matthew Collins, Reto Knutti, Julie Arblaster, Jean-Louis Dufresne, Thierry Fichefet, Pierre Friedlingstein, Xuejie Gao, William J Gutowski Jr., Tim Johns, Gerhard Krinner, Mxolisi Shongwe, Claudia Tebaldi, Andrew J Weaver and Michael Wehner (2013). Long-Term Climate Change: Projections, Commitments and Irreversibility. In Climate Change 2013—The Physical Science Basis: Working Group I Contribution to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press.

Robert M Gates (2011). From the Shadows: The Ultimate Insider’s Story of Five Presidents and How They Won the Cold War. Simon and Schuster.

Katja Grace, John Salvatier, Allan Dafoe, Baobao Zhang, and Owain Evans (2018). Viewpoint: When Will AI Exceed Human Performance? Evidence from AI Experts. Journal of Artificial Intelligence Research 62.

Sander Herfst, Eefje J A Schrauwen, Martin Linster, Salin Chutinimitkul, Emmie de Wit, Vincent J Munster, Erin M Sorrell, Theo M Bestebroer, David F Burke, Derek J Smith, Guus F Rimmelzwaan, Albert D M E Osterhaus and Ron A M Fouchier (2012). Airborne transmission of Influenza A/H5N1 Virus Between Ferrets. Science 336/6088.

Intergovernmental Panel on Climate Change (2014). Summary for Policymakers. In Climate Change 2014—Impacts, Adaptation and Vulnerability: Part A: Global and Sectoral Aspects: Working Group II Contribution to the IPCC Fifth Assessment Report. Cambridge University Press.

Rebecca Lindsey (2018). Climate Change: Atmospheric Carbon Dioxide. Climate.gov.

Charles C Mann (2018). The Book that Incited a Worldwide Fear of Overpopulation. Smithsonian Magazine.

McDonald’s Corporation (2018). Form 10-K. (McDonald’s Corporation Annual Report).

National Oceanic and Atmospheric Administration (2019). Global Monthly Mean CO2. Global Monitoring Laboratory.

Max Popp, Hauke Schmidt and Jochem Marotzke (2016). Transition to a Moist Greenhouse with CO2 and Solar Forcing. Nature Communications 7.

Alan Robock, Luke Oman and Georgiy L Stenchikov (2007). Nuclear Winter Revisited with a Modern Climate Model and Current Nuclear Arsenals: Still Catastrophic Consequences. Journal of Geophysical Research: Atmospheres 112/D13.

Eric Schlosser (2013). Command and Control: Nuclear Weapons, the Damascus Accident, and the Illusion of Safety. Penguin.

Jeffery K Taubenberger and David M Morens (2006). 1918 Influenza: The Mother of all Pandemics. Emerging Infectious Diseases 12/1.

US Department of Defense (1981). Narrative Summaries of Accidents Involving US Nuclear Weapons (1950–1980). Homeland Security Digital Library.

Philip Ziegler (1969). The Black Death. Harper Collins.

 

Image of the earth from: www.tobyord.com/earth

  1. ^

     This might not even collapse civilisation entirely, as places such as New Zealand and the southeast of Australia would avoid the worst effects by being unlikely targets and surrounded by ocean; they could likely survive with most of their technology and institutions intact.

  2. ^

    If this were possible, and we emitted more carbon than the Intergovernmental Panel on Climate Change expects even on its high-emissions pathway, then 40 degrees of warming is plausible (Collins et al., 2013, p. 1096; Popp, Schmidt & Marotzke, 2016).

  3. ^

     From a speech given in 1969 (see Mann, 2018).

  4. ^

    If we failed to find new sources of fossil fuels, this might reduce existential risk from climate change. We have 26 million litres of accessible fresh water per person, and if we needed to, we could desalinate seawater at a cost of $1 per 1,000 litres. If we began to face shortages of certain metals, markets would likely slow consumption, encourage recycling, and develop alternatives. Indeed, there is no clear danger, though it is possible that there is a (currently unidentified) material that is rare, essential, irreplaceable and difficult to recycle.

  5. ^

    We might adjust our estimate to account for changes in the world. Some increase the risk: the global population is a thousand times greater than over most of human history, our farming practices create vast numbers of unhealthy animals that live in close proximity with humans, and we are more interconnected than ever before. Others reduce the risks: we are healthier than our ancestors, we have better sanitation and hygiene, we can fight disease with our improved scientific understanding of pathogens, and we have spread to many different environments throughout the world.

  6. ^

    For example, Dutch virologists published an experiment in which they took a strain of bird flu that can kill over 60% of infected people (Taubenberger & Morens, 2006) and modified it to be directly transmissible between mammals (Herfst et al., 2012).

  7. ^

    They reportedly built up a stockpile of more than 20 tons of smallpox and plague.

  8. ^

    The international body responsible for the continued prohibition of bioweapons has a budget of $1.4 million (Biological Weapons Convention Implementation Support Unit, 2019) compared to an average $2.8 million to run a McDonald’s (McDonaldʼs Corporation, 2018, pp. 14, 20).

  9. ^

    This only includes advances before 2020. I am writing this summary in 2023, after three years of qualitative leaps in AI capabilities.

  10. ^

    Even if we could, these values are uncertain, complex, held by billions of people with slightly different views, and liable to change over time. And solving these problems would be hard even if the values given to an AI were not shaped by other motives, such as winning a war or turning a profit.

  11. ^

    Such a future might not be forced upon us but instead caused by population-level forces. This would be similar to how market forces can create a race to the bottom or how Malthusian population dynamics can push down the average quality of life. It might also be our own choice, likely because the predominant ideology gets something wrong. For instance, we may forever fail to recognise some form of injustice, or we may renounce technological advancement, and with it our chance to fulfil our potential.
