
This post is the final part of my summary of The Precipice, by Toby Ord. Previous posts gave an overview of the existential risks. We learned that some of these risks (especially the emerging anthropogenic risks) are alarmingly high. This post explores our place in the story of humanity and the importance of reducing existential risk.


A single human in the wilderness is nothing exceptional. But together humans have the ability to shape the world and determine the future of our species, planet, and universe.

We learn from our ancestors, add minor innovations of our own, and teach our children. We are the beneficiaries of countless improvements in technology, mathematics, language, institutions, culture, and art. These improvements make our lives much better than the lives of our ancestors.[1]

We hope that life will continue to improve. And we could have a lot of time to get things right. Humans have walked the earth for around 200,000 years, but a typical mammalian species lasts for a million years, and our planet will remain habitable for a billion years. This is enough time to eradicate malaria and HIV, eliminate depression and dementia, and create a world free from racism, sexism, torture, and oppression. With so much time ahead of us, we might even figure out how to leave our solar system and settle the stars. If so, we could have a truly staggering number of descendants who can explore the universe and build wonders and masterpieces better than we can imagine. If we go extinct, all of this will be lost.

We have always faced a small risk from asteroids, pandemics, and volcanoes. But it was only recently that we began to face larger risks of our own creation. This period of heightened risk began last century with the invention of nuclear weapons (we now have enough to kill everyone on earth). Over the next century we will face additional risk from emerging developments in biotechnology and AI. In the words of Toby Ord, we are standing on “a crumbling ledge on the brink of a precipice.”

Safeguarding humanity is the defining challenge of our time.[2] If we rise to it, there may be trillions of people living meaningful lives in the future. If we fail, then in all likelihood we will destroy ourselves. The fate of the world rests on our collective decisions.

Why should we try to prevent extinction?

If a large asteroid were hurtling towards Earth, few would argue against building a deflection system. This suggests that our collective inaction is driven by a shared sense that the risk of extinction is low, not by a belief that humanity is not worth protecting. Still, it is worth reflecting on why preventing extinction is so important.

A tragedy on the grandest scale

Sudden extinction, such as from an asteroid collision, would involve the abrupt and gruesome deaths of billions of people, perhaps everyone. This alone would make it the most severe tragedy in history.

The destruction of our potential

Extinction would destroy our immense potential. Almost all humans that will ever live are yet to be born. Almost all human well-being and flourishing is yet to happen.[3] All of this would be lost if the present generation went extinct.

Intergenerational projects

Our ancestors set in motion great projects for humanity — ending war, forging a just world, and understanding the universe. No single generation can complete these projects. But humanity can, with each generation contributing just a little. We benefit immensely from knowledge and wisdom passed down from previous generations, and we owe it to our children and grandchildren to protect this legacy and pass it on to them. Extinction would also destroy all of our traditions, languages, poetry, and art. We ought instead to protect, preserve, and cherish these things.[4]

Civilisational virtues

We are accustomed to understanding virtues on an individual level, but we could also think of the collective virtues of humanity. When we fail to take these risks seriously, humanity might collectively demonstrate a lack of prudence. When we value our own generation so much as to put all future generations at risk, we demonstrate a lack of patience. And when we fail to prioritise well-known risks, we display a lack of self-discipline. When we do not rise to the challenge, we display a lack of hope, perseverance, and responsibility for our own actions.

Cosmic significance

We may be alone in the universe. If there are no aliens, then all life on Earth may have cosmic significance. Humanity would be in a unique position to explore and understand the universe. We would also have a responsibility to all life, as we would be the only ones who could protect it from harm and promote flourishing on other planets.

Uncertainty

Correctly accounting for our uncertainty about the future tends to strengthen the case for protecting our potential, because the stakes are asymmetrical: the cost of overinvesting in safety is trivial compared with the cost of letting everyone die. So even if we believe the risks are low, as long as we are not completely confident, some efforts to safeguard humanity are warranted.[5]

Why are existential risks neglected?

Are existential risks neglected?

Humanity spends less money attempting to prevent existential risk than it does on ice cream each year.[6] The riskiest emerging technologies are biotechnology and AI (see parts 2 & 3). Yet the international body responsible for the continued prohibition of bioweapons has an annual budget smaller than that of an average McDonald’s restaurant.[7] And while we spend billions of dollars improving the capabilities of AI systems, we spend only tens of millions on ensuring their safety. Research similarly neglects the most severe risks: there is plenty of research on the possible effects of climate change, for instance, but scenarios involving more than six degrees of warming are rarely studied or given space in policy discussions (King et al., 2015). There are several reasons for this neglect.

Existential risk as a global public good

When one organisation or government reduces existential risk, it improves the situation for everyone in the world. Everyone is therefore incentivised to wait and free-ride on the hard work of others. This dynamic also plays out across generations: many of the people who would benefit if we safeguard humanity have not yet been born, and they cannot reward us for protecting them. We do not yet have robust ways to coordinate on these issues.

Short-term institutions

Additionally, political decision-making is notoriously short-term. Existential risk tends to be ignored in favour of more urgent issues. That said, most existential risks are new relative to our political institutions, which have been built up over thousands of years. We only gained the power to destroy ourselves in the middle of the last century, and serious thought about the possibility of extinction has only begun since then. Perhaps our institutions and practices will gradually adapt.

Patterns of thinking

Our brains are not built to grasp these risks intuitively, and several patterns of thinking lead us to neglect existential risk. For instance, we tend to estimate the likelihood of an event by how easily we can recall examples of it happening in the past. This availability heuristic serves us well most of the time, but extinction is unprecedented by definition, so the heuristic leads us to dismiss even large and growing risks. We also lack sensitivity to the sheer scale of different catastrophes.

Sources

Biological Weapons Convention Implementation Support Unit (2019). Biological Weapons Convention—Budgetary and Financial Matters.

Mark Nathan Cohen (1989). Health and the Rise of Civilization. Yale University Press.

Joe Hasell, Max Roser, Esteban Ortiz-Ospina and Pablo Arriagada (2022). Poverty. Our World in Data. (This article has been updated since The Precipice was published in 2020.)

David King, Daniel Schrag, Zhou Dadi, Qi Ye and Arunabha Ghosh (2015). Climate Change: A Risk Assessment. Centre for Science and Policy.

IMARC Group (2019). Ice Cream Market: Global Industry Trends, Share, Size, Growth, Opportunity and Forecast 2019–2024.

McDonald’s Corporation (2018). Form 10-K. (McDonald’s Corporation Annual Report).

Max Roser and Esteban Ortiz-Ospina (2019). Literacy. Our World in Data.

World Health Organization (2016). World Health Statistics 2016: Monitoring Health for the SDGs, Sustainable Development Goals.

Image of the earth from: www.tobyord.com/earth


  1. ^

    While 1 person in 10 is so remarkably poor today that they live on less than $2 per day, before the Industrial Revolution 19 out of 20 people were this poor. Throughout history, only a tiny elite was ever much above subsistence (Hasell, Roser, Ortiz-Ospina & Arriagada, 2022). Our health and education are also much better than ever before. Before the Industrial Revolution, 1 in 10 could read and write; now more than 8 in 10 can (Roser & Ortiz-Ospina, 2019). For 10,000 years, life expectancy was between 20 and 30 years; now it is 72 years (Cohen 1989; World Health Organization, 2016). According to Toby Ord, “It is not that things are great today, but that they were terrible before” (p. 294).

  2. ^

     The importance of safeguarding humanity is familiar at the smallest scale. Consider a child who has a bright future ahead of them. They must be protected from accident, trauma, or lack of education that would prevent their flourishing. We must put safeguards in place to preserve their potential.

  3. ^

     Though Ord focusses on humanity, he does not believe that we are the only source of value in the universe; rather, we appear to be the only beings capable of shaping the future in a particularly valuable way. He also uses the term “humanity” very inclusively, covering moral agents (perhaps very different from us) that we might become or create.

  4. ^

     We may have duties to properly acknowledge and remedy past horrors. If we went extinct, there would be no opportunity to ever do so.

  5. ^

     Indeed, even if we thought the future was likely to be worse than nonexistence, protecting our potential might still be worthwhile. First, some risks would still be clearly worth preventing, such as the risk of stable global totalitarianism. Second, there would be a strong reason to gather more information about the value of the future, and it would be incredibly reckless to let humanity destroy itself now.

  6. ^

     The ice-cream market was estimated at $60 billion in 2018 (IMARC Group, 2019).

  7. ^

    The international body responsible for the continued prohibition of bioweapons has a budget of $1.4 million (Biological Weapons Convention Implementation Support Unit, 2019), compared to an average of $2.8 million to run a McDonald’s (McDonald’s Corporation, 2018, pp. 14, 20).
