
Summary

In The Precipice, Toby Ord defines an existential risk as “a risk that threatens the destruction of humanity’s longterm potential”. This could involve extinction, an unrecoverable collapse, or an unrecoverable dystopia. (See also.)

Ord uses the term existential security to refer to “a place of safety - a place where existential risk is low and stays low”. This doesn’t require reaching a state with zero existential risk per year. But it does require that existential risk per year either (a) indefinitely trends downwards (on average), or (b) is brought to an extremely low and roughly stable level - low enough that the risk accumulated over our entire future remains small. A risk per year that is merely low but stable isn’t enough, because it practically guarantees that existential catastrophe happens at some point, given a long enough time.[1]

My own one-sentence description of existential security would therefore be: A state where the total existential risk across all time is low, such that humanity’s long-term potential is preserved and protected.[2]

Purpose and epistemic status of this post

This post consists primarily of quotes from The Precipice, along with some commentary. I hope this post can:

  • Serve as a summary of the concept of existential security for people who haven’t read The Precipice.
  • Serve as an online summary that can be linked to.
  • Set the stage for my next few planned posts, which are related to the “grand strategy for humanity” that Ord presents in The Precipice. Ord summarises his “grand strategy” as follows:

I think that at the highest level we should adopt a strategy proceeding in three phases:

  1. Reaching Existential Security
  2. The Long Reflection
  3. Achieving Our Potential

Preserving and protecting our long-term potential

Ord writes that reaching existential security means reaching:

a place of safety - a place where existential risk is low and stays low.

[... This] has two strands. Most obviously, we need to preserve humanity’s potential, extracting ourselves from immediate danger so we don’t fail before we’ve got our house in order. This includes direct work on the most pressing existential risks and risk factors, as well as near-term changes to our norms and institutions.

But we also need to protect humanity’s potential - to establish lasting safeguards that will defend humanity from dangers over the longterm future, so that it becomes almost impossible to fail. Where preserving our potential is akin to fighting the latest fire, protecting our potential is making changes to ensure that fire will never again pose a serious threat. This will involve major changes to our norms and institutions (giving humanity the prudence and patience we need), as well as ways of increasing our general resilience to catastrophe. This needn’t require foreseeing all future risks right now. It is enough if we can set humanity firmly on a course where we will be taking the new risks seriously: managing them successfully right from their onset or sidestepping them entirely.

[...]

Ultimately, existential security is about reducing total existential risk by as many percentage points as possible. Preserving our potential is helping lower the portion of the total risk that we face in the next few decades, while protecting our potential is helping lower the portion that comes over the longer run. We can work on these strands in parallel, devoting some of our efforts to reducing imminent risks and some to building the capabilities, institutions, wisdom and will to ensure that future risks are minimal.

Ord’s distinction between “lower[ing] the portion of the total risk that we face in the next few decades” and “lower[ing] the portion that comes over the longer run” seems useful to me.[3]

Elsewhere, Ord alludes to the same distinction when he writes that we could characterise reaching existential security as requiring that we “Avoid failing immediately & make it impossible to fail”.

Continually declining levels of risk

However, “make it impossible to fail” seems to be overstating things (presumably in an understandable effort to summarise the essence of the idea). As Ord himself writes:

Note that existential security doesn’t require the risk to be brought down to zero. That would be an impossible target, and attempts to achieve it may well be counter-productive. What humanity needs to do is bring this century’s risk down to a very low level, then keep gradually reducing it from there as the centuries go on. In this way, even though there may always remain some risk in each century, the total risk over our entire future can be kept small. We could view this as a form of existential sustainability. Futures in which accumulated existential risk is allowed to climb towards 100 percent are unsustainable. So we need to set a strict risk budget over our entire future, parcelling out this non-renewable resource with great care over the generations to come.

He further writes:

A numerical example may help explain this. First, suppose we succeeded in reducing existential risk down to 1% per century and kept it there. This would be an excellent start, but it would have to be supplemented by a commitment to further reduce the risk. Because at 1% per century, we would only have another 100 centuries on average before succumbing to existential catastrophe. This may sound like a long time, but it is just 5% of what we’ve survived so far and a tiny fraction of what we should be able to achieve.

In contrast, if we could continually reduce the risk in each century, we needn’t inevitably face existential catastrophe. For example, if we were to reduce the chance of extinction by a tenth each successive century (1%, 0.9%, 0.81% . . .), there would be a better than 90% chance that we would never suffer an existential catastrophe, no matter how many centuries passed. For the chance we survive all periods is:

(100% - 1%) × (100% - 0.9%) × (100% - 0.81%) × . . .

≈ 90.4598%

This means there would be a better than 90% chance we survive until we reach some external insurmountable limit - perhaps the death of the last stars, the decay of all matter into energy, or having achieved everything possible with the resources available to us.

Such a continued reduction in risk may be easier than one would think. If the risks of each century were completely separate from those of the next, this would seem to require striving harder and harder to reduce them as time goes on. But there are actions we can take now that reduce risks across many time periods. For example, building understanding of existential risk and the best strategies for dealing with it; or fostering civilisational prudence and patience; or building institutions to investigate and manage existential risk. Because these actions address risks in subsequent time periods as well, they could lead to a diminishing risk per century, even with a constant amount of effort over time. In addition, there may just be a limited stock of novel anthropogenic risks, such that successive centuries don’t keep bringing in new risks to manage. For example, we may reach a technological ceiling, such that we are no longer introducing novel technological risks.
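
As a quick sanity check on Ord’s figures, here is a minimal Python sketch (my own, not from the book) comparing a constant 1%-per-century risk with the declining-risk scenario he describes:

```python
# A rough check of the numbers quoted above (my own sketch, not Ord's).
def survival_probability(centuries, initial_risk=0.01, decay=0.9):
    """Chance of surviving `centuries` centuries when the per-century risk
    starts at `initial_risk` and is multiplied by `decay` each century."""
    p = 1.0
    risk = initial_risk
    for _ in range(centuries):
        p *= 1 - risk
        risk *= decay
    return p

# Constant 1% risk per century (decay=1.0): survival tends towards zero.
print(survival_probability(1_000, decay=1.0))  # ~0.000043
# Risk falling by a tenth each century (1%, 0.9%, 0.81%, ...):
# survival converges to roughly 0.9046, matching Ord's ~90.4598%.
print(survival_probability(1_000, decay=0.9))  # ~0.90460
```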

So when Ord writes that existential security is “a place where existential risk is low and stays low”, it seems that he means a place where the total existential risk across all time “stays low”. He implies that this requires the risk per unit of time to trend downwards indefinitely, rather than merely being brought to a low and stable level.[4]

What about non-declining but extremely low risk levels?

That said, it seems possible that we could achieve low total risk across all time even if risk levels do not indefinitely trend downwards, as long as we reach extremely low risk levels. For example, suppose that, in a trillion years, we’d reach “some external insurmountable limit [such as] the death of the last stars”. And suppose we reduce existential risk to 1 in 100 trillion per year, and then don’t reduce existential risk any further. In that case, I believe the total risk, across all time, would be around 1%.
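
To spell out the arithmetic behind that 1% figure, here is a back-of-the-envelope check in Python (using the hypothetical numbers from the paragraph above, not figures from the book):

```python
import math

# Back-of-the-envelope check of the ~1% claim above (hypothetical numbers).
annual_risk = 1e-14   # 1 in 100 trillion per year
years = 1e12          # a trillion years until some external insurmountable limit

# Survival probability = (1 - annual_risk) ** years, computed stably via logs.
survival = math.exp(years * math.log1p(-annual_risk))
print(1 - survival)   # ~0.00995, i.e. roughly a 1% total risk across all time
```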

I’m guessing that Ord omitted mention of such possibilities due to:

  • A desire for brevity and simplicity;
  • Uncertainty about whether or when we’d reach an “external insurmountable limit”; and/or
  • The difficulty of imagining existential risk per year being brought down to such extremely low levels.

(Alternatively, my reasoning may be flawed.)

Closing remarks

Existential security can be summarised as referring to a state where the total existential risk across all time is low, such that humanity’s long-term potential is preserved and protected. Moving towards such a state seems highly valuable and urgent. Indeed, the first phase of Ord’s grand strategy for humanity consists of reaching existential security.

I plan to soon publish a series of posts which build on this one by discussing:

  • What types of futures remain possible if we reach existential security, including futures in which humanity does not “[fulfil] its potential: achieving something close to the best future open to us”
  • How likely it is that humanity would achieve its potential, conditional on existential security being reached
  • Arguments for and against Ord’s grand strategy
  • A typology of strategies for influencing the future

This post is related to my work with Convergence Analysis. My thanks to David Kristoffersson for useful comments on an earlier draft.

This is one of a series of posts I’ve written or plan to write that summarise, comment on, or take inspiration from parts of The Precipice. You can find a list of all such posts here.


  1. We could also choose to focus on the risk per any other unit of time (e.g., century). ↩︎

  2. Shortly after the release of The Precipice, an alternative (though related) meaning of the term “existential security” was introduced in a paper by Nathan Alexander Sears. Sears uses the term to refer to “a new framework for security policy [...] that puts the survival of humanity at its core”. That meaning of “existential security” is not the focus of this post. ↩︎

  3. That said, using the terms “preserving” and “protecting” to refer to those two concepts, in that order, doesn’t seem intuitive to me. ↩︎

  4. But note that it is only necessary for the risk per unit of time to tend to decline over time, not for the risk per unit of time to decline at every single time step. That is, it could be possible to have reached existential security even if existential risk sometimes ticks upwards slightly, as long as the overall trend is downward. ↩︎

Comments

Unimportant bonus info about the history of the term/concept “existential security”, and of this post:

It seems that concepts corresponding to what Ord calls “existential security”, or something similar to it, had been discussed under various names by various authors for several years prior to the release of The Precipice. But there doesn’t seem to have been any really detailed discussion of the concept until The Precipice.

And the term “existential security” had almost never been used for this concept, based on the first two pages of results when I googled ““existential security” “existential risk”” in February 2020 (before The Precipice was released). The only really relevant result was Will MacAskill, in a 2018 podcast interview, saying “The first [stage] is to reduce extinction risks down basically to zero, put us a position of kind of existential security”. Most results were just things calling climate change an “existential security risk”. 

I was doing that googling in February because I was pretty sure I’d heard of this concept, but I couldn’t find any proper write-up on it, and thus decided I might write a post about it. I was intending to use the term “existential security”, but to also suggest the terms “existential safety” and “existential stability” as options. But then I decided that, as The Precipice would be released a month later, I’d hold off till I read that, in case Ord discussed this idea. 

And indeed, it turned out Ord discussed this concept thoroughly and well, and using the term I’d been leaning towards.[1]

But there was no summary of Ord’s conceptualisation of existential security on the EA Forum or LessWrong. So I decided to adapt my draft into such a summary, as well as a discussion of how this concept relates to other terms and concepts. And then I later abandoned the idea of comparing the concept to other terms and concepts, though you can find my unpolished notes on that here.

[1] I’m unsure whether this is a result of:

  1. me independently converging on the same idea (perhaps primed by MacAskill's one mention of the term), or
  2. the idea having been occasionally discussed verbally in ways that reached me - but that I’ve since forgotten - despite the idea having not been on the internet yet.