This post outlines the types of civilizational vulnerabilities described in Nick Bostrom's article The Vulnerable World Hypothesis (discussed in this TED talk). Any errors/misinterpretations are my own.


  • Civilizational devastation: "destructive event that is at least as bad as the death of 15 per cent of the world population or a reduction of global GDP by > 50 per cent lasting for more than a decade".
  • Semi-anarchic default condition:
    • Limited capacity for preventive policing: "States do not have sufficiently reliable means of real-time surveillance and interception to make it virtually impossible for any individual or small group within their territory to carry out illegal actions – particularly actions that are very strongly disfavored by > 99 per cent of the population".
    • Limited capacity for global governance: "There is no reliable mechanism for solving global coordination problems and protecting global commons – particularly in high-stakes situations where vital national security interests are involved".
    • Diverse motivations: "There is a wide and recognizably human distribution of motives represented by a large population of actors (at both the individual and state level) – in particular, there are many actors motivated, to a substantial degree, by perceived self-interest (e.g. money, power, status, comfort and convenience) and there are some actors (‘the apocalyptic residual’) who would act in ways that destroy civilization even at high cost to themselves".

Type-1 vulnerability ("easy nukes")

Definition: "There is some technology which is so destructive and so easy to use that, given the semi-anarchic default condition, the actions of actors in the apocalyptic residual make civilizational devastation extremely likely".

  • "So destructive and [or] so easy to use" because what matters is the "expected harm", which is proportional to the likelihood of enabling a certain harm and to the severity of that harm. (And the total expected harm would be the sum of the expected harms of the various harmful technologies.)
  • Consequently, the lower the feasibility of causing harm, the higher its severity has to be for civilizational devastation to remain extremely likely. Some examples of "type-1" vulnerabilities:
    • "Very easy nukes" scenario (VEN), where the likelihood and severity are large.
    • "Moderately easy bio-doom" scenario (MEBD), where the likelihood is smaller than in the VEN, but the severity is larger.
    • "Easy nukes" scenario, where the likelihood is between those of the VEN and the MEBD, but sufficiently high to result in a large expected harm.
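The expected-harm comparison above can be sketched numerically. All probabilities and severities below are my own illustrative assumptions, chosen only to reflect the qualitative ordering described in the text; they are not figures from the paper.

```python
# Illustrative sketch of the expected-harm comparison between the three
# type-1 scenarios. Expected harm is proportional to the likelihood of
# enabling a certain harm times the severity of that harm.

scenarios = {
    # name: (likelihood, severity) -- made-up numbers
    "very easy nukes (VEN)": (0.9, 0.8),
    "moderately easy bio-doom (MEBD)": (0.3, 1.0),  # less likely, more severe
    "easy nukes": (0.6, 0.8),  # likelihood between VEN and MEBD
}

def expected_harm(likelihood, severity):
    """Expected harm is proportional to likelihood times severity."""
    return likelihood * severity

for name, (p, s) in scenarios.items():
    print(f"{name}: expected harm ~ {expected_harm(p, s):.2f}")

# The total expected harm is the sum of the expected harms of the
# various harmful technologies.
total = sum(expected_harm(p, s) for p, s in scenarios.values())
```

Despite their different likelihood/severity profiles, each scenario can yield a large expected harm, which is the quantity that matters for the definition.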

Type-2a vulnerability ("safe first strike")

Definition: "There is some level of technology at which powerful actors have the ability to produce civilization-devastating harms and, in the semi-anarchic default condition, face incentives to use that ability".

  • "Powerful actors" could be, for instance, leaders of powerful countries (e.g. US and China) or influential companies (e.g. Google and Facebook).
  • Example of "incentives to use that ability":
    • If neither the US nor the Soviet Union had been able to ensure second-strike capability during the Cold War (a "hard nukes" scenario), there would have been an incentive for a "safe first strike".
    • The "first striker" would alleviate the fear of becoming the victim of a first strike while surviving relatively unscathed (neglecting nuclear winter).
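The first-strike incentive can be illustrated with a toy payoff matrix. The payoffs are my own illustrative assumptions, not from the paper: without second-strike capability, striking first weakly dominates waiting for each side, even though mutual restraint is the better joint outcome.

```python
# Toy two-player game illustrating the "safe first strike" incentive in a
# "hard nukes" world without second-strike capability. Payoffs are
# illustrative assumptions; higher is better for the row player.
#
# If the opponent waits, striking first removes the threat while leaving
# you relatively unscathed (3 > 2). If the opponent strikes, you are
# devastated either way, but retaliating is no worse (0 > -1).

payoff = {
    # (my action, opponent's action): my payoff
    ("strike", "wait"): 3,    # safe first strike: threat removed, unscathed
    ("wait", "wait"): 2,      # mutual restraint, but fear of first strike remains
    ("strike", "strike"): 0,  # both devastated
    ("wait", "strike"): -1,   # victim of a first strike
}

def best_response(opponent_action):
    """Return the action maximizing my payoff given the opponent's action."""
    return max(["strike", "wait"], key=lambda a: payoff[(a, opponent_action)])

# Striking is the best response whatever the opponent does.
print(best_response("wait"))    # -> strike
print(best_response("strike"))  # -> strike
```

Under these assumed payoffs, both actors face an incentive to strike first, even though (wait, wait) gives both a higher payoff than (strike, strike).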

Type-2b vulnerability ("worse global warming")

Definition: "There is some level of technology at which, in the semi-anarchic default condition, a great many actors face incentives to take some slightly damaging action such that the combined effect of those actions is civilizational devastation".

  • As in the type-2a vulnerability, actors face incentives to pursue the course of action that leads to damage.
  • However, for the type-2b vulnerability, the incentives are faced by "a great many actors" (e.g. typical citizens of many countries).
  • The paper describes a "worse global warming" scenario as an example of a type-2b vulnerability. For this scenario, we could imagine one of two variants:
    • The incentives to produce CO2e emissions are roughly as strong as in the "actual global warming" scenario, but the transient climate sensitivity (roughly speaking, the medium-term temperature change per amount of CO2e emissions) is much higher.
    • The transient climate sensitivity is the same as in the "actual global warming" scenario, but the incentives to produce CO2e emissions are much stronger (e.g. much cheaper fossil fuels due to much greater availability).
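The two possibilities above can be sketched with a toy linear model. The linearity and all numbers are my own illustrative assumptions, not figures from the paper: warming scales with sensitivity times total emissions, so either a higher sensitivity or stronger incentives (hence higher per-actor emissions) pushes the combined effect of many slightly damaging actions past a devastation threshold.

```python
# Toy model of the two "worse global warming" variants. For illustration
# only, warming is taken as linear: sensitivity (deg C per GtCO2e) times
# total emissions, where total emissions grow with the strength of the
# incentive each of a great many actors faces. All numbers are made up.

def warming(sensitivity_per_gt, n_actors, emissions_per_actor_gt):
    total_emissions = n_actors * emissions_per_actor_gt
    return sensitivity_per_gt * total_emissions

DEVASTATION_THRESHOLD = 6.0  # deg C, illustrative

# "Actual global warming": baseline sensitivity and incentives.
baseline = warming(0.0005, 200, 20)           # 2.0 deg C

# Variant 1: same incentives, much higher transient climate sensitivity.
high_sensitivity = warming(0.002, 200, 20)    # 8.0 deg C

# Variant 2: same sensitivity, much stronger incentives -> more emissions.
strong_incentives = warming(0.0005, 200, 80)  # 8.0 deg C

print(baseline < DEVASTATION_THRESHOLD)            # True
print(high_sensitivity > DEVASTATION_THRESHOLD)    # True
print(strong_incentives > DEVASTATION_THRESHOLD)   # True
```

Either route yields the same combined effect: no single actor's action is devastating, but the aggregate crosses the threshold.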

Type-0 vulnerability ("surprising strangelets")

Definition: "There is some technology that carries a hidden risk such that the default outcome when it is discovered is inadvertent civilizational devastation".

  • "Inadvertent" refers to an adverse outcome which sprang from bad luck, not coordination failure.
  • The paper describes a "surprising strangelets" scenario as an example of a type-0 vulnerability:
    • "Some modern high-energy physics experiment turns out to initiate a self-catalyzing process in which ordinary matter gets converted into strange matter, with the result that our planet is destroyed".