
Summary: If a very large global catastrophic pandemic requires a stealthy pathogen, as suggested as a key pathway by Manheim (2018), then preventing pathogens from spreading undetected in at least one place could be enough to greatly reduce the existential risk. If so, pushing for the implementation of metagenomic sequencing at (at least) the entry points of a single country could be enough to greatly reduce the risk, which would make this a relatively low-hanging fruit.

Epistemic status: I'm not sure of all my assertions. This post is intended to spark a discussion and to build a clearer idea of the type of risk associated with each scenario. Read everything in this article as meaning "I have the impression that". (I thought and read quite extensively about the topic for three weeks as part of a research project on GCBR reduction, and spent a further two months chatting with people about it and keeping it in mind.) At some points I give verbal estimates of risk; these are meant to rank the risks. I didn't feel I could give meaningful probability estimates.

In this post, we'll consider five different kinds of agents:

  • State Agents, for which there are three main classes of scenarios that could lead to a catastrophic event:
    • Agents release some pathogens to cause as much harm as possible:
      • A malevolent agent has enough power to lead a country to release bioweapons on purpose.
      • A state feels threatened and does everything to survive, or to cause as much harm as possible before disappearing.
    • There is an accidental leak from a bioweapons program (covered under Accidental Leaks below).
  • Non-State Agents with large capabilities/resources (Al-Qaeda, for instance).
    • Most such groups are not omnicidal / are not motivated to pursue the most worrying bioweapons technologies. (Omnicidal groups are thankfully not popular.)
    • They are much less likely to be cautious and are not subject to the same pressures against the development or use of bioweapons that states are.
  • Non-State Agents without extensive resources (most terrorist and omnicidal groups)
  • Accidental Leaks (from insufficiently cautious state bioweapons programs or from other research of concern)
  • Natural Pandemic Emergence

A Risk Factor Which is a Game-Changer

Before going into the details of each scenario, we need to talk about one of the factors that can vary the overall risk and its distribution.

According to Kevin Esvelt, a biologist who works on evolutionary and ecological engineering, it is really difficult to develop new pathogens, especially ones with the kinds of functions that could be lethal to humanity. Sonia Ben Ouagrham-Gormley, a bioweapons expert, strongly supports this view in her book Barriers to Bioweapons. It is unclear whether any experts disagree, at least about the near-term future. If this view is correct, most of the risk over the coming few years will come from the biggest laboratories, whether academic research labs or state-controlled labs.

As a result, two of the biggest risk factors for each location are the extent to which:

  • There is gain-of-function or other pathogen enhancement or dual-use research of concern involving pathogens which could plausibly lead to or enable the development of extremely dangerous pathogens.
  • Such research is made public, with released DNA sequences, open access, or other forms of information hazard.

Depending on the magnitude of these risk factors, the distribution of the risk changes:

  • Greater publicity about this kind of research greatly increases the risk coming from non-state agents; the risk tends to grow more than proportionally, because publication enables many small independent agents (such as non-state agents with few resources) to engage in research of concern. This would be especially worrying for existential risks.
  • More gain-of-function research means that, all else being equal, a lab leak is more likely. This would be worrying mainly for catastrophic risks. It would also make future attempts to intentionally develop bioweapons more likely to succeed.


 

Finally, it is important to note that there is a qualitative difference between the risk of accidents caused by research of concern and the risk linked to the publication of information of concern. While gain-of-function research creates a transient risk (when the research stops, the risk ceases), publication permanently increases the potential for small agents to cause harm at a large scale.

Below, I estimate the risk under a set of assumptions: we are not able to totally stop research of concern, but we are somewhat able to prevent the worst blueprints from being made publicly available.

One key risk-modeling technique which could be useful is to approximate or elicit the distributions of harm for these factors. In such a model, it seems likely that big agents have much fatter tails (a greater probability of extreme outcomes) than small agents, but are much less likely to cause any incident at all. In expected-value terms, however, the comparison is less clear; and because the tail of the distribution for large agents includes existentially risky scenarios, from a longtermist viewpoint these risks could easily dominate the calculation.
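
As a rough illustration of this kind of model, here is a minimal Monte Carlo sketch. Every number in it is a hypothetical placeholder rather than an estimate; the point is only the qualitative contrast between rare, fat-tailed large-agent events and frequent, thin-tailed small-agent events.

```python
import numpy as np

rng = np.random.default_rng(0)
n_years = 1_000_000  # number of simulated years

# Few large agents: events are rare, but severity is fat-tailed (Pareto).
p_large = 0.001  # placeholder chance per year of a large-agent event
deaths_large = (rng.random(n_years) < p_large) * rng.pareto(1.1, n_years) * 1e6

# Many small agents: events are frequent, severity thin-tailed (lognormal).
p_small = 0.05  # placeholder chance per year of a small-agent event
deaths_small = (rng.random(n_years) < p_small) * rng.lognormal(8.0, 1.0, n_years)

for name, deaths in [("large agents", deaths_large), ("small agents", deaths_small)]:
    print(f"{name}: P(any event) = {(deaths > 0).mean():.3f}, "
          f"mean deaths/year = {deaths.mean():,.0f}, "
          f"P(>100M deaths) = {(deaths > 1e8).mean():.1e}")
```

With placeholders like these, small agents cause some event in far more years, while the heavy Pareto tail means large agents dominate both the mean and the probability of extreme outcomes.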

Existential Risks

Definition: Humanity goes extinct.

Consequences: Humanity's potential is lost forever. As argued by Toby Ord in The Precipice, this outcome is tremendously worse than a catastrophic but non-extinction-level event, as long as we think that humanity’s future is large and valuable, and do not rapidly discount the value of that future.

Type of scenarios:

I see mainly three biorisk scenarios that could lead to humanity's extinction:

  1. A pathogen is released and kills almost everyone at roughly the same time, without sufficient warning to respond. It spreads quickly enough to take the human population below the minimum viable population.
  2. A pathogen kills most people and a proactive organization kills the survivors.
  3. A pathogen kills most people, and other indirect causes make humanity go extinct (Note: this is unlikely according to Luisa Rodriguez).

For each of these scenarios, a stealthy pathogen (i.e. one with a long incubation time) would be of great help and accounts for a large share of the probability mass. Killing most people without a stealthy pathogen seems almost impossible. This is clearest for the first scenario, but even for scenarios 2 and 3, stealth seems to be an almost necessary condition. World War II as a whole killed no more than 3% of the world population, so a single organization killing more than 1% of the population seems rather unlikely. For scenario 2 to kill everyone, the pathogen would therefore probably need to kill more than 99% of people to enable the organization to kill the last survivors. For scenario 3 to kill everyone, the pathogen would have to kill 99.9% of the population, and even then, Luisa Rodriguez argues that the main way such an event could lead to extinction is if only one big group survives. In any of these scenarios, the required death toll makes stealth really important: deaths on this scale seem very unlikely if we are aware of the pathogen and have any time to prepare or mitigate the spread. The sketch below makes the survivor arithmetic concrete.
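
To make that arithmetic concrete, here is a tiny sketch; the world population is rounded, and the minimum-viable-population range is the one used in the definition of catastrophic risks later in this post.

```python
# Survivors left at various kill fractions, compared with the minimum
# viable population of roughly 100-1,000 people used in this post.
population = 8_000_000_000  # rounded world population

for kill_fraction in [0.99, 0.999, 0.9999, 0.9999999]:
    survivors = population * (1 - kill_fraction)
    print(f"kill fraction {kill_fraction}: ~{survivors:,.0f} survivors")

# Even 99.99% lethality leaves about 800,000 people, far above the minimum
# viable population; this is why scenarios 2 and 3 need a further mechanism
# to finish off the survivors.
```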

Agents:

  • State Agent: Low to moderate risk. (A malevolent agent with enough power over a state could do this, and that is where the risk comes from. State actors, even those acting to ensure their own survival, seem unlikely to be motivated to develop and widely release such a universally deadly pathogen.)
  • Non-State Actors with extensive resources (Al-Qaeda, for instance, or in the past, Aum Shinrikyo): Moderate risk. In the median scenario for which I estimate the risk, crafting a pathogen such as those mentioned above and releasing it efficiently requires a lot of resources and a fairly long development time. This kind of agent, which would have to be both very malevolent and powerful, thus concentrates a lot of the risk on very few potential actors. Moreover, scenario 2 requires that the omnicidal non-state agent can expect to stay alive long enough to kill the last survivors.
  • Non-State Actors without extensive resources: Low risk, mainly through scenarios 1 or 3. Scenario 2 (a proactive organization that can carry out the endgame) seems highly unlikely. The magnitude of the risk here depends greatly on publicly available information, because such an organization could probably only craft what already exists. But since there are many more agents like this than bigger ones, if dangerous pathogens are publicly available, this risk could easily become the main one.
  • Accidental Leaks: Low risk, except through scenarios 1 or 3. The risks are comparable to those from weak non-state agents: there is no intent to kill humanity (which makes me update downward on the risk), but there are many labs, there have been many leaks in the past, and in our median scenario there is still some gain-of-function research that makes scenarios 1 and 3 possible.
  • Natural Pandemics: Likely a very low risk. One big uncertainty, as argued in Manheim’s 2018 paper, is that:
    • In the past, the world was not interconnected, so a single pathogen could not kill humanity as a whole.
    • There could therefore have been pathogens that wiped out entire civilizations without leaving us any record, so the risk could still be as high as 1/5,000.

Partial Conclusion:

If this analysis captures most of the X-risk, eliminating the stealthy scenarios might greatly reduce the risk of extinction coming from GCBRs. This implies that broad or universal surveillance could be a critical risk-mitigation measure.

Catastrophic Risks

Definition: A catastrophe that kills more than 10% of the population[1] but does not drive the world population close to or below the minimum viable population (roughly 100 to 1,000 people).

Type of scenarios:

  • A pathogen that spreads very rapidly with a high mortality rate seems to be a necessary condition for such a catastrophe. Being stealthy would enable spread before any response can be mounted, which would also greatly increase the likelihood that a pathogen leads to a catastrophe (a toy calculation follows below).
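
As a toy illustration of why stealth matters here, the sketch below computes how many infections are already seeded by the time an outbreak is first noticed, for a few illustrative detection delays. Both the doubling time and the delays are placeholders, not estimates.

```python
# Toy model: infections already present when an outbreak is first noticed,
# assuming exponential growth and that detection waits for the first
# symptomatic cases. All parameters are illustrative placeholders.
doubling_time_days = 3

for stealth_days in [7, 14, 30, 60]:  # incubation / detection delay
    infections = 2 ** (stealth_days / doubling_time_days)
    print(f"detected after {stealth_days:>2} days: "
          f"~{infections:,.0f} infections already seeded")
```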

Distribution of the risk:

  • State Agent: Moderate risk. The risk coming from a state in survival mode seems higher for catastrophic risks than for existential ones, and there is still risk from malevolent agents who become highly influential in a country.
  • Non-State Actors with extensive resources (Al-Qaeda, for instance): High risk.
  • Non-State Actors without extensive resources: Moderate to high risk.
  • Accidental Leaks: High risk.
  • Natural Pandemics: Quite low risk.

Moderate Global Risks

Definition: 0.01% or more of the population is killed, across at least 10 different countries, but less than 10% of the population is killed. (This is intended to be similar to COVID-19.)

Type of scenarios:

  • A natural event
  • A state agent bioweapon program accident
  • An intentionally targeted attack that spreads only to a limited extent / can be contained

Distribution of the risk:

  • State Agent: Moderate to high risk (because the kind of agent we can expect from such an organization is quite likely to reach 10% of the population if it reaches 0.01%).
  • Non-State Actors with extensive resources (Al-Qaeda, for instance): High to very high risk (for the same reasons as above).
  • Non-State Actors without extensive resources: Very high risk.
  • Accidental Leaks: Very high risk.
  • Natural Pandemics: High risk.

 

One Country to Safeguard Humanity

Safeguarding Humanity’s Potential Might Require Only One Country

The biggest difference between catastrophic risk reduction and anthropogenic existential biorisk reduction is that:

  • For existential risk reduction, you only need one country to survive. Thus, under the assumption that it is unlikely that everyone in a country dies if that country is aware of the danger, investing a lot of resources into the risk reduction of one or a few countries is possibly much more efficient than trying to improve the standards of the global community. A few thoughts deriving from this statement:

    • We could thus maximize a quantity that accounts for (a hypothetical scoring sketch follows after this list):
      • The likelihood that a country accepts the required measures.
      • The potential for X-risk mitigation. This factor would include parameters such as: given information about a very dangerous pathogen, how likely is the country to take very strong measures?
      • Self-sufficiency.
  • Islands have some comparative advantages, but few of them have very strong self-sufficiency. The United States seems to be a good candidate in the short term, because of its emphasis on national security (including biosecurity) and because of its huge resources, which could help preserve humanity’s potential. (For example, it is largely self-sufficient in food and energy, and could likely become self-sufficient in other ways if needed.) One big downside is that it is among the most likely targets for asymmetric bioterrorism or biological warfare, and it is highly connected to most countries in the world. But if we think it is impossible for the entire population of a country to be wiped out when that country is aware of the threat, then the US seems to be a good target.
  • Given that metagenomic sequencing seems sufficient to prevent any stealthy scenario, thanks to its ability to detect any exponentially growing DNA sequence, pushing for it in the US could be enough to greatly reduce X-risks. According to Kevin Esvelt, implementing this as a standard precaution for screening most people entering the country would cost about $1 billion per year with current technologies. (A minimal detection sketch also follows after this list.)
  • Another advantage is that a few very well-protected countries would also protect every other country: such protection greatly reduces the potential of bioweapons as a way to kill humanity, and thus disincentivizes big organizations (from whom a significant amount of the risk currently comes) from using them to try to destroy humanity.
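
As a sketch of what maximizing such a quantity could look like, here is a hypothetical weighted scoring of candidate countries on the three criteria listed above. Every weight and score is an invented placeholder, and the country names are deliberately generic:

```python
# Hypothetical weighted scoring of candidate countries on the three
# criteria above. All weights and scores (0-1 scales) are invented
# placeholders, not estimates.
weights = {
    "accepts_measures": 0.3,      # likelihood of adopting the measures
    "mitigation_potential": 0.4,  # strength of response once warned
    "self_sufficiency": 0.3,
}

candidates = {
    "Country A": {"accepts_measures": 0.6, "mitigation_potential": 0.8,
                  "self_sufficiency": 0.9},
    "Country B": {"accepts_measures": 0.8, "mitigation_potential": 0.5,
                  "self_sufficiency": 0.4},
}

for name, scores in candidates.items():
    total = sum(weights[k] * scores[k] for k in weights)
    print(f"{name}: {total:.2f}")
```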
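
And to illustrate the detection idea behind the metagenomic-sequencing bullet, here is a minimal sketch that flags sequences whose read counts grow roughly exponentially across time-ordered samples. It is a crude stand-in for a real bioinformatics pipeline, with a hypothetical growth threshold:

```python
import numpy as np

def flags_exponential_growth(read_counts, min_ratio=1.5):
    """Flag a sequence whose read counts grow roughly geometrically
    across time-ordered samples (a crude stand-in for a real pipeline)."""
    counts = np.asarray(read_counts, dtype=float) + 1.0  # avoid division by zero
    ratios = counts[1:] / counts[:-1]
    return bool(np.all(ratios >= min_ratio))

# Time-ordered read counts for two hypothetical sequences:
print(flags_exponential_growth([2, 5, 11, 26, 60]))    # True: exponential rise
print(flags_exponential_growth([40, 35, 42, 38, 41]))  # False: stable background
```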

 

Conclusion

I hope this article provides a useful breakdown of the risk and some food for thought for discussing the probabilities we assign to various scenarios. If my analysis is right, it means we're lucky: we don't need that much coordination to get rid of the bulk of the existential risk coming from pathogens. If not, I'm happy to discuss which scenarios you find most plausible and whether there are normative consequences for public policy. I'm currently running a project with other people suggesting that the amount of coordination we need to mitigate GCBRs (understood as the number of countries we need on board to solve most of a problem) could decrease with the magnitude of the catastrophe. More on this in a later post.
Thanks for reading, and please, share your thoughts! 

  1. ^

    This threshold is arbitrary, but aims to designate a memorable event that would affect humanity for at least decades and probably centuries.


Comments

Thanks Simon - this is great. I do want to add a few caveats for how and why the "One Country" idea might not be the best approach.

The first reason not to pursue the one-country approach from a policy perspective is that non-existential catastrophes seem likely, and investments in disease detection and prevention are a good investment from an immediate policy perspective. Given that, it seems ideal to invest everywhere and have existential threat detection be a benefit provided as a consequence of more general safety from biological threats. There are also returns to scale for investments, and capitalizing on them may require a global approach.

Second, a key question for whether the proposed "one country" approach is more effective than other approaches is whether we think early detection is more important than post-detection response, and what the dynamics of the spread are. As we saw with COVID-19, once a disease is spreading widely, stopping it is very, very difficult. The earlier the response starts, the more likely it is that a disease can be stopped before spreading nearly universally. The post-detection response, however, can vary significantly between countries, and those most able to detect the threat weren't the same as those best able to suppress cases - and for this and related reasons, putting all our eggs in one basket, so to speak, seems like a very dangerous approach.

Thanks David, that's great!

"The first reason not to pursue the one-country approach from a policy perspective is that non-existential catastrophes seem likely, and investments in disease detection and prevention are a good investment from a immediate policy perspective. Given that, it seems ideal to invest everywhere and have existential threat detection be a benefit that is provided as a consequence of more general safety from biological threats. There are also returns to scale for investments, and capitalizing on them may require a global approach." 
 

I feel like there are two competing views here, which your comment underlines very well:

  • From a global perspective, the optimal policy is probably to put metagenomic sequencing at the key nodes of the travel network, so that we become aware of any pathogen as soon as possible. I feel like that's roughly what you mean.
  • From a marginalist perspective, given that governance happens country by country, it's probably much easier to cover one country that cares about national security (e.g. the US) with metagenomic sequencing than to implement the first strategy.

I expect the limiting factor to be not our own resource allocation but the opportunities to push for the relevant policies at the right moment. If we're able to pursue the first strategy, i.e. if there's an opportunity to push for a global metagenomic plan that has some chance of working, that's great! But if we're not, we shouldn't disregard the second strategy (i.e. pushing for a single country to implement a strong metagenomic sequencing policy) as a way to greatly mitigate at least the X-risks from GCBRs.

"Second, a key question for whether the proposed "one country" approach is more effective than other approaches is whether we think early detection is more important than post-detection response, and what they dynamics of the spread are. As we saw with COVID-19, once a disease is spreading widely, stopping it is very, very difficult. The earlier the response starts, the more likely it is that a disease can be stopped before spreading nearly universally. The post-detection response, however, can vary significantly between countries, and those most able to detect the thread weren't the same as those best able to suppress cases - and for this and related reasons, putting our eggs all in one basket, so to speak, seems like a very dangerous approach."

 

Yes, I agree with this for GCBRs in general, but not for existential ones! My point is just that, conditional on a very, very bad virus and on awareness of this virus, I expect some agents who learn about it early enough (hence the idea of putting metagenomic sequencing at every entry point of a country) to find ways to survive it, whether through governments or through personal preparation (personal bunkers and that kind of thing).

I hope I answered your points correctly!

Thanks for the comment!

I thought this was a great article raising a bunch of points which I hadn't previously come across, thanks for writing it!

Regarding the risk from non-state actors with extensive resources, one key question is how competent we expect such groups to be. Gwern suggests that terrorists are currently not very effective at killing people or inducing terror --- with similar resources, it should be possible to induce far more damage than they actually do. This has somewhat lowered my concern about bioterrorist attacks, especially when considering that successfully causing a global pandemic worse than natural ones is not easy. (Lowered my concern in relative terms that is --- I still think this risk is unacceptably high and prevention measures should be taken. I don't want to rely on terrorists being incompetent.) This suggests both that terrorist groups may not pursue bioterrorism even if it were the best way to achieve their goals and that they may not be able to execute well on such a difficult task. Hence, without having thought about it too much, I think I might rate the risks from non-state actors somewhat lower than you do (though I'm not sure, especially since you don't give numerical estimates --- which is totally reasonable). For instance, I'm not sure whether we should expect risks of GCBRs caused by non-state actors to be higher than risks of GCBRs caused by state actors (as you suggest).
