Plagues throughout history suggest the potential for biology to cause global catastrophe. That potential grows in step with biotechnological progress. Global Catastrophic Biological Risks (GCBRs) may constitute a significant share of all global catastrophic risk and, if so, pose a credible threat to humankind.
Despite extensive existing efforts in adjacent fields such as biodefense and public health, GCBRs remain a large challenge that is plausibly both neglected and tractable. The existing portfolio of work often overlooks risks of this magnitude, and largely does not focus on the mechanisms by which such disasters are most likely to arise.
Much remains unclear: the contours of the risk landscape, the best avenues for impact, and how people can best contribute. Despite these uncertainties, GCBRs are plausibly one of the most important challenges facing humankind, and work to reduce these risks is highly valuable.
After reading, you may also like to listen to our podcast interview with the author about this article and the COVID-19 pandemic.
Our overall view: Recommended
This is among the most pressing problems to work on.
We think work to reduce global catastrophic biological risks has the potential for a very large positive impact. GCBRs are both great humanitarian disasters and credible threats to humanity’s long-term future.
This issue is somewhat neglected. Current spending is in the billions per year, although this large portfolio is not perfectly allocated. Our guesstimated quality adjustment yields ~$1 billion per year.
Making progress on reducing global catastrophic biological risks seems moderately tractable. Attempts to complement the large pre-existing biosecurity portfolio have fair promise. On the one hand, there seem to be a number of pathways through which risk can be incrementally reduced; on the other, the multifactorial nature of the challenge suggests there will not be easy ‘silver bullets’.
What is our analysis based on?
I, Gregory Lewis, wrote this profile. I work at the Future of Humanity Institute on GCBRs. It owes a lot to helpful discussions with (and comments from) Christopher Bakerlee, Haydn Belfield, Elizabeth Cameron, Gigi Gronvall, David Manheim, Thomas McCarthy, Michael McClaren, Brenton Mayer, Michael Montague, Cassidy Nelson, Carl Shulman, Andrew Snyder-Beattie, Bridget Williams, Jaime Yassif, and Claire Zabel. Their kind help does not imply they agree with everything I write. All mistakes remain my own.
This profile is in three parts. First, I explain what GCBRs are and why they could be a major global priority. Second, I offer my impressions (such as they are) on the broad contours of the risk landscape, and how these risks are best addressed. Third, I gesture towards the best places to direct one’s career to reduce this danger.
What are global catastrophic biological risks?
Global catastrophic risks (GCRs) are roughly defined as risks that threaten great worldwide damage to human welfare, and place the long-term trajectory of humankind in jeopardy. Existential risks are the most extreme members of this class. Global catastrophic biological risks (GCBRs) are a catch-all for any such risk that is broadly biological in nature (e.g. a major pandemic).
I write from a broadly longtermist perspective: roughly, that there is profound moral importance in how humanity’s future goes, and so trying to make this future go better is a key objective in our decision-making (I particularly recommend Joseph Carlsmith’s talk). When applying this perspective to biological risks, the issue of whether a given event threatens the long-term trajectory of humankind becomes key. This question is much harder to adjudicate than whether a given event threatens severe worldwide damage to human welfare. My best guess is that the ‘threshold’ at which a biological event starts to threaten human civilisation is high: a rough indicator is a death toll of 10% of the human population, at the upper limit of all disasters ever observed in human history.
As such, I believe some biological catastrophes, even those which are both severe and global in scope, would not be GCBRs. One example is antimicrobial resistance (AMR): AMR causes great human suffering worldwide, threatens to become an even bigger problem, and yet I do not believe it is a plausible GCBR. An attempt to model the worst case scenario of AMR suggests it would kill 100 million people over 35 years, and reduce global GDP by 2%-3.5%. Although disastrous for human wellbeing worldwide, I do not believe this could threaten humanity’s future – if nothing else, most of humanity’s past occurred during the ‘pre-antibiotic age’, to which worst-case scenario AMR threatens a return.
To be clear, a pandemic that killed fewer than 10% of the human population could easily still be among the worst events in our species’ history. For example, the ongoing COVID-19 pandemic is already a humanitarian crisis and threatens to get much worse, though by this threshold it is very unlikely to threaten humanity’s long-term trajectory. It is well worth investing great resources to mitigate such disasters and prevent more from arising.
The reasons to focus here on events that kill a larger fraction of the population are, first, that they are not so unlikely, and second, that the damage they could do would be vastly greater still, and potentially far more long-lasting.
These impressions have pervasive influence on judging the importance of GCBRs in general, and on choosing what to prioritise in particular. They are also highly controversial: one may believe that the ‘threshold’ at which an event poses a credible threat to human civilisation is even higher than I suggest (and that the risk of any biological event reaching this threshold is very remote). Alternatively, one may believe that this threshold should be set much lower (or at least set with different indicators), so that a wider or different set of risks should be the subject of longtermist concern. On all of this, more later.
Different attempts at a definition of GC(B)Rs point in the same general direction:
“[GCRs are a] risk that might have the potential to inflict serious damage to human well-being on a global scale.”
“[W]e use the term “global catastrophic risks” to refer to risks that could be globally destabilising enough to permanently worsen humanity’s future or lead to human extinction.”
“The Johns Hopkins Center for Health Security’s working definition of global catastrophic biological risks (GCBRs): those events in which biological agents—whether naturally emerging or reemerging, deliberately created and released, or laboratory engineered and escaped—could lead to sudden, extraordinary, widespread disaster beyond the collective capability of national and international governments and the private sector to control. If unchecked, GCBRs would lead to great suffering, loss of life, and sustained damage to national governments, international relationships, economies, societal stability, or global security.”
For more on longtermism, see other 80,000 Hours work here.
Although the modelling (by KPMG and RAND Europe) was only intended to illustrate a possible worst case scenario, a plausible ‘worst case’ from AMR is less severe:
- The RAND scenario of all pathogens achieving 100% resistance to all antibiotics within 15 years is implausible: many organisms have remained susceptible to antimicrobials used against them for decades.
- The KPMG assumption of a 40% increase in resistance also seems overly pessimistic, as does the further assumption that resistance would double transmission.
- Incidence of infection is held constant, whilst most infectious disease incidence is on a declining trend, little of which can be attributed to antimicrobial use.
- Mechanisms of antimicrobial resistance generally incur some fitness cost to the pathogen.
- No attempts at mitigation or response were modelled (e.g. increasing R&D into antimicrobials as resistance rates climb).
E.g. Saskia Popescu has written a paper on “The existential threat of antimicrobial resistance”, illustrating that pervasive antimicrobial resistance could compound the danger of an influenza outbreak (typically, large proportions of influenza deaths are due to secondary bacterial infection).
One explanation of this apparent disagreement is simply that Popescu’s meaning of ‘existential threat’ is not the same as (longtermist-style) existential risk. However, I suspect she (and I am confident others) would disagree with my ‘ruling out’ AMR as a plausible GCBR.