Future pandemics could arise from an accident (a pathogen being used in research accidentally infecting a human). The risk from accidental pandemics is likely increasing in line with the amount of research being conducted. In order to prioritise pandemic preparedness, forecasts of the rate of accidental pandemics are needed. Here, I describe a simple model, based on historical data, showing that the rate of accidental pandemics over the next decade is almost certainly lower than that of zoonotic pandemics (pandemics originating in animals).

Before continuing, I should clarify what I mean by an accidental pandemic. By 'accidental pandemic,' I mean a pandemic arising from human activities, but not from malicious actors. This covers a wide variety of activities, from lab-based research and clinical trials to more unusual ones such as hunting for viruses in nature.

The first consideration in the forecast is the historic number of accidental pandemics. One historical pandemic (1977 Russian flu) is widely accepted to be due to research gone wrong, with the leading hypothesis being a clinical trial. The estimated death toll from this pandemic is 700,000. The origin of the COVID-19 pandemic is disputed, and I won’t go further into that argument here. Therefore, historically, there have been one or two accidental pandemics.

Next, we need to consider the amount of research that could cause such a pandemic, or the number of "risky research units" that have been conducted. No good data exists on risky research units directly; however, we only need a measure that is proportional to the number of experiments.[1] I consider three indicators: publicly reported lab accidents, as collated by Manheim and Lewis (2022); the rate at which BSL-4 labs (labs handling the most dangerous pathogens) are being built, gathered by Global BioLabs; and the number of virology papers being published, categorised by the Web of Science database. I find a good fit with a shared growth rate of 2.5% per year.
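
To make the fitting step concrete, here is a minimal sketch of a Poisson regression with a log-linear trend, fitted to synthetic counts rather than the actual datasets (the data, seed, and starting values are my own illustrative choices):

```python
# Sketch: estimating an exponential growth rate from annual event counts
# with a Poisson regression (log-linear model), fitted by Newton's method.
# The counts below are synthetic placeholders, not the post's datasets.
import numpy as np

def fit_growth_rate(years, counts):
    """Fit counts ~ Poisson(exp(a + b * year)); return b (log growth per year)."""
    years = np.asarray(years, dtype=float)
    counts = np.asarray(counts, dtype=float)
    X = np.column_stack([np.ones_like(years), years - years.mean()])
    beta = np.array([np.log(counts.mean()), 0.0])  # sensible starting point
    for _ in range(50):                            # Newton / IRLS updates
        mu = np.exp(X @ beta)
        grad = X.T @ (counts - mu)                 # score
        hess = X.T @ (mu[:, None] * X)             # Fisher information
        beta += np.linalg.solve(hess, grad)
    return beta[1]

rng = np.random.default_rng(0)
years = np.arange(1980, 2023)
counts = rng.poisson(50 * np.exp(0.025 * (years - 1980)))  # ~2.5%/yr growth
rate = fit_growth_rate(years, counts)
print(f"Estimated growth: {100 * (np.exp(rate) - 1):.1f}% per year")
```

In the post the three indicator series share a single growth rate; the same machinery extends to that case by giving each series its own intercept.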

Number of events per year in each of the three datasets (dots). Lines show the line of best fit from a Poisson regression, with 95% prediction interval.

A plateau in the number of virology papers in the Web of Science database is plausibly visible. It is too early to tell whether this trend will feed through to the other indicators, which is a weakness of this analysis. However, a similar apparent plateau is visible in the 1990s, yet growth then appeared to restart along the previous trendline.

The final step is to extrapolate this growth in risky research units and see how many accidental pandemics it implies we should expect. Below I plot the average (expected) number of pandemics per year under two scenarios: one historical accidental pandemic (1977 Russian flu) as the basis, or two (adding COVID-19). For comparison, I include the historic long-run average number of pandemics per year, 0.25.[2]
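
As a sketch of this extrapolation (the normalisation, exposure window, and Laplace-style prior here are my own illustrative assumptions, so the outputs will not exactly reproduce the post's figures):

```python
# Sketch of the extrapolation step. The growth rate comes from the fit in
# the post; the normalisation, prior, and exposure window are illustrative
# placeholders, so the numbers will not exactly match the post's table.
GROWTH = 0.025                 # fitted annual growth in risky research units
FIRST_YEAR, BASE_YEAR, HORIZON = 1977, 2024, 10

def units(year):
    """Risky research units in a year, normalised so 1977 = 1.0."""
    return (1 + GROWTH) ** (year - FIRST_YEAR)

# Cumulative units from 1977 up to the forecast start (the exposure so far)
past_exposure = sum(units(y) for y in range(FIRST_YEAR, BASE_YEAR))

expected = {}
for k in (1, 2):               # 1 or 2 historical accidental pandemics
    # Laplace-style posterior mean rate per unit: (k + 1) / exposure so far
    rate_per_unit = (k + 1) / past_exposure
    expected[k] = rate_per_unit * sum(
        units(y) for y in range(BASE_YEAR, BASE_YEAR + HORIZON))

for k, e in expected.items():
    print(f"{k} previous: ~{e:.1f} expected accidental pandemics over {HORIZON} years")
```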

Predictions for the mean number of accidental pandemics each year, in comparison to the long-run historical average.

Predictions for the ten years starting with 2024 are in the table below. This gives, for each scenario: the number of accidental pandemics that are expected, a range which the number of pandemics should fall in with at least 80% probability, and the probability of at least one accidental pandemic occurring.

| Scenario | Expected number | 80% prediction | Probability of at least 1 |
| --- | --- | --- | --- |
| 1 previous | 1.2 | 0-2 | 56% |
| 2 previous | 2.1 | 0-3 | 76% |

Overall, the conclusion from the model is that, for the next decade, the threat of zoonotic pandemics is likely still greater. However, if lab activity continues to increase at this rate, accidental pandemics may dominate.

The model here is extremely simple, and a more complex one would very likely decrease the forecast numbers. In particular, this model relies on the following major assumptions.

First, the actual number of risky research units is proportional to the three indicators chosen. That all three indicators are growing at similar rates lends credence to this view. However, this assumption is, in practice, almost impossible to verify. Each of the datasets used here has issues,[3] and further work here would certainly be useful.

Second, the number of risky research units is growing exponentially, and this will continue over the extrapolation period. The plateau in the number of virology papers being published is the most concerning feature of the data here, as it suggests that growth in risky research units may be slowing. On the other hand, increasing access to biological research could accelerate risky research, though I think a step change over the next decade is unlikely.

Third, the probability of an accidental pandemic per risky research unit is constant. This seems unlikely. Biosafety (actions to reduce the risk of lab accidents) is becoming more prominent and, as in most of society, safety standards are rising. This is especially true in comparison to the 1970s, when the Russian flu pandemic occurred. In fact, rerunning the above projection considering only the one accidental pandemic that had occurred by 1977 implies a more than 90% probability of two or more accidental pandemics by the present day. However, as risky research spreads more broadly (e.g. to less developed countries), biosafety standards may fall. On balance, this analysis likely overstates the current risk of an accidental pandemic per risky research unit, though this is uncertain.

Fourth, and finally, the occurrence of accidental pandemics is independent. We might expect that, if a future pandemic was confirmed to be leaked from a lab, actions would be taken to reduce the probability of a future one. While this would not affect the view of the probability of at least one accidental pandemic, it should reduce the probability of two or more, and hence the expected number too.

These factors imply that the model overestimates the likely rate of future accidental pandemics. It is therefore almost certain that accidental pandemics currently make up a minority of the pandemics we should expect to see. This may change towards the end of the projection period, so consideration of improved biosafety remains important.

In order to fully assess the relative impact of accidental pandemics compared with other sources, it is also important to consider their severity. In general, I would expect the types of pathogens being researched to be similar to those from which we expect pandemics to emerge. However, there may be a bias towards more severe pathogens, since these are the ones we would most want to prevent or mitigate.

Thank you for reading to the end. I am currently looking for a job! If you think your organisation could benefit from this type of thinking, please get in touch.

Many thanks to Sandy Hickson and Hena McGhee for commenting on drafts of this post, and the Cambridge Biosecurity Hub for many discussions informing my thinking. I am also grateful to the researchers who released the data I used: David Manheim, Gregory Lewis, the Global Biolabs project, and Web of Science.

  1. ^

     Assuming exponential growth, which appears to be a good fit, growth at the same rate implies a constant multiplier between the different indicators. The gamma-Poisson model employed for the prediction makes the same predictions if the amount of risk being incurred is scaled by a constant multiplier. Mathematical details are available in this R Markdown notebook.
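
     A quick sketch of why the constant cancels (my notation, assuming an improper $\mathrm{Gamma}(\alpha, 0)$ prior with rate zero): with $k$ events over past exposure $E$, the posterior for the per-unit rate is $\mathrm{Gamma}(\alpha + k,\ E)$, and the predicted count over future exposure $F$ is negative binomial with success probability $p = E/(E+F)$. Rescaling the exposure measure by a constant $c$ maps $(E, F)$ to $(cE, cF)$ and leaves $p$, and hence every prediction, unchanged.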

  2. ^

     Using the dataset of Marani et al. (2021).

  3. ^

     BSL-4 labs are a small subset of all research, and their numbers are small. The lab accidents dataset (from Manheim and Lewis) is likely incomplete, and possibly biased. Virology papers do not necessarily track risky research.

Comments (4)



Nice! This is helpful, and I love the reasoning transparency. How did you get to the 80% CI?(sorry if I missed this somewhere)

Thank you Ben! The 80% CI[1] is an output from the model.

Rough outline:

  1. Start with an uninformative prior on the rate of accidental pandemics.
  2. Update this prior based on the number of accidental pandemics and the amount of "risky research units" we've seen; this is roughly equivalent to Laplace's rule of succession in continuous time.
  3. Project forward the number of risky research units by extrapolating the exponential growth.
  4. If you include the uncertainty in the rate of accidental pandemics per risky research unit, and random variation, then it turns out the number of events is a negative binomial distribution.
  5. Include the most likely numbers of pandemics until the cumulative probability exceeds 80%. Because the distribution is discrete, this is a conservative interval (i.e. it covers more than 80% probability).

For more details, here is the maths and code for the blogpost and here is a blogpost outlining the general procedure.
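
A concrete sketch of steps 2-5 (my own code; the prior and exposure figures are illustrative placeholders, not the post's calibration, so the outputs will not match the post's table exactly):

```python
# Gamma-Poisson (negative binomial) posterior predictive, as in the outline
# above. The prior shape and exposure figures are illustrative assumptions.
import math

def forecast(k_events, past_exposure, future_exposure, prior_shape=1.0):
    """Return (mean, >=80% interval, P(at least one)) for the future count."""
    r = prior_shape + k_events                             # posterior Gamma shape
    p = past_exposure / (past_exposure + future_exposure)  # NB success prob.
    mean = r * (1 - p) / p

    def pmf(n):                                            # negative binomial pmf
        return math.exp(math.lgamma(r + n) - math.lgamma(r)
                        - math.lgamma(n + 1)
                        + r * math.log(p) + n * math.log(1 - p))

    # Greedily include the most likely counts until >= 80% probability;
    # conservative because the distribution is discrete (step 5 above).
    ranked = sorted(((pmf(n), n) for n in range(200)), reverse=True)
    total, chosen = 0.0, []
    for q, n in ranked:
        chosen.append(n)
        total += q
        if total >= 0.8:
            break
    return mean, (min(chosen), max(chosen)), 1 - pmf(0)

# Illustrative: 30 past exposure units, 15 over the forecast decade
mean, interval, p_any = forecast(k_events=1, past_exposure=30, future_exposure=15)
print(mean, interval, p_any)
```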


    1. Technically a credible interval (CrI), not a confidence interval, because it's Bayesian. ↩︎

Awesome, thanks!

Executive summary: A simple model based on historical data suggests the rate of accidental pandemics originating from research activities over the next decade is likely lower than that of zoonotic pandemics, but could become more prominent if risky research continues growing exponentially.

Key points:

  1. One or two historical pandemics are considered to have accidentally emerged from research activities.
  2. Three indicators related to risky research activities all show exponential growth around 2.5% per year.
  3. Extrapolating growth suggests 1-3 accidental pandemics in the next decade, with 56-76% probability of at least one.
  4. This is likely an overestimate due to assumptions like constant probability per research unit and independence.
  5. Accidental pandemics remain less likely than zoonotic ones based on historical frequencies, but could rival them in the long term if research growth continues.
  6. Improved biosafety practices could reduce risks, but need to expand to less developed countries conducting more research.

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
