
TL;DR

The field of biosecurity is more complicated, sensitive, and nuanced, especially in the policy space, than the impression you might get from publicly available information. As a result, say / write / do things with caution (especially if you are a non-technical person or more junior, or talking to a new (non-EA) expert). This might help make more headway on safer biosecurity policy.

Generally, exercise caution in what you say and how you present yourself, because it does impact how much you are trusted, whether or not you are invited back to the conversation, and thus the potential to make an impact in this (highly sensitive) space.

Why Am I Saying This? 

An important note: I don’t represent the views of the NIH, HHS, or the U.S. government and these are my personal opinions. This is me engaging outside of my professional capacity to provide advice for people interested in working on biosecurity policy.

I work for a U.S. government agency on projects related to oversight and ethics of dual-use research of concern (DURC) and enhanced pandemic potential pathogens (ePPP[1]). In my job, I talk to and interface with science policy advisors, policy makers, regulators, (health) security professionals, scientists who do DURC / ePPP research, biosafety professionals, ethicists, and more. Everyone has a slightly different opinion and risk categorisation of biosecurity / biosafety as a whole, and of DURC and ePPP research risk specifically.

As a result of my work, I regularly (and happily) speak to newer and more junior EAs to give them advice on entering the biosecurity space. I've noticed a few common mistakes in how many EA community members – both newer bio people and non-bio people who know the basics about the cause area – approach communication, stakeholder engagement, and conversation around biosecurity, especially when engaging with non-EA-aligned stakeholders whose perspectives might be (and very often are) different from the typical EA perspective on biosecurity and biorisk[2].

I've also made many of these mistakes! I'm hoping this is educational and helpful, not shaming or off-putting. I'm happy to help anyone who is unsure how to communicate and engage more strategically in this space.


Some Key Points You Might Need to Update On

Junior EAs and people new to biosecurity / biosafety may not know how to be diplomatic, or that they should be. EA communities have a tendency to encourage provocative behaviour and absolutist, black-and-white framings in ways that don't communicate an understanding of how grey this field is, or of the importance of cooperation and diplomacy. If possible, even in EA contexts, train your default to be (at least a bit more) agreeable (especially at first).

Be careful with the terms you use and what you say

Terms matter. They signal where you are on the spectrum of ‘how dangerous X research type is’, what educational background you have and whose articles / what sources you read, and how much you know on this topic. 

Example: If you use the term gain-of-function with a virologist, most will respond that most biomedical research involves either a gain or a loss of function and isn't inherently risky. In an age where many virologists feel that health security professionals want to take away their jobs, saying gain-of-function is an easy and unknowing way to discredit yourself.

Biosafety, biorisk, and biosecurity[3] all indicate different approaches to a problem and, often, different perspectives on risk and reasonable solutions. The terms you use signal not only what ‘side’ you represent; in a field this political and sensitive, they can also discredit you amongst the other sides.

Recognise how little (or how much) you know 

Biosecurity is a field with a lot of information asymmetry. First, many parts of the story simply aren't public knowledge. I'm not saying the state of the field is perfect, but unfortunately you can't, and likely don't, know everything. I can say that I believe U.S. government oversight of DURC and ePPP is much stronger than many believe.

Second, many people work on this for a living, with advanced degrees on the topic and multiple years of experience. Your two hours of research on a topic doesn't mean you know more than they do. If you are going to disagree with their opinions (which is okay to do), make sure you are confident that what you are saying is true.

A mistake I once made was confidently criticising “US government dual-use regulations” in front of someone who wrote and implemented the DURC policies – policies, not regulations. There are no legal regulations over DURC[4]. I was called out by them and told to read through the policy more carefully for future conversations. That could easily have discredited me and lost a key relationship. Mistakes happen, but being confident that what you are saying is correct, especially when going against an expert opinion, is important.

Third, many EAs aren't as special or as far ahead of the curve as you might think. You likely don't know more than professionals if your information only comes from publicly available EA sources like The Precipice, Future Perfect, or the EA Forum. I love reading the above, but they most definitely give a one-sided and simplistic view. Many of these resources aim to persuade rather than to solve the nitty-gritty details of implementing solutions in real life. If you're seriously interested in a career in this space, or are someone whose words have influence, make sure you form your own detailed views. Be able to say more than “biosecurity is important” or “there's a chance of a lab accident”. Ideally you should be able to answer questions like “what are the actual risks of dual-use research?”, “what are the policies in X country?”, “how does Y increase the risk of a GCBR occurring?”, and “where are the gaps in existing oversight mechanisms, and what are feasible additional interventions?”, because learning how to answer those questions teaches the nuance of how difficult biosecurity, biosafety, and risk mitigation are.


Scientific researchers and biosecurity / biosafety professionals know what they are doing (sometimes) and aren't trying to kill us all

(The quotes below are paraphrased from non-EA virologists, research scientists, and biosafety professionals.)

The media has attacked biosafety professionals and scientists who work on DURC / ePPP nonstop for the past 3 years. In the U.S. (and I'm sure in other countries too) this has gone as far as death threats, doxxing, and investigations from (non-scientist) members of the government. I'm all for accountability, but what's been happening isn't productive accountability and has created an environment of fear. I've heard things like:

“I'm afraid to say or do anything publicly or I'll get doxxed” or “X institution won't communicate in writing for fear of getting called to Congress”. In fact, I've even heard that some people feel that they have “worked really hard to build trust so scientists can come to us when they have an uncertain or adverse scientific result. The intense scrutiny on this field is ruining that”.

Many professionals in this space are scared and stressed. Adding to that isn't necessarily building trust and needed allies. The professionals in this space are good people – no reputable virologist is trying to do research that intentionally releases or contributes to a pandemic. Biosafety professionals spend their lives working to prevent lab leaks. If I'm being honest, many professionals in and around the biosecurity field don't think incredibly highly of recent (the past few years) journalistic efforts and calls for total research bans. A common sentiment among the very professionals we need to work productively with is:

“those people who write op-eds and blog posts don't know anything about how the science actually works or how oversight and biosafety work” and “many people criticising the field are just trying to make a name for themselves and going about it the wrong way”

Choose your words and actions carefully, because the boldest ones aren't the most impactful ones in terms of updating policies and actions.


Dual-use, ‘gain-of-function’, and ePPP research is not so black-and-white

One view I often hear from EAs is “we should just ban all gain-of-function research”, or something similarly one-sided and simplified. There are a variety of inflammatory op-eds and articles that critique topics in biosecurity (I think) unfairly. I've had many meetings with people interested in working on biosecurity policy who hold extreme, oversimplified, and sometimes antagonistic views[5].

ePPP / DURC research isn't inherently bad and does have value[6]. Stopping all DURC / ePPP research outright would (in my view) be net negative. The key, unanswered question is what minimal-risk oversight that still allows necessary and useful research looks like. Especially problematic is approaching a research scientist who does DURC / ePPP research and telling them their research adds no value. That doesn't lead to anything productive and is just adversarial.

Second, this isn’t possible or realistic from a regulatory standpoint. For ‘gain-of-function’ – you can’t just pass a law to ban it. How do you define it? Who provides oversight? Implementation is challenging. And gain-of-function isn’t inherently bad – biomedical research can be ‘gain-of function’ and not dangerous (aka not a GCBR risk). 

So what next - tentative advice for people new to biosafety / security 

  1. Read, read a lot of different sources. Understand what all sides and stakeholders think on a topic. 
  2. Look at the science and form your own views related to biosafety, biosecurity, ePPP, DURC and risk. What is actually risky? Know and understand the science, the existing oversight / regulation, feasible interventions, and how implementation works.
  3. Know what you’re talking about (whether it’s regulation, policy, technical solutions) and the challenges related to working in the space before you criticise it. 
  4. Talk to (EA and non-EA) professionals and keep an open mind (ex. virologists, microbiologists, biosafety professionals, implementers, policy makers, health security specialists, ethicists, etc) 
  5. Know who you are talking to and use diplomatic language (at least at first) – use biosafety lingo with the biosafety professionals, talk about security with the security people, and so on. You don’t have to agree on everything with everyone – but having a mutual language is important. 
  6. Define terms - to yourself, in conversation, etc 
  7. Always make sure ‘you’re invited back to the table’

Readings I recommend that do a good job of painting more nuance than the perspective that is sometimes common in the EA movement

Dual-Use and Infectious Disease Research 

Highly recommended, as it breaks down the dual-use dilemma and provides nuance on the ethical concerns and challenges of providing oversight over dual-use biomedical research.

The Ethical Issues of Dual-Use and the Life Sciences

Gives a great history of the field of dual-use (research of concern) – although it's mostly U.S.-based – its ethical issues, and the development of the concern over time.

Biotechnology Research in an Age of Terrorism aka the Fink Report 

A 2004 report that led to the founding of the U.S. government's National Science Advisory Board for Biosecurity (NSABB) and set the framework for current-day policies and oversight efforts.

H5N1: A Case Study for Dual-Use Research

A report that uses H5N1 as an example of the biosecurity, biosafety, and risk issues of DURC research. It's a great deep dive into what full-time research in this field looks like and the challenges of research oversight.

Rapid Proliferation of Pandemic Research: Implications for Dual-Use Risks | mBio 

Gives a great overview of the value, costs, and risks of dual-use research, and grounds thinking about risk more concretely.

  1. ^

     ePPP is a sibling / renamed cousin to what most people know as gain of function (GoF). See more 

  2. ^

     Something along the lines of: there's a considerable GCBR risk warranting banning X research type; the US government is doing dangerous research without oversight or care; people involved in biomedical research and biosafety don't care about safety; etc. 

  3. ^

    Biosafety: the controls and standards that protect against the accidental release of pathogens. 

    Biosecurity: the protections against deliberate release of pathogens. 

    Biorisk management: risk-benefit assessment for high-risk research on pathogens.

  4. ^

    There is the U.S. Select Agent programme, which overlaps with the DURC policy, but there isn't 'regulation' in the typical sense. 

  5. ^

    This could just be because ‘I'm an EA and they talk more honestly to me than they would to non-EAs’. It still seems important to set the norm to be ‘diplomatic and agreeable’ before ‘antagonistic and challenging’.

  6. ^
Comments

Larks

Thanks very much for writing this. Unnecessarily alienating people for stupid reasons like misusing terminology or not understanding their perspective seems like a clear mistake and hopefully this can help people avoid these pitfalls.

I was curious about your suggestion that a lot of researchers think that basically all biomedical research is gain/loss of function. My impression is that one of the key defenses that the Fauci/NIH/EcoHealth/etc. offered for their research in Wuhan was that it was technically not Gain of Function, even if some parts of it might sound like Gain of Function to the layperson, which seems in tension with this claim. Do you think they were wrong about this?

Your point about many researchers in this area feeling defensive was a good one, and it's important to take this into account when talking to people. It's not fun to be blamed for causing a global crisis, especially if it's possible you didn't do it!

I do think there's a risk of a missing mood, however. Speaking as an outsider, the amount of regulation on what you refer to as ePPP (adding functionality to make diseases more dangerous) seems shockingly low. The article you link to tries to make it sound like there are a lot of safeguards, but it seems to me like virtually all the steps only apply if you are seeking federal funding. This is not a standard we accept in other areas! If you are making a car, or building a nuclear power plant, or running a bank or airline, you have to accept extremely intrusive regulation regardless of your funding, and for many things - like nuclear weapons or money laundering - US regulation has world-wide reach. 

For comparison, here are some of the regulatory approaches used in finance, another industry that was widely blamed for causing a global crisis:

  • Monitoring of all electronic communications.
  • Quarterly reporting of activities to regulators.
  • Regulators placed inside your organization.
  • Risk Management departments at every firm, making up a non-trivial fraction of their total size.
  • Compliance departments at every firm, making up a non-trivial fraction of their total size.
  • Mandatory compliance training for all employees.
  • Novel regulatory bodies with broad mandate to find and punish things that seem bad.
  • AML & KYC rules that impose stringent penalties for doing business with suspicious actors.
  • Mandatory insurance.
  • Direct government control over key areas.
  • 'Voluntary' advice from regulators (100% adoption due to legal risks of disobedience).
  • Multi-million-dollar whistleblower awards.
  • Billion-dollar fines.
  • Decades long jail sentences.
  • Lifetime bans from the industry.
  • ... and most importantly, if you mess up, you will probably lose a lot of money.

As far as I can tell, few if any of these approaches are imposed on Gain of Function research, despite the potential to directly cause the death of all of humanity.

Hi, thanks for the comments! Some broad thoughts in response:

Re

My impression is that one of the key defenses that the Fauci/NIH/EcoHealth/etc. offered for their research in Wuhan was that it was technically not Gain of Function, even if some parts of it might sound like Gain of Function to the layperson, which seems in tension with this claim. Do you think they were wrong about this?

It's hard for me to go into detail on a public platform on this (just to be cautious with respect to my job), but I can broadly say that there's a difference between research that is a) gaining a function, b) gain-of-function as defined by informal norms in the biomedical community, and c) formally DURC / GoF research as defined by U.S. government policy. The EcoHealth grants confusingly count or don't count as GoF depending on how GoF is defined.

Re

Speaking as an outsider, the amount of regulation on what you refer to as ePPP (adding functionality to make diseases more dangerous) seems shockingly low. The article you link to tries to make it sound like there are a lot of safeguards, but it seems to me like virtually all the steps only apply if you are seeking federal funding. This is not a standard we accept in other areas! If you are making a car, or building a nuclear power plant, or running a bank or airline, you have to accept extremely intrusive regulation regardless of your funding, and for many things - like nuclear weapons or money laundering - US regulation has world-wide reach. 

I fully agree! I think there are many concrete needs in this space, including legal regulation over DURC / ePPP / GoF research, in the U.S. particularly but also in every country that practices such research. Achieving such regulation requires a ton of work and consensus building. Thinking through constructive regulation that captures risk while not alienating or shutting down an entire research field is tough, and is part of the nuance that I think we as a community need to work towards.

This is a perfectly reasonable point to bring up, and I agree that we should critically consider whether or not policy and regulation in the field is adequate.  I want to emphasize some ways that high-risk biological research differs from finance, nuclear weapons, and money laundering.

  • First, people don't do gain of function research (or whatever we ought to call it) for profit, so imposing gigantic fines, the threat of jail time, and constant severe scrutiny would be tantamount to banning it outright. Likewise, private companies are pursuing profits when they build nuclear weapons. Medicine is, of course, heavily regulated, and once again it is the profit motive that allows the industry to thrive even in such a heavily regulated context.
  • Soldiers operating and maintaining nuclear weapons have given permission for the military to exert extremely intrusive control over their activities. Some of the best and brightest scientists worked for the military as an act of patriotic service to build the nuclear bomb during WWII. However, the Manhattan Project was aimed at a specific engineering outcome, while GoF research would be an ongoing effort with no "definition of done," and it might be hard to convince an adequate number of high-quality scientists to sign up for such strict controls if it was for their entire careers.
  • Money laundering is a crime, so it is not "regulated" but policed. Nobody but terrorists would do gain of function research if it was illegal.

For a person who'd like to see gain of function research banned, any move to regulate it and punish violations would be a step in the right direction. However, those who'd like to enforce responsible behavior, perhaps by using regulations on par with those you describe, have to explain how they'd motivate already-beleaguered scientists to do GoF research when their proposal is "even more stick, still no carrot."

I'm curious to know whether and to what extent we've considered ways to reward basic science researchers for making pandemic-mitigating discoveries in a public health context. Is there a way we can reward people for achieving the maximum public health benefit with the minimum risk in their research?

Money laundering is a crime, so it is not "regulated" but policed. Nobody but terrorists would do gain of function research if it was illegal.

Money laundering is a crime, one which the government primarily combats using AML (Anti Money Laundering) regulations. These apply to essentially all financial companies, and require them to evaluate their clients and counterparties for money laundering risk. If a bank facilitates money laundering it can be punished for this even if it didn't want to do it; all that is required for an AML violation is that the bank didn't try hard enough to avoid accidentally helping launder money. These regulations are a big deal, with a very large number of people employed in their enforcement and large fines for violations. It is much easier to fine a bank than it is to fine the underlying money launderer. AML rules are effectively a way for governments, which are not really capable of catching money laundering, to outsource the expense and difficulty.

The equivalent here would be if medical equipment companies, reagent companies, and lab animal companies had to do due diligence on researchers and were liable if they lacked sufficient processes to ensure they didn't sell supplies to researchers who performed dangerous experiments. As with AML, this allows the government to impose large fines on for-profit companies (which it can easily force to pay) rather than on smaller, potentially judgement-proof labs, while still achieving much the same goal.

That's a helpful reframing, thank you. I think there is still a disconnect between the two cases, however. As money laundering is a crime, companies have a relatively simple task before them: to identify and eliminate money laundering.

By contrast, GoF research is not a crime, and the objective, from a "responsibly pro-GoF" point of view, is to improve the risk/reward ratio to an acceptable level. A company would be likely to be highly conservative in making these judgments, as they would capture none of the benefits of successful and informative GoF research, but would be punished for allowing overly risky or failed GoF research to go forward. In other words, companies would likely refuse to sell to GoF research entirely in order to minimize or eliminate their risk.

The problem is even more acute if the work of evaluating GoF research was foisted onto companies. Scientists might be motivated by curiosity, altruism, or a desire for scientific credit, so there is at least some reward to be had even if GoF research were much more stringently regulated. By contrast, regulating companies in the manner you propose would come with no incentive whatsoever for companies to sell to GoF research, thus effectively banning it.

I think actually the analogy extends even here! 

What exactly is money laundering is not always black and white, and financial firms do not have anything like certainty about whether any given person, entity, or transaction is guilty. Instead they adopt a series of rules that, on average, reduce money laundering, but not to zero, and there are false positives. These especially affect low-income people, immigrants, those with little documentation, and people with unusual situations. AML rules directly contribute to the problem of people being unbanked (lacking access to the formal financial system, being reliant on cheque cashers, etc.) – the government knows this and accepts it as a necessary cost.

Similarly, I would imagine that not all GoF research would be illegal - but some would, and governments could deputize firms to help to differentiate. This would disrupt some legitimate researchers but could be generally regarded by policymakers as an acceptable price to pay.

Clearly there are some dis-analogies. There are many fewer biomedical researchers than money transfers, which makes in-depth evaluation of each one more viable. And as you noted the (financial and otherwise) benefit of research is more distant from the people undertaking it. I'm not trying to make a strong claim that this is a particularly good model for GoF regulation; just noting that I think researchers don't realize quite how unregulated they are relative to other industries. 

It's important to keep in mind that while money laundering is typically carried out by profit-seeking criminals who take advantage of complex financial transactions to hide their illegal activities, GoF research is not driven by financial gain. Therefore, we need to consider the unique nature of GoF research when assessing the need for regulation.

It's not just a matter of how much regulation is in place, but also about finding a balance between the pressures to engage in the research and a regulatory framework that effectively manages any potential risks. If there's an inadequate regulatory apparatus in place relative to the pressures to participate, then the field is "underregulated." Conversely, if there's too much regulation, the field may be at risk of becoming "overregulated."

Given the significant risks associated with GoF research, it requires a high level of regulation compared to other public service research areas that have similarly limited pressures to participate. However, because profit is not a driving force, the field can only tolerate a certain amount of regulation before participation becomes difficult.

Rather than focusing on increasing regulation dramatically or maintaining the status quo, we should look to refine and improve regulation for GoF research. While some scope exists to tighten regulations, excessive regulation could stifle the field altogether, which may or may not be desirable. If we wish the field to continue while enhancing the risk-benefit ratio, our focus should be on regulating the field proportionately to the pressures to participate.

It's time to shift the discussion from "how regulated is the field" to "how regulated is the field relative to the pressures to participate." By doing so, we can strike a balance between promoting the field's progress and ensuring appropriate risk management.

The international community funded a database of coronaviruses that was held by the lab in Wuhan. In September 2019, the month when the Chinese military took over the lab, that database was taken offline.

If that database had been important for pandemic prevention and vaccine development, I would have expected the virologists to publicly write op-eds calling on China to release the data. That they didn't is a clear statement about how useful they think that data is for pandemic prevention, and about how afraid they are of people looking critically at the Wuhan Institute of Virology.

I'm curious to know whether and to what extent we've considered ways to reward basic science researchers for making pandemic-mitigating discoveries in a public health context. 

The virologists seemed to ignore the basic science questions such as "How do these viruses spread?" and "Are they airborne?" that actually mattered. 

Asking those questions would mean doing more biomedical research that isn't gain- or loss-of-function.

have to explain how they'd motivate already-beleaguered scientists to do GoF research when their proposal is "even more stick, still no carrot."

That assumes that it's important to motivate them to do GoF research. It seems that GoF research served for them as a distraction from doing the relevant research.

If that database had been important for pandemic prevention and vaccine development, I would have expected the virologists to publicly write op-eds calling on China to release the data. That they didn't is a clear statement about how useful they think that data is for pandemic prevention, and about how afraid they are of people looking critically at the Wuhan Institute of Virology.

Are you sure that virologists didn't write such op-eds?

The virologists seemed to ignore the basic science questions such as "How do these viruses spread?" and "Are they airborne?" that actually mattered. 

My understanding is that in the US, they actually studied these questions hard and knew about things like airborne transmission and asymptomatic spread pretty early on, but were suppressed by the Trump administration. That doesn't excuse them – they ought to have grown a spine! – but it's important to recognize the cause of failure accurately so that we can work on the right problem.

Are you sure that virologists didn't write such op-eds?

Pretty much; when I googled the fact that they took down the database, I found no such op-eds. If you have any evidence to the contrary I would love to see it.

Talking about the fact that it was wrong of them to take down the database points to the fact that the early lab-leak denial was bullshit, and that the virologists cared about nobody finding out that the arguments they made were bullshit.

Jeremy Farrar describes in his book that one of the key arguments they used to reject the lab leak theory was the huge distance from the openly published sequences to the COVID-19 sequence. That argument becomes a lot weaker when you factor in that the military took over the lab in September 2019, and that in that month they took down their database.

The virologists cared more about keeping the public uninformed about what happened at the Wuhan Institute of Virology than they cared about the database being available to help fight the pandemic.

My understanding is that in the US, they actually studied these questions hard and knew about things like airborne transmission and asymptomatic spread pretty early on, but were suppressed by the Trump administration.

Knowing that airborne transmission matters has consequences about what actions you want to take. 

When the Japanese health authorities advised at the beginning of the pandemic to avoid closed spaces with poor ventilation, US and EU authorities didn't give that advice.

I find it pretty unlikely that the reason Fauci et al. didn't give the same advice of avoiding closed spaces that the Japanese authorities gave was that the Trump administration didn't want them to tell people to avoid closed spaces, preferring instead the advice of telling people to wash their hands.

One of the corollaries of "avoid closed spaces with poor ventilation" is that forbidding people from meeting each other outside is bad policy. 

The 1.5 meter distance recommendation makes little sense with airborne spread but was quite central to pandemic guidance.

There's some research that suggests that flu transmission in schools can be reduced by controlling the humidity level. There's a good chance that you can also reduce COVID-19 transmission by controlling indoor humidity, but the virologists didn't care enough to do the basic research to establish that, which could have supported a policy that all public buildings be humidity-controlled.

There was no ramp-up of indoor ventilation production at the start of the pandemic, but that would have been the reasonable step if the problem had been seen as one of airborne transmission.

The WHO took two years to acknowledge airborne transmission. If the virologist community had done its job, it would have explained to the WHO early on that it had to acknowledge airborne transmission or be branded by the virologists as science deniers.

I was curious about your suggestion that a lot of researchers think that basically all biomedical research is gain/loss of function.

I'm not completely clear on what context the researchers were speaking to, but a standard strategy for figuring out what genes do is to knock out (loss of function) the gene of interest in a model organism and observe what happens. Synthetic biology also involves a lot of 'gain-of-function' engineering, e.g. making microbes produce insulin.

My impression is that one of the key defenses that the Fauci/NIH/EcoHealth/etc. offered for their research in Wuhan was that it was technically not Gain of Function, even if some parts of it might sound like Gain of Function to the layperson, which seems in tension with this claim.

It not only sounds that way to a layperson. The NIH stopped the EcoHealth grant that was partly paying for the research in Wuhan for a short time in 2016. When they renewed the grant, Peter Daszak from EcoHealth wrote back:

"This is terrific! We are very happy to hear that our Gain of Function research funding pause has been lifted."

Fauci himself wrote an email on the 1st of February 2020 that had one of the studies attached, with the file name "Baric, Shi et al - Nature medicine - SARS Gain of function".

What Fauci/NIH/EcoHealth is saying seems to be something like "when people say 'gain of function' they really mean ePPP, and the research they funded in Wuhan wasn't ePPP because we never put it through the P3CO process that could have decided that it was an ePPP".

Without getting too far into the specifics, I think that this is a good attitude to have across a wide range of policy concerns, and that similar issues apply to other policy areas EAs are interested in.

jtm

Hi Elika, 

Thanks for writing this, great stuff!  

I would probably frame some things a bit differently (more below), but I think you raise some solid points, and I definitely support the general call for nuanced discussion.

I have a personal anecdote that really speaks to your "do your homework" point. When doing research for our 2021 article on dual-use risks (thanks for referencing it!), I was really excited about our argument for implementing "dual-use evaluation throughout the research life cycle, including the conception, funding, conduct, and dissemination of research." The idea that effective dual-use oversight requires intervention at multiple points felt solid, and some feedback we'd gotten on presentations of our work gave me the impression that this was a fairly novel framing.

It totally wasn't! NSABB called for this kind of oversight throughout the research cycle (at least) as early as 2007,[1] and, in hindsight, it was pretty naïve of me to think that this simple idea was new. In general, it's been a pretty humbling experience to read more of the literature and realise just how many of the arguments that I thought were novel based on their appearance in recent op-eds and tweets can be found in discussions from 10, 20, or even 50 years ago.

Alright, one element of your post that I would've framed differently: You put a lot of emphasis on the instrumental benefits of nuanced discussion in the form of building trust and credibility, but I hope readers of your post also realise the intrinsic value of being more nuanced.

E.g., from the summary:

"[what you say] does impact how much you are trusted, whether or not you are invited back to the conversation, and thus the potential to make an impact"

And the very last sentence:

 "Always make sure ‘you’re invited back to the table’.

This is a great point, and I really do think it's possible to burn bridges and lose respect by coming across as ignorant or inflammatory. But getting the nuanced details wrong is also a recipe for getting solutions wrong! As you say, proper risk-benefit analysis for concrete dual-use research is almost always difficult, given that the research in question very often has some plausible upside for pandemic preparedness or health more generally.

And even if you know what specific research to draw red lines around, implementation is riddled with challenges: How do you design rules that won't be obsolete with scientific advances?  How do you make criteria that won't restrict research that you didn't intend to restrict? How do you avoid inadvertent attention hazards from highlighting the exact kinds of research that seem the most risky? Let's say you've defined the perfect rules. Who should be empowered to make the tough judgment calls on what to prohibit? If you're limiting access to certain knowledge, who gets to have that access? And so on, and so on.

I do think there's value in strongly advocating for more robust dual-use oversight or lab biosafety, and (barring infohazard concerns), I think op-eds aimed at both policymakers and the general public can be helpful. It's just that I think such advocacy should be more in the tone of "Biosecurity is important, and more work on it is urgently needed" and less "Biosecurity Is Simple, I Would Just Ban All GOF." 

Bottom line, I especially like the parts of your post that encourage people to be more nuanced, not just sound more nuanced.

  1. ^

    From Casadevall 2015: "In addition to defining the type of research that should elicit heightened concern, the NSABB recommended that research be examined for DURC potential throughout its life span, from experimental conception to final dissemination of the results."

Just echoing the experience of "it's been a pretty humbling experience to read more of the literature"; biosecurity policy has a long history of good ideas and nuanced discussions. On US gain-of-function policy in particular, I found myself particularly humbled by the 2015 article Gain-of-function experiments: time for a real debate, an adversarial collaboration between researchers involved in controversial viral gain-of-function work and biosecurity professionals who had argued such work should face more scrutiny. It's interesting to see where the contours of the debate have changed and how much they haven't changed in the past 7+ years.

Thanks!! Strongly agree on your points of the intrinsic value of understanding and being nuanced in this space, I just didn't have the words to frame it as well as you put it :)

ASB

Just wanted to give my hearty +1 to approaching biosecurity issues with humility and striving to gain important context (which EAs often lack!)

Thanks for this post! I agree with your point about being careful on terms, and thought it might be useful to collect a few definitions together in a comment.

DURC (Dual-Use Research of Concern)

DURC is defined differently by different organizations. The WHO defines it as:

research that is intended to provide a clear benefit, but which could easily be misapplied to do harm

while the definition given in the 2012 US government DURC policy is:

life sciences research that, based on current understanding, can be reasonably anticipated to provide knowledge, information, products, or technologies that could be directly misapplied to pose a significant threat with broad potential consequences to public health and safety, agricultural crops and other plants, animals, the environment, materiel, or national security

ePPP (enhanced Potential Pandemic Pathogen)

ePPP is a term (in my experience) mostly relevant to the US regulatory context, and was set out in the 2017 HHS P3CO Framework as follows:

A potential pandemic pathogen (PPP) is a pathogen that satisfies both of the following:

  1. It is likely highly transmissible and likely capable of wide and uncontrollable spread in human populations; and
  2. It is likely highly virulent and likely to cause significant morbidity and/or mortality in humans.

An enhanced PPP is defined as a PPP resulting from the enhancement of the transmissibility and/or virulence of a pathogen. Enhanced PPPs do not include naturally occurring pathogens that are circulating in or have been recovered from nature, regardless of their pandemic potential.

One way in which this definition has been criticized (quoting the recent NSABB report on updating the US biosecurity oversight framework) is that "research involving the enhancement of pathogens that do not meet the PPP definition (e.g., those with low or moderate virulence) but is anticipated to result in the creation of a pathogen with the characteristics described by the PPP definition could be overlooked."
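To make the logical structure of the definition, and the criticized gap, concrete, here is a toy sketch in Python (a minimal illustration only; the type and field names are invented and don't correspond to any real oversight system):

```python
from dataclasses import dataclass

@dataclass
class Pathogen:
    highly_transmissible: bool   # likely capable of wide, uncontrollable spread
    highly_virulent: bool        # likely to cause significant morbidity/mortality
    enhanced: bool               # transmissibility/virulence enhanced by research
    recovered_from_nature: bool  # naturally occurring / circulating strain

def is_ppp(pathogen: Pathogen) -> bool:
    # Under the P3CO definition, both criteria must hold.
    return pathogen.highly_transmissible and pathogen.highly_virulent

def is_eppp(pathogen: Pathogen) -> bool:
    # An ePPP is a PPP that results from enhancement; natural isolates
    # are excluded regardless of their pandemic potential.
    return (is_ppp(pathogen)
            and pathogen.enhanced
            and not pathogen.recovered_from_nature)
```

In these terms, the NSABB's criticism is that an oversight trigger which evaluates `is_ppp` on the starting strain can overlook research that enhances a low- or moderate-virulence pathogen into something satisfying the PPP definition only after the work is done.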

GOF (Gain-of-Function)

GOF is not a term that I know to have a clear definition. In the linked Virology under the microscope paper, examples range from making Arabidopsis (a small flowering model plant) more drought-resistant to making H5N1 (avian influenza) transmissible between mammals. I suggest avoiding this term if you can. (The paper acknowledges the term is fuzzily defined, citing The shifting sands of ‘gain-of-function’ research.)

Biosafety, biosecurity, biorisk

The definitions you gave in the footnote seem solid, and similar to the ones I'd offer, though one runs into competing definitions (e.g. the definition provided for biosafety doesn't mention unintentional exposure). I will note that EA tends to treat "biosecurity" as an umbrella term for "reducing biological risk" in a way that doesn't reflect its usage in the biosecurity or public health communities. Also, as far as I can tell, Australia means a completely different thing by "biosecurity" than the rest of the English-speaking world, which will sometimes lead to confusing Google results.

Thanks!! This is great additional detail.

On “learning from people outside EA and those who slightly disagree with EA views” I highly recommend reading everything by Dr Filippa Lentzos: https://www.filippalentzos.com/.

Also, subscribe to the Pandora Report newsletter:
https://pandorareport.org/

Global Biodefense was great but sadly seems to have become inactive: https://globalbiodefense.com/

Strong +1!! Thanks :)

I think it's quite sensible that people hoping to have a positive impact in biosecurity should become well-informed first. However, I don't think this necessarily means that radical positions that would ban a lot of research are wrong, even if they are more often supported by people with less detailed knowledge of the field. I'm not accusing you of saying this, I just want to separate the two issues.

Many professionals in this space are scared and stressed. Adding to that isn't necessarily building trust and needed allies. The professionals in this space are good people – no reputable virologist is trying to do research that intentionally releases or contributes to a pandemic. Biosafety professionals spend their lives working to prevent lab leaks. If I'm being honest, many professionals in and around the biosecurity field don't think incredibly highly of recent (the past few years) journalistic efforts and calls for total research bans.

Many people calling for complete bans think that scientists are unreliable on this - because they want to continue to do their work, and may not be experts in risk - and the fact that said scientists do not like people doing this doesn't establish that anyone calling for a complete ban is wrong to do so. 

As a case in point regarding the unreliability of involved scientists: your reference number 6 repeatedly states that there is "no evidence for a laboratory origin of SARS-CoV-2", while citing arguments around the location of initial cases and the phylogeny of SARS-CoV-2 as evidence for a zoonotic emergence. However, a survey of BSL-3 facilities in China found that 53% of associated coronavirus-related Nature publications were produced by Wuhan-based labs between 2017 and 2019, and it is extremely implausible that Wuhan bears 50% of the risk for novel zoonotic virus emergence in all of China! (It's possible that the authors of that survey erred – they do seem ideologically committed to the lab leak theory.) Furthermore, I have, to the best of my ability, evaluated arguments about the presence of the furin cleavage site in the SARS-CoV-2 genome, and my conclusion is that it is around 5 times as likely to be present in the lab origin scenario (accounting for the fact that the WIV is an author on a proposal to insert such sites into SARS-like coronaviruses; also, I consider anywhere from 1.1 to 20 times as likely to be plausible). One can debate the relative strength of different pieces of evidence – and many have – but the claim that there is evidence on one side and none on the other is not plausible in my view, and I at least don't trust that anyone making such a claim is able to capably adjudicate questions about risks of certain kinds of pathogen research.

(not that it's especially relevant, but I currently think the case for zoonosis is slightly stronger than the case for a lab leak, I just don't think you can credibly claim that there's no evidence that supports the lab leak theory)
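To make the size of the update described above concrete, here is the standard odds form of Bayes' theorem (a minimal worked illustration: the Bayes factor of roughly 5 and the 1.1–20 range are the figures from the comment above, FCS denotes the presence of the furin cleavage site, and the prior odds below are purely an assumed placeholder):

$$ \underbrace{\frac{P(\text{lab} \mid \text{FCS})}{P(\text{zoonosis} \mid \text{FCS})}}_{\text{posterior odds}} = \underbrace{\frac{P(\text{FCS} \mid \text{lab})}{P(\text{FCS} \mid \text{zoonosis})}}_{\text{Bayes factor}\ \approx\ 5} \times \underbrace{\frac{P(\text{lab})}{P(\text{zoonosis})}}_{\text{prior odds}} $$

For example, assumed prior odds of 1:10 (lab : zoonosis) would move to 1:2 at a Bayes factor of 5, and anywhere from roughly 1:9 to 2:1 across the 1.1–20 range – a genuine update in every case, which is why a flat claim of "no evidence" for one side is hard to sustain.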


A little bit of proofreading:

"stating confidentiality" → "stating confidently"

"You don’t likely don’t know more than professionals" → "You likely don't know"

Thanks! I do broadly agree with your points. I linked reference 6 as an example of the benefits and nuances of dual-use research, but I don't / shouldn't comment on COVID-19 origins or the views it expresses on them.

I'm not that knowledgeable about the biosecurity field, but from afar I've thought that EAs working in biosecurity tend to be some of the most collaborative, professional, and respectful of non-EA contributions. This post is an example of that; thanks for sharing.

Do you have any additional sources you'd recommend? Things you didn't add to the initial list for whatever reason? I'd like to read widely on this topic.

Thanks! I don't have any other ones I broadly recommend, but I'm happy to share topic-specific resources if there's something in particular you're interested in.

Great! Might reach out in a few weeks (when I plan to dive into bio resources)

Great post! This matches closely with the lessons I've learned and the advice I've given after working with virologists and biosafety professionals. I'd also give a plug for the book Biosecurity Dilemmas for anyone who wants a deeper dive into the trade-offs in this space.
