Laboratory biosafety and biosecurity (collectively, biorisk management or BRM) could benefit from more involvement from social scientists. Earlier this year, I co-authored an article called "Motivating Proactive Biorisk Management" on this topic. Here, I'd like to briefly walk through the core arguments of the article (self-quoting in several places) and then outline a few hypothetical examples of novel interventions derived from its ideas. I hope that this post and the original article contribute to future collaborations between social scientists and biosecurity professionals.
This article represents my personal opinions and does not reflect those of my employer, Gryphon Scientific.
Biorisk management (BRM) encompasses a broad set of practices that life scientists can follow to mitigate the risks of their work. It includes things like following safety and security protocols, reporting suspicious activities in and around the lab, and modifying or stopping a research project if you believe that its results could be misused to cause harm.
Life scientists face external pressure from regulators to practice BRM, but sometimes external rules cannot be effectively enforced or haven’t been created yet. In these situations, it is important for life scientists to practice proactive BRM - to be vigilant about potential biorisks and take steps to mitigate them, even when nobody else is looking.
Unfortunately, research suggests that many life scientists may not be very motivated to proactively manage biorisks. Much is still unknown about life scientists’ opinions on BRM, especially outside of the US, but the studies that do exist are concerning. For example, in a series of annual surveys of laboratory safety staff and scientists at a US national conference, the most commonly cited barrier to improving laboratory safety (named by almost 50% of each group) was “competing priorities,” the second was “apathy,” and the fourth was “time and hassle factors.” There are no high-quality surveys on the topic of managing risks of deliberate misuse, but in interview studies, non-trivial fractions of life scientists have expressed the ideas that such risks are virtually nonexistent, that they are present but unstoppable, and that the benefits of research virtually always outweigh the risks. (See the full paper for a more complete discussion.)
Despite these findings, little effort goes directly into understanding or changing life scientists’ attitudes about BRM, or into providing compelling arguments and narratives about its importance. Most existing biosafety and biosecurity training focuses almost entirely on imparting technical skills, like how to decontaminate equipment or use PPE. It makes little effort to persuade, engage, motivate, or inspire life scientists to practice these skills when nobody else is looking, or to think critically about how to prevent their work from being misused. Relevant research exists on the topic of “safety culture,” but the field is underdeveloped.
Lessons from the social and behavioral sciences can and should be adapted to promote proactive biorisk management. For example, literature on social norms, persuasion, attitude change, and habit formation could be used to design and test behavior-change interventions. The bar is low; researchers have not rigorously tested interventions to change life scientists' proactive BRM practices. Funders should support social scientists and biorisk experts to partner with life scientists on programs of applied research that start with interviews and surveys and build toward scalable and testable interventions.
To illustrate, here are three sketches of possible social-science interventions to promote proactive BRM that could be piloted and evaluated in field settings. The full paper includes references, more intervention ideas, and more detailed thoughts about implementation and evaluation.
Listening tours for proactive biosafety
Labs are more likely to stay safe when scientists and their institutional biosafety staff maintain strong working relationships and see themselves as being on the same team. Unfortunately, relationships between scientists and safety staff are often strained. Scientists may fear that interacting with safety staff will slow down their work, so they fail to ask questions or raise concerns. Research on safety in chemistry labs has also found that scientists sometimes offload responsibility for safety onto staff in ways that are not justifiable. For example, a scientist who notices a malfunctioning piece of equipment might assume that staff know about the malfunction and would not allow it to continue if it were truly risky. In fact, safety staff often rely on scientists to tell them about malfunctions and other anomalies.
One approach to improve scientist-staff relations in the life sciences is for biosafety staff to conduct periodic “listening tours” with life science laboratories, as is already practiced by executives in many private firms. Biosafety staff could attend existing lab meetings to introduce themselves, assure the lab that they are not conducting an audit, and ask the lab members to teach them about the possible safety risks involved in their subfield (not necessarily their particular laboratory). Staff could close the conversation by thanking the group and requesting advice on how to reduce the burdens of risk management and how to communicate with other life scientists about the importance of laboratory safety.
By positioning themselves as learners, biosafety staff can accomplish several psychologically potent goals at once. They can signal that they are not omniscient, cast scientists in a position of responsibility and authority regarding laboratory safety, and convey the potential for a friendly, collaborative relationship in the future. This effort could also give life scientists practice thinking about how their own work could be unsafe without fear of being audited, and may give staff valuable information about novel safety risks and about ways of making risk management less costly.
One potential example of this approach in practice can be found at Colorado State University, which oversees a large and complex life science research infrastructure. The CSU Biosafety Office conducts outreach visits to life scientists with the goals of establishing caring and friendly relationships and positioning themselves as helpful supporters of scientists' own values of personal safety. According to staff reports, the upfront work of building relationships pays off later with smoother future interactions and a stronger safety culture. Their approach could be studied and scaled.
Shifting social norms in laboratories
Social norms are powerful determinants of workplace behavior, and social psychologists have a long history of successfully shifting behavior by changing norms. In a study published in 2020, social psychologists involved with the open-science movement sought to encourage academic lab scientists to use a formal policy to decide the order of authorship on published papers. The psychologists ran a randomized controlled trial across 30 labs to test a “lab-embedded discourse intervention” - essentially a semi-structured lab meeting - designed to shift norms and attitudes, and found statistically significant effects on lab members' self-reported use of a formal authorship policy four months later.
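As an aside for the methodologically curious: evaluating this kind of lab-randomized ("cluster-randomized") design has a wrinkle worth knowing about. Whole labs are assigned to the intervention, but outcomes are measured on individual lab members, so the analysis should account for members of the same lab resembling one another. Below is a minimal sketch in Python of one standard way to do this (a mixed-effects model with a random intercept per lab); the data are simulated, and every number in it is an illustrative assumption, not a value from the cited study.

```python
# Hypothetical sketch: analyzing a lab-randomized trial in which whole labs
# receive the intervention but outcomes are measured on individual members.
# All data and parameters are simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_labs, members_per_lab = 30, 8
lab = np.repeat(np.arange(n_labs), members_per_lab)
treated = (lab < n_labs // 2).astype(int)     # half the labs get the intervention
lab_effect = rng.normal(0, 0.5, n_labs)[lab]  # shared within-lab variation
# Simulated outcome: self-reported norm adoption on a 1-7 scale at follow-up
outcome = 4.0 + 0.6 * treated + lab_effect + rng.normal(0, 1.0, len(lab))

df = pd.DataFrame({"lab": lab, "treated": treated, "outcome": outcome})

# A random intercept per lab accounts for members of the same lab being more
# alike than members of different labs; ignoring this clustering would
# overstate the precision of the estimated treatment effect.
result = smf.mixedlm("outcome ~ treated", df, groups=df["lab"]).fit()
print(result.summary())
```

The headline number is the coefficient on `treated`, reported with a standard error that respects the clustering.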
Deciding authorship is a sensitive topic in academia - best practices might not be widely known, and it can be uncomfortable to raise the subject with your peers unless you know how they feel. Many areas of proactive BRM are like this. Imagine asking your labmate, “Hey… would you mind wearing proper PPE in the lab?” Or worse yet, “Hey… I’m worried that our research could be used as a weapon. Will you support me if we talk to our professor about it?”
It might be possible to design interventions that shift social norms in labs around proactive BRM. For example, social scientists could work with life scientists to design a semi-structured lab meeting that creates common knowledge among lab members that they all care about biorisk concerns. To assess effectiveness, lab members could be surveyed about whether they would be willing and able to raise a biorisk concern if they had one. Effective interventions could then be promoted at academic conferences, scaled, targeted to high-consequence labs, and embedded in academic institutions as part of onboarding (as is done with other promising interventions).
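For readers wondering what assessing effectiveness would take in practice, here is a hypothetical back-of-the-envelope power calculation for such a lab-randomized survey outcome, sketched in Python. The effect size, intraclass correlation, and lab size are assumptions chosen purely for illustration.

```python
# Hypothetical power calculation for a lab-randomized norms intervention.
# Effect size, ICC, and lab size are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

effect_size = 0.5     # assumed standardized effect (Cohen's d)
icc = 0.10            # assumed intraclass correlation within labs
members_per_lab = 8   # assumed average lab size

# Members needed per arm if they were statistically independent
n_independent = TTestIndPower().solve_power(
    effect_size=effect_size, power=0.8, alpha=0.05)

# Clustering inflates the required sample by the "design effect"
design_effect = 1 + (members_per_lab - 1) * icc
n_per_arm = n_independent * design_effect
print(f"~{n_per_arm:.0f} members (~{n_per_arm / members_per_lab:.0f} labs) per arm")
```

Under these assumptions, the trial would need roughly 14 labs per arm, which is in the same ballpark as the 30-lab authorship study described above; smaller assumed effects or higher within-lab correlation would push the number up quickly.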
Designing compelling dual-use education programs
In the context of the life sciences, research is considered “dual-use” if it can be misused to cause harm. There is increasing international agreement that life scientists should consider the dual-use potential of their work and adopt codes of conduct to minimize the potential for misuse. See the Tianjin Biosecurity Guidelines for Codes of Conduct for Scientists and the WHO Global Guidance Framework for the Responsible Use of the Life Sciences for two recent examples.
However, life scientists are rarely formally taught about dual-use issues. (Former US Assistant Secretary of Defense and biosecurity expert Andy Weber recently called this “shocking” on the EA Forum.) Existing curricular materials cover some aspects of dual-use issues, but they have not been compiled, tested, or translated into common languages, and their quality likely varies greatly. (For example, I’m skeptical that comic books are a compelling format.)
Life scientists need compelling, comprehensive, widely accessible, off-the-shelf online dual-use education programs. Such programs could be developed and tested by educators in partnership with biosecurity experts and life scientists, and government bodies and/or private funders could require them as a precondition for funding or accreditation.
I expect this topic to be somewhat controversial on this forum because of concerns about creating information hazards. While I remain open to changing my mind about the value of dual-use education, I want to offer a couple of thoughts about mitigating information hazards. First, dual-use education programs do not need to go into extensive detail about particular risks to be effective. Second, dual-use education programs should include guidance on responsible disclosure to avoid propagating information hazards. The details of how to do so are outside the scope of this post, but for example, they might involve privately discussing concerns with labmates before blurting them out on social media.
How can I get involved?
If you are interested in learning more, I encourage you to read the full paper from which this article was drawn.
If you think that you might have the skills and motivation to contribute to any of these or similar interventions, I welcome you to contact me by email or via direct message on this Forum. I’m hoping to build a community of people at the intersection of BRM and the social sciences.
If you aren't one of these people but know someone who might be a good fit, please consider reaching out to that person about getting involved.
If you are interested in funding work in this space, please comment below to let others know.
Thanks to Andrew Sharo, Tessa Alexanian, Ryan Ritterson, and Will Bradshaw for feedback. Thanks to Will Bradshaw for his original post “Biosecurity needs engineers and social scientists,” from which I shamelessly cribbed.
This work is licensed under a Creative Commons Attribution 4.0 International License.