
Laboratory biosafety and biosecurity (collectively, biorisk management or BRM) could benefit from more involvement from social scientists. Earlier this year, I co-authored an article called "Motivating Proactive Biorisk Management" on this topic. Here, I'd like to briefly walk through the core arguments of the article (self-quoting in several places) and then outline a few hypothetical examples of novel interventions derived from its ideas. I hope that this post and the original article contribute to future collaborations between social scientists and biosecurity professionals.

This article represents my personal opinions and does not reflect the opinions of my employer, Gryphon Scientific.

Core claims

Biorisk management (BRM) encompasses a broad set of practices that life scientists can follow to mitigate the risks of their work. It includes things like following safety and security protocols, reporting suspicious activities in and around the lab, and modifying or stopping a research project if you believe that its results could be misused to cause harm.

Life scientists face external pressure from regulators to practice BRM, but sometimes external rules cannot be effectively enforced or haven’t been created yet. In these situations, it is important for life scientists to practice proactive BRM - to be vigilant about potential biorisks and take steps to mitigate them, even when nobody else is looking.

Unfortunately, research suggests that many life scientists may not be very motivated to proactively manage biorisks. Much is still unknown about life scientists’ opinions on BRM, especially outside of the US, but enough studies have been done to give cause for concern. For example, in a series of annual surveys of laboratory safety staff and scientists at a US national conference, the most commonly cited barrier to improving laboratory safety (with almost 50% of each group agreeing) was “competing priorities,” the second-most commonly cited barrier was “apathy,” and the fourth was “time and hassle factors.” There are no high-quality surveys on the topic of managing risks of deliberate misuse, but in interview studies, non-trivial fractions of life scientists have expressed the ideas that risks are virtually nonexistent, that risks are present but unstoppable, and that the benefits of research virtually always outweigh the risks. (See the full paper for a more complete discussion.)

Despite these findings, little effort is being put directly into understanding or changing life scientists’ attitudes about BRM or providing compelling arguments and narratives about the importance of BRM. Most existing biosafety and biosecurity training focuses entirely on imparting technical biosafety and biosecurity skills, like how to decontaminate equipment or use PPE. It makes little effort to persuade, engage, motivate, or inspire life scientists to practice these skills when nobody else is looking, or to think critically about how to prevent their work from being misused. Relevant research exists on the topic of “safety culture,” but the field is underdeveloped.

Lessons from the social and behavioral sciences can and should be adapted to promote proactive biorisk management. For example, literature on social norms, persuasion, attitude change, and habit formation could be used to design and test behavior-change interventions. The bar is low; researchers have not rigorously tested interventions to change life scientists' proactive BRM practices. Funders should support social scientists and biorisk experts to partner with life scientists on programs of applied research that start with interviews and surveys and build toward scalable and testable interventions.

Example interventions

To illustrate, here are three sketches of possible social-science interventions to promote proactive BRM that could be piloted and evaluated in field settings. The full paper includes references, more intervention ideas, and more detailed thoughts about implementation and evaluation.

Listening tours for proactive biosafety

Labs are more likely to maintain biosafety when scientists and their institutional biosafety staff maintain strong working relationships and see themselves as being on the same team. Unfortunately, the relationships between scientists and safety staff are often strained. Scientists may fear that interacting with safety staff will slow down their work, so they fail to ask questions or tell staff about their concerns. Research on safety in chemistry labs has also found that scientists can sometimes offload responsibility for safety onto staff in ways that are not justifiable. For example, if a scientist notices a malfunctioning piece of equipment, they might assume that staff know about the malfunction and would not allow it to continue if it were truly risky. In fact, safety staff often rely on scientists to let them know about malfunctions and other anomalies.

One approach to improve scientist-staff relations in the life sciences is for biosafety staff to conduct periodic “listening tours” with life science laboratories, as is already practiced by executives in many private firms. Biosafety staff could attend existing lab meetings to introduce themselves, assure the lab that they are not conducting an audit, and ask the lab members to teach them about the possible safety risks involved in their subfield (not necessarily their particular laboratory). Staff could close the conversation by thanking the group and requesting advice on how to reduce the burdens of risk management and how to communicate with other life scientists about the importance of laboratory safety.

By positioning themselves as learners, biosafety staff members can accomplish several psychologically potent goals simultaneously. They can send the message that they do not have all the answers, place scientists in a position of responsibility and authority regarding laboratory safety, and convey the potential for a friendly, collaborative relationship in the future. This effort could also give life scientists practice thinking about how their own work could be unsafe, without the fear of being audited, and may give staff valuable information about novel safety risks and ways of making risk management less costly.

One potential example of this approach in practice can be found at Colorado State University, which oversees a large and complex life science research infrastructure. The CSU Biosafety Office conducts outreach visits to life scientists with the goals of establishing caring and friendly relationships and positioning themselves as helpful supporters of scientists' own values of personal safety. According to staff reports, the upfront work of building relationships pays off later with smoother future interactions and a stronger safety culture. Their approach could be studied and scaled.

Shifting social norms in laboratories

Social norms are powerful determinants of workplace behavior, and social psychologists have a long history of successfully shifting behavior by changing norms. In a study published in 2020, social psychologists involved with the open-science movement sought to encourage academic lab scientists to use a formal policy to decide the order of authorship on published papers. The psychologists ran a randomized controlled trial across 30 labs to test a “lab-embedded discourse intervention” - essentially a semi-structured lab meeting - designed to shift norms and attitudes, and found statistically significant effects 4 months later on lab members' self-reports of using a formal authorship policy.

Deciding authorship is a sensitive topic in academia - best practices might not always be widely known, and it can be uncomfortable to bring it up among your peers unless you know how they feel. Many areas of proactive BRM are like this. Imagine asking your labmate, “Hey… would you mind wearing proper PPE in the lab?” Or worse yet, “Hey… I’m worried that our research could be used as a weapon, will you support me if we talk to our professor about it?”

It might be possible to design interventions that shift social norms in labs around proactive BRM. For example, social scientists could work with life scientists to design a semi-structured lab meeting that creates common knowledge among lab members that they all care about biorisk concerns. To assess effectiveness, life scientists could be surveyed about whether they would be willing and able to raise a biorisk concern if they had one. Effective interventions could then be promoted at academic conferences, scaled, targeted to high-consequence labs, and embedded in academic institutions as part of onboarding (as is done with other promising interventions).
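To make the evaluation idea a bit more concrete, here is a minimal simulation sketch of how such a trial might be powered. It assumes whole labs are randomized either to the norm-shifting meeting or to a control condition, and that lab members later rate their willingness to raise a biorisk concern on a 1-7 scale. Every number in it (labs per arm, respondents per lab, effect size, between-lab variation) is an illustrative assumption, not an estimate from real data.

```python
# Hypothetical power simulation for a cluster-randomized trial of a
# norm-shifting lab meeting. All parameter values below are assumptions
# chosen for illustration only.

import numpy as np

rng = np.random.default_rng(0)

N_LABS_PER_ARM = 20    # labs randomized to each arm (assumed)
MEMBERS_PER_LAB = 6    # survey respondents per lab (assumed)
EFFECT_SIZE = 0.5      # assumed shift in mean willingness (1-7 scale)
LAB_SD = 0.6           # between-lab variation (drives clustering)
MEMBER_SD = 1.0        # within-lab variation
N_SIMULATIONS = 2000

def simulate_trial():
    """Simulate one trial and test the arm difference at the lab level."""
    def lab_means(effect):
        # Each lab has its own baseline willingness; members vary around it.
        lab_baselines = rng.normal(4.0 + effect, LAB_SD, N_LABS_PER_ARM)
        members = rng.normal(lab_baselines[:, None], MEMBER_SD,
                             (N_LABS_PER_ARM, MEMBERS_PER_LAB))
        # Analyze lab means to respect the clustering of respondents.
        return members.mean(axis=1)

    treated, control = lab_means(EFFECT_SIZE), lab_means(0.0)
    diff = treated.mean() - control.mean()
    se = np.sqrt(treated.var(ddof=1) / N_LABS_PER_ARM +
                 control.var(ddof=1) / N_LABS_PER_ARM)
    return abs(diff / se) > 1.96  # rough two-sided z-test

power = np.mean([simulate_trial() for _ in range(N_SIMULATIONS)])
print(f"Estimated power under these assumptions: {power:.2f}")
```

Averaging to lab means is a crude but honest way to respect the fact that respondents are clustered within labs; a real evaluation would pre-register its outcomes and would likely use a mixed-effects model instead. Even this sketch, though, illustrates the basic design lesson that clustered interventions need many labs, not just many respondents, to be well powered.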

Designing compelling dual-use education programs

In the context of the life sciences, research is considered “dual-use” if it can be misused to cause harm. There is increasing international agreement that life scientists should consider the dual-use potential of their work and adopt codes of conduct to minimize the potential for misuse. See the Tianjin Biosecurity Guidelines for Codes of Conduct for Scientists and the WHO Global Guidance Framework for the Responsible Use of the Life Sciences for two recent examples.

However, life scientists are rarely formally taught about dual-use issues. (Former US Assistant Secretary of Defense and biosecurity expert Andy Weber recently called this “shocking” on the EA Forum.) Existing curricular materials cover some aspects of dual-use issues, but they have not been compiled, tested, or translated into common languages, and their quality likely varies greatly. (For example, I’m skeptical that comic books are a compelling format.)

Life scientists need compelling, comprehensive, and widely accessible off-the-shelf online dual-use education programs. Such programs could be developed and tested by educators in partnership with biosecurity experts and life scientists. Government bodies and/or private funders could require completion of such a program as a precondition for funding or accreditation.

I expect this topic to be somewhat controversial on this forum because of concerns about creating information hazards. While I remain open to changing my mind about the value of dual-use education, I want to offer a couple of thoughts about mitigating information hazards. First, dual-use education programs do not need to go into extensive detail about particular risks to be effective. Second, dual-use education programs should include guidance on responsible disclosure to avoid propagating information hazards. The details of how to do so are outside the scope of this post, but for example, they might involve privately discussing concerns with labmates before blurting them out on social media.

How can I get involved?

If you are interested in learning more, I encourage you to read the full paper from which this article was drawn.

If you think that you might have the skills and motivation to contribute to any of these or similar interventions, I welcome you to contact me by email or via direct message on this Forum. I’m hoping to build a community of people at the intersection of BRM and the social sciences.

If you aren't one of these people, but know someone who might be a good fit, please consider reaching out to that person about getting involved.

If you are interested in funding work in this space, please comment below to let others know.

Thanks to Andrew Sharo, Tessa Alexanian, Ryan Ritterson, and Will Bradshaw for feedback. Thanks to Will Bradshaw for his original post “Biosecurity needs engineers and social scientists,” from which I shamelessly cribbed.

This work is licensed under a Creative Commons Attribution 4.0 International License.
 

Comments

This seems really valuable!
There could be lessons learned from hospital initiatives around hand hygiene, where there were big cultural aspects (like doctors not expecting nurses to tell them what to do).

Thanks for writing this!

Lessons from the social and behavioral sciences can and should be adapted to promote proactive biorisk management. For example, literature on social norms, persuasion, attitude change, and habit formation could be used to design and test behavior-change interventions. The bar is low; researchers have not rigorously tested interventions to change life scientists' proactive BRM practices. Funders should support social scientists and biorisk experts to partner with life scientists on programs of applied research that start with interviews and surveys and build toward scalable and testable interventions.


I agree with all of this and believe that we undervalue the importance of diagnosing and changing behaviour in many areas of EA practice.

I think that this article provides useful theory and ideas for potential interventions.
