
Laboratory biosafety and biosecurity (collectively, biorisk management or BRM) could benefit from more involvement from social scientists. Earlier this year, I co-authored an article called "Motivating Proactive Biorisk Management" on this topic. Here, I'd like to briefly walk through the core arguments of the article (self-quoting in several places) and then outline a few hypothetical examples of novel interventions derived from its ideas. I hope that this post and the original article contribute to future collaborations between social scientists and biosecurity professionals.

This article represents my personal opinions and does not reflect the opinions of my employer, Gryphon Scientific.

Core claims

Biorisk management (BRM) encompasses a broad set of practices that life scientists can follow to mitigate the risks of their work. It includes things like following safety and security protocols, reporting suspicious activities in and around the lab, and modifying or stopping a research project if you believe that its results could be misused to cause harm.

Life scientists face external pressure from regulators to practice BRM, but sometimes external rules cannot be effectively enforced or haven’t been created yet. In these situations, it is important for life scientists to practice proactive BRM - to be vigilant about potential biorisks and take steps to mitigate them, even when nobody else is looking.

Unfortunately, research suggests that many life scientists may not be very motivated to proactively manage biorisks. Much is still unknown about life scientists’ opinions on BRM, especially outside of the US, but the studies that do exist give cause for concern. For example, in a series of annual surveys of laboratory safety staff and scientists at a US national conference, the most commonly cited barrier to improving laboratory safety (with almost 50% of each group agreeing) was “competing priorities,” the second-most commonly cited barrier was “apathy,” and the fourth was “time and hassle factors.” There are no high-quality surveys on the topic of managing risks of deliberate misuse, but in interview studies, non-trivial fractions of life scientists have expressed the ideas that risks are virtually nonexistent, that risks are present but unstoppable, and that the benefits of research virtually always outweigh the risks. (See the full paper for a more complete discussion.)

Despite these findings, little effort is being put directly into understanding or changing life scientists’ attitudes about BRM or providing compelling arguments and narratives about the importance of BRM. Most existing biosafety and biosecurity training focuses entirely on imparting technical biosafety and biosecurity skills, like how to decontaminate equipment or use PPE. It makes little effort to persuade, engage, motivate, or inspire life scientists to practice these skills when nobody else is looking, or to think critically about how to prevent their work from being misused. Relevant research exists on the topic of “safety culture,” but the field is underdeveloped.

Lessons from the social and behavioral sciences can and should be adapted to promote proactive biorisk management. For example, literature on social norms, persuasion, attitude change, and habit formation could be used to design and test behavior-change interventions. The bar is low; researchers have not rigorously tested interventions to change life scientists' proactive BRM practices. Funders should support social scientists and biorisk experts to partner with life scientists on programs of applied research that start with interviews and surveys and build toward scalable and testable interventions.

Example interventions

To illustrate, here are three sketches of possible social-science interventions to promote proactive BRM that could be piloted and evaluated in field settings. The full paper includes references, more intervention ideas, and more detailed thoughts about implementation and evaluation.

Listening tours for proactive biosafety

Labs are more likely to maintain biosafety when scientists and their institutional biosafety staff maintain strong working relationships and see themselves as being on the same team. Unfortunately, the relationships between scientists and safety staff are often strained. Scientists may fear that interacting with safety staff will slow down their work, so they fail to ask questions or tell staff about their concerns. Research on safety in chemistry labs has also found that scientists can sometimes offload responsibility for safety onto staff in ways that are not justifiable. For example, if a scientist notices a malfunctioning piece of equipment, they might assume that staff know about the malfunction and would not allow it to continue if it were truly risky. In fact, safety staff often rely on scientists to let them know about malfunctions and other anomalies.

One approach to improve scientist-staff relations in the life sciences is for biosafety staff to conduct periodic “listening tours” with life science laboratories, as is already practiced by executives in many private firms. Biosafety staff could attend existing lab meetings to introduce themselves, assure the lab that they are not conducting an audit, and ask the lab members to teach them about the possible safety risks involved in their subfield (not necessarily their particular laboratory). Staff could close the conversation by thanking the group and requesting advice on how to reduce the burdens of risk management and how to communicate with other life scientists about the importance of laboratory safety.

By positioning themselves as learners, biosafety staff members can accomplish several psychologically potent goals simultaneously. They can send the message that they are not omniscient, place scientists in a position of responsibility and authority regarding laboratory safety, and convey the potential for a friendly, collaborative relationship in the future. This effort could also give life scientists practice thinking about how their own work could be unsafe without fear of being audited, and may give staff valuable information about novel safety risks and ways of making risk management less costly.

One potential example of this approach in practice can be found at Colorado State University, which oversees a large and complex life science research infrastructure. The CSU Biosafety Office conducts outreach visits to life scientists with the goals of establishing caring and friendly relationships and positioning themselves as helpful supporters of scientists' own values of personal safety. According to staff reports, the upfront work of building relationships pays off later with smoother future interactions and a stronger safety culture. Their approach could be studied and scaled.

Shifting social norms in laboratories

Social norms are powerful determinants of workplace behavior, and social psychologists have a long history of successfully shifting behavior by changing norms. In a study published in 2020, social psychologists involved with the open-science movement sought to encourage academic lab scientists to use a formal policy to decide the order of authorship on published papers. The psychologists ran a randomized controlled trial across 30 labs to test a “lab-embedded discourse intervention” - essentially a semi-structured lab meeting - designed to shift norms and attitudes, and found statistically significant effects 4 months later on lab members' self-reports of using a formal authorship policy.
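
To give a sense of the scale such a trial requires, here is a minimal power-analysis sketch for a cluster-randomized design in which whole labs are assigned to an arm and outcomes are measured on individual lab members. All of the numbers (lab count, lab size, intra-class correlation, effect size) are illustrative assumptions, not figures from the 2020 study.

```python
# Rough power calculation for a cluster-randomized trial of a lab-embedded
# intervention. Hypothetical parameters throughout.
import math
from scipy import stats

n_labs_per_arm = 15      # e.g. 30 labs total, split evenly across arms
members_per_lab = 8      # assumed average lab size
icc = 0.10               # assumed intra-class correlation within a lab
effect_size = 0.5        # assumed standardized difference between arms (Cohen's d)
alpha = 0.05

# Clustering inflates variance by the design effect, shrinking the effective
# sample size relative to randomizing individuals directly.
design_effect = 1 + (members_per_lab - 1) * icc
n_effective_per_arm = n_labs_per_arm * members_per_lab / design_effect

# Two-sample z-approximation for power at the effective sample size.
se = math.sqrt(2 / n_effective_per_arm)
z_alpha = stats.norm.ppf(1 - alpha / 2)
power = stats.norm.cdf(effect_size / se - z_alpha)

print(f"Design effect: {design_effect:.2f}")
print(f"Effective n per arm: {n_effective_per_arm:.1f}")
print(f"Approximate power: {power:.2f}")
```

Under these assumptions the design effect is about 1.7, so 120 lab members per arm behave statistically like roughly 70 independent respondents; the main design lesson is that adding labs buys more power than adding members within labs.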

Deciding authorship is a sensitive topic in academia - best practices might not always be widely known, and it can be uncomfortable to bring it up among your peers unless you know how they feel. Many areas of proactive BRM are like this. Imagine asking your labmate, “Hey… would you mind wearing proper PPE in the lab?” Or worse yet, “Hey… I’m worried that our research could be used as a weapon, will you support me if we talk to our professor about it?”

It might be possible to design interventions that shift social norms in labs around proactive BRM. For example, social scientists could work with life scientists to design a semi-structured lab meeting that creates common knowledge among lab members that they all care about biorisk concerns. To assess effectiveness, life scientists could be surveyed about whether they would be willing and able to raise a biorisk concern if they had one. Effective interventions could eventually be promoted at academic conferences, scaled, targeted to high-consequence labs, and embedded in academic institutions as part of onboarding (as is done with other promising interventions).
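
For illustration, the sketch below shows one way such survey responses could be analyzed, assuming labs are randomized to an intervention or control arm and the outcome is a binary self-report of willingness to raise a concern. The data are simulated, and generalized estimating equations (GEE) with a lab-level working correlation is just one reasonable way to handle the fact that responses within a lab are not independent.

```python
# Simulated analysis of a hypothetical cluster-randomized norms intervention.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulate 30 labs of 8 members each, half assigned to the intervention.
# The assumed intervention effect and lab-level variation are arbitrary.
rng = np.random.default_rng(0)
rows = []
for lab in range(30):
    arm = "intervention" if lab < 15 else "control"
    lab_effect = rng.normal(0, 0.5)               # shared lab-level variation
    base = 0.5 if arm == "intervention" else 0.0  # assumed effect on the log-odds scale
    for member in range(8):
        p = 1 / (1 + np.exp(-(base + lab_effect)))
        rows.append({"lab_id": lab, "arm": arm,
                     "would_raise_concern": int(rng.random() < p)})
df = pd.DataFrame(rows)

# GEE with an exchangeable working correlation accounts for clustering of
# responses within labs when estimating the intervention effect.
model = smf.gee(
    "would_raise_concern ~ arm",
    groups="lab_id",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
print(model.fit().summary())
```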

Designing compelling dual-use education programs

In the context of the life sciences, research is considered “dual-use” if it can be misused to cause harm. There is increasing international agreement that life scientists should consider the dual-use potential of their work and adopt codes of conduct to minimize the potential for misuse. See the Tianjin Biosecurity Guidelines for Codes of Conduct for Scientists and the WHO Global Guidance Framework for the Responsible Use of the Life Sciences for two recent examples.

However, life scientists are rarely formally taught about dual-use issues. (Former US Asst. Secretary of Defense and biosecurity expert Andy Weber recently called this “shocking” on the EA Forum.) Existing curricular materials cover some aspects of dual-use issues, but they have not been compiled, tested, or translated into common languages, and their quality likely varies greatly. (For example, I’m skeptical that comic books are a compelling format.)

Compelling, comprehensive, and widely accessible off-the-shelf online dual-use education programs should be available for life scientists. Such programs could be developed and tested by educators in partnership with biosecurity experts and life scientists. Government bodies and/or private funders could require them as a precondition for funding or accreditation.

I expect this topic to be somewhat controversial on this forum because of concerns about creating information hazards. While I remain open to changing my mind about the value of dual-use education, I want to offer a couple of thoughts about mitigating information hazards. First, dual-use education programs do not need to go into extensive detail about particular risks to be effective. Second, dual-use education programs should include guidance on responsible disclosure to avoid propagating information hazards. The details of how to do so are outside the scope of this post, but for example, they might involve privately discussing concerns with labmates before blurting them out on social media.

How can I get involved?

If you are interested in learning more, I encourage you to read the full paper from which this article was drawn.

If you think that you might have the skills and motivation to contribute to any of these or similar interventions, I welcome you to contact me by email or via direct message on this Forum. I’m hoping to build a community of people at the intersection of BRM and the social sciences.

If you aren't one of these people, but know someone who might be a good fit, please consider reaching out to that person about getting involved.

If you are interested in funding work in this space, please comment below to let others know.

Thanks to Andrew Sharo, Tessa Alexanian, Ryan Ritterson, and Will Bradshaw for feedback. Thanks to Will Bradshaw for his original post “Biosecurity needs engineers and social scientists,” from which I shamelessly cribbed.

This work is licensed under a Creative Commons Attribution 4.0 International License.
 

Comments (2)



This seems really valuable!
There could be lessons learned from hospital initiatives around hand hygiene, where there were big cultural aspects (like doctors not expecting nurses to tell them what to do).

Thanks for writing this!

Lessons from the social and behavioral sciences can and should be adapted to promote proactive biorisk management. For example, literature on social norms, persuasion, attitude change, and habit formation could be used to design and test behavior-change interventions. The bar is low; researchers have not rigorously tested interventions to change life scientists' proactive BRM practices. Funders should support social scientists and biorisk experts to partner with life scientists on programs of applied research that start with interviews and surveys and build toward scalable and testable interventions.


I agree with all of this and believe that we undervalue the importance of diagnosing and changing behaviour in many areas of EA practice.

I think that this article provides useful theory and ideas for potential interventions.
