
Jonas Sandbrink

Mitigating the misuse and proliferation of dangerous biotechnology capabilities is a crucial part of our biosecurity strategy, especially until systems like robust pathogen detection and super PPE are in place. However, preventing the misuse of biotechnology can be tricky to work on due to the risk of drawing attention to the very things we fear. Andrew Snyder-Beattie and Ethan Alley mention strengthening the Biological Weapons Convention as one such project in their Concrete Biosecurity Projects post.

Here, I present some (more or less) new concrete project ideas in this space that seem promising and have not been explored substantially. They should be seen as standing alongside core risk-mitigation efforts that are already receiving attention, including strengthening the Biological Weapons Convention, DNA synthesis screening, and genetic engineering attribution.

If you are keen to work on any of the projects below, please register your interest through this Google Form.
 

1. Record-keeping for strong attribution

In most places, if you buy a gun, its serial number is registered. This enables law enforcement to link guns used in crimes to their owners and thus deters misuse. We should create a similar system for attributing biological agents - agents that also have the potential to kill people - to their creators. This might be achieved by keeping records of the genetic sequences of the organisms that scientists work on.

Record-keeping could take place at the DNA synthesis or sequencing stage. For instance, a record-keeping module could be introduced into all DNA synthesis machines together with DNA synthesis screening. Recorded DNA sequences would be encrypted (potentially hashed) to protect intellectual property (IP) and stored in a way that remains accessible to later investigation. If an unusual outbreak occurred, the sequence of the causative agent could be similarly encrypted and screened against the encrypted DNA records from facilities. A match would flag facilities that have worked on the agent in question and could automatically trigger an inspection. I call this system Retrospective Encrypted Corroboration Of Recorded DNA Sequences (RECORDS), but you might come up with something better.
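To make the matching step concrete, here is a minimal sketch of how hashed record-keeping and outbreak cross-checking might work. Everything here is an illustrative assumption: the window length, the overlap threshold, and plain SHA-256 hashing (a real system would need keyed hashing or private set intersection, since hashes of known sequences can otherwise be guessed).

```python
import hashlib

K = 64  # window length in bases; a real design would tune this (assumption)

def sequence_fingerprints(seq: str, k: int = K) -> set[str]:
    """Hash every k-base window of a DNA sequence.

    Storing hashes rather than plaintext is a crude stand-in for the
    encryption/hashing step described above; a production system would
    need keyed hashing or private set intersection to resist dictionary
    attacks on known sequences.
    """
    seq = seq.upper()
    return {
        hashlib.sha256(seq[i:i + k].encode()).hexdigest()
        for i in range(len(seq) - k + 1)
    }

# At synthesis time: the facility's machine records fingerprints, not sequences.
facility_records = {
    "facility_A": sequence_fingerprints("ATGC" * 40),  # toy sequences
    "facility_B": sequence_fingerprints("GGCC" * 40),
}

# After an unusual outbreak: fingerprint the causative agent's genome
# and flag facilities whose records overlap it substantially.
outbreak = sequence_fingerprints("ATGC" * 40)
for facility, records in facility_records.items():
    overlap = len(records & outbreak) / len(outbreak)
    if overlap > 0.8:  # threshold is arbitrary here
        print(f"{facility}: {overlap:.0%} overlap -- trigger inspection")
```

Window-level hashing (rather than hashing whole genomes) would allow partial matches against an outbreak strain that has mutated since synthesis, at the cost of storing more records per order.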

A record-based system for strong attribution of biological agents could be a powerful mechanism to deter biological weapons development and use. Thus, such a system could feature as part of a future BWC compliance regime. Routine visits under such a regime could check for active record-keeping: for instance, microbial samples collected in a laboratory could be sequenced, encrypted, and checked against the facility's records for gaps. Lastly, functional strong attribution could help identify accidental laboratory releases - it could have provided important positive or negative evidence in the COVID-19 origins debate.

A simpler initial alternative to a RECORDS system might be to secure a commitment from DNA synthesis companies to cross-check their existing records in the case of an unusual outbreak. For instance, members of the International Gene Synthesis Consortium keep order and customer records for eight years, which could be tapped into.

There are many technical, economic, and political challenges to making record-keeping workable. Individuals familiar with DNA synthesis and sequencing, cryptography (including blockchain development), and social science might be able to contribute here by doing initial scoping, expanding on this idea, and working out how to put it into practice.
 

2. Responsible access to genetic sequences

In most places, to buy a gun, you have to have a license. Such a license certifies that you need a gun and have a clean criminal record. In contrast, anyone can currently access any genetic blueprint, including those of deadly pathogens. I think we need to build responsible access systems, so that certain genetic blueprints, whether already known or discovered in the future, can only be accessed for a legitimate reason.

In a recent related paper, James Smith and I argue that patient data offers a useful parallel. For privacy reasons, access to patient data is tightly controlled: researchers need to prove their credentials and apply with a concrete project. It seems reasonable to argue that a blueprint for a pandemic weapon should be subject to at least the same level of scrutiny. And just as patient data is anonymized unless identification is absolutely required, a responsible access system could ensure that only the genetic sequence fragments actually needed are shared with developers of vaccines and other countermeasures.
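As a rough illustration of the patient-data model applied to sequences, the sketch below shows what an access request and review step might look like. All field names, credential types, and the one-year expiry are hypothetical choices for illustration, not a proposed standard.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AccessRequest:
    requester: str           # verified institutional identity
    credentials: list[str]   # e.g. biosafety training, institutional approval
    project_summary: str     # concrete justification, as with patient data
    sequence_ids: list[str]  # only the fragments actually needed
    status: str = "pending"
    expires: date | None = None

def review(request: AccessRequest, approver: str) -> AccessRequest:
    """A human review step: approvals are scoped and time-limited,
    mirroring how patient-data access committees work."""
    if not request.credentials or not request.project_summary:
        request.status = "rejected"
        return request
    request.status = f"approved_by:{approver}"
    request.expires = date.today() + timedelta(days=365)
    return request

req = AccessRequest(
    requester="dr.example@university.edu",
    credentials=["BSL-3 certified", "IBC approval #1234"],
    project_summary="Vaccine antigen design against pathogen X",
    sequence_ids=["spike_fragment_001"],  # fragment-level, not full genome
)
print(review(req, approver="access_committee").status)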

Responsible access systems pose both a technical and a public engagement challenge. We need to find ways to build them while generating as little friction as possible for legitimate research. This might involve direct work with genetic sequence repositories like GenBank, the European Nucleotide Archive, the DNA Data Bank of Japan, and GISAID. Furthermore, responsible access needs to ensure equitable access and prevent discrimination against researchers from developing countries.

Skeptics might argue that dangerous organisms are already out of the box. However, we are currently still protected by our limited knowledge of blueprints for pandemic-capable pathogens against which we have no countermeasures. It seems likely that we will discover pathogens worse than those already known. To prevent their proliferation, we need to build responsible access systems now.
 

3. Consensus-finding on risks and benefits of research

One reason the debate around so-called "gain-of-function research" - the enhancement of potential pandemic pathogens - has stalled is that researchers disagree over the benefits and risks of the research. However, this does not mean that we should not try to estimate the expected value of a given experiment; rather, we should pool the opinions of different experts to come up with a consolidated estimate.

Consensus-finding platforms like pol.is could help break the gridlock of differing beliefs. Pol.is allows individuals to vote on others' comments and uses a machine learning algorithm to identify points of consensus (summary here). Such a platform could become a cornerstone of discussions around research risks and governance. Discussions could produce guidance on whether and in what form individual projects should take place - including research like the enhancement of potential pandemic pathogens. Furthermore, such platforms could be used to create robust rankings of different categories of research by their benefits and risks. These rankings could feed into the comparative risk-benefit assessment of research projects, and thus help ground funding decisions in a project's expected value.
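For intuition, here is a highly simplified sketch of the kind of analysis pol.is performs: voters are grouped by voting pattern, and "consensus" comments are those that every group - not just a majority overall - tends to agree with. Pol.is itself uses dimensionality reduction plus clustering; the toy data, two-group k-means, and 0.5 threshold below are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy vote matrix: rows = voters, columns = comments.
# +1 = agree, -1 = disagree, 0 = pass/unseen. (Illustrative data only.)
votes = np.array([
    [ 1,  1, -1,  1],
    [ 1,  1, -1,  1],
    [ 1, -1,  1,  1],
    [ 1, -1,  1,  1],
])

# Step 1 (simplified): group voters by their overall voting pattern.
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(votes)

# Step 2: a "consensus" comment is one that every opinion group agrees
# with on balance, not merely one with high overall support.
for comment in range(votes.shape[1]):
    rates = [votes[groups == g, comment].mean() for g in np.unique(groups)]
    if min(rates) > 0.5:
        print(f"comment {comment}: consensus across groups (agreement {rates})")
```

In this toy example, comments 0 and 3 surface as consensus points even though the two voter groups split sharply on comments 1 and 2 - exactly the kind of signal that could help identify common ground in a polarized research-risk debate.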

To drive the application of consensus-finding platforms to decisions over research strategies, individuals might trial pol.is and other platforms for evaluating the benefits and risks of different projects. A starting point might be effective altruism projects and funding decisions. In the long term, building a hub or pipeline for such inquiries could be very promising. 
 

4. Information loops to steer funding to less risky projects

Imagine two sets of identical houses that differ by only one factor: in one set, the electricity meter is in the basement; in the other, it is in the front room. Where do you think electricity consumption would be lower? Arguably, in the houses with the meter in the front room, where the inhabitants are constantly presented with their consumption. Donella Meadows presents this story in her famous Leverage Points paper to demonstrate the power of information loops. Can we create and leverage such information loops to reduce biotechnology risks?

In 1986, the US Toxics Release Inventory required the public disclosure of hazardous air pollutants released from factories. Within four years, Meadows claims, emissions had dropped by 40%. A similar requirement for the public disclosure of laboratory accidents would likely incentivize better laboratory practices. Requiring public disclosure of funding for risky research, like the enhancement of potential pandemic pathogens, might encourage more thorough review and oversight. As a large fraction of relevant information on grants is available online, a nonprofit could scrape the internet for grants from different funding bodies, assign risk scores based on high-level categories, and collate these on a website to highlight differences in risk-taking behavior.
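To sketch what such a nonprofit's pipeline might look like, the toy example below scores scraped grant abstracts against high-level risk categories and totals them per funder. The categories, keyword weights, and scoring rule are placeholders; real risk scoring would need expert-curated categories and manual review, and keyword matching alone would misclassify plenty of work.

```python
# Coarse, keyword-based risk categories -- entirely illustrative; a real
# system would need expert-curated categories and human review.
RISK_CATEGORIES = {
    "enhanced transmissibility": 3,
    "gain of function": 3,
    "aerosol": 2,
    "reverse genetics": 2,
    "surveillance": 1,
}

def risk_score(abstract: str) -> int:
    """Sum the weights of all risk-category terms found in an abstract."""
    text = abstract.lower()
    return sum(w for term, w in RISK_CATEGORIES.items() if term in text)

grants = [  # in practice, scraped from public funder databases
    {"funder": "Funder A", "abstract": "Gain of function study of aerosol transmissibility..."},
    {"funder": "Funder B", "abstract": "Wastewater surveillance for early outbreak detection..."},
]

# Collate cumulative scores per funder to highlight differences in risk-taking.
totals: dict[str, int] = {}
for g in grants:
    totals[g["funder"]] = totals.get(g["funder"], 0) + risk_score(g["abstract"])

for funder, score in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{funder}: cumulative risk score {score}")
```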

Highlighting the different risk levels of different projects to grantmakers could also lead to the preferential funding of less risky research. This might be achieved by assigning research proposals safety and security risk scores. Assignment of such risk scores might be researcher-led (preregistration could be a start), grantmaker-led, or automated.

For projects aiming to create information loops, infohazards need to be considered and managed. Whether a project is net positive will depend on its specifics. 
 

Conclusions

These ideas have not received much attention so far and may serve as a starting point for further thinking. I have spent less than 20 hours thinking about each of these projects, so even an initial scoping study for each of them would likely be valuable. These projects are all highly interdisciplinary and do not require a set background; the more crucial skills for success will be initiative, original thinking, and successful mediation across different viewpoints. While I describe these projects as generally good, this is not necessarily the case for every instantiation - especially for project ideas 3 and 4. Again, if you are interested in taking charge of or helping with any of these projects, please fill in the Google Form.

 

Acknowledgments

Many thanks to Joshua Monrad, James Wagstaff, and Andrew Snyder-Beattie for helpful feedback on this post. 


 

Comments (7)



The barriers a “DNA registry” would impose on a terrorist (the only bad actor who’d be inconvenienced by it) would be trivial if they had the capability to do the other things necessary to produce a bioweapon. In fact, DNA synthesis and sequencing wouldn’t even be a necessary part of such an endeavor. I won’t describe the technological reasons why, but a basic familiarity with these technologies will make the reasons why clear. On the other hand, depending on execution, it could be rather annoying for legitimate researchers.

The idea of sealing off biological infohazards seems reasonable to me and like it might do some good. The world does not need public info on how to most effectively culture bird flu. This has precedent in the classification of military secrets and protection of patient data, would impact only a small number of researchers, and could be linked to the grant approval and IRB process for implementation.

As a key caveat, though, I would want such a system to be lightweight and specific. For example, you don’t want to make it harder to order COVID-19 genomic data or proteins, because then pandemic response becomes much harder. Thousands of scientists need that info, and if you tried to red tape it up, they’d slow to a crawl. Someone wiser than me would need to figure out the minimal information you need to hide to make it much harder to make a bioweapon while minimally inconveniencing the research community.

I don’t know much about the politics stuff here. However, my read is that those currently in power have a “nature is the ultimate bioterrorist” view. Convincing them to change their mind, replacing them with someone having a different viewpoint, or installing an alternative power center, seems hard.

I could imagine an approach involving a nonprofit staffed by people who know what they’re doing looking at grant applications or papers and putting a media spotlight on the most flagrantly risky and stupid stuff. But of course you’d have to find people willing to take on that role and also get the media willing to go against the powers that be on this. I wouldn’t work there, and I don’t know if I’d listen to them either if they seemed unmeasured in their criticism. And who would choose to work at such an org but a bunch of alarmists? I think there would be perception issues.

Possibly you could “bribe” scientists by offering more and easier grant money for dangerous bioscience research provided they verifiably comply with enhanced safeguards during the research. That could allow an extremely well funded granting org like FTX to “pay for safety” rather than trying to gain control of the NIH purse strings to “enforce safety.” Think of it as harm reduction.

Thank you for your thoughts, I agree that this is tricky - but I believe we should at the very least have some discussions on this. The scenario I think about is based on the following reasoning (and targets not-yet-known pathogens):

a) We are conducting research to identify new potential pandemic pathogens.

b) DNA synthesis capabilities and the other molecular biology capabilities required to synthesise viruses are becoming more accessible, and we cannot count on all orders being properly screened.

c) Only a small number of labs (~20?) actually work on a given potential pandemic pathogen, plus some public health folks - definitely not more than thousands of people, and therefore at least 1 to 2 orders of magnitude fewer individuals than all those capable of synthesizing the potential pandemic pathogen. (This obviously changes once a potential pandemic pathogen enters humans and becomes a pandemic pathogen; then the genome definitely needs to be public.)

d) Can we have those few people apply to access the genomes from established databases, similar to how people apply to access patient data?

In terms of needing such a system to be lightweight and specific: this also implies needing what is sometimes called "adaptive governance" (i.e., you have to be able to rapidly change your rules when new issues emerge).

For example, there were ambiguities about whether SARS-CoV-2 fell under Australia Group export controls on "SARS-like-coronaviruses" (related journal article)... a more functional system would include triggers for removing export controls (e.g. at a threshold of global transmission, public health needs will likely outweigh biosecurity concerns about pathogen access)

One thing I find hopeful, under the "Consensus-finding on risks and benefits of research" idea, is that the report Emerging Technologies and Dual-Use Concerns (WHO, 2021) includes two relevant governance priorities:

  • Safety by design in dual-use research projects: "A comprehensive approach to identifying the potential benefits and risks of research may improve the design and flag potential pitfalls early in the research."
  • Continued lack of a global framework for DURC: "Previous WHO consultations have highlighted the lack of a global framework as a critical gap, and regulations, norms and laws to address DURC remain fragmented among stakeholders and countries."

This was based on an expert elicitation study using the IDEA (Investigate, Discuss, Estimate, Aggregate) framework... I find it hopeful that this process identified these governance issues as priorities!

That said, I find it less hopeful that when "asked to allocate each issue a score from 1 to 100 reflecting its impact and plausibility" the scores for "The Lack of a Global DURC Framework" appear to range from 1 to 99:

[Figure 2 from the CSER / WHO report on Emerging Technologies and Dual-Use Concerns]

Interesting, thank you for sharing! I was aware of this report, but did not consider their methodology in depth at the time of reading.

This is really interesting to learn - thanks for sharing, Jonas. I found the reasoning for creating an equivalent of Retrospective Encrypted Corroboration Of Recorded DNA Sequences (RECORDS) to be a really compelling way of creating accountability and transparency for all research. (It's a cool name, by the way!)

Thanks Jasmin!
