Hi there! 

I currently co-lead the biosecurity grantmaking program at Effective Giving. Before that, I worked in various research roles focused on pandemic preparedness and biosecurity.

Joshua TM



Thanks for writing this, I found it interesting!

This is a very welcome contribution to a professional field (i.e., the GCBR-focused parts of the pandemic preparedness and biosecurity space) that can often feel opaque and poorly coordinated — sincere thanks to Max and everyone else who helped make it!

Thanks for sharing this and congrats on a very longstanding research effort!

Are you able to provide more details on the backgrounds of the “biorisk experts”? For example, the kinds of organisations they work for, their seniority (e.g., years of professional experience), or their prior engagement with global catastrophic biological risks specifically (as opposed to pandemic preparedness or biorisk management more broadly)?

I ask because I’m wondering about potential selection effects with respect to level of concern about catastrophe/extinction from biology. Without knowing your sampling method, I could imagine that you may have disproportionately reached people who worry more about catastrophic and extinction risks than the typical “biorisk expert.”


This is Joshua, I work on the biosecurity program at the philanthropic advisor Effective Giving. In 2021, we recommended two grants to UNIDIR's work on biological risks, e.g. this report on stakeholder perspectives on the Biological Weapons Convention, which you might find interesting.

To be clear, I definitely think there's a spectrum of attitudes towards security, centralisation, and other features of hazard databases, so I think you're pointing to an important area of substantive disagreement!

Yes, benchtop devices have significant ramifications! 

  • Agreed, storing the database on-device does sound much harder to secure than some kind of distributed storage. Though, I can imagine that some customers will demand airgapped on-device solutions, where this challenge could present itself anyway.
  • Agreed, sending exact synthesis orders from devices to screeners seems undesirable/unviable, for a host of reasons. 

But that's consistent with my comment, which was just meant to emphasise that I don't read Diggans and Leproust as advocating for a fully "public" hazard database, as slg's comment could be read to imply.

Hi slg — great point about synthesis screening being a very concrete example where approaches to security can make a big difference.

One quibble I have: Your hyperlink seems to suggest that Diggans and Leproust advocate for a fully “public” database of annotated hazard sequences. But I think it’s worth noting that although they do use the phrase “publicly available” a couple of times, they also pretty explicitly discuss the idea of making such a database accessible to synthesis providers only — a much smaller set, which seems to carry significantly lower risks of misuse than truly public access. Relevant quote:

“Sustained funding and commitment will be required to build and maintain a database of risk-associated sequences, their known mechanisms of pathogenicity and the biological contexts in which these mechanisms can cause harm. This database (or at a minimum a screening capability making use of this database), to have maximum impact on global DNA synthesis screening, must be available to both domestic and international providers.”

Also worth noting the parenthetical about having providers use a screening mechanism with access to the database without having such direct access themselves, which seems like a nod to some of the features in, e.g., SecureDNA’s approach.


Hi Nadia, thanks for writing this post! It's a thorny topic, and I think people are doing the field a real service when they take the time to write about problems as they see them –– I particularly appreciate that you wrote candidly about challenges involving influential funders.

Infohazards truly are a wicked problem, with lots of very compelling arguments pushing in different directions (hence the lack of consensus you alluded to), and it's frustratingly difficult to devise sound solutions. But I think infohazards are just one of many factors contributing to the overall opacity in the field that causes some of these epistemic problems, and I'm a bit more hopeful about other ways of reducing that opacity. For example, if the field had more open discussions about things that are not very infohazardous (e.g., comparing strategies for pursuing well-defined goals, such as maintaining the norm against biological weapons), I suspect it'd mitigate the consequences of not being able to discuss certain topics (e.g., detailed threat models) openly. Of course, that just raises the question of what is and isn't an infohazard (which itself may be infohazardous...), but I do think there are some areas where we could pretty safely move in the direction of more transparency.

I can't speak for other organisations, but I think my organisation (Effective Giving, where I lead the biosecurity grantmaking program) could do a lot to be more transparent just by overcoming obstacles to transparency that are unrelated to infohazards. These include the (time) costs of disseminating information; concerns about how transparency might affect certain key relationships, e.g. with prospective donors whom we might advise in the future; and public relations considerations more generally. These are definitely very real obstacles, but they generally seem more tractable than the infohazard issue.

I think we (again, just speaking for Effective Giving's biosecurity program) have a long way to go, and I'd personally be quite disappointed if we didn't manage to move in the direction of sharing more of our work during my tenure. This post was a good reminder of that, so thanks again for writing it!

Thanks for researching and writing this!

Thanks for doing this survey and sharing the results, super interesting!


maybe partly because people who have inside views were incentivised to respond, because it’s cool to say you have inside views or something

Yes, I definitely think there's a lot of potential for social desirability bias here! And I think this can happen even if the responses are anonymous, as people might avoid the cognitive dissonance that comes with admitting to "not having an inside view." One might even go so far as to frame the results as "Who do people claim to defer to?"
