
Authors: Megan Crawford, Finan Adamson, Jeffrey Ladish

Special Thanks to Georgia Ray for Editing

Biorisk

Most people in the effective altruism community are aware of a possible existential threat from biological technology, but not much beyond that. The form biological threats could take is unclear. Is the primary threat from state bioweapon programs? Or superorganisms accidentally released from synthetic biology labs? Or something else entirely?

If you’re not already an expert, you’re encouraged to stay away from this topic. You’re told that speculating about powerful biological weapons might inspire terrorists or rogue states, and that simply articulating these threats won’t make us any safer. The cry of “Info hazard!” shuts down discussion by fiat, and the reasons cannot be explained, since these might also be info hazards. If concerned, intelligent people cannot articulate their reasons for censorship and cannot coordinate around principles of information management, then that itself is a cause for concern. Discussions may simply move to unregulated forums, and dangerous ideas will propagate through well-intentioned ignorance.

We believe that well reasoned principles and heuristics can help solve this coordination problem. The goal of this post is to carve up the information landscape into areas of relative danger and safety; to illuminate some of the islands in the mire that contain more treasures than traps, and to help you judge where you’re likely to find discussion more destructive than constructive.

Useful things to know already if you’re reading this post: what information hazards are, and what the unilateralist’s curse is; both come up throughout.

Much of the material in this post also overlaps with Gregory Lewis’ Information Hazards in Biotechnology article, which we recommend.

Risks of Information Sharing

We’ve divided this paper into two broad categories: risks from information sharing, and risks from secrecy. First we will go over the ways in which sharing information can cause harm, and then how keeping information secret can cause harm.

We believe considering both is important for determining whether or not to share a particular thought or paper. To keep things relatively targeted and concrete, we provide illustrative toy examples, or sometimes even real examples.

This section categorizes ways that sharing information in the biological sciences can be risky.

A topic covered in other Information Hazard posts that we chose not to focus on here is that different audiences can present substantially different risk profiles for the same idea.

With some ideas, almost all of the benefits and de-risking associated with sharing can be achieved by only mentioning your idea to one key researcher, or sharing findings in a journal associated with some obscure subfield, while simultaneously dodging most of the risk of these ideas finding their way to a foolish or bad actor.

If you’re interested in that topic, Gregory Lewis’ paper Information Hazards in Biotechnology is a good place to read about it.

Bad conceptual ideas to bad actors

A bad actor gets an idea they did not previously have

Some ways this could manifest:

  • A bad actor uses these new ideas to create novel biological weapons or strategies.
  • State bioweapons programs or bioterrorists gain new research directions or ideas.

Why might this be important?

State or non-state actors may have trouble developing ideas on their own. Model generation can be quite difficult, so generating or sharing clever new models can be risky. In particular, we are concerned about the possibility of ideas moving from biology researchers to bioterrorists or state actors. Biosecurity researchers are often better-educated and/or more creative than most bad actors. There are also probably many more researchers than people interested in bioterrorism, and this difference in numbers may matter even more: if there are more biosecurity researchers than bad actors, researchers are likely to come up with many more ideas.

Examples

  • Toy example: A biosecurity researcher writes and publishes a paper about vulnerabilities in the water supply of Exemplandia and a biological agent, Sickmaniasis, that could be used to terrorize Exemplandia. Bioterrorists read the paper and decide to carry out an attack: they research how to manufacture Sickmaniasis and how to disseminate it into Exemplandia’s water supply, then carry out the attack.

Bad conceptual ideas to careless actors

A careless actor gets an idea they did not previously have

Some ways this could manifest:

  • A careless actor decides to either explore an idea publicly in further detail, or decides to implement the idea, not realizing or caring about the damage it could cause.

Why might this be important?

Careless actors may be unlikely to have a given interesting idea on their own, but might have the inclination and ability to implement an idea if they hear about it from someone else. One reason this might be true is that biosecurity researchers could specifically be looking for interesting possible threats, so the “interesting idea” space they explore will focus more heavily on risky ideas.

Examples

  • Toy example 1: A biosecurity researcher publishes a report about vulnerabilities in the water supply of Exemplandia and a biological agent, Sickmaniasis, that could be used to terrorize Exemplandia. Another researcher writes a paper that explores specific possible implementations of Sickmaniasis, including sequence information and lab procedures for generating Sickmaniasis. In this case of the Unilateralist’s Curse, both security researchers were motivated by the desire to prevent some kind of harm, but the first researcher was specifically more careful about publishing methods.

  • Toy example 2: A researcher publishes a report on how to use a gene drive to drive an insect species extinct. A careless researcher uses this report to create a gene drive in a lab on a test population of an insect species. Some insects escape from the lab, and the wild insect population crashes. Even though the original researcher’s lab was very careful with test implementations of their gene drives, the information they produced led to a careless lab crashing the population of a whole species.

  • Real Example: In 1997, rabbit hemorrhagic disease (RHD) began to spread through New Zealand. Authorities believe that New Zealand farmers smuggled the disease into the country and released it intentionally as an animal control measure. RHD was used in Australia as a biocontrol tool, and organizations had attempted to get the New Zealand government to approve it for use there. The virus began to spread after their application was denied. This is a case where the authorities that reviewed a biological tool for use decided it was a bad idea; despite their disapproval, someone released it anyway. This wasn’t a human pathogen, but the demonstrated potential for a unilateral actor to decide to release a banned disease agent and succeed is troubling all the same. We’d like to reiterate that unsanctioned pest control using disease is A BAD IDEA!

Implementation details to bad actors

A bad actor gains access to details (but not an original idea) on how to create a harmful biological agent

Some ways this could manifest:

  • A bad actor exploits this newly available information to create a weapon they previously did not have the knowledge or ability to create, even though they already knew of the potential attack vector.
  • Someone with the intent to produce a potentially-dangerous agent, but not the means or knowledge, is granted access to supplies and/or knowledge that allows them to develop a dangerous biological product.

Why might this be important?

The bad actor would not have been able to easily generate the instructions to create the harmful agent without the new source of information. As DNA synthesis & lab automation technology improves, the bottleneck to the creation of a harmful agent is increasingly knowledge & information rather than applied skill. Technical knowledge and precise implementation details have historically been a bottleneck for bioweapons programs, particularly terrorist or poorly-funded programs (see Barriers to Bioweapons by Sonia Ben Ouagrham-Gormley).

Examples

  • Toy example: A researcher publishes the information for how to reconstruct an extinct & deadly human virus. A bioterrorist or state bioweapon program uses this information to recreate an extinct virus and weaponizes it.
  • Real Example: It’s no secret that the smallpox genome is available online. It’s quite conceivable that a country could fund a program to reconstruct it from this information. It’s also not impossible that this has already happened in secret.

Implementation details to careless actors

A little knowledge is a dangerous thing

Some ways this could manifest:

  • Careless actors who might otherwise have had very little likelihood of creating or releasing anything particularly hazardous, gain access to methods or equipment that increase this likelihood
  • A careful researcher offhandedly mentions a potentially-valuable line of research, which they chose not to pursue due to its potentially catastrophic downsides, which might inspire an overly-optimistic colleague to pursue it

Why might this be important?

Many new technologies (especially in biology) may have unintended side effects. Microscopic organisms can proliferate, and that may get out of hand if procedures are not followed carefully. Sometimes a tentative plan, which might or might not be a good idea, is perceived as a great plan by someone less familiar with its risks. The more careless actor may then take steps to implement the plan without considering the externalities.

As advanced lab equipment becomes cheaper and more accessible, and as more non-academic labs open up without the highly-cautious pro-safety incentives of academia, we might expect to see more experimenters who neglect to practice appropriate safety procedures. We might even see more experimenters who fell through the cracks, and never learned these procedures in the first place. How bad a development this is depends on precisely what those labs are working on, and the quality of their self-supervision.

Second-degree variant: Dangerous implementation knowledge is given to someone who is likely to distribute it, which might later result in a convergence of intent and means in a single individual, either a careless or malicious actor, who produces a dangerous biological product. Some examples of possible distributors might be a person whose job rewards the dissemination of information, or a person who chronically underestimates risks.

This risk means it is important to keep in mind what incentives people have to share information, and whether that might incline them to share information hazards.

Examples

  • Toy Example 1: A civilian hears about how CRISPR can remove viruses from cells, buys himself some tools, and injects himself with an untested DIY herpes ‘cure.’ He doesn’t actually cure his herpes, but he does accidentally edit his germline or give himself cancer. There is a massive social backlash against synthetic biology, and the FDA shuts down multiple scientific attempts at a herpes cure that use superficially similar methods but have much higher odds of success.
  • Toy Example 2: An undergrad lab assistant tests out adding a plasmid to E. coli for a novel protein that she heard about at a conference. She fails to note that the original paper included a few non-prominent sentences on the necessity of only transforming varieties with a genetic kill-switch, due to a strong suspicion that this gene considerably increases the hardiness of E. coli. Further carelessness results in this E. coli getting out and multiplying outside of the lab. Eventually, this hardiness gene is picked up by a human pathogen.
  • Real Example: A biohacker, among other exploits, injected himself with an agent meant to enhance muscle growth. This likely spurred others to take dangerous risks; the CEO of a biotech company later injected himself with an untested herpes treatment.
  • Toy Example (Second Degree Variant): A researcher discovers a way to make Azure Death transmissible from guinea pigs to humans and tells a journalist to warn pet owners. The journalist spreads the researcher’s work, wanting to credit them for the discovery, widely spreading their methods.

Information vulnerable to future advances

Information that is not currently dangerous becomes dangerous

Some ways this could manifest:

  • Future tech could turn previously safe information into dangerous information.
  • Technological advances or economies of scale could alter the capabilities we could reasonably expect even a low-competence actor to have access to

Why might this be important?

Technological progress can be difficult to predict. Sometimes there are major advances in technology that allow for new capabilities, such as rapidly sequencing and copying genomes. Could the information you share be dangerous in 5 years? 10? 100? How does this weigh against how useful the information is, or how likely it is to become public soon anyway?

Examples

  • Toy Example 1: After future technology makes the discovery of new and functional enzymes much easier, conceptual ideas of bioweapons that previously required highly specialized knowledge to implement are now extremely hazardous.
  • Toy Example 2: A new culturing technique makes it drastically easier and cheaper to grow not only harmless bacterial cells, but also pathogenic ones. Suddenly, a paper published on the highly-specific culturing procedures for a finicky but dangerous pathogen is useful to non-specialists.
  • Real example: The smallpox genome was published online. Later, DNA printing became cheap and easy to use. The publishing of the smallpox genome online wasn’t particularly dangerous when it happened: humanity hadn’t yet developed the technology to print organisms from scratch, and genetic engineering methods were much less precise. Now, access to the smallpox genome could be used by bad actors with sufficient knowhow and technology to print it and use it as a bioweapon.

Risk of Idea Inoculation

Presenting an idea causes people to dismiss risks

Some ways this could manifest:

  • Presenting a bad version of a good idea can cause people to dismiss it prematurely and not take it seriously even when it’s presented in a better form

Why might this be important?

Trying to change norms can backfire. If the first people presenting a measure to reduce the publication of risky research are too low-prestige to be taken seriously, no effect might actually be the best-case scenario. An idea that is associated with disreputable people or hard-to-swallow arguments may itself start being treated as disreputable, and face much higher skepticism and hostility than if better, proven arguments had been presented first.

This is almost the inverse of the Streisand effect, which appears to derive from similar psychological principles. In the case of the Streisand Effect, attempts to remove information are what catapult it into public consciousness. In the case of idea inoculation, attempts to publicize an idea ensure that the concept is ignored or dismissed out-of-hand, with no further consideration given to it.

It also connects in interesting ways with Bostrom’s schema.[1]

Examples

  • Toy Example 1: A biohacker attempts to use CRISPR to alter their genome to produce more of the hormone incredulin. It doesn’t work, and they give themselves cancer. The story is popularized in the media, and lawmakers block useful research on the uses of CRISPR.
  • Toy Example 2: An overly-enthusiastic crackpot biologist over-promises some huge advancement in the next 2 years, and ends up plastered across the media. Once he’s revealed as a fraud, suddenly no funding agencies want to touch the field even though other people in this specialty are still doing meaningful, realistic work.

Some Other Risk Categories

This list is not exhaustive, and we chose to lean concrete rather than abstract.

There were a few important-but-abstract risk categories that we didn’t think we could easily do justice to while keeping them succinct and concrete. We felt that several were already implied in a more concrete way by the categories we did keep, but that they encompass some edge cases that our schemas don’t capture. They at least warrant a mention and description.

One is the “Risk of Increased Attention,” what Bostrom calls “Attention Hazard.” This is naturally implied by the four “ideas/actors” categories, but in fact covers a broader set of cases. An area we focused on less is the circumstances in which even useful ideas, combined with smart actors, can eventually lead to unintuitive but catastrophic consequences if given enough attention and funding. This is best exemplified by the fears about the rate of development and investment in AI. It’s also partially exemplified in “Information vulnerable to future advances.”

The other is “Information Several Inferential Distances Out Is Hazardous.” This is a superset of “Information vulnerable to future advances,” but it also encompasses cases where it’s merely a matter of extending an idea out a few further logical steps, not just technological ones.

For both, we felt they partially overlapped with the examples already given, and leaned a bit too abstract and hard-to-model for this post’s focus on concrete examples. However, we think there’s still a lot of value in these important, abstract, and complete (but harder-to-use) schemas.

Risks from Secrecy

We’ve talked above about many of the risks involved in information hazards. We take the risks of sharing information hazards seriously, and think others should as well. But in the Effective Altruist community, it has been our observation that people often neglect the flipside: the harms that secrecy itself can cause.

Conversations about risks from biology get shut down and turn into discussions of infohazards, even when the information being shared is already available. There is something to be said for not spreading information further, but shutting down the discussion of people looking for solutions also has downsides.

Leaving it to the experts is not enough when there may not be a group of experts thinking about the problem and coming up with solutions. We encourage people who want to work on biorisks to think about both the value and the risks of sharing potentially dangerous information. Below we go through the risks, or losses of value, from not sharing information.

A holistic model of information sharing will include weighing both the risks and benefits of sharing information. A decision should be made only after considering how the information might be used by bad or careless actors AND how valuable the information is for good actors looking to further research or coordinate to solve a problem.
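As a purely illustrative sketch of this weighing, one could compare the expected benefit of sharing against the expected harm. The class name, probabilities, and harm/benefit figures below are hypothetical placeholders, not empirical estimates or an endorsed decision procedure.

```python
# A minimal, illustrative expected-value comparison for "share vs. don't share".
# All names and numbers are hypothetical placeholders, not empirical estimates.

from dataclasses import dataclass


@dataclass
class SharingEstimate:
    p_misuse: float         # chance a bad or careless actor exploits the information
    harm_if_misused: float  # harm (arbitrary units) if that happens
    p_useful: float         # chance good actors use it to further research or coordinate
    benefit_if_used: float  # benefit (same arbitrary units) if that happens

    def expected_value_of_sharing(self) -> float:
        # Positive values favor sharing; negative values favor discretion.
        return self.p_useful * self.benefit_if_used - self.p_misuse * self.harm_if_misused


# Hypothetical example: information that mostly helps defenders, with a small misuse risk.
estimate = SharingEstimate(p_misuse=0.01, harm_if_misused=100.0,
                           p_useful=0.6, benefit_if_used=10.0)
print(estimate.expected_value_of_sharing())  # 0.6 * 10.0 - 0.01 * 100.0 = 5.0
```

Real decisions involve far more factors (audience, counterfactual availability of the information, future technology), and the point of the sketch is only that both sides of the ledger belong in the calculation.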

Risk of Lost Progress

Closed research culture stifles innovation

Some ways this could manifest:

  • Ignorance is the default outcome. If secretiveness ensures that nothing is added to the knowledge and work of a field, beneficial progress is unlikely to be made.

Why might this be important?

Good actors need information to develop useful countermeasures. A world in which researchers cannot communicate their ideas with each other makes model generation more difficult and reduces the field’s ability to build up good defensive systems.

Examples

  • Toy Example 1: New information is learned about a recently-discovered virus, which indicates it is more dangerous and has greater pandemic potential than originally thought. This information is not shared on the grounds that it could inspire others to weaponize the virus. As a result, lab safety procedures for working with the virus are not updated.
  • Toy Example 2: Vaccines are not produced because researchers don’t have access to information about dangerous organisms.
  • Toy Example 3: A dangerous scenario is never discussed among good actors avoiding infohazards. Bad actors don’t avoid thinking about infohazards, so they create novel bioweapons that could have been prepared for if a discussion had occurred.
  • Toy Example 4: The public is unaware of risks, so politicians don’t fund programs that develop critical infrastructure for defending against pathogens (see the US government’s defunding of programs like those at the USDA).

Dangerous work is not stopped

Information is not shared, so risky work is not stopped

Some ways this could manifest:

  • Areas with stronger privacy norms, such as industry, may have incentives to hide details about their work. If the risks associated with a particular project are not open information, these risks may be missed or ignored by others engaging in the same work.
  • If a high standard of secrecy is maintained by labs by default, it can be hard for governmental or academic overseers to notice which labs should receive more scrutiny.

Why might this be important?

Some fields of research are dangerous, or may eventually become dangerous. It is much harder to prevent a class of research if the dangers posed by that research cannot be discussed publicly.

Informal social checks on the standards or behavior of others seem to serve an important, and often underestimated, function as a monitoring and reporting system against unethical or unsafe behavior. It can be easy to underestimate how much the objections of a friend can shift the way you view the safety of your research, as they may bring up a concern you didn’t even think to ask about.

There are also entities with a mandate to do formal checks, and it is dangerous if they are left in the dark. Work environments, labs, or even entire fields can develop their own unusual work cultures. Sometimes, these cultures systematically undervalue a type of risk because of its disproportionate benefits to them, even if the general populace would have objections. Law enforcement, lawmakers, public discussion, reporting, and entities like ethical review boards are intended to intervene in these sorts of cases, but have no way to do so if they never hear about a problem.

Each of these entities has its strengths and weaknesses, but a world without whistleblowers, or one where no one can reach anyone capable of changing these environments, is likely to be a more dangerous world.

Examples

  • Toy Example: An academic decides not to publish a paper about the risks of researching a particular strain of bacteria due to high rates of escape from seemingly quarantined labs. Researchers elsewhere begin research on the bacteria, but with lax containment because they were unaware of the risks.
  • Real Almost-Example: In 1972 (a year before the Asilomar Conference), grad student Janet Mertz mentioned to other grad students that her lab planned to put DNA from the virus SV40 into a phage that grows in gut bacteria. Robert Pollack, on hearing of the plan, told Berg (her supervisor) he should “put genes into a phage that doesn't grow in a bug that grows in your gut,” and reminded him that SV40 is a small-animal tumor virus that transforms human cells in culture and makes them look like tumor cells. Prior to that discussion, her lab had not fully thought through the potentially dangerous implications of that research.
  • Real Example: The true source of the Rajneeshee Salmonella poisonings was only uncovered when a leader of the cult publicly expressed concern about the behavior of one of its members, and explicitly requested an investigation into their laboratory.

Risk of Information Siloing

Siloing information leaves individual workers blind to the overall goal being accomplished

Some ways this could manifest:

  • It can be more difficult to prevent harm when the systems capable of producing it are not well understood by the participants. If you have processes of production or research where labor is specialized and distributed, moral actors may not notice when they are producing something harmful.

Why might this be important?

Lab work seems to be increasingly automated or outsourced piecemeal. At the same time, the biotechnology industry has an incentive to be secretive with any pre-patent information it uncovers. Without additional precautions, secretive assembly-line-esque offerings increase the likelihood that someone could order a series of steps that look harmless in isolation, but create something dangerous when combined.

Catalyst Biosummit

By the way, the authors are part of the organizing team for the Catalyst Biosecurity Summit. It will bring together synthetic biologists and policymakers, academics and biohackers, and a broad range of professionals invested in biosecurity for a day of collaborative problem-solving. It will be in February 2020. We haven’t locked down a specific date yet, but you can sign up for updates here.

Examples

  • Toy Example 1: A platform outsources lab work while granting buyers a high degree of privacy. No individual worker in the assembly line was able to piece together that they were producing a dangerous biological agent until it had already been produced and released.
  • Toy Example 2: Diagnosis of novel diseases takes longer because knowledge of diseases was hidden.
  • Real Example 1: Researchers put together a bird flu that was airborne and killed ferrets. They didn’t create any mutations that didn’t already exist in the wild; they just combined them in a way that nature hadn’t yet, but could through natural recombination. The American and Dutch governments banned publication of papers with their methods. Had the researchers been allowed to publish, their work could have given other scientists more information with which to develop a vaccine. The US has since reversed its decision on the ban.
  • Real Example 2: The Guardian successfully ordered part of the smallpox genome to a residential address from a bioprinting company.
  • Real Example 3: A DOD lab accidentally sent weapons-grade anthrax to many labs. The CDC and other organizations have made similar mistakes.

Barriers to Funding and New Talent

Talented people don’t go into seemingly empty or underfunded fields

Some ways this could manifest:

  • A culture of secrecy can serve as a stumbling-block for early-career researchers interested in entering a field. It can make it more challenging to locate information, funding, and aligned mentors, and these can serve to deter people who might otherwise be interested in making a career solving an important problem.

Why might this be important?

While many researchers and policy makers work in biosecurity, there is a shortage of talent applied to longer-term and more extreme biosecurity problems. There have been only limited efforts to attract top talent to this nascent field.

This may be changing. The Open Philanthropy Project has begun funding projects focused on Global Catastrophic Biorisk, and has provided funding for many individuals beginning their careers in the field of biosecurity.

Policies that require a lot of oversight, or that add procedures that increase the cost of doing research, leave fewer opportunities for people who want to make a positive difference.

Examples

  • Toy Example: A talented biology graduate looks at EA discussions and notices a lack of engagement with the most important biosecurity risks for the far future. They decide the EA community isn’t taking far future concerns seriously and apply their skills elsewhere.
  • Real Example: Labs opt out of valuable pathogen research because regulations increase operating costs and the time costs of workers (Wurtz et al.). This leads to fewer places to learn and fewer job opportunities for people who want to prevent harmful pathogens.

Streisand Effect

Suppressing information can cause it to spread

Some ways this could manifest:

  • Attempting to suppress information can sometimes cause information to spread further than it would have otherwise. Many people’s response to even well-advised attempts at information suppression is to directly or indirectly increase the visibility of the event by discussing it or spreading the underlying information itself.

Why might this be important?

The Streisand effect is named after an incident where attempts to have photographs taken down led to a media spotlight and widespread discussion of those same photos. The photos had previously been posted in a context where only one or two people had taken enough of an interest to access them.

Something analogous could very easily happen with a paper outlining something hazardous in a research journal, or with an online discussion. The audience may have originally been quite targeted simply due to the nicheness or the obscurity of its original context. But an attempt at calling for intervention leads to a public discussion, which spreads the original information. This could be viewed as one of the possible negative outcomes of poorly-targeted whistleblowing.

As mentioned in the section on idea inoculation, this effect is functionally idea inoculation’s inverse and is based on similar principles.

Examples

  • Toy example: An online discussion group has policies for handling information that some view as overly restrictive. The frustrated people start a new online discussion group with overly-permissive infohazard guidelines.
  • Real Examples of the Streisand effect: Barbra Streisand’s attempt to remove a photo of her seaside mansion from a large database of California coastline photos catapulted that photo to fame. See also: the Roko’s Basilisk incident, and “Why the Lucky Stiff”’s infosuicide.
  • Real Bio Examples of the Streisand effect: In all likelihood, more people know that the smallpox genome is/was public due to the attempts to suppress it than from organic searches. Relatedly, some dangerous people might have assumed that printed DNA was carefully and successfully monitored if there weren’t so many articles about how sometimes it’s not.

Conclusion

Overall, we think biosecurity in the context of catastrophic risks has been underfunded and underdiscussed. There have been positive developments in the time since we started on this paper: the Open Philanthropy Project is aware of funding problems in the realm of biosecurity and has been funding a variety of projects to make progress on it.

It can be difficult to know where to start helping in biosecurity. In the EA community, we have the desire to weigh the costs and benefits of philanthropic actions, but that is made more difficult in biosecurity by the need for secrecy.

We hope we’ve given you a place to start and factors to weigh when deciding to share or not share a particular piece of information in the realm of biosecurity. We think the EA community has sometimes erred too much on the side of shutting down discussions of biology by turning them into discussions about infohazards. It’s possible EA is being left out of conversations and decision making processes that could benefit from an EA perspective. We’d like to see collaborative discussion aimed towards possible actions or improvements in biosecurity with risks and benefits of the information considered, but not the central point of the conversation.

It’s a big world with many problems to focus on. If you prefer to focus your efforts elsewhere, feel free to do so. But if you do choose to engage with biosecurity, we hope you can weigh risks appropriately and choose the conversations that will lead to many talented collaborators and a world safer from biological risks.

Sources


  1. Connecting “Risk of Idea Inoculation” with Bostrom’s schema: this could be seen as a subset of Attention Hazard and a distant cousin of Knowing-Too-Much Hazard. Attention Hazard encompasses any situation where drawing too much attention to a set of known facts increases risk, and the link is obvious. In Knowing-Too-Much Hazard, the presence of knowledge makes certain people a target of dislike. In Idea Inoculation, however, people’s dislike for your incomplete version of the idea rubs off onto the idea itself. ↩︎

Comments

Thanks for writing the post. I essentially agree with the steers on which areas are more or less ‘risky’. Another point worth highlighting is that, given these issues tend to be difficult to judge and humans are error-prone, it can be worth running things by someone else. Folks are always welcome to contact me if I can be helpful for this purpose.

But I disagree with the remarks in the post along the lines of ‘There’s lots of valuable discussion on biosecurity that is being missed out on in EA spaces, due to concerns over infohazards’. Often (perhaps usually) the main motivation for discretion isn’t ‘infohazards!’.

Whilst (as I understand it) the ‘EA’ perspective on AI safety covers distinct issues from mainstream discussion on AI ethics (e.g. autonomous weapons, algorithmic bias), the main distinction between ‘EA’ biosecurity and ‘mainstream’ biosecurity is one of scale. Thus similar topics are shared between both, and many possible interventions/policy improvements have dual benefit: things that help mitigate the risk of smaller outbreaks tend to help mitigate the risk of catastrophic ones.

These topics are generally very mature fields of study. To put it in perspective: with ~5 years in medicine and public health and 3 degrees, I am roughly par for credentials and substantially below par for experience at most expert meetings I attend; I know people who have worked on (say) global health security longer than I have been alive. I’d guess some of this could be put down to unnecessary credentialism and hierarchalism, and it doesn’t mean there’s nothing to do because all the good ideas have already been thought of, but it does make low-hanging fruit likely to have been plucked already, and useful contributions hard to make without substantial background knowledge.

These are also areas which tend to have powerful stakeholders, entrenched interests, and, in many cases (especially security-adjacent issues), great political sensitivity. Thus even areas which are pretty ‘safe’ from an information hazard perspective (e.g. better governance of dual-use research of concern) can nonetheless be delicate to talk about publicly. Missteps are easy to make (especially without the relevant tacit knowledge), and the consequences can be (as you note in the write-up) to inoculate the idea, but also to alienate powerful interests and potentially discredit the wider EA community.

The latter is something I’m particularly sensitive to. This is partly due to my impression that the ‘growing pains’ in other EA cause areas tended to incur unnecessary risk. It is also because the reactions of folks in the pre-existing community, when contemplating EA involvement, tend not to be unalloyed enthusiasm. They tend to be very impressed with my colleagues who are starting to work in the area, have an appetite for new ideas and ‘fresh eyes’, and are reassured that EAs in this area tend to be cautious and responsible. Yet despite this they tend to remain cautious about the potential to have a lot of inexperienced people bouncing around delicate areas, both in general and because of their own exposure to this community in particular, as they are often going somewhat ‘out on a limb’ to support ‘EA biosecurity’ objectives in the first place.

Another feature of this landscape is that the general path to impact of a ‘good biosecurity idea’ is to socialize it in the relevant expert community and build up a coalition of support. (One could argue how efficient this is from the point of view of the universe, but it is the case regardless.) In consequence, my usual advice for people seeking to work in this area is that career capital is particularly valuable, not just for developing knowledge and skills, but also for gaining the network and credibility to engage with the relevant groups.

Thanks for the thoughtful response!

I want to start with the recognition that everything I remember hearing from you in particular around this topic, here and elsewhere, has been extremely reasonable. I also very much liked your paper.

My experience has been that I have had multiple discussions around disease shut down prematurely in some in-person EA spaces, or else turned into extended discussions of infohazards, even if I'm careful. At some point, it started to feel more like a meme than anything. There are some cases where "infohazards" were brought up as a good, genuine, relevant concern, but I also think there are a lot of EAs and rationalists who seem to have a better grasp of the infohazard meme than they do of anything topical in this space. Some of the sentiment you're pointing to is largely a response to that, and it was one of the motivations for writing a post focused on clear heuristics and guidelines. I suspect this sort of thing happening repeatedly comes with its own kind of reputational risk, which could stand to see some level of critical examination.

I think there are good reasons for the apparent consensus you present that particularly effective EA biorisk work requires extraordinarily credentialed people.* You did a good job of presenting that here. The extent to which political sensitivity and the delicate art of reputation management play into this is something I was partially aware of, but had perhaps under-weighted. I appreciate you spelling it out.

The military seems to have every reason to adopt discretion as a default. There's also a certain tendency of the media and general public to freak out in actively damaging directions around topics like epidemiology, which might feed somewhat into a need for reputation-management-related discretion in those areas as well. The response to an epidemic seems to have a huge, and sometimes negative, impact on how a disease progresses, so a certain level of caution in these fields seems pretty warranted.

I want to quickly note that I tend to be relatively unconvinced that mature and bureaucratic hierarchies are evidence of a field being covered competently. But I would update considerably in your direction if your experience agrees with something like the following:

Is it your impression that whenever you (or talented friends in this area) come up with a reasonably-implementable good idea, you tend to discover, after searching around, that someone else has already found it and tried it?

And if not, what typically seems to have gone wrong? Is there a step that usually falls apart?

(Here are some possible bottlenecks I could think of, and I'm curious if one of them sounds more right to you than the others: Is it hard to search for what's already been done, to the point that there are dozens of redundant projects? Is it a case of there being too much to do, and each project is a rather large undertaking? (a million good ideas, each of which would take 10 years to test) Does it seem to be too challenging for people to find some particular kind of collaborator? A resource inadequacy? Is the field riddled with untrustworthy contributions, just waiting for a replication crisis? (that would certainly do a lot to justify the unease and skepticism about newcomers that you described above) Does it mostly look like good ideas tend to die a bureaucratic death? Or does it seem as if structurally, it's almost impossible for people to remain motivated by the right things? Or is the field just... noisy, for lack of a better word for it. Hard to measure for real effect or success.)

*It does alienate me, personally. I try very hard to stand as a counterargument to "credentialism-required"; someone who tries to get mileage out of engaging with conversations and small biorisk-related interventions as a high-time-investment hobby on the side of an analysis career. Officially, all I'm backed up with on this is a biology-related BS degree, a lot of thought, enthusiasm, and a tiny dash of motivating spite. If there wasn't at least a piece of me fighting against some of the strong-interpretation implications of this conclusion, this post would never have been written. But I do recognize some level of validity to the reasoning.

Hello Spiracular,

Is it your impression that whenever you (or talented friends in this area) come up with a reasonably-implementable good idea, you tend to discover, after searching around, that someone else has already found it and tried it?

I think this is somewhat true, although I don't think this (or the suggestions for bottlenecks in the paragraph below) quite hits the mark. The mix of considerations is something like this:

1) I generally think the existing community covers the area fairly competently (from an EA perspective). I think the main reason for this is that the 'wish list' of what you'd want to see for (say) a disease surveillance system from an EA perspective will have a lot of common elements with what those with more conventional priorities would want. Combined with billions of dollars and lots of able professionals, even areas which are neglected in relative terms still tend to have well-explored margins.

1.1) So there are a fair few cases where I come across something in the literature that anticipates an idea I had, or of colleagues/collaborators reporting back, "It turns out people are already trying to do all the things I'd want them to do re. X".

1.2) Naturally, given I'm working on this, I don't think there are no more good ideas to have. But it also means I foresee quite a lot of the value being in rebalancing/pushing on the envelope of the existing portfolio rather than in 'EA biosecurity' striking out on its own.

2) A lot turns on 'reasonably-implementable'. Generally treacherous terrain lies between idea and implementation, and propelling the former to the latter usually needs a fair amount of capital (of various types). I think this is the typical story for why many fairly obvious improvements haven't happened.

2.1) For policy contributions, perhaps the main challenge is buy-in. Usually one can't 'implement it yourself', and must rely instead on influencing the relevant stakeholders (e.g. science, industry, government(s)) to have an impact. Bandwidth is generally limited in the best case, and typical cases tend to be fraught with well-worn conflicts arising from differing priorities etc. Hence the delicateness mentioned above.

2.2) For technical contributions, there are 'up-front' challenges common to doing any sort of bio-science research (e.g. wet labs are very expensive). However, pushing one of these up the technology readiness levels to implementation also runs into similar policy challenges (as, again, you can seldom 'implement it yourself').

3) This doesn't mean there are no opportunities to contribute. Even if there's a big bottleneck further down the policy funnel, new ideas upstream still have value (although knowing what the bottleneck looks like can help one target these to have easier passage, and not backfire), and in many cases there will be more incremental work which can lay the foundation for further development. There could also be a synergistic relationship in which folks who are more heavily enmeshed in the existing community help translate initiatives/ideas from those less so.


Just wanted to say thanks to both Gregory and Spiracular for their detailed and thoughtful back and forth in this thread. As someone coming from a place somewhere in the middle but having spent less time thinking through these considerations, I found getting to hear your personal perspectives very helpful.

Thanks! For me, this does a bit to clear up why buy-in is perceived as such a key bottleneck.

(And secondarily, supporting the idea that other areas of fairly-high ROI are likely to be centered around facilitating collaboration and consolidation of resources among people with a lot of pre-existing experience/expertise/buy-in.)

Now that we've gone over some of the considerations, here are some of the concrete topics I see as generally high or low hazard for open discussion.

Good for Open Discussion

  • Broad-application antiviral developments and methods
    • Vaccines
    • Antivirals proper
    • T-cell therapy
    • Virus detection and monitoring
  • How to report lab hazards
    • ...and how to normalize and encourage this
  • Broadly-applicable protective measures
    • Sanitation
    • Bunkers?
  • The state of funding
  • The state of talent
    • What broad skills to develop
    • How to appeal to talent
    • Who talent should talk to

Bad for Open Discussion

These things may be worth specialists discussing among themselves, but are likely to do more harm than good in an open thread.

  • Disease delivery methods
  • Specific Threats
  • Specific Exploitable Flaws in Defense Systems
    • Ex: immune systems, hospital monitoring systems
    • It is especially bad to mention them if they are exploitable reliably
    • If you are simultaneously providing a comprehensive solution to the problem, this can become more of a gray-area. Partial-solutions, or challenging-to-implement solutions, are likely to fall on the bad side of this equation.
  • Much of the synthetic biology surrounding this topic
  • Arguments for and against various agents using disease as an M.O.

Biosecurity researchers are often better-educated and/or more creative than most bad actors.

I generally agree with the above statement, and that the risk of openly discussing some topics outweighs the benefits of doing so. But I recently realised there are some people outside of EA who I think are generally well-educated, probably more creative than many biosecurity researchers, and who often write openly about topics the EA community may consider bioinfohazards: authors of near-future science fiction.

Many of the authors in this genre have STEM backgrounds, often write about malicious-use GCR scenarios (thankfully, the risk is usually averted), and I've read several interviews where authors mention taking pains to do research so they can depict a scenario that represents a possible, if sometimes ambitious, future risk. While these novels don't provide implementation details, the 'attack strategies' are often described clearly and the accompanying narrative may well be more inspiring to a poorly educated bad actor looking for ideas than a technical discussion would be.

I haven't seen (realistic) fiction discussed in the context of infohazards before and would be interested to know what others think of this. In the spirit of the post, I'll refrain from creating an 'attention hazard' (or just advertising?) by mentioning any authors who I think describe GCRs particularly well.

Yeah, I agree there are a bunch of bio researchers who are fine talking openly about scary stuff you could do with bio, and sometimes fiction authors represent that. I think the effect this should have on those who care about infohazards is to make them willing to discuss these risks in order to get work done preventing them, or to aid us in the case that they happen. It's hard to justify preparing for something if you're totally unwilling to acknowledge the things you want to prepare for or prevent.
