While I've only worked in biosecurity for about a year and my computer security background consists of things I picked up while working on other aspects of software engineering, the cultures seem incredibly different. Some examples of good computer security culture that would be bad biosecurity culture:

  • Openness and full disclosure. Write blog posts with deep detail on how vulnerabilities were found, with the goal of teaching others how to find similar ones in the future. Keep details quiet for a few months if need be to give vendors time to fix but after, say, 90 days go public.
  • Breaking things to fix them. Given a new system, of course you should try to compromise it. If you succeed manually, make a demo that cracks it in milliseconds. Make (and publish!) fuzzers and other automated vulnerability search tools.
  • Enthusiastic curiosity and exploration. Noticing hints of vulnerabilities and digging into them to figure out how deep they go is great. If someone says "you don't need to know that" ignore them and try to figure it out for yourself.

This is not how computer security has always been, or how it is everywhere, and people in the field are often fiercely protective of these ideals against vendors that try to hide flaws or silence researchers. And overall my impression is that this culture has been tremendously positive in computer security.

Which means that if you come into the effective altruism corner of biosecurity with a computer security background and see all of these discussions of "information hazards", people discouraging trying to find vulnerabilities, and people staying quiet about dangerous things they've discovered, it's going to feel very strange, and potentially rotten.

So here's a framing that might help see things from this biosecurity perspective. Imagine that the Morris worm never happened, nor Blaster, nor Samy. A few people independently discovered SQL injection but kept it to themselves. Computer security never developed as a field, even as more and more around us became automated. We have driverless cars, robosurgeons, and simple automated agents acting for us, all with the security of original Sendmail. And it's all been around long enough that the original authors have moved on and no one remembers how any of it works. Someone who put in some serious effort could cause immense destruction, but this doesn't happen because the people who have the expertise to cause havoc have better things to do. Introducing modern computer security culture into this hypothetical world would not go well!

Most of the cultural differences trace back to what happens once a vulnerability is known. With computers:

  • The companies responsible for software and hardware are in a position to fix their systems, and disclosure has helped build a norm that they should do this promptly.
  • People who are writing software can make changes to their approach to avoid creating similar vulnerabilities in the future.
  • End users have a wide range of effective and reasonably cheap options for mitigation once the vulnerability is known.

But with biology there is no vendor, a specific fix can take years, a fully general fix may not be possible, and mitigation could be incredibly expensive. The culture each field needs is downstream from these key differences.

Overall this is sad: we could move faster if we could all just talk about what we're most concerned about, plus cause prioritization would be simpler. I wish we were in a world where we could apply the norms from computer security! But different constraints lead to different solutions, and the level of caution I see in biorisk seems about right given these constraints.

(Note that when I talk about "good biosecurity culture" I'm describing a set of norms that I see as the right ones for the situation we're in, and that are common among effective altruists and other people with a similar view of the world. There's another set of norms within biology, however, that developed when the main threats were natural. Since there's no risk of nature using public knowledge to cause harm, this older approach is even more open than computer security culture, and in my opinion is a very poor fit for the environment we're in now.)

Comments
slg
Thanks for the write-up. Just adding a note on how this distinction has practical implications for designing the databases of hazardous sequences that gene synthesis screening systems require.

With gene synthesis screening, companies want to stop bad actors from getting access to the physical DNA or RNA of potential pandemic pathogens. Now, let's say researchers find the sequence of a novel pathogen that would likely spark a pandemic if released. Most would want this sequence to be added to synthesis screening databases. But some also want this database to be public. The information hazards involved in making such information publicly available could be large, especially if there is attached discussion of how exactly these sequences are dangerous.

Yup!

(I expected to see your comment link to SecureDNA, which has a cryptographic solution to screening synthesis without either (a) sending hazards to synthesizers or (b) sending reconstructible synthesis orders to others.)


Hi slg — great point about synthesis screening being a very concrete example where approaches to security can make a big difference.

One quibble I have: Your hyperlink seems to suggest that Diggans and Leproust advocate for a fully “public” database of annotated hazard sequences. But I think it’s worth noting that although they do use the phrase “publicly available” a couple of times, they also pretty explicitly discuss the idea of having such a database be accessible to synthesis providers only, which is a much smaller set and seems to carry significantly lower risks for misuse than truly public access. Relevant quote:

“Sustained funding and commitment will be required to build and maintain a database of risk-associated sequences, their known mechanisms of pathogenicity and the biological contexts in which these mechanisms can cause harm. This database (or at a minimum a screening capability making use of this database), to have maximum impact on global DNA synthesis screening, must be available to both domestic and international providers.”

Also worth noting the parenthetical about having providers use a screening mechanism with access to the database without having such direct access themselves, which seems like a nod to some of the features in, e.g., SecureDNA's approach.

I think benchtop synthesizers would change this quite a bit? Because then you need one of:

  • Ship the database on every benchtop, where it is at much higher risk of compromise.

  • Have benchtops send each synthesis request out for screening.

  • Something like SecureDNA's approach, where the benchtop sends the order out for screening in a format that does not disclose its contents.
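To make the third option concrete, here is a toy sketch of screening against hashed hazard windows, so the screener holds no plaintext hazard sequences. This is my own illustrative simplification, not SecureDNA's actual protocol (which uses cryptographic techniques well beyond plain hashing, precisely because a plain hash database still lets anyone who obtains it test guessed sequences); the window length and sequences below are made up.

```python
import hashlib

WINDOW = 30  # assumed fixed screening-window length (illustrative)

def kmer_hashes(seq: str, k: int = WINDOW) -> set:
    """Hash every length-k window of a DNA sequence."""
    return {
        hashlib.sha256(seq[i:i + k].encode()).hexdigest()
        for i in range(len(seq) - k + 1)
    }

# The screener stores only hashes of hazardous windows, never plaintext.
hazard_db = kmer_hashes("ATG" * 20)  # stand-in for a real hazard sequence

def screen_order(order: str) -> bool:
    """Flag an order if any of its windows matches a known hazard window."""
    return not hazard_db.isdisjoint(kmer_hashes(order))
```

The point of the sketch is just the information flow: the benchtop sends hashes rather than sequences, and the database side compares sets without ever seeing the order in the clear.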

Yes, benchtop devices have significant ramifications! 

  • Agreed, storing the database on-device does sound much harder to secure than some kind of distributed storage. Though I can imagine that some customers will demand airgapped on-device solutions, where this challenge could present itself anyway.
  • Agreed, sending exact synthesis orders from devices to screeners seems undesirable/unviable, for a host of reasons. 

But that's consistent with my comment, which was just meant to emphasise that I don't read Diggans and Leproust as advocating for a fully "public" hazard database, as slg's comment could be read to imply.

If your benchtop device user can modify the hardware to attempt to defeat the screening mechanism, the problem becomes orders of magnitude harder. I imagine that making a DNA sequence generating device that can't be modified to make smallpox even if it's in the middle of Pyongyang and the malicious user is the North Korean government is an essentially unsolvable problem - if nothing else, they can try to reverse engineer the device and build a similar one without any screening mechanism at all.

A bit tangential, but this raises an important point: in general, you're looking for things that raise the bar for causing harm. If you can take smallpox synthesis from something anyone who works in a lab with a benchtop can do without even opening the machine to something where they would have to disassemble it, that already increases the chance that someone else in the lab would notice.

It would be great to get to a place where we have systems that will provide reliable protection even from well-funded state actors, but (a) a lot of the risk comes from much easier cases like it becoming easier for an individual to cause harm and (b) we are so far from having that kind of protection that efforts to improve the situation there should be much lower priority than ones that handle the easier cases.

That's a good pointer, thanks! I'll drop the reference to Diggans and Leproust for now.

To be clear, I definitely think there's a spectrum of attitudes towards security, centralisation, and other features of hazard databases, so I think you're pointing to an important area of meaningful substantive disagreement!

Executive summary: The cultures of biosecurity and computer security differ in important ways due to the differences in constraints and capabilities surrounding biological vs. computer vulnerabilities.

Key points:

  1. Computer security culture values openness, breaking things to understand them, and satisfying curiosity. This culture developed in a context where vulnerabilities could be fixed by vendors, avoided in future software, and mitigated by users.
  2. Biosecurity culture is much more cautious about disclosing and exploring vulnerabilities. This is because biology lacks easy fixes, mitigations are expensive, and a vulnerability could enable serious harm if exploited by malicious actors.
  3. The norms of computer security culture would be risky and irresponsible if applied directly to biosecurity. The constraints are different enough that different norms have developed.
  4. There are good reasons for biosecurity culture being more closed and cautious than typical computer security culture given the lack of mechanisms for mitigating biological risks.
  5. Understanding these different constraints helps explain the different norms despite both fields dealing with vulnerabilities and risks.

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.