Crossposted from my blog.

[Epistemic status: Quick discussion of a seemingly useful concept from a field I as yet know little about.]

I've recently started reading around the biosecurity literature, and one concept that seems to come up fairly frequently is the Web of Prevention (also variously called the Web of Deterrence, the Web of Protection, the Web of Reassurance...[1]). Basically, this is the idea that the distributed, ever-changing, and dual-use nature of potential biosecurity threats means that we can't rely on any single strategy (e.g. traditional arms control) to prevent them. Instead, we must rely on a network of different approaches, each somewhat failure-prone, that together can provide robust protection.

For example, the original formulation of the "web of deterrence" identified the key elements of such a web as

comprehensive, verifiable and global chemical and biological arms control; broad export monitoring and controls; effective defensive and protective measures; and a range of determined and effective national and international responses to the acquisition and/or use of chemical and biological weapons[2].

This later got expanded into a broader "web of protection" concept that included laboratory biosafety and biosecurity; biosecurity education and codes of conduct; and oversight of the life sciences. I'd probably break up the space of strategies somewhat differently, but I think the basic idea is clear enough.

The key concept here is that, though each component of the Web is a serious part of your security strategy, you don't expect any one to be fully protective or rely on it too heavily. Rather than a simple radial web, a better metaphor might be a multilayered meshwork of protective layers, each of which catches some potential threats while inevitably letting some slip through. No layer is perfect, but enough layers stacked on top of one another can together prove highly effective at blocking attacks[3].
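To see why stacking imperfect layers works, here's a toy calculation (my own illustration, not from the biosecurity literature; the catch rates are made-up numbers). Assuming each layer independently catches some fraction of threats, the chance of a threat slipping through all of them shrinks multiplicatively:

```python
# Toy model: probability that a single threat evades every protective layer,
# assuming the layers fail independently of one another.
def leak_probability(catch_rates):
    """Given each layer's catch rate, return the chance a threat evades all layers."""
    p = 1.0
    for rate in catch_rates:
        p *= (1.0 - rate)  # threat must slip through this layer too
    return p

# Four mediocre layers (each catching 70%) outperform one strong layer (99%):
print(leak_probability([0.70, 0.70, 0.70, 0.70]))  # 0.3^4 = 0.0081
print(leak_probability([0.99]))                    # 0.01
```

The independence assumption is doing real work here; footnote 3 below discusses what happens when it fails.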

This makes sense. Short of a totally repressive surveillance state, it seems infeasible to eliminate all dangerous technologies, all bad actors, or all opportunities to do harm. But if we make means, motive and opportunity each rare enough, we can prevent their confluence and so prevent catastrophe.

Such is the Web of Prevention. In some ways it's a very obvious idea: don't put all your eggs in one basket, don't get tunnel vision, cover all the biosecurity bases. But there are a few reasons I think it's a useful concept to have explicitly in mind.

Firstly, I think the concept of the Web of Prevention is important because multilayer protective strategies like this are often quite illegible. One can easily focus too much on one strand of the web / one layer of the armour, and conclude that it's far too weak to achieve effective protection. But if that layer is part of a system of layers, each of which catches some decent proportion of potential threats, we may be safer than we'd realise if we only focused on one layer at a time.

Secondly, this idea helps explain why so much important biosecurity work consists of dull, incremental improvements. Moderately improving biosafety or biosecurity at an important institution, or tweaking your biocontainment unit protocols to better handle an emergency, or changing policy to make it easier to test out new therapies during an outbreak...none of these is likely to single-handedly make the difference between safety and catastrophe, but each can contribute to strengthening one layer of the system.

Thirdly, and more speculatively, the presence of a web of interlocking protective strategies might mean we don't always have to make each layer of protection maximally strong to keep ourselves safe. If you go overboard on surveillance of the life sciences, you'll alienate researchers and shut down a lot of highly valuable research. If you insist on BSL-4 conditions for any infectious pathogens, you'll burn a huge amount of resources (and goodwill, and researcher time) for not all that much benefit. And so on. Better to set the strength of each layer at a judicious level[4], and rely on the interlocking web of other measures to make up for any shortfall.

Of course, none of this is to say that we're actually well-prepared and can stop worrying. Not all strands of the web are equally important, and some may have obvious catastrophic flaws. And a web of prevention optimised for preventing traditional bioattacks may not be well-suited to coping with the biosecurity dangers posed by emerging technologies. Perhaps most importantly, a long-termist outlook may substantially change the Web's ideal composition and strength. But in the end, I do expect something like the Web, and not a single ironclad mechanism, to be what protects us.


  1. Rappert, Brian, and Caitriona McLeish, eds. (2007) A web of prevention: biological weapons, life sciences and the governance of research. Link here. ↩︎

  2. Rappert & McLeish, p. 3 ↩︎

  3. To some extent, this metaphor depends on the layers in the armour being somewhat independent of each other, such that holes in one are unlikely to correspond to holes in another. Even better would be an arrangement such that the gaps in each layer are anticorrelated with those in the next layer. If weaknesses in one layer are correlated with weaknesses in the next, though, there's a much higher chance of an attack slipping through all of them. I don't know to what extent this is a useful insight in biosecurity. ↩︎

  4. Of course, in many cases the judicious level might be "extremely strong". We don't want to be relaxed about state bioweapons programs. And we especially don't want those responsible for safety at each layer to slack off because the other layers have it covered: whatever level of stringency each layer is set to, it's important to make sure that level of stringency actually applies. But still, if something isn't your sole line of defence, you can sometimes afford to weaken it slightly in exchange for other benefits. ↩︎
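The worry in footnote 3 about correlated gaps can be sketched with a small Monte Carlo simulation (my own illustration; the 20% per-layer gap and the "perfectly correlated" extreme are hypothetical parameters, not claims about real defences):

```python
import random

def simulate(n_trials, n_layers, gap, correlated, seed=0):
    """Estimate the fraction of threats that evade all layers.

    gap: probability that any given layer misses a given threat.
    correlated: if True, the layers' gaps line up perfectly, so a single
    draw decides whether the threat slips through every layer at once.
    """
    rng = random.Random(seed)
    leaks = 0
    for _ in range(n_trials):
        if correlated:
            # Perfectly correlated weaknesses: one hole lines up with all the others.
            slipped = rng.random() < gap
        else:
            # Independent weaknesses: the threat must find a separate hole per layer.
            slipped = all(rng.random() < gap for _ in range(n_layers))
        leaks += slipped
    return leaks / n_trials

# With a 20% gap per layer and 4 layers:
# independent gaps leak roughly 0.2**4 = 0.0016 of threats,
# while perfectly correlated gaps leak roughly 0.2 of them.
print(simulate(100_000, 4, 0.2, correlated=False))
print(simulate(100_000, 4, 0.2, correlated=True))
```

The gap between those two numbers is the whole argument for keeping the layers as independent (or anticorrelated) as possible.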

Comments

A related notion from computer security: defense in depth.

Another related concept I just stumbled upon is the "Swiss cheese model of accident causation". According to Wikipedia, this is:

a model used in risk analysis and risk management, including aviation safety, engineering, healthcare, emergency service organizations, and as the principle behind layered security, as used in computer security and defense in depth. It likens human systems to multiple slices of swiss cheese, stacked side by side, in which the risk of a threat becoming a reality is mitigated by the differing layers and types of defenses which are "layered" behind each other. Therefore, in theory, lapses and weaknesses in one defense do not allow a risk to materialize, since other defenses also exist, to prevent a single point of failure.

[...] The Swiss cheese model of accident causation illustrates that, although many layers of defense lie between hazards and accidents, there are flaws in each layer that, if aligned, can allow the accident to occur.

This is referenced in a viral image about preventing COVID-19 infections.

Also related is the recent (very interesting) paper using that same term (linkpost).

(Interestingly, I don't recall the paper mentioning getting the term from computer security, and, skimming it again now, I indeed can't see them mention that. In fact, they only seem to say "defence in depth" once in the paper.

I wonder if they got the term from computer security and forgot they'd done so, if they got it from computer security but thought it wasn't worth mentioning, or if the term has now become fairly common outside of computer security, but with the same basic meaning, rather than the somewhat different military meaning. Not really an important question, though.)

I've noticed something similar around "security mindset": Eliezer and MIRI have used the phrase to talk about a specific version of it in relation to AI safety, but the term, as far as I know, originates with Bruce Schneier and computer security, although I can't recall MIRI publications mentioning that much, possibly because they didn't even realize that's where the term came from. Hard to know, and probably not very relevant other than to weirdos like us. ;-)

The initial post by Eliezer on security mindset explicitly cites Bruce Schneier as the source of the term, and quotes extensively from this piece by Schneier.

Another good idea from the biosecurity literature is "distancing": any bio threat increases the tendency of people to distance from each other via quarantine, masks, and less travel, and thus R0 will decline, hopefully below 1.
