An adversary who knows that his opponents’ troops have been inoculated against anthrax can switch his battle plans to smallpox or plague—or to an agent for which no vaccine exists. - Ken Alibek, Biohazard
A challenge in reducing bio risk is that many of the risks come from adversaries. Adversaries can react to our interventions, so developing countermeasures may be less effective than one might naively expect, due to 'substitution effects'.[1] There are several distinct substitution effects:
- ‘Switching’ - we find a countermeasure for X; the adversary then switches from X to developing Y
- ‘Escalating’ - we find a countermeasure for X; the adversary modifies X to X’ to overcome the countermeasure[2]
- ‘Attention Hazard + Offense Bias’ - we investigate a countermeasure for X, but fail. The adversary was not previously developing X, but, seeing our interest in X, starts to develop it.
  - This can combine with escalation: even if we successfully find a countermeasure for X, the adversary is now on the general X pathway and may start developing X’.
  - It can also be a pure timeframe effect, if the adversary can produce X before we successfully find the countermeasure to X (although here it matters whether the adversary is more like a terrorist, who would deploy X as soon as it was created, or a state program, which would keep X around for a while, imposing some ongoing accident or warfare risk until the countermeasure for X was found).
  - It is a problem whenever we think the countermeasure is imperfect enough that the attention hazard outweighs the benefit of developing it.
- ‘Exposing Conserved Vulnerabilities’ - imagine that there are 10 possible attacks. For 9 of them, we could quickly develop a countermeasure in an emergency, but for one of them, finding a countermeasure is impossible (and we don’t know which is which in advance). We research countermeasures for all 10 attacks, solve 9 of them, but also reveal the one attack that is impossible to counter. The adversary then picks that one, leaving us worse off than if we had remained ignorant and waited for an emergency (e.g. if we assume that, before we did the research, the adversary would have had only a 10% chance of picking that attack). By picking up the low-hanging fruit, we’ve ‘funneled’ our adversary towards the weak points.
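A stylized expected-harm comparison makes this funneling concrete (the harm variables $h_e$ and $h_u$ are illustrative assumptions added here, not from the original setup). Suppose an ignorant adversary picks among the 10 attacks uniformly at random, an attack we counter only through an emergency scramble causes harm $h_e$, and the uncounterable attack causes harm $h_u > h_e$. Then

$$\mathbb{E}[\text{harm} \mid \text{no research}] = \tfrac{9}{10}\,h_e + \tfrac{1}{10}\,h_u, \qquad \mathbb{E}[\text{harm} \mid \text{research}] = h_u.$$

Research leaves us worse off whenever $h_u > \tfrac{9}{10}\,h_e + \tfrac{1}{10}\,h_u$, which simplifies to $h_u > h_e$ and so holds by assumption. This toy model assumes a single, fully informed, fully flexible adversary; against many unadaptable attackers, the nine pre-developed countermeasures would regain their value.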
Substitution effects will have varying consequences for global catastrophic biological risks (GCBRs). In a worst-case scenario, finding countermeasures to more mundane threats will push adversaries towards GCBR territory (either by more heavily engineering mundane agents, or by switching to entirely new kinds of attack). However, this is counterbalanced by the fact that bioweapons (‘BW’) in general might be less attractive when countermeasures exist for many of them.
- ‘Reduced BW appeal’ - an adversary has a program developing X and Y. We find a cure for X, which reduces the appeal of the entire program, causing the adversary to give up on both X and Y.
Better technology for attribution (e.g. tracing the origin of an attack or accident) is one concrete example that produces ‘reduced BW appeal’. Better attribution is unlikely to dissuade development of bioweapons oriented towards mutually assured destruction (and we might expect most GCBRs to come from such weapons). But by reducing the strategic/tactical appeal of bioweapons for assassination, sabotage, or ambiguous attacks, it lowers the overall appeal of a BW program, which could spill over into reducing the probability of more GCBR-style weapons.
One key question around substitution effects is the flexibility of an adversary. I get a vague impression from reading about the Soviet program that many scientists were extreme specialists, focusing on only one type of microbe. If this is the case, I would expect escalation risk to be greater than risks of switching or attention hazards (e.g. all the smallpox experts try to find ways around the vaccine, rather than switching to Ebola[3]). This is especially true if internal politics and budget battles are somewhat irrational and favor established incumbents (e.g. so that the smallpox scientist gets a big budget even if their project to bypass a countermeasure is unjustifiable).
Some implications:
- Be wary of narrow countermeasures
- Be hesitant to start an offense-defense race unless we think we can win
- Look for broad-spectrum countermeasures or responses, which are more likely to eliminate big chunks of the risk landscape and provide overall reduced bioweapons appeal
Thank you to Chris Bakerlee, Anjali Gopal, Gregory Lewis, Jassi Pannu, Jonas Sandbrink, Carl Shulman, James Wagstaff, and Claire Zabel for helpful comments.
[1] Analogous to the 'fallacy of the last move' - H/T Greg Lewis
[2] Forcing an adversary to escalate from X to X' may still reduce catastrophic risk by imposing additional design constraints on the attack.
[3] Although notably the Soviet program attempted to create a smallpox/Ebola chimera virus in order to bypass smallpox countermeasures.
The central point of this piece is that a bioattacker may use biodefense programs to inform their method of attack, and adapt their approach to defeat countermeasures. This is true. I think the point would be strengthened by clarifying that this adaptability would not be characteristic of all adversaries.
We also face a threat from the prospect of being attacked at unpredictable times by a large number of uncoordinated, unadaptable adversaries launching one-off attacks. They may evade countermeasures on their first attempt, but might not be able to adapt afterwards.
A well-meaning but misguided scientist could also accidentally cause a pandemic with an engineered pathogen that was intended not as a bioweapon but as an object of scientific study or a model for biodefense. They might not think of themselves as an adversary, or be thought of that way, yet the incentives they face in their line of research may lead them into behaviors similar to those of an adversary.
In general, it seems important to develop a better sense of the range of adversaries we might face. I'm agnostic about what type of adversary would be the most concerning.
To escalate a bioweapon, researchers would have to engage in technically difficult and complicated engineering or breeding/selection efforts. To switch to a different bioweapon, researchers could potentially just select a different, presently existing agent, retraining scientists on known protocols and using already-existing equipment. Switching is almost certain to be much easier than escalating.
Thanks for clarifying what you mean by "program."
When we look at the Soviet BW program, they engaged in both "switching" (i.e. stockpiling lots of different agents) and "escalating" (developing agents that were heat-, cold-, and antibiotic-resistant).
If the Soviets discovered that Americans had developed a new antibiotic against one of the bacterial agents in their stockpile, I agree that it would have been simpler to acquire that antibiotic and use it to select for a resistant strain in a dish. Antibiotics were hard to develop then, and remain difficult t...