Ben Stewart

1442 karma · Joined Feb 2020 · Sydney NSW, Australia

Bio

Hi, I'm Ben! I'm a Research Analyst at Open Philanthropy, though all views I express here are my own. 

Before OP I was an independent researcher in global health and biosecurity, and a Charity Entrepreneurship incubatee. I have an MD and undergrad degrees in philosophy, international relations, and neuroscience, all from the University of Sydney. 

Comments (184)

Hi Vasco, nice post, and thanks for writing it! I haven't had the time to look into all your details, so these are some thoughts written quickly.

I worked on a project for Open Phil quantifying the likely number of terrorist groups pursuing bioweapons over the next 30 years, but didn't look specifically at attack magnitudes (I appreciate the push to get a public-facing version of the report published - I'm on it!). That work was as an independent contractor for OP, but I now work for them on the GCR Cause Prio team. All that to say these are my own views, not OP's.

I think this is a great post grappling with the empirics of terrorism. And I agree with the claim that the history of terrorism implies an extinction-level terrorist attack is unlikely. However, for similar reasons to Jeff Kaufman, I don't think this strongly undermines the existential threat from non-state actors. This is for three reasons, one methodological and two qualitative:

  1. The track record of bioterrorism in particular is too sparse to make empirical projections with much confidence. I think the rarity of bioterror and the generally small magnitudes of attacks justify a prior against bioterror as a significant threat, but only a weak one. It also justifies a prior that we should expect at least a handful of groups to attempt bioterror over the next 10-30 years. To take the broader set of terror attacks as having strong implications for future bioterror, one would need to think that 'terrorism' is a compelling reference class for bio x-risk - which my next two points dispute.
  2. The vast majority of terror groups ('violent non-state actors' is a more generally applicable handle) would not want to cause extinction. Omnicidality is a fairly rare motivation - most groups have specific political aims, or ideological motivations that are predicated on a particular people/country/sect/whatever thriving and overcoming its enemies. Aiming for civilisational collapse is slightly more prevalent, though still uncommon. And for all of history, there hasn't been a viable path to omnicide or x-risk anyway. So the kind of actor that presents a bio x-risk is probably going to be very different to the kinds of actor that make up the track record of terrorism.
  3. The vast majority of terror attacks are kinetic - involving explosives, firearms, vehicles, or melee weapons. The exceptions are chemical and biological weapons. The biological weapons chosen are generally non-replicative - anthrax, botulinum toxin, ricin, etc. This means that chem and bio attacks also rely on delivery mechanisms that have to get each individual victim into contact with the agent. An attack with a pandemic-class agent would not rely on such delivery. It would be strikingly different in complexity of development, attack modality, targeting specificity, and many other dimensions. I.e. it would be very unlike almost all previous terrorist attacks. The ability to carry out such an attack is also fairly unprecedented - it may only emerge with subsequent developments in biotechnology, especially from the convergence of AI and biotechnology.

So overall, compared to the threat model of future bio x-risk, I think the empirical track record of terrorism is too weak (point 1) and based on actors with very different motivations (point 2) using very different attack modalities (point 3). The latter two points are grounded in a particular worldview - that within coming years/decades biotechnology will enable biological weapons with catastrophic potential. I think that worldview is certainly contestable, but I think the track record of terrorism is not the most fruitful line of attack against it.

On a meta-level, the fact that XPT superforecasters are so much higher than what your model outputs suggests that they also think the right reference class approach is OOMs higher. And this is despite my suspicion that the XPT supers are too low and too indexed on past base-rates.

You emailed asking for reading recommendations - in lieu of my actual report (which will take some time to get to a publishable state), here's my structured bibliography! In particular I'd recommend Binder & Ackermann 2023 (CBRN Terrorism) and McCann 2021 (Outbreak: A Comprehensive Analysis of Biological Terrorism).

Although focused on civil conflicts, Lauren Gilbert's shallow investigation explores some possible interventions in this space, including:

  • Disarmament, Demobilization, and Reintegration (DDR) Programs 
  • Community-Driven Development
  • Cognitive Behavioral Therapy
  • Cash Transfers and/or Job Training
  • Alternative Dispute Resolution (ADR)
  • Contact Interventions and Mass Media
  • Investigative Journalism
  • Mediation and Diplomacy

Open Phil had this issue - they now use 'Global Health & Wellbeing' and 'Global Catastrophic Risks', which I think captures the substantive focus of each.

As one data point: I was interested in global health from a young age, and found 80K during med school in 2019, which led to opportunities in biosecurity research, and now I'm a researcher on global catastrophic risks. I'm really glad I've made this transition! However, it's possible that I would not have applied to 80K (and not gone down this path) if I had gotten the impression they weren't interested in near-termist causes. 

Looking back at my 80K 1-on-1 application materials, I can see I was aware that 80K thought global health was less neglected than biosecurity, and I was considering bio as a career (though perhaps only with 20-30% credence compared to global health). If I'd been aware at the time just how longtermist 80K is, I think there's a 20-40% chance I would not have applied. 

I think Elika's is a great example of having a lot of impact, but I agree that an example shifting from global health is maybe unnecessarily dismissive. I don't think the tobacco thing is good - surely any remotely moral career advisor would advise moving away from that. Ideally a reader who shifted from a neutral or only very-mildly-good career to a great career would be better (as they do for their other examples). I'd guess 80K know some great examples? Maybe someone working exclusively on rich-country health or pharma who moved into bio-risk?

Happy to end this thread here. On a meta-point, I think paying attention to nuance/tone/implicatures is a better communication strategy than retreating to legalese, but it does need practice. I think reflecting on one's own communicative ability is more productive than calling others irrational or being passive-aggressive. But it sucks that this has been a bad experience for you. Hope your day goes better!

Things can be 'not the best', but still good. For example, let's say a systematic, well-run whistleblower organisation was the 'best' way. And compare it to 'telling your friends about a bad org'. 'Telling your friends' is not the best strategy, but it still might be good to do, or worth doing. Saying "telling your friends is not the best way" is consistent with this. Saying "telling your friends is a bad idea" is not consistent with this. 

I.e. 'bad idea' connotes much more than just 'sub-optimal, all things considered'.

Your top-level post did not claim 'public exposés are not the best strategy', you claimed "public exposés are often a bad idea in EA". That is a different claim, and far from a default view. It is also the view I have been arguing against. I think you've greatly misunderstood others' positions, and have rudely dismissed them rather than trying to understand them. You've ignored the arguments given by others, while not defending your own assertions. So it's frustrating to see you playing the 'I'm being cool-headed and rational here' card. This has been a pretty disappointing negative update for me. Thanks

You didn’t provide an alternative, other than the example of you conducting your own private investigation. That option is not open to most, and the beneficial results do not accrue to most. I agree hundreds of hours of work is a cost; that is a pretty banal point. I think we agree that a more systematic solution would be better than relying on a single individual’s decision to put in a lot of work and take on a lot of risk. But you are, blithely in my view, dismissing one of the few responses that have the potential to protect people. Nonlinear have their own funding, and lots of pre-existing ties to the community and EA public materials. A public exposé has a much better chance of protecting newcomers from serious harm than some high-up EAs having a private critical doc.

The impression I have of your view is that it would have been better if Ben hadn’t written or published his post and instead saved his time, and that Nonlinear should have been quietly rejected by those in the know. Is that an accurate picture of your view? If you think there are better solutions, it would be good to name them up front, rather than just denigrate public criticism.

Not everyone is well connected enough to hear rumours. Newcomers and/or less-well-connected people need protection from bad actors too. If someone new to the community was considering an opportunity with Nonlinear, they wouldn't have the same epistemic access as a central and long-standing grant-maker. They could, however, see a public exposé.

What a fantastic resource, thanks all! It may also be worth adding the new National Security Commission on Emerging Biotechnology, which will be delivering a 2024 report to the DoD, White House, and Congress based on “a thorough review of how advances in emerging biotechnology and related technologies will shape current and future activities of the Department of Defense”.
