Hi, I'm Ben! I'm a Research Analyst at Open Philanthropy, though all views I express here are my own.
Before OP I was an independent researcher in global health and biosecurity, and a Charity Entrepreneurship incubatee. I have an MD and undergrad degrees in philosophy, international relations, and neuroscience, all from the University of Sydney.
Although it focuses on civil conflicts, Lauren Gilbert's shallow investigation explores some possible interventions in this space, including:
As one data point: I was interested in global health from a young age, and found 80K during med school in 2019, which led to opportunities in biosecurity research; now I'm a researcher on global catastrophic risks. I'm really glad I've made this transition! However, it's possible that I would not have applied to 80K (and not gone down this path) if I had gotten the impression they weren't interested in near-termist causes.
Looking back at my 80K 1on1 application materials, I can see I was aware that 80K thought global health was less neglected than biosecurity, and I was considering bio as a career (though perhaps only with 20-30% credence compared to global health). If I'd been aware at the time just how longtermist 80K is, I think there's a 20-40% chance I would not have applied.
I think Elika's is a great example of having a lot of impact, but I agree that an example of shifting away from global health is maybe unnecessarily dismissive. I don't think the tobacco example is a good one - surely any remotely moral career advisor would advise moving away from that. An example of a reader who shifted from a neutral or only very-mildly-good career to a great one would be better (as they do for their other examples). I'd guess 80K know some great examples? Maybe someone working exclusively on rich-country health or pharma who moved into bio-risk?
Happy to end this thread here. On a meta-point, I think paying attention to nuance/tone/implicatures is a better communication strategy than retreating to legalese, but it does need practice. I think reflecting on one's own communicative ability is more productive than calling others irrational or being passive-aggressive. But it sucks that this has been a bad experience for you. Hope your day goes better!
Things can be 'not the best', but still good. For example, let's say a systematic, well-run, whistleblower organisation was the 'best' way. And compare it to 'telling your friends about a bad org'. 'Telling your friends' is not the best strategy, but it still might be good to do, or worth doing. Saying "telling your friends is not the best way" is consistent with this. Saying "telling your friends is a bad idea" is not consistent with this.
I.e. 'bad idea' connotes much more than just 'sub-optimal, all things considered'.
Your top-level post did not claim 'public exposés are not the best strategy'; you claimed "public exposés are often a bad idea in EA". That is a different claim, and far from a default view. It is also the view I have been arguing against. I think you've greatly misunderstood others' positions, and have rudely dismissed them rather than trying to understand them. You've ignored the arguments given by others, while not defending your own assertions. So it's frustrating to see you playing the 'I'm being cool-headed and rational here' card. This has been a pretty disappointing negative update for me. Thanks
You didn’t provide an alternative, other than the example of you conducting your own private investigation. That option is not open to most, and the beneficial results do not accrue to most. I agree hundreds of hours of work is a cost; that is a pretty banal point. I think we agree that a more systematic solution would be better than relying on a single individual’s decision to put in a lot of work and take on a lot of risk. But you are, blithely in my view, dismissing one of the few responses that have the potential to protect people. Nonlinear have their own funding, and lots of pre-existing ties to the community and EA public materials. A public exposé has a much better chance of protecting newcomers from serious harm than some high-up EAs having a private critical doc. The impression I have of your view is that it would have been better if Ben hadn’t written or published his post and instead saved his time, and that you'd prefer Nonlinear to have been quietly rejected by those in the know. Is that an accurate picture of your view? If you think there are better solutions, it would be good to name them up front, rather than just denigrating public criticism.
Not everyone is well connected enough to hear rumours. Newcomers and/or less-well-connected people need protection from bad actors too. If someone new to the community was considering an opportunity with Nonlinear, they wouldn't have the same epistemic access as a central and long-standing grant-maker. They could, however, see a public exposé.
What a fantastic resource, thanks all! It may also be worth adding the new National Security Commission on Emerging Biotechnology, which will deliver a 2024 report to the DoD, White House, and Congress based on “a thorough review of how advances in emerging biotechnology and related technologies will shape current and future activities of the Department of Defense”.
Hi Vasco, nice post - thanks for writing it! I haven't had time to look into all the details, so these are some thoughts written quickly.
I worked on a project for Open Phil quantifying the likely number of terrorist groups pursuing bioweapons over the next 30 years, but didn't look specifically at attack magnitudes (I appreciate the push to get a public-facing version of the report published - I'm on it!). That work was as an independent contractor for OP, but I now work for them on the GCR Cause Prio team. All that to say these are my own views, not OP's.
I think this is a great post grappling with the empirics of terrorism. And I agree with the claim that the history of terrorism implies an extinction-level terrorist attack is unlikely. However, for similar reasons to Jeff Kaufman, I don't think this strongly undermines the existential threat from non-state actors. This is for three reasons, one methodological and two qualitative:
So overall, compared to the threat model of future bio x-risk, I think the empirical track record of terrorism is too weak (point 1) and based on actors with very different motivations (point 2) using very different attack modalities (point 3). The latter two points are grounded in a particular worldview - that within coming years/decades biotechnology will enable biological weapons with catastrophic potential. I think that worldview is certainly contestable, but I think the track record of terrorism is not the most fruitful line of attack against it.
On a meta-level, the fact that XPT superforecasters are so much higher than what your model outputs suggests that they also think the right reference class approach is OOMs higher. And this is despite my suspicion that the XPT supers are too low and too indexed on past base-rates.
You emailed asking for reading recommendations - in lieu of my actual report (which will take some time to get to a publishable state), here's my structured bibliography! In particular I'd recommend Binder & Ackermann 2023 (CBRN Terrorism) and McCann 2021 (Outbreak: A Comprehensive Analysis of Biological Terrorism).