Let's make nice things with biology. Working on nucleic acid synthesis screening at IBBIS. Also into dual-use risk assessment, synthetic biology, lab automation, event production, donating to global health. From Toronto, lived in Paris and Santiago, currently in the SF Bay. Website: tessa.fyi
A new biosecurity-relevant newsletter (which Anemone and I put together) is GCBR Organization Updates. Every few months, we'll ask organizations doing impactful work to reduce GCBRs to share their current projects, recent publications, and any opportunities for collaboration.
I was part of a youth delegation to the BWC in 2017, and I think the greatest benefit I got was that it raised my aspirations. I'm not sure I'd previously conceived of myself as the sort of person who could speak at the UN. I also heard an expert bowing out of dinner early because they had to go finish their slides for the next day, and realized there isn't some upper echelon of governance and society where everyone is hypercompetent and on top of things; even at the friggin' United Nations people are making their slides the night before.
I don't know how much of an effect this had on my decision to start a biosecurity meetup the next year and eventually transition to full-time biosecurity work, but I think it played a role. There are other benefits too: Schelling-point NGO networking, collecting lived-experience stories that make your understanding of diplomacy more vivid, and creating a pressure of prior consistency that increases the chance that a delegate will continue to work on biosecurity (YMMV on whether the last item is a benefit).
+1 on "specialist experts are surprisingly accessible to enthusiastic youth"; cf. some relevant advice from Alexey Guzey.
Thanks for this comment, and thanks to Nadia for writing the post, I'm really happy to see it up on the forum!
Chris and I wrote the guidance for reading groups and early entrants to the field; this was partly because we felt that new folks are most likely to feel stuck/intimidated/forced-into-deference/etc. and because it's where we most often found ourselves repeating the same advice over and over.
I think there are people whose opinions I respect who would disagree with the guidance in a few ways:
(Side note: it's always both flattering and confusing to be considered a "senior member" of this community. I suppose it's true, because EA is very young, but I have many collaborators and colleagues who have decade(s) of experience working full-time on biorisk reduction, which I most certainly do not.)
This is more a response to "it is easy to build an intuitive case for biohazards not being very important or an existential risk", rather than your proposals...
My feeling is that it is fairly difficult to make the case that biological hazards present an existential as opposed to catastrophic risk and that this matters for some EA types selecting their career paths, but it doesn't matter as much in the grand scale of advocacy? The set of philosophical assumptions under which "not an existential risk" can be rounded to "not very important" seems common in the EA community, but extremely uncommon outside of it.
My best guess is that any existential biorisk scenarios probably route through civilisational collapse, and that those large-scale risks are most likely a result of deliberate misuse, rather than accidents. This seems importantly different from AI risk (though I do think you might run into trouble with reckless or careless actors in bio as well).
I think a focus on global catastrophic biological risks already puts one's focus in a pretty different (and fairly neglected) place from many people working on reducing pandemic risks, and that the benefit of trying to get into the details of whether a specific threat is existential or catastrophic doesn't really outweigh the costs of potentially generating infohazards.
My guess is that (2) will be fairly hard to achieve, because the sorts of threat models that are sufficiently detailed to be credible to people trying to do hardcore existential-risk-motivated cause prioritization are dubiously cost-benefitted from an infohazard perspective.
Happy to pitch in with a few stories of rejection!
These were all pretty painful for me at the time... and I'm realizing I've since come up with stories where the rejections were okay, or part of a fine trajectory. I guess one message here is "just because you were rejected once doesn't mean you will be if you apply again"?
Maybe there's a huge illusion in EA of "someone else has probably worked out these big assumptions we are making". This goes all the way up to the person at Open Phil thinking "Holden has probably worked these out" but actually no one has.
I just wanted to highlight this in particular; I have heard people at Open Phil say things along the lines of "... but we could be completely wrong about this!" about large strategic questions. A few examples related to my work:
These are big questions, and I have spent dozens (though not hundreds) of hours thinking about them... which has led to me feeling like I have "working hypotheses" in response to each. A working hypothesis is not a robust, confident answer based on well-worked-out assumptions. I could be wrong, but I suspect this is also true in many other areas of community building and cause prioritisation, even "all the way up".
I recall meeting Karolina M. Sulich, the VP of Osmocosm, at EAGxBerlin last year, and thought some of her machine olfaction x biosecurity ideas were really cool! I'd be stoked for more people to look into this.
A few more you might share:
I can't speak for the author, and I'd classify these as examples of suspicion and/or criticism of EA biosecurity rather than a "backlash against EA", but here are some links:
I'll also say I've heard criticism of "securitising health" which is much less about EAs in biosecurity and more about clashing concerns between groups that prioritise global health and national security, where EA biosecurity folks often end up seen as more aligned with the national security concerns due to prioritising risks from deliberate misuse of biology.