We’re testing out a new service to connect people interested in using their careers to mitigate global catastrophic biological risks with people who work in the field. If you’re interested, please sign up here.
This is a follow-up to my post last month, where we experimented with encouraging people to reach out to an “EA Professional” in the area of their interest. Depending on how well this goes, we may expand to offering advice in other areas.
More information is covered in the FAQ below. If you have thoughts or suggestions, we’d be happy to hear them.
Who is this service for?
This service is for anyone who is seriously interested in working on mitigating catastrophic biological risks, like the risk of an engineered pandemic. If you’re unsure, you can read the 80,000 Hours problem profile on this topic here.
You don’t need to have any prior experience in the field; we have advisors prepared to talk to people at different career stages.
How should I prepare?
To get the most out of this service, we recommend that you prepare some questions to discuss with the advisor, and read some background materials if you haven’t already. Here are some articles we think are particularly useful as background for people interested in biosecurity:
- Reducing global catastrophic biological risks - 80,000 Hours
- Why experts are terrified of a human-made pandemic — and what we can do to stop it
- 'Future risks' chapter of The Precipice, Introduction and 'Pandemics' section
- Concrete Biosecurity Projects (some of which could be big)
Questions advisors might be able to help you with:
- I’ve read the relevant introductory literature but I’m not sure what my next step should be — do you have any suggestions?
- I have a specific career / education decision before me; do you have any input?
- I have a background in [supply chain management]; how might I contribute to the field?
- Do you have any advice for how I can best test my fit for work in [X aspect of biosecurity work, e.g., US policy]?
Is this a good use of my/the advisor's time?
You won’t be wasting anyone’s time. The advisors here have decided that this is a good use of their time — if a call gets set up, you can assume everyone wants to be there. And the form is quick — less than 5 minutes to fill out.
How will you select who can have a call?
We hope to match most people with advisors. However, advisors have limited availability, so we’ll prioritize advisees based on their stated interests and backgrounds.
How are advisors selected?
Advisors were selected on the recommendation of a senior member of the EA biosecurity community.
Why this service?
I think speaking to more experienced people makes it more likely you’ll enter the field, by providing inspiration, giving permission, and suggesting concrete ideas about what to do next. I want to make it easier for people considering this field to chat with someone more experienced.
Why biosecurity specifically?
We’re currently running this as a test. In the future, we might expand to more fields.
Who’s running this?
This is an experimental project of the Centre for Effective Altruism.
Can I get advice on something else?
If you haven’t already considered getting career advice from 80,000 Hours, we highly recommend booking a 1:1 call. You can also check out this informal service to connect people to EA professionals in different areas.
If you would like to get advice on a specific area or from someone working in a particular field, we’d love to hear from you; please let us know here.
How can I ask more questions?
You can comment on this post or email email@example.com.
Not that important, but I'm curious, is the IGI (Jennifer Doudna's team) Facebook page closed to all comments? Or just closed to my comments?
If it's just me, that makes sense. If it's closed to all comments, one might wonder why one would use a social networking platform to prohibit social networking.
The article suggests, "This service is for anyone who is seriously interested in working on mitigating catastrophic biological risks, like the risk of an engineered pandemic."
It's great that there are skilled people addressing this threat, and it seems very likely they will be able to make a constructive contribution which reduces the risk of an engineered pandemic that threatens civilization itself. The question I hope we are asking is: is reducing the risk of an engineered pandemic sufficient?
The key issue with genetic engineering, or any technology, seems to be the scale of the power involved. A simple example can help illustrate the issue of scale...
In WWII we threw conventional explosives at each other with wild abandon all over the planet. But because conventional explosives are of limited scale, and don't have the power to collapse the system as a whole, we could make this mistake, clean up the mess, try to learn the lessons, and continue on with further progress. This is the paradigm which defines the past.
If we have a WWIII with nuclear weapons then cleaning up the mess, learning the lessons, and continuing with progress will take place, if it happens at all, over much longer time frames. Nobody alive at the time of such a war will live to see any recovery that might eventually occur. This is the paradigm which defines the future.
SUCCESS: Imperfect management worked with conventional explosives because the scale of these weapons is limited, incapable of crashing the systems which are required for recovery.
FAILURE: Imperfect management will not work with nuclear weapons, because the scale of these powers is vastly greater, and can credibly be proposed as capable of destroying the systems required for recovery.
If the power of genetic engineering is of existential scale, as is the case with nuclear weapons, then it would seem to follow that reducing the risk of a genetic global catastrophe is not sufficient. Instead, merely mitigating the risk seems more like a game of Russian roulette, where one gets away with repeatedly pulling the trigger until the one bad day when one doesn't.
A simple rule governs much of human history. If it's possible for something to go wrong, sooner or later it likely will.