We’re testing out a new service to connect people interested in using their careers to mitigate global catastrophic biological risks with people who work in the field. If you’re interested, please sign up here.

This is a follow-up project to my post last month, where we experimented with encouraging people to reach out to an “EA Professional” in the area of their interest. Depending on how well this goes, we may expand this out to advice in other areas.

More information is covered in the FAQ below. If you have thoughts or suggestions, we’d be happy to hear them.


Who is this service for?

This service is for anyone who is seriously interested in working on mitigating catastrophic biological risks, like the risk of an engineered pandemic. If you’re unsure, you can read the 80,000 Hours problem profile on this topic here.

You don’t need to have any prior experience in the field; we have advisors prepared to talk to people at different career stages. 

How should I prepare?

To get the most out of this service, we recommend that you prepare some questions to discuss with the advisor, and read some background materials if you haven’t already. Here are some articles we think are particularly useful as background for people interested in biosecurity: 

Questions advisors might be able to help you with:

  • I’ve read the relevant introductory literature but I’m not sure what my next step should be — do you have any suggestions?
  • I have a specific career / education decision before me; do you have any input?
  • I have a background in [supply chain management], how might I contribute to the field?
  • Do you have any advice for how I can best test my fit for work in [X aspect of biosecurity work, e.g., US policy]?

Is this a good use of my/the advisor's time?

You won’t be wasting anyone’s time. The advisors here have decided that this is a good use of their time — if a call gets set up, you can assume everyone wants to be there. And the form is quick — less than 5 minutes to fill out.

How will you select who can have a call?

We hope to match most people with advisors. However, advisors have limited availability, so we’ll prioritize advisees based on relevance to their stated interests and backgrounds.

How are advisors selected? 

Advisors were selected on the recommendation of a senior member of the EA biosecurity community.

Why this service?

I think speaking to more experienced people makes it more likely you’ll enter the field by providing inspiration, giving permission, and suggesting concrete ideas about what to do next. I want to lower the barrier to entry for people thinking of entering this field to chat with someone more experienced.

Why biosecurity specifically?

We’re currently running this as a test. In the future, we might expand to more fields. 

Who’s running this?

This is an experimental project of the Centre for Effective Altruism.

Can I get advice on something else?

If you haven’t already considered getting career advice from 80,000 Hours, we highly recommend booking a 1:1 call. You can also check out this informal service to connect people to EA professionals in different areas.

If you would like to get advice on a specific area or from someone working in a particular field, we’d love to hear from you - please let us know here.

How can I ask more questions?

You can comment on this post or email forum@effectivealtruism.org.



Not that important, but I'm curious: is the IGI (Jennifer Doudna's team) Facebook page closed to all comments, or just closed to my comments?


If it's just me, that makes sense. If it's closed to all comments, one might wonder why one would use a social networking platform to prohibit social networking.

The article suggests, "This service is for anyone who is seriously interested in working on mitigating catastrophic biological risks, like the risk of an engineered pandemic."

It's great that there are skilled people addressing this threat, and it seems very likely they will be able to make a constructive contribution which reduces the risk of an engineered pandemic which threatens civilization itself. The question I hope we are asking is: is reducing the risk of an engineered pandemic sufficient?

The key issue with genetic engineering, or any technology, seems to be the scale of the power involved.   A simple example can help illustrate the issue of scale...

In WWII we threw conventional explosives at each other with  wild abandon all over the planet.   But because conventional explosives are of limited scale, and don't have the power to collapse the system as a whole, we could make this mistake, clean up the mess, try to learn the lessons, and continue on with further progress.  This is the paradigm which defines the past.

If we have a WWIII with nuclear weapons then cleaning up the mess, learning the lessons, and continuing with progress will take place, if it happens at all, over much longer time frames.  Nobody alive at the time of such a war will live to see any recovery that might eventually occur.  This is the paradigm which defines the future.

SUCCESS:  Imperfect management worked with conventional explosives because the scale of these weapons is limited, incapable of crashing the systems which are required for recovery. 

FAILURE:  Imperfect management will not work with nuclear weapons, because the scale of these powers is vastly greater, and can be credibly proposed capable of destroying the systems required for recovery.

If the power of genetic engineering is of existential scale such as is the case with nuclear weapons, then it would seem to follow that reducing the risk of a genetic global catastrophe is not sufficient.   Instead, mitigating the risk seems more like a game of Russian roulette where one gets away with repeatedly pulling the trigger, until the one bad day when one doesn't.

A simple rule governs much of human history.   If it's possible for something to go wrong, sooner or later it likely will.

I've tried to intelligently discuss issues like genetic engineering concerns with leaders and experts in the field.   As example, I spent a month posting every day on Jennifer Doudna's team on Facebook.   I ended this attempt at dialog only when they deleted all my posts without warning or explanation.

What I've learned from this experience is that anyone who has reached the level of expert in a field like genetic engineering has too large of a personal investment in that field to be objective and detached regarding the question of whether that field should exist.

Given how quickly technologies like CRISPR are becoming ever easier, ever cheaper and ever more accessible, I've come to the conclusion that bio-security will soon no longer be possible. Revolutionary technologies like genetic engineering are being deliberately released into the wild of the broad public (Doudna is clear and explicit about this goal) and as these powers spread throughout the population they will escape the reach of any attempts at effective management.

We can't control drugs, guns, or even reckless driving.  It puzzles me why we think we'll be able to control genetic engineering.   Bio-security seems a form of science clergy mythology to me honestly.  

You say "we can't control drugs, guns, or even reckless driving". I don't think that's entirely true. For example, the RAND meta-analysis What Science Tells Us About the Effects of Gun Policies shows moderate evidence that violent crime can be reduced by prohibitions associated with domestic violence, background checks, waiting periods, and stand-your-ground laws. Similarly, I believe that progress in car safety engineering has radically reduced the human suffering caused by reckless driving. I have heard biosecurity professionals use cars as an example of a technology that was deliberately and successfully engineered to be safer.

I also suspect the learning you describe ("anyone who has reached the level of expert in a field like genetic engineering has too large of a personal investment") is too strong a conclusion to draw from your experience. People infer a lot about what it might be like to engage with someone from how they attempt dialogue; I don't know what the content of your posts was, but posting similar content every day seems likely to cause observers to conclude that you have very strongly-held beliefs and are willing to violate social norms to attempt to spread those beliefs, which might lead them to decide that engaging in dialogue with you would be unpleasant or unproductive.

(I will note that I hesitated to write this reply because of the tone of your comment, but then didn't want the only comment on a post targeted towards people interested in the field of biosecurity to be so despairing about its prospects; I personally believe there is a lot of useful work that can be done to reduce risks from pandemics.)

Hi Tessa,

Thanks for your feedback.  I agree that my comment was imprecise, and too sweeping, a common failure here.

It's true that we've had some success managing drugs, guns and traffic safety.   Some success is acceptable with these factors because drugs, guns and reckless driving are limited forces which don't have the power to threaten the system as a whole.   So we make mistakes, try to learn the lessons, improve upon past efforts, and continue forward.    This is the pattern of progress which has characterized human history to date.

My contention is that such limited management success is not adequate with vast powers such as genetic engineering, because such technologies do pose a risk to the system as a whole.   Starting with, say, Hiroshima, we've entered a new era where the traditional "mistakes>fixes>more progress" paradigm is becoming obsolete, a relic of the past.   

I'm willing to learn, and agree that I obviously don't know every genetic engineering professional.   Can you introduce us to any genetic engineering PhD who is publicly questioning whether the field of genetic engineering should exist?  I would very much like to meet such a brave soul.

I'm not willing to violate social norms in the sense of being personally rude, engaging in food fights, etc., as that is a waste of everyone's time, mine included.

I am however willing to violate social norms by posting similar content, expressing strong beliefs (which I'm entirely willing to have challenged) and  by being as inconvenient as possible to those claiming that expertise on some narrow technical topic also makes them experts on the  human condition which will ultimately decide the fate of our civilization.   

Yes, lots of useful work can be done in the field of genetics, agreed of course.  But none of that good work is going to matter if evil doers or stupid people, or just unintended mistakes crash the system as a whole.

Jennifer Doudna is a good person who wants to make CRISPR available to everyone.   She is well intended, and a technical expert, but very naive about the human condition.  While one person is curing cancer with technologies like CRISPR, somebody else is going to be engineering a bio-weapon which brings the house down.  "Experts" seem unwilling to grasp this, and I believe that's primarily because they have too big of an investment in the status quo to be detached and objective.

Sorry for the too many words, another common failing here.  As you've correctly observed, I do have strong feelings on this subject.  

PS:  I just found your website, like it!  You seem like very much the kind of person I hope to dialog with, so I'm hoping that I can put enough on the table to make that worth your while.   

I don't know how much of her time Jennifer Doudna spends thinking about bioweapons, but I do think she spends a lot of time thinking about the ethical implications of CRISPR. If you read things like this NYT interview with her from last week she's saying things like:

Interviewer: It’s also easy to imagine two different countries, let alone two different people, having competing ideas about what would constitute ethical gene editing. In an optimal world, would there be some sort of global body or institution to help govern and adjudicate these decisions?

Doudna: In an optimal world? This is clearly a fantasy.

Interviewer: OK, how about a suboptimal one?

Doudna: The short answer is: I don’t know. I could imagine that given the complexities of using genome editing in different settings, it’s possible that you might decide to use it differently in different parts of the world. Let’s say an area where a mosquito-borne disease is endemic, and it’s dangerous and high risk for the population. You might say the risk of using genome editing and the gene drive to control the mosquito population is worth it. Whereas doing it somewhere else where you don’t face the same public-health issue, you might say the risk isn’t worth it. So I don’t know. The other thing is, as you indicated with the way you asked the question, having any global regulation and enforcing it — hard to imagine how that would be achieved. It’s probably more realistic to have, as we currently do, scientific entities that are global that study these complex issues and make formal recommendations, work with government agencies in different countries to evaluate risks and benefits of technologies.

This doesn't seem like a person who is just arguing "CRISPR should be everywhere, for everyone". I also think she is not claiming to be an expert at making bioethical determinations of what technology should be deployed, and my sense from hearing her public speaking is that she is reluctantly taking on a mantle of going around and saying that we all need to have a very sober and open discussion about where and how CRISPR should be used, but that she doesn't feel particularly qualified to make those determinations herself. The Innovative Genomics Institute, which she co-founded, has an entire research area dedicated to Public Impact, including initiatives like the Berkeley Ethics and Regulation Group for Innovative Technologies. You can argue that these actions are poorly targeted, but I don't think it's accurate to frame Doudna as a naively pro-technology actor.

Hi again Tessa,

Doudna wants to "democratize" CRISPR, as she  puts it.  But whatever her perspective, it doesn't really matter, because genetic engineering will inevitably follow a path similar to computing where it becomes easier and easier, cheaper and cheaper, and more and more accessible to more and more people over time.  

Doudna and other technical experts appear to still be laboring under the illusion that they will remain in control of this process, which is why they continually reference governing bodies and so on.  My reply to that is, tell it to the North Korean regime.  

Even if we rule out evil doers, which we cannot do, the fact still remains that over some period of time literally millions of people will be fiddling with technologies like CRISPR and whatever is to come next. There are already CRISPR kits on Amazon, and bio-hacking groups of amateurs on Reddit. Only God knows what such amateurs will be releasing into the environment. Yes, genetic change in the natural world is a given, but never before at such a pace.

Yes, it was the IGI Facebook page where I invested a month attempting to engage. Yes, Doudna does make the points you've credited to her, agreed. But none of that really matters, because the technical experts are rapidly losing control of the genie they have let out of the bottle. I see their talk of governance systems etc. as basically a way to pacify the public while this technology continues its rapid march past the point of no return.

Please feel free to rip any of this to shreds.  I have strong views, that's true, but I'm also very receptive to challenge.  

My real concern is not genetic engineering in particular so much as it is the ever accelerating knowledge explosion as a whole.