Hi! This is my first post on the EA Forum. I've been a fan of Effective Altruism and have been following it for a few years, and I'm currently going through the 80,000 Hours career planning course.

I originally posted this as a Shortform because I couldn't find the option to create a regular post. I later found it, so I'm now reposting it here in case that helps more people see it and maybe share comments/feedback. When I posted it as a Shortform, I got a thoughtful comment from Linch about information hazards. I didn't realize there was any kind of taboo or concern about discussing biorisks when I wrote this, so apologies if this violates any community norms; let me know if it's serious enough to warrant taking this down.

I'm going to make the case here that certain problem areas currently prioritized highly in the longtermist EA community are overweighted in their importance/scale. In particular I'll focus on biorisks, but this could also apply to other risks such as non-nuclear global war and perhaps other areas as well.

I'll focus on biorisks because that area is currently highly prioritized by Open Philanthropy, 80,000 Hours, and probably other EA groups as well. If I'm right that biotechnology risks should be deprioritized, that would significantly increase the relative priority of other issues like AI, growing Effective Altruism, global priorities research, and nanotechnology risks, helping allocate more resources to areas that do still pose existential threats to humanity.

I won't be taking issue with the longtermist worldview here. In fact, I'll assume the longtermist worldview is correct. Rather, I'm questioning whether biorisks really pose a significant existential/extinction risk to humanity. I don't doubt that they could lead to major global catastrophes which it would be really good to avert. I just think that it's extremely unlikely for them to lead to total human extinction or permanent civilization collapse.

This started when I was reading about disaster shelters. Nick Beckstead has a paper considering whether they could be a useful avenue for mitigating existential risks [1]. He concludes there could be a couple of special scenarios where they are, which need further research, but by and large new refuges don't seem like a great investment because there are already so many existing shelters and other arrangements that could protect people from many global catastrophes. Specifically, the world already has a lot of government bunkers, private shelters, people working on submarines, and 100-200 uncontacted peoples, which together are likely to produce survivors of certain otherwise devastating events. [1]

A highly lethal engineered pandemic is among the biggest risks considered from biotechnology. It could potentially wipe out billions of people and lead to a collapse of civilization. But it would be extremely unlikely to kill everyone: at least a few hundred or a few thousand people would probably survive among those with access to existing bunkers or other disaster shelters, those working on submarines, and the dozens of tribes and other peoples living in remote isolation. Repopulating the Earth and rebuilding civilization would not be fast or easy, but these survivors could probably do it over many generations.
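To give a rough sense of the "many generations" timescale, here is a minimal back-of-envelope sketch; the survivor count, growth rate, and target population are all illustrative assumptions of mine, not figures from Beckstead's paper:

```python
import math

# Back-of-envelope: how long might a small group of survivors take to repopulate the Earth?
# All numbers below are illustrative assumptions, not estimates from any source.
survivors = 1_000          # assumed survivors in bunkers, on submarines, and in isolated communities
target_population = 1e9    # assumed threshold for a "recovered" global population
annual_growth_rate = 0.01  # assumed 1% net annual growth (pre-industrial rates were often lower)

years = math.log(target_population / survivors) / math.log(1 + annual_growth_rate)
print(f"~{years:,.0f} years of sustained growth")  # roughly 1,400 years under these assumptions
```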

So are humans immune, then, to all existential risks thanks to preppers, "sardines" [2], and uncontacted peoples? No. There are certain globally catastrophic events which would likely spare no one. A superintelligent malevolent AI could probably hunt everyone down. The feared nanotechnological "gray goo" scenario could consume all matter on the planet. A nuclear war extreme enough to contaminate all land on the planet with radioactivity - even though it would likely have immediate survivors - might create such a mess that no humans would last long-term. There are probably others as well.

I've gone out on a bit of a limb here to claim that biorisks aren't an existential risk. I'm not a biotech expert, so there could be some biorisks that I'm not aware of. For example, could there be some kind of engineered virus that contaminates all food sources on the planet? I don't know and would be interested to hear from folks about that. This could be similar to a long-lasting global nuclear fallout in that it would have immediate survivors but not long-term survivors. However, most of the biorisks I have seen people focus on seem to be lethal, virulent engineered pandemics that target humans. As I've said, it seems unlikely these would kill all the humans in bunkers/shelters, on submarines, and in remote parts of the planet.

Even if there is some kind of lesser-known biotech risk which could be existential, my bottom-line claim is that there seems to be an important line between real existential risks that would annihilate all humans and near-existential risks that would spare some people in disaster shelters and shelter-like situations. I haven't seen this line discussed much and I think it could help with better prioritizing global problem areas for the EA community.

--

[1]: "How much could refuges help us recover from a global catastrophe?" https://web.archive.org/web/20181231185118/https://www.fhi.ox.ac.uk/wp-content/uploads/1-s2.0-S0016328714001888-main.pdf

[2]: I just learned that sailors use this term for submariners which is pretty fun. https://www.operationmilitarykids.org/what-is-a-navy-squid-11-slang-nicknames-for-navy-sailors/

Comments

ASB

Thanks Evan, and welcome to the forum!  I agree this is an important question for prioritization, and does imply that AI is substantially more important than bio (a statement I believe despite working on biosecurity, at least if we are only considering longtermism).  As Linch mentioned, we have policies/norms against publicly brainstorming information hazards.  If somebody is concerned about a biology risk that might constitute an information hazard, they can contact me privately to discuss options for responsible disclosure.

I'm a (new) biomedical engineering MS student, so this is based on a few years of undergraduate and graduate-level study in the realm of biology.

"For example, could there be some kind of engineered virus that contaminates all food sources on the planet? I don't know and would be interested to hear from folks about that."

Viruses are typically specific to a particular organism or set of related organisms. Some have very narrow host specificity, while others are more broad. The same is true of bacteria and fungi.

So your question can be refined slightly by asking, "how hard would it be to engineer, culture, and deploy a set of viruses, bacteria, and/or fungi capable of killing all food sources on the planet, and what prevention or mitigation strategies exist to combat this outcome?"

Understand that the engineering, culturing, and deployment of each pathogen within this set of bioweapons would at present be extremely difficult, requiring a high level of technical expertise, time, money, and government support. The efficacy of each pathogen would need to be confirmed, and then it would need to be stored or maintained until the attacker had accumulated a set of bioweapons with efficacy against a sufficiently broad set of food sources to threaten civilizational collapse or human extinction.

This process would be playing out against a backdrop of an ever-improving ability to rapidly develop and roll out vaccines, backed by a scientific apparatus with vastly more resources than those available to the attacker.

Furthermore, as you may know, we had an mRNA vaccine design within days of sequencing COVID; the delay in rollout was due to manufacturing and testing. For animals facing a virus, we could develop an mRNA vaccine, test it with RCTs (with no particular concern for safety, only efficacy), and then start vaccinating immediately.

Bacteria, fungi, and parasites are potentially harder targets, but these pathogens are also harder to engineer and disseminate, because their biology is more complex and they often require insect or animal vectors to spread. Those vectors can be killed or restricted geographically.

We also have far fewer ethical barriers to re-engineering non-human animals in ways that might make them more resistant to a particular pathogen. We can also use chemical and physical barriers to screen them to arbitrary degrees. If necessary, food sources could even be grown in sterilized airtight containers. This would be extremely logistically complicated and energetically expensive, but it could be done if necessary to safeguard the survival of humanity.

An extreme nuclear winter scenario could wipe out all photosynthetic organisms, eliminating the base of the food chain. Even in this scenario, however, it's plausible that we could use nuclear energy or fossil fuels to produce artificial light in adequate quantities to maintain some agriculture. Some have envisioned that an adequate supply of solar energy and an "energy explosion" would allow us to transition to high-productivity vertical farming simply to enhance the abundance and quality of crops in a "world as usual" scenario.
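To make "energetically expensive but doable" slightly more concrete, here is a rough sketch; every parameter below is an assumed round number chosen only to get an order-of-magnitude feel:

```python
# Rough order-of-magnitude sketch: electrical power needed per person for
# artificial-light agriculture. All parameter values are assumptions.

daily_food_energy_mj = 10.5       # ~2,500 kcal/person/day expressed in megajoules
photosynthetic_efficiency = 0.02  # assumed ~2% of delivered light becomes edible calories
lamp_efficiency = 0.5             # assumed ~50% of electricity converted to usable light

electricity_per_day_mj = daily_food_energy_mj / (photosynthetic_efficiency * lamp_efficiency)
average_power_kw = electricity_per_day_mj * 1e6 / (24 * 3600) / 1e3

print(f"~{electricity_per_day_mj:.0f} MJ/day, i.e. ~{average_power_kw:.0f} kW continuous per person")
# ~1050 MJ/day, about 12 kW per person under these assumptions: very expensive, but not physically impossible.
```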

I don't know any details of this proposal, but it would potentially protect us against nuclear winter and harden us significantly against biorisk all at once. It seems likely to me that vertical farms would have some sort of shield surrounding them to screen out pests even in a world that's not facing an unusually severe biorisk. Shielded vertical farms seem like a great mitigation strategy that would not only protect humanity against such a bioweapon, but also disincentivize an attacker from pursuing such a strategy in the first place.

Bottom Line:

Insofar as bioweapons are an existential risk, the vast bulk of that risk will be from pathogens that attack the human body.

Further Thoughts:

Your endpoint is that humans would likely survive even an extremely severe bioweapon attack, and could then rebuild civilization. Yet this world-state appears fragile.

Unlike an AI catastrophe or "grey goo," an outcome like this would almost certainly be due to a deliberate human attack on other humans, with apocalyptic or world-controlling intentions.

The attacker would know in advance about the expected outcome. They might conceivably harden themselves to survive it, perhaps by preparing their own bunkers.

That attacker could lay advance plans for how to track down the survivors and either dominate them or eliminate them using more conventional means. An attacker with the resources and coordination capacity to carry out an attack like this might conceivably be expected to also be able to "finish the job."

So we should consider biorisk, and other deliberate attacks on humanity, in terms of two components:

  • Initial attack success level (to varying degrees, such as "causing chaos," "X-disaster," and human extinction)
  • Given that success level, the difficulty of "finishing the job"

The "finishing the job" aspect has both positive and negative aspects. The negative aspect is that our calculations for the X-risk of a bioweapons attack can't stop with, "well, they probably can't infect everybody." The positive aspect is that it provides a whole new opportunity for preventing or mitigating the threat.

I haven't seen EA deal very much with post-X-disaster scenarios. I know that there's some attention from ALLFED on how we might improve the food supply in the case of a nuclear winter. Perhaps there is some thought out there on how to resist a "finishing the job" post-X-disaster scenario.

Thanks for sharing your expertise and in-depth reply!

You might be interested in reading this investigation about whether civilization collapse can lead to extinction.

This post would benefit from an analysis of the relative likelihood of biorisk and malevolent AI risk.

That's a very good point.

With the assumption of longtermist ethics I mentioned in the post, though, I think the difference in likelihoods would have to be very large to change the conclusion, because placing equal value on future human lives and present ones makes extinction risks astronomically worse than catastrophic non-extinction risks.
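To illustrate why, here is a toy expected-value comparison; the number of potential future lives and both probabilities are placeholder assumptions chosen only to show the asymmetry:

```python
# Toy comparison of expected losses under a simple longtermist accounting.
# All numbers are illustrative placeholders, not estimates.

future_lives_at_stake = 1e16  # assumed size of humanity's potential future population
present_lives = 8e9           # roughly today's population

p_catastrophe = 1e-2          # assumed chance of a non-extinction catastrophe killing half of humanity
p_extinction = 1e-5           # assumed (1,000x smaller) chance of outright extinction

expected_loss_catastrophe = p_catastrophe * 0.5 * present_lives
expected_loss_extinction = p_extinction * future_lives_at_stake

print(f"catastrophe: {expected_loss_catastrophe:.1e} expected lives lost")  # 4.0e+07
print(f"extinction:  {expected_loss_extinction:.1e} expected lives lost")   # 1.0e+11
# Even at a 1,000x lower probability, extinction dominates under these assumptions.
```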

(I don't 100% subscribe to longtermist ethics, but that was the frame I was taking for this post.)
