
On the one hand, I think we've seen enough evidence that governments and other institutions are surprisingly inadequate at dealing even with a natural pandemic, which by all accounts has substantially less concerning properties than a severe bio-engineered pandemic would.

On the other hand, a classical reason given for being less concerned about biorisk historically is that we'll see "warning shots" before the real thing (in a way that we're less likely to see with, eg, AI). In a way, COVID-19 is one such "warning shot." So I expect governments, large non-EA donors, public health people and the current generation of smart young people, etc., to all make the update to be much more concerned about pandemics and institutional resilience to them.

On balance, I weakly think current events should lead us to be less concerned about future biorisk. What do you guys think?

Answers

I think you're probably right that society is likely to respond by increasing our ability to respond to natural pandemics in various ways. There are a lot of great people who are now far more interested in pandemics than they were before.

(Come to think of it, putting some thought now into how to mobilise those forces to avert the next pandemic is probably warranted, since I think there's a pretty good chance all that energy dissipates without much to show for it within a few years of this pandemic ending.)

When it comes to biorisk as a whole, the picture is less clear (though my guess is still probably positive?). There does seem to be some danger that people neglect considerations around engineered pandemics (DURC, info hazards, etc.) in their rush to tackle natural pandemics. I think a lot of work done on the latter is still useful for preventing the former, but they don't always run in the same direction, and since engineered pandemics seem to be the greatest concern from a longtermist perspective, this could be a significant concern.

I agree with this. I generally suspect it's important to give people "things to do" when they're currently riled up/inspired/motivated about something, and that in the absence of things to do they'll just gradually revert to their prior sets of interests and ...

I think there are roughly four subquestions here:

1. Do these events provide evidence that we should've been more worried all along about pandemics in general (not necessarily from a longtermist/x-risk perspective)?

2. Do these events provide evidence that we should've been more worried all along about existential risk from pandemics?

3. Do these events increase the actual risk from future pandemics in general (not necessarily from a longtermist/x-risk perspective)?

4. Do these events increase the actual existential risk from future pandemics?

With that in mind, here are my wild speculations as to the answers, informed by very little actual expertise.

I'm fairly confident the answer to 3 is no. It seems quite likely to me that these events will at least somewhat decrease the actual risk from future pandemics in general, because of the "warning shot" effect you mention.

I think 4 is a very interesting question. I would guess that there's enough overlap between what's good for pandemics in general and what's good for existential risks from pandemics that these events will reduce those risks, again due to the "warning shot" effect.

I would also guess that we'll see something more like resources being added to the pool of pandemic preparedness, rather than resources being taken away from longtermist-style pandemic preparedness in order to fuel more "small scale" (by x-risk standards) or "short term" pandemic preparedness. This is partly informed by my second-hand impression that there's currently not many resources in specifically longtermist-style pandemic preparedness anyway (to the extent that the two categories are even separate).

But I could imagine being wrong about all of that.

I think the answers to 1 and 2 depend on what you previously believed. I think for most people, the answer to both should be "yes" - most people seem to have very much dismissed, or mostly just not thought about, risks from pandemics, so a very real example seems likely to remind them that things that don't usually happen really do happen sometimes.

But it seems to me that what we're seeing here is remarkably like what I've been hearing from EAs, longtermists, and biorisk people since I got into EA, from various podcasts and articles and conversations. So for these people, it might not be "new evidence", just something that fits with their existing models (which doesn't mean they expected precisely this to happen at precisely this point).

Related Questions

Answer by Goran Haden
What do you think now, two years later?
MichaelA🔸
Quick take: Seems to have clearly boosted the prominence of biorisk stuff, and in a way that longtermism-aligned folks were able to harness well to promote interventions, ideas, etc. that are especially relevant to existential biorisk. I think it probably also on net boosted longtermist-/x-risk-style priorities/thinking more broadly, but I haven't really thought about it much.
Answer by MichaelA🔸
Very speculative and anecdotal.

I think I personally find myself emotionally tugged away from longtermism a little by these events. When there's so much destruction happening "right before my eyes" and in a short enough future that it can really emotionally resonate, it's like on some level my brain/emotions are telling me "How could you be worried about AI risk or a future bioengineered pandemic at a time like this! There are people dying right now. This already is a catastrophe!" And it's slightly hard to feed into my emotions the fact that a very different scale of catastrophe, and a much more permanent type, could still possibly happen at some point. (Again, I'm not dismissing that the current pandemic really is a catastrophe, and I do believe it makes sense to reallocate substantial effort to it right now.)

On the other hand, this pandemic also seems to validate various things longtermists have been saying for a while, such as about how civilization is perhaps more fragile than people imagine, how we need to improve the speed at which we can develop vaccines, etc. And it provides an emotionally powerful reminder of just how bad and real a catastrophe can be, which might make it easier for people to feel how bad it is that we could have a catastrophe that's even worse, and that in fact destroys civilization as a whole.

I think I'd tentatively guess that this pandemic will make the general public slightly more "longtermist" in their values in general. I'd also guess that it'll make the general public substantially more in favour - for present-focused reasons - of things that also happen to be good from a longtermism perspective (e.g., increased spending on future pandemic preparedness in general). But I'm not sure how it'll affect people who are already quite longtermist. From my sample size of 1 (myself), it seems it won't really change behaviours, but will slightly reduce the emotional resonance of longtermism right now (as opposed to just general focus on
Comments

One reason to believe otherwise is if you think existential GCBRs will look so radically different that any broader biosecurity preparatory work won't be useful.

This was basically going to be my response -- but to expand on it, in a slightly different direction, I would say that, although maybe we shouldn't be more concerned about biorisk, young EAs who are interested in biorisk should update in favor of pursuing a career in/getting involved with biorisk. My two reasons for this are:

1) There will likely be more opportunities in biorisk (in particular around pandemic preparedness) in the near-future.

2) EAs will still be unusually invested in lower-probability, higher-risk problems than non-EAs (like GCBRs).

(1) means talented EAs will have more access to potentially high-impact career options in this area, and (2) means EAs may have a higher counterfactual impact than non-EAs by getting involved.

Is this a cunning scheme to ask private questions on the Forum, or is this actually going to go public at some point? :P

It's going to go public! Want people to review it lightly in case this type of question will lead to information-hazard territory in the answers.
