
Some people have proposed various COVID-19-related questions (or solicited collections of such questions) that I think would help inform EAs’ efforts and prioritisation both during and after the current pandemic. In particular, I've seen the following posts: 1, 2, 3, 4.

Here I wish to solicit a broader category of questions: any questions which it would be valuable for someone to research, or at least theorise about, that the current pandemic in some way “opens up” or will provide new evidence about, and that could inform EAs’ future efforts and priorities. These are not necessarily questions about how to help with COVID-19 specifically, and some may inform EA efforts even outside the broad space of existential risks. I’ve provided several examples to get things started.

I'd guess that most of these questions are probably best addressed at least a few months from now, partly because there will be more and clearer evidence by then. But we could start collecting the questions now and thinking about how we could later investigate them.

If you have ideas on how to refine or investigate some of the questions here, have ideas for spin-off or additional related questions, or already have some tentative “answers”, please provide those as comments.

(I'd consider the four posts linked above to also be good examples of the sort of question I’m after.)



What lessons can be drawn from these events for how much to trust governments, mainstream experts, news sources, EAs, rationalists, mathematical modelling by people without domain-specific expertise, etc.? What lessons can be drawn for debates about inside vs outside views, epistemic modesty, etc.?

E.g., I think these events should probably update me somewhat further towards:

  • expecting governments to think and/or communicate quite poorly about low-probability, high-stakes events
  • believing in something like, or a moderate form of, "Rationalist/EA exceptionalism"
  • trusting inside views that seem clever, even if they're from non-experts and I lack the expertise to evaluate them

But I'm still wary of extreme versions of those conclusions. And I also worry about something like a "stopped clock is right twice a day" situation - perhaps this was something like a "fluke", and "early warnings" from the EA/rationalist community would typically not turn out to seem so prescient.

(I believe there’s been a decent amount of discussion of this sort of thing on LessWrong.)

Some people have previously suggested that "warning shots" in the form of things somewhat like, but less extreme than, global or existential catastrophes could increase the extent to which people prepare for future GCRs and existential risks.

What evidence do COVID-19, reactions to it, and reactions that seem likely to occur in future provide for or against that idea?

And what evidence do these things give about how well society generalises the lessons from such warning shots? E.g., does/will society learn from COVID-19 that it’s important to make substantial preparations for other types of low-likelihood, high-stakes possibilities, like AI risk? This could be seen as trying to gather more evidence (or at least thoughts) relevant to the following statement from Nick Beckstead (2015):

Overspecificity of reactions to warning shots: It may be true that, e.g., the 1918 flu pandemic served as a warning shot for more devastating pandemics that happened in the future. For example, it frequently gets invoked in support of arguments for enhancing biosecurity. But it seems significantly less true that the 1918 flu pandemic served as a warning shot for risks from nuclear weapons, and it is not clear that the situation would change if one were talking about a pandemic more severe than the 1918 flu pandemic.

Some people have suggested that one way to have a major, long-term influence on the world is for an intellectual movement to develop a body of ideas and have adherents to those ideas in respected positions (e.g., university professorships, high-level civil service or political staffer roles), with these ideas likely lying dormant for a while, but then potentially being taken up when there are major societal disruptions of some sort. I’ve heard this strategy described as making sure there are good ideas “lying around” when an unexpected crisis occurs.

As an example, Kerry Vaughan describes how stagflation “helped to set the stage for alternatives to Keynesian theories to take center stage.” He also quotes Milton Friedman as saying: “the role of thinkers, I believe, is primarily to keep options open, to have available alternatives, so when the brute force of events make a change inevitable, there is an alternative available to change it.”

What evidence do COVID-19, reactions to it, and reactions that seem likely to occur in future provide for or against these ideas? For example:

  • Was there a major appetite in governments for lasting changes that EA-aligned (or just very sensible and forward-thinking) civil servants were able to seize upon?
  • Were orgs like FHI, CSER, and GCRI, or other aligned academics, called upon by governments, media, etc., in a way that (a) seemed to depend on them having spent years developing rigorous versions of ideas about GCRs, x-risks, etc., and (b) seems likely to shift narratives, decisions, etc. in a lasting way?

And to more precisely inform future decisions, it’d be good to get some sense of:

  • How likely is it that similar benefits could’ve been seized by people “switching into” those pathways, roles, etc. during the crisis, without having built up the credibility, connections, research, etc. in advance?
  • If anyone did manage to influence substantial changes that seem likely to last, what precise factors, approaches, etc. seemed to help them do so?
  • Were there apparent instances where someone was almost able to influence such a change? If so, what seemed to block them? How could we position ourselves in future to avoid such blockages?

Here's another example of a prior statement of something like the idea I'm proposing should be investigated. This is from Carrick Flynn talking about AI policy and strategy careers:

If you are in this group whose talents and expertise are outside of these narrow areas, and want to contribute to AI strategy, I recommend you build up your capacity and try to put yourself in an influential position. This will set you up well to guide high-value policy interventions as clearer policy directions emerge. [...]
Depending on how slow these “entangle [...]

Will governments and broader society now adequately prioritise pandemics, or some subset of them, such as natural pandemics or respiratory disease pandemics? Does this mean that pandemics (or that subset) are mostly “covered”, and thus that “the EA portfolio” should move towards other things instead (e.g., any still-overlooked types of pandemics, other x-risks, etc.)?

Conversely, should EA now place more emphasis on pandemics, because the “window” or “appetite” for people to work on such matters is currently larger than normal? If so, how long will that last? (E.g., if someone is just starting their undergrad, should they plan with that window/appetite in mind, or should they assume attention will shift away again by the time they’re in a position to take relevant roles?)

How to get disease surveillance right: monitoring the spread of a disease effectively without infringing on civil liberties.

Also, how effectively is the Fed's expansionary response to the COVID-19 crisis mitigating the worst risks of the pandemic? (I'm not remotely a macroecon expert so I don't know what the best questions are, but I know that Open Phil is interested in this area.)
