
Summary

Calculations

My calculations are in this Sheet.

I Fermi estimate the cost-effectiveness of epidemic/pandemic preparedness to be 0.00236 DALY/$, multiplying (see the sketch after this list):

  • The expected annual epidemic/pandemic disease burden of 68.2 MDALY. I obtained this as the product of:
    • The expected annual epidemic/pandemic deaths of 1.61 M, which I determined by multiplying:
      • The epidemic/pandemic deaths per human-year from 1500 to 2023 of 1.98*10^-4, which is the ratio between 160 M epidemic/pandemic deaths and 808 G human-years from Marani et al. 2021[1].

      • The population predicted for 2024 of 8.12 G.
    • The disease burden per death in 2021 of 42.4 DALY.
  • The relative reduction of the expected annual epidemic/pandemic disease burden per annual cost of 3.46 %/G$. I got this by aggregating the following estimates with the geometric mean:
    • 8 %/G$ (= 0.2/(250*10^9/100)), which is based on Millett & Snyder-Beattie 2017:
      • “We extend the World Bank's assumptions to include bioterrorism and biowarfare—that is, we assume that the healthcare infrastructure would reduce bioterrorism and biowarfare fatalities by 20%”.
      • “We calculate that purchasing 1 century's worth of global protection in this form would cost on the order of $250 billion, assuming that subsequent maintenance costs are lower but that the entire system needs intermittent upgrading”.
    • 1.5 %/G$ (= 0.3/(20*10^9)), which is based on Bernstein et al. 2022:
      • 30 % is the mean between 10 % and 50 %, which are the values studied in Table 2.
      • “We find that the sum of our median cost estimates of primary prevention (~$20 billion) are ~1/20 of the low-end annualized value of lives lost to emerging viral zoonoses and <1/10 of the annualized economic losses”.
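
The sketch below reproduces the arithmetic above in Python. The input figures are taken from the post; the variable names are mine, and this is only a rough verification of the Fermi estimate, not the original Sheet.

```python
# Fermi estimate of the cost-effectiveness of epidemic/pandemic preparedness,
# using the figures quoted in the post (a verification sketch, not the original Sheet).

# Expected annual epidemic/pandemic disease burden.
deaths_per_human_year = 160e6 / 808e9  # deaths per human-year, 1500 to 2023 (Marani et al. 2021)
population_2024 = 8.12e9               # predicted population for 2024
dalys_per_death = 42.4                 # disease burden per death in 2021 (DALY)
expected_annual_burden = deaths_per_human_year * population_2024 * dalys_per_death  # ~68.2 MDALY

# Relative reduction of the expected annual burden per annual cost (fraction per $),
# aggregated as the geometric mean of the two literature-based estimates.
millett_snyder_beattie_2017 = 0.2 / (250e9 / 100)  # 8 %/G$
bernstein_2022 = 0.3 / 20e9                        # 1.5 %/G$
relative_reduction_per_dollar = (millett_snyder_beattie_2017 * bernstein_2022) ** 0.5  # ~3.46 %/G$

cost_effectiveness = expected_annual_burden * relative_reduction_per_dollar
print(f"{cost_effectiveness:.5f} DALY/$")  # ~0.00236 DALY/$
```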

Relative to epidemic/pandemic preparedness, I calculate (see the sketch after the footnote):

  • GiveWell’s top charities are 4.21 (= 0.00994/0.00236) times as cost-effective.
  • Corporate campaigns for chicken welfare, such as the ones supported by THL, are 6.35 k (= 15.0/0.00236) times as cost-effective.
  1. ^ 1 G stands for 1 billion. I assumed 5 k deaths (= (0 + 10)/2*10^3) for epidemics/pandemics qualitatively inferred (said) to have caused less than 10 k deaths, which are coded as having caused -999 (0) deaths. I also considered the deaths from COVID-19, which is not in the original dataset.
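
For reference, a minimal sketch of the two comparison ratios above, assuming the figures used in the post of 0.00994 DALY/$ for GiveWell's top charities and 15.0 (in the same units) for corporate campaigns for chicken welfare:

```python
# Cost-effectiveness ratios relative to epidemic/pandemic preparedness (figures from the post).
pandemic_preparedness = 0.00236   # DALY/$, from the estimate above
givewell_top_charities = 0.00994  # DALY/$
chicken_welfare_campaigns = 15.0  # same units, as used in the post

print(givewell_top_charities / pandemic_preparedness)     # ~4.21
print(chicken_welfare_campaigns / pandemic_preparedness)  # ~6.35e3
```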
     

Comments



This seems intuitively in the right ballpark (within an order of magnitude of GiveWell), but I'd caution that, as far as I can tell, the World Bank and Bernstein et al. numbers are basically made up.

I've previously written about how to identify higher-impact opportunities. In particular, we need to be careful about the counterfactuals here, because a lot of the money spent on pandemic preparedness comes from governments that would otherwise spend it on even less cost-effective things.

Thanks for the comment, Joshua!

I'd caution that, as far as I can tell, the World Bank and Bernstein et al. numbers are basically made up.

Because we do not know the relative reduction in the expected annual deaths caused by their proposed measures, right? I guess their values are optimistic, such that GiveWell's top charities are more than 4.21 times as cost-effective.

Interesting analysis! I don't have any experience conducting such analyses myself, but I was curious which interventions are considered part of pandemic preparedness when calculating the total cost. Does it also include indirect costs, such as research funding or capacity-building projects?

Thanks for the relevant question, Dhruvin!

Below is the relevant section from Bernstein et al. 2022. I have bolded the 6 measures included in the annual cost of 20 G$.

THE COSTS OF PRIMARY PREVENTION

Previously, we provided preliminary estimates of how much primary prevention might cost (9). We presented six estimates of annual costs. We estimated $19 billion to close down China’s wildlife farming industry, based on a Chinese report (76). A total of $476 million to $842 million were needed to reduce spillover from livestock based on (77) and the World Bank One World One Health farm biosecurity intervention program (78). The report provided the cost of implementing enhanced biosecurity for zoonoses around farming systems in low to middle income countries, and we extrapolated those data to the 31 countries with high risk of wildlife viral spillover risk from (65, 66).

The other four were our estimates for viral discovery ($120 million to $340 million), early detection and control ($217 million to $279 million), wildlife trade surveillance ($250 million to $750 million), and programs to reduce spillover from livestock ($476 million to $852 million). The most complicated estimate was reducing deforestation by half ($1.53 billion to $9.59 billion). These broad-brush estimates provide essential insights into the relative magnitude of each task. Here, we provide more details of the underlying issues determining costs and the challenges of implementation.

These kinds of analyses are generally a waste of time, because the people performing them have no idea about how outbreaks are identified and controlled. They have good intentions, but outbreak control isn't a simple linear world where you know all the variables and you can work with averages. As a result, these estimates tend to have little basis in reality.

Take the numbers from Bernstein et al - they're patently ridiculous! "$19 billion to close down China’s wildlife farming industry". Never mind the credibility of the $19bn figure... who's going to tell China to shut down anything? Who thinks the CCP are just going to do what they're told? What kind of a plan is this??

If you want to do a cost/benefit analysis, you need to do it by strategy. And there are lots of different strategies.

For example, what's the cost/benefit of a rapid elimination strategy? What's the cost/benefit of having wastewater / sewerage / environmental / random testing in major international ports of entry? What's the cost/benefit of investing in rapid testing manufacturing capacity? Or of training HCWs to implement the strategy?

If you respond quickly, you can reduce the size of the problem by orders of magnitude, and therefore reduce the costs of resolving it by orders of magnitude too. So where does that appear in the analysis? Nowhere, because these kinds of analyses don't allow for it. Instead, they just make vague assumptions about reducing healthcare costs, which is totally unsatisfactory.

I've tried to explain this before...

https://forum.effectivealtruism.org/posts/utE4WqYjjmYDwoiuJ/pandemicriskman-s-quick-takes#u2JxaKrmfJF4hbfh5

Thanks for the comment!

I took Millett & Snyder-Beattie 2017's and Bernstein et al. 2022's numbers at face value, but they are far from rigorous estimates, and I would agree better modelling is needed.

Take the numbers from Bernstein et al - they're patently ridiculous!

Sidenote. I would not be surprised if their numbers are very off, but I think it is better to avoid terms like "ridiculous", which are confrontational, and therefore can make thinking clearly more difficult.

Interesting! This is a very surprising result to me because I am mostly used to hearing about how cost effective pandemic prevention is and this estimate seems to disagree with that.

Shouldn't this be a relatively major point against prioritizing biorisk as a cause area? (at least w/o taking into account strong long termism and the moral catastrophe of extinction)

Not really. This post's cost-effectiveness calculation was done at the cause-area level, so it's an average over many interventions of highly varying cost-effectiveness, while GiveWell's top charities are evaluated at the (org-specific) intervention level.

Thanks for the comment, Jacob!

This is a very surprising result to me because I am mostly used to hearing about how cost effective pandemic prevention is and this estimate seems to disagree with that.

Note that the cost-effectiveness of epidemic/pandemic preparedness I got of 0.00236 DALY/$ is still quite high. The value of a statistical life in high income countries is around 1 to 10 M$, which, for 51 DALY averted per life saved[1], leads to 5.10*10^-6 (= 51/(10*10^6)) to 5.10*10^-5 DALY/$ (= 51/10^6), i.e. 0.216 % (= 5.10*10^-6/0.00236) to 2.16 % (= 5.10*10^-5/0.00236) of my estimate for the cost-effectiveness of epidemic/pandemic preparedness.
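
As a quick check of that comparison, here is a small sketch with the assumed inputs spelled out (the 51 DALY per life saved is from the footnote below; the variable names are mine):

```python
# Cost-effectiveness implied by the value of a statistical life (VSL) in high-income countries,
# compared with the estimate for epidemic/pandemic preparedness.
dalys_per_life_saved = 51      # per Open Philanthropy's reading of GiveWell's moral weights
vsl_low, vsl_high = 1e6, 10e6  # VSL range of 1 to 10 M$

ce_low = dalys_per_life_saved / vsl_high  # ~5.10*10^-6 DALY/$
ce_high = dalys_per_life_saved / vsl_low  # ~5.10*10^-5 DALY/$

print(ce_low / 0.00236)   # ~0.00216, i.e. 0.216 % of the preparedness estimate
print(ce_high / 0.00236)  # ~0.0216, i.e. 2.16 %
```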

Shouldn't this be a relatively major point against prioritizing biorisk as a cause area?

Not so much for prioritising global health and development over biorisk, since GiveWell's top charities being 4.21 times as cost-effective is not much considering uncertainty in my estimates. However, I would say definitely so for prioritising animal welfare over biorisk.

(at least w/o taking into account strong long termism and the moral catastrophe of extinction)

It is unclear to me whether such considerations would lead to prioritising biorisk, even under expected total hedonistic utilitarianism (which I strongly endorse).

  1. ^ According to Open Philanthropy, “GiveWell uses moral weights for child deaths that would be consistent with assuming 51 years of foregone life in the DALY framework (though that is not how they reach the conclusion)”.

Note that the cost-effectiveness of epidemic/pandemic preparedness I got of 0.00236 DALY/$ is still quite high.


Point well-taken. 

I appreciate you writing and sharing those posts trying to model and quantify the impact of x-risk work and question the common arguments given for astronomical EV.

I hope to take a look at those more in depth some time and critically assess what I think about them. Honestly, I am very intrigued by engaging with well informed disagreement around the astronomical EV of x-risk focused approaches. I find your perspective here interesting and I think engaging with it might sharpen my own understanding.

:)

 

Thanks, Jacob! That is nice to know.
