weeatquince

Thank you so much for writing this. I think it is an excellent piece and makes a really strong case for how longtermists should consider approaching policy. I agree with most of your conclusions here.

I have been working in this space for a number of years, advocating (with some limited successes) for a cost-effectiveness approach to government policymaking on risks in the UK (and am a contributing author to the Future Proof report you cite). Interestingly, despite having made progress in the area, I am over time leaning more towards specific advocacy focused on known risks (e.g. pandemic preparedness) than more general work on improving government spending on risks as a whole. I have a number of unpublished notes on how to assess the value of such work that might be useful, so I thought I would share them below.

I think there are three points my notes might helpfully add to your work:

  1. Some more depth on how to think about cost-benefit analysis, and in particular what the threshold is for government to take action. I think the benefit-cost ratio you describe is below the threshold for government action.
  2. An independent literature-review-style analysis of the benefit-cost ratio for marginal additional funds going into disaster prevention. (Literature list in the Annex section.)
  3. Some vague reflections, as a practitioner in this space, on the paths to impact.

 

Note: Some of this research was carried out for Charity Entrepreneurship and should be considered Charity Entrepreneurship work. This post is written in an independent capacity and does not represent the views of any employer.

 

1. The cost-benefit analysis here is not enough to suggest government action

I think it is worth putting some thought into how to interpret cost-benefit analyses and how a government policymaker might use them. Your conservative estimate suggests a benefit of $646 billion against a cost of $400 billion – a benefit-cost ratio (BCR) of roughly 1.6 to 1.

Naively, a benefit-cost ratio of more than 1 to 1 suggests that a project is worth funding. However, given the overhead costs of government policy, governments' propensity to make even cost-effective projects go wrong, and the public's preference for money in hand, it may be more appropriate to apply a higher bar for cost-effective government spending. I remember using a 3 to 1 ratio as a rule of thumb, perhaps picked up when I worked in government, although I cannot find a source for this now.

According to https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5537512/, the average benefit-cost ratio of government investment in health programs is 8.3 to 1. I strongly expect there are many actions the US government could take to improve citizens' healthcare with a BCR in the 5–10 to 1 range. In comparison, 1.6 to 1 does not look like a priority.

Some Copenhagen Consensus analysis I read treats interventions with robust evidence of benefits 5 to 15 times higher than costs as "good" interventions.

So overall, when making the case to government or the public, I think a 1.6 to 1 BCR is not sufficient to suggest action. I would consider 3 to 1 to be a reasonable bar and 5 to 1 to be a good case for action.
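To make the comparison concrete, here is a minimal sketch of the arithmetic above. The 1:1, 3:1 and 5:1 bars are the rules of thumb I describe, not official hurdle rates, and the benefit and cost figures are the conservative estimates from the post.

```python
# Minimal sketch using the figures discussed above; the thresholds are
# illustrative rules of thumb, not official government hurdle rates.
benefit_bn = 646   # conservative benefit estimate, $bn
cost_bn = 400      # cost estimate, $bn

bcr = benefit_bn / cost_bn
print(f"Benefit-cost ratio: {bcr:.1f} to 1")  # ~1.6 to 1

thresholds = {"naive break-even": 1, "reasonable bar": 3, "good case for action": 5}
for label, bar in thresholds.items():
    verdict = "clears" if bcr >= bar else "falls short of"
    print(f"{bcr:.1f}:1 {verdict} the {label} of {bar}:1")
```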

 

2. On the other hand, the benefit-cost ratio is probably higher than your analysis suggests

As mentioned, you directly calculate a benefit-cost ratio of 1.6 to 1 (i.e. $646 billion to $400 billion).

Firstly, I note from reading your workings that this is clearly a conservative estimate. I would be interested to see a midline estimate of the BCR too.

I made a separate estimate that I thought I would share, which was a bit more optimistic than this. It suggested that, on the margin, the benefit-cost ratio (BCR) of additional spending on disaster preparedness is in the region of 10 to 1, or maybe a bit below that. I copy my sources into an annex section below.

(That said, spending $400 billion is arguably more than "on the margin" and is a big jump in spending, so we might expect spending at that level to have a somewhat lower value. Of course, in practice I don't think advocates are going to get government to spend $400bn tomorrow, and a gradual ramp-up in spending is likely justified.)

 

3. A few reflections on political tractability and value 

My experience (based on the UK) is that governments are relatively open to change and improvement in this area. I expect the technocratic elements of government to respond well to highlighting inconsistencies in process and decision making, and the UK government has committed to improving how it assesses risks. I expect governments to be more reticent to make changes that necessitate significant spending, or to put in place mechanisms and oversight that could hold them to account for not spending sufficiently on high-impact risks (and thereby help ensure future spending).

I am also becoming a bit more sceptical of the value of this kind of general longtermist work when compared to work focusing on known risks. Based on my analysis to date, I believe some of the more specific policy-change ideas (preventing dangerous research, developing new technology to tackle pandemics, or AI regulation) are a bit more tractable and have a somewhat higher benefit-cost ratio than this more general work to increase spending on risks. That said, in policy you may want to have many avenues in progress at once so as to capitalise on key opportunities, so these approaches should not be seen as mutually exclusive. Additionally, there is a case that more general system improvements are critical from a more patient longtermist view, or from a greater focus on unknown-unknown risks.
 

 

ANNEX: My estimate 

On the value of general disaster preparedness

We can set a prior for the value of pandemic preparedness by looking at other disaster preparedness spending.

Real-world evidence. Most of the evidence for this comes from assessments of the value of physical infrastructure preparation for natural disasters, such as constructing buildings that can withstand floods. See the table below.

Source: Natural Hazard Mitigation Saves: 2019 Report (Link)

  • Looks at the BCR of different disaster mitigation and prevention measures. For example:
    • Adopting building codes: 11:1
    • Changing buildings: 4:1
  • (We think this source has some risk of bias, although it does appear to be high quality.)

BCR: 11:1 (building codes); 4:1 (changing buildings)

Source: If Mitigation Saves $6 Per Every $1 … (Gall and Friedland, 2020) (Link)

  • "The value of hazard mitigation is well known: the Multihazard Mitigation Council (MMC) upped their initial estimate of $4 (MMC 2005) saved for every $1 spent on hazard mitigation to $6, and $7 with regard to flood mitigation (MMC 2017)."

BCR: 4:1; 6½:1

Other estimates. There are also a number of estimates of benefit-cost ratios:

Source: Does mitigation save? Reviewing cost-benefit analyses of disaster risk reduction (Shreve and Kelman, 2014) (Link)

  • Suggests that disaster risk reduction saves money, but at what ratio is unclear and highly dependent on the situation, location, and kind of disaster.
  • BCR estimates tend to be less than 10, with a few going into the tens and a very few much higher (the largest was 1800).
  • BCRs may be underestimates because of high discount rates (see the sketch after this table).

BCR: ~10:1

Source: Natural disasters challenge paper (Copenhagen Consensus, 2015)

  • There are growing economic costs from natural disasters in recent years. This is especially true in developing countries, where there may be limited insurance, higher risks, and looser building codes.
  • Looks at retrofitting schools to be earthquake resistant in seismically active countries; suggests this has a BCR close to 1:1.
  • Looks at constructing a one-metre-high wall to protect homes, or elevating houses by one metre, to reduce flooding; suggests this has a BCR of 60:1.

BCR: 1:1 (school retrofits); 60:1 (flood protection)

Source: IFRC (Link)

  • "'We estimate that for each dollar spent on disaster preparedness, an average of four dollars is saved on disaster response and recovery,' says Alberto Monguzzi, Disaster Management Coordinator in the IFRC Europe Zone Office."

BCR: 4:1
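On the discount-rate point above: a quick illustrative sketch (entirely hypothetical cost and benefit figures, not taken from any of the sources) of how the choice of discount rate moves a BCR when the benefits of preparedness accrue over decades.

```python
# Illustrative only: how the discount rate moves a benefit-cost ratio when
# benefits arrive over a long horizon. The numbers are made up for the example.
def present_value(annual_benefit, years, rate):
    """Present value of a constant annual benefit stream over `years`."""
    return sum(annual_benefit / (1 + rate) ** t for t in range(1, years + 1))

cost = 100           # upfront cost, $m (hypothetical)
annual_benefit = 10  # avoided expected losses per year, $m (hypothetical)

for rate in (0.015, 0.035, 0.07):
    bcr = present_value(annual_benefit, years=50, rate=rate) / cost
    print(f"discount rate {rate:.1%}: BCR ≈ {bcr:.1f}:1")  # BCR falls as the rate rises
```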

Pandemic preparedness estimates

We found one example of an estimate of the value of preparing better for future pandemics.

Source: Not the last pandemic: … (Craven et al., 2021) (Link)

  • Suggests a cost of $285bn–$430bn over 10 years would partially mitigate damage of $16,000bn occurring roughly every 50 years.
  • This roughly implies a BCR of <9:1:
    • (16,000 / 50) / (((285 + 430) / 2) / 10) = 8.95

BCR: <9:1
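In case it is useful, the same arithmetic written out as a quick sketch, annualising both the damage and the cost using the figures quoted above.

```python
# Reproduces the rough implied-BCR arithmetic above (figures as cited from
# Craven et al., 2021). The "partially mitigates" caveat is ignored, which is
# why the result is treated as an upper bound, i.e. "<9:1".
damage_bn = 16_000                      # expected damage, $bn, roughly every 50 years
damage_period_years = 50
cost_low_bn, cost_high_bn = 285, 430    # 10-year preparedness cost range, $bn
cost_period_years = 10

annual_damage = damage_bn / damage_period_years                        # 320 $bn/yr
annual_cost = ((cost_low_bn + cost_high_bn) / 2) / cost_period_years   # 35.75 $bn/yr

print(f"Implied BCR upper bound: {annual_damage / annual_cost:.2f} to 1")  # ~8.95
```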

We also found a few examples of estimates of the value of stockpiling for future pandemics.

Source: The Cost Effectiveness of Stockpiling Drugs, Vaccines and Other Health Resources for Pandemic … (Plans-Rubió, 2020) (Link)

  • Looked at estimates of the cost-effectiveness of stockpiling drugs for a pandemic. Example estimates:
    • "US$8550 per LYS in very high severity pandemics and US$13,447 per LYS in moderate severity pandemics"
    • "£3800 per QALY and £28,000 per QALY for the 1918 and 1957/69 pandemic scenarios"
  • Very roughly, if we place a value of $50–100k on a year of life, this suggests a BCR of roughly 5:1 to 10:1.

BCR: ~8:1

Source: Cost-Benefit of Stockpiling Drugs … (Balicer et al., 2005) (Link)

  • Suggests various options for stockpiling in Israel for an influenza pandemic have benefit-cost ratios of 0.37, 0.38, 2.49, 2.44, and 3.68.
  • "investments in antiviral agents can be expected to yield a substantial economic return of >$3.68 per $1 invested, while saving many lives"

BCR: 4:1

Source: (Link)

  • "expanding the stockpile of AV drugs to encompass the whole UK population (≈60 million) might even be acceptable (≈£6,500 per QALY gained over a no intervention strategy for the 1918 scenario under base-case assumptions)."
  • Very roughly, if we place a value of $50–100k on a year of life, this suggests a BCR of roughly 10:1.

BCR: ~10:1

Source: (Link)

  • "Procuring an adequate PPE stockpile in advance at non-pandemic prices would cost only 17% of the projected amount needed to procure it at current pandemic-inflated prices"

BCR: 6:1
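As a sanity check on the rough conversions in this table, here is a small sketch. The $50–100k value per life year is my assumption (as noted above), and treating the 17% PPE price figure as a direct benefit-cost ratio is also a simplification.

```python
# Rough conversions used in the table above; the $50k-$100k value per life
# year is an assumption, not a figure from the cited papers.
value_per_ly_low, value_per_ly_high = 50_000, 100_000

# Plans-Rubió (2020): cost per life year saved (LYS) in different scenarios.
for cost_per_lys in (8_550, 13_447):
    low = value_per_ly_low / cost_per_lys
    high = value_per_ly_high / cost_per_lys
    print(f"${cost_per_lys:,} per LYS -> BCR roughly {low:.0f}:1 to {high:.0f}:1")

# PPE stockpiling: buying in advance costs ~17% of buying at pandemic prices,
# i.e. roughly a 1/0.17 ~ 6:1 saving on procurement costs.
print(f"PPE stockpile BCR: ~{1 / 0.17:.0f}:1")
```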

 

Historical data and estimates suggest the value of increasing preparedness is decent but not very high, with estimated benefit-cost ratios (BCR) often around or just below 10:1.

 

How this changes with scale of the disaster

There is some reason to think that disaster preparedness is more cost-effective when targeted at worse disasters. Theoretically this makes sense, as disaster losses are heavy-tailed and most of the impact of preventing and mitigating disasters will come from preventing and mitigating the very worst disasters. This is also supported by models estimating the effect of pandemic preparedness, such as those discussed in this talk (Doohan and Hauck, 202?).

Pandemics affect more people than natural disasters, so we could expect a higher-than-average BCR. This is more relevant if we pick preparedness interventions that scale with the size of the disaster (an example of an intervention that does not have this property is stockpiling, where the impact is capped by the size of the stockpile rather than the size of the disaster); the toy simulation below illustrates the difference.
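To illustrate that distinction, here is a toy simulation. The numbers are entirely my own illustrative assumptions (Pareto-distributed losses, a 10% mitigation fraction, and an arbitrary $2bn per-event cap standing in for a finite stockpile); it is a sketch of the scaling-versus-capped point, not a model of any real intervention.

```python
import random

random.seed(0)

# Toy model, illustrative assumptions only: per-event losses (in $bn) drawn
# from a heavy-tailed Pareto distribution.
def sample_loss(alpha=1.5, scale=1.0):
    u = 1.0 - random.random()          # uniform in (0, 1]
    return scale / (u ** (1.0 / alpha))

losses = [sample_loss() for _ in range(100_000)]

# Intervention A: benefit scales with the disaster (mitigates 10% of any loss).
benefit_scaling = sum(0.10 * loss for loss in losses)

# Intervention B: same 10% mitigation, but capped at $2bn per event
# (a stand-in for a finite stockpile).
benefit_capped = sum(min(0.10 * loss, 2.0) for loss in losses)

print(f"Mean benefit per event, scaling intervention: {benefit_scaling / len(losses):.3f} $bn")
print(f"Mean benefit per event, capped intervention:  {benefit_capped / len(losses):.3f} $bn")
```

Most of the gap between the two comes from a small number of tail events, which is the heavy-tail point made above.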

However, overall I did not find much solid evidence to suggest that the BCR is higher for larger-scale disasters.

We separately looked at two ideas on new technology:

  1. The idea listed here focused on new market incentives for antimicrobials.
  2. Advocacy for funding for new technology (not antibiotics) to help mitigate pandemics.

(We found this breakdown useful as the problems are different. The current patent system does not work for antimicrobials, due to the need to limit the use of last-line novel antibiotics. The current patent system works better for preparing for future pandemics, but has limits, as the payout is uncertain and might not happen within the lifetime of a patent.)

 

Under idea 1 (antimicrobials), we didn't look specifically at phage therapy. I don't have a strong sense of whether phage therapy is in or out of scope of the various proposed policy changes, although I think the current focus is on antibiotics, which would make phage therapy out of scope. This could be something for the new charity to look into. The existence of other emerging health tech that could also address microbial diseases could be seen as a reason to reduce the expected impact of developing new antibiotics. This was not explicitly modelled (other than applying a 4% discount rate, which should cover such things).

Under idea 2 (new tech for pandemics), we very briefly considered phage therapy. It was cut as an idea at a very early stage of our research, when we were considering which new technologies will have the biggest effect on future pandemics. This is not to say that it is not a good idea – I tend to describe CE's research as rigorous but not comprehensive, and I am sure that, unfortunately, good ideas get cut at the early stages of our prioritisation.

 

I hope that answers your question. Both reports should be made available in due course.

We also hope that any charity that begins life focusing on shifting market incentives for antibiotics could scale by moving on to policy advocacy and market-shaping work for other key technologies. Technologies we would be excited to see more advocacy for include platform DNA vaccine technology, UVC sterilisation, and point-of-care diagnostics.
 

Hi Nick, Great to hear from you and to get your on-the-ground feedback. I lead the research team at CE.

These are all really really great points and I will make sure they are all noted in the implementation notes we produce for the (potential) founders.

All our ideas have implementation challenges, but we think that delivering on these ideas is achievable and we are excited to find and train up potential founders to work on them!!
 

---

One point of clarification, in case it is not clear: on kangaroo care we are recommending an approach of adding extra staff into healthcare facilities to offer kangaroo care support, rather than trying to get current staff to take on the additional burden of teaching kangaroo care. We hope and expect (based on our conversations with experts) that this approach can sidestep at least some of the implementation issues identified by GiveWell.

Great! It's good to see things changing :-) Thank you for the update!

Yeah, I somewhat agree this would be a challenge, and there is a trade-off between the time needed to do this well and carefully (as it would need to be done well and carefully) and other things that could be done.

I think it would surprise me a lot if the various issues were insurmountable. I am not an expert in how to publish public evaluations of organisations without upsetting those organisations or misleading people, but connected orgs like GiveWell do this frequently enough and must have learnt a thing or two about it over the past few years. To take one of the concerns you raise: if you are worried about people reading too much into the list and judging the organisations who requested the grants rather than the specific grants, you could publish the list in a pseudonymised way, removing the names of organisations and exact amounts of funding – sure, people could connect the dots, but it would help prevent misunderstanding and make it clearer that the judgement is of grants, not organisations.

 

Anyway to answer your questions:

  • On creating new projects – it is easier for the Charity Entrepreneurship research team to know how to assess funding availability and the bar to beat for global health projects than for biosecurity projects. Sure, we can look at where OpenPhil has given, but there is little detail there. It is hard to know how much they base their decisions on different factors, such as how trusted the people running the project are, versus some bar of expected effectiveness, versus something else. Ultimately this can make us more hesitant to try to start new organisations that would be aiming to get funding from OpenPhil's longtermist teams than we are to start new organisations that would be aiming to get funding from GiveWell (or other very transparent organisations). This uncertainty about future funding is also a barrier we see in potential entrepreneurs, and more clarity feels useful.
  • On other funders filling gaps that they believe OpenPhil has missed – I recently wrote a critique of the Long-Term Future Fund pointing out that they have ignored policy work. This has led to some other funders looking into the space. This was only possible because their grants and grant evaluations are public. (It did require having inside knowledge of the space about who was looking for funding.) Honestly, OpenPhil is already pretty good at this: you can see all their grants and identify gaps (for example, I believe no longtermist team at OpenPhil has ever given to any policy work outside the US) and then direct funds to fill those gaps. It is unclear to me how much more useful the tiers would be, but I expect the lower tiers would highlight areas where OpenPhil is unlikely to fund in the future, and other funders could look at what they think is valuable in that space and fund it.

 

(All views my own; not speaking for any org or for Charity Entrepreneurship etc.)

Thanks for the useful post Holden.

I think it would be great to see the full published tiered list.

In global health and development, funders (i.e. OpenPhil and GiveWell) are very specific about the bar and exactly who they think is under it and who they think is over it. Recently, global development funders (well, GiveWell) have even actively invited open constructive criticism and debate about their decision making. It would be great to have the same level of transparency (and openness to challenge) for longtermist grantmaking.

Is there a plan to publish the full tiered list? If not, what's the reason / best case against having it public?

To flag some of the advantages:

  • Those of us who are creating new projects would have a much better understanding of what OpenPhil would fund, and would be able to create better projects, more aligned with OpenPhil's goals. The EA community lacks a strong longtermist incubator, and I expect this is one of the challenges behind that.
  • Other funders could fill gaps that they believe OpenPhil has missed, or otherwise use OpenPhil's tiers in their decision making.
  • It allows OpenPhil to receive useful constructive feedback or critiques.

Also, I wonder if we should try (if we can find the time) co-writing a post on giving and receiving critical feedback in EA. Maybe we diverge in views too much and it would be a train wreck of a post, but it could be an interesting exercise to try, maybe try to pull out toc. I do agree there are things that both I and the OP authors (and those responding to the OP) could do better.

@Buck – As a hopefully constructive point, I think you could have written a comment that served the same function but was less potentially off-putting by clearly separating a general critique of critical writing on the EA Forum from critiques of specific people (me or the OP author).

Thank you Buck that makes sense :-)

 

“the content/framing seems not very useful and I am sad about the effect it has on the discourse”

I think we very strongly disagree on this. I think critical posts like this have a very positive effect on discourse (in EA and elsewhere), and I am happy with the framing of this post and a fair amount (although by no means all) of the content.

I think my belief here is rooted in quite strong lifetime experiences in favour of epistemic humility, an awareness of human overconfidence (especially in the domain of doing good), positive experiences of learning from good-faith criticisms, and academic evidence that more views in decision making lead to better decisions. (I also think there have been some positive changes made as a result of recent criticism contests.)

I think it would be extremely hard to change my mind on this. I can think of a few specific cases (in support of your view) where I am very glad criticisms were dismissed (e.g. the effective animal advocacy movement not truly engaging with abolitionist animal advocates' arguments), but this seems to be the exception rather than the norm. Maybe if my mind were changed on this, it would be through more such case studies of people doing good really effectively without investing in the kind of learning that comes from well-meaning criticisms.

"I would prefer an EA Forum without your critical writing on it, because I think your critical writing has similar problems to this post (for similar reasons to the comment Rohin made here), and I think that posts like this/yours are fairly unhelpful, distracting, and unpleasant."

I think this is somewhat unfair. It is unfair to describe the OP as "unpleasant": it seems to be clearly and impartially written, and it goes out of its way to make clear that it is not picking on individuals. Also, I feel like you have cherry-picked a post from my post history that was less well written; some of my critical writing was better received (like this). If you do find engaging with me unpleasant, I am sorry; I am open to feedback, so feel free to send me a DM with constructive thoughts.
