We separately looked at two ideas on new technology:
(We found this breakdown useful as the problems are different. The current patent system does not work for antimicrobials because of the need to limit the use of last-line novel antibiotics. The current patent system works better for preparing for future pandemics, but has limits as the payout is uncertain and might not happen within the lifetime of a patent.)
Under idea 1 (antimicrobials) we didn't look specifically at phage therapy. I don't have a strong sense of whether phage therapy is in or out of scope of the various proposed policy changes, although I think the current focus is on antibiotics, which would make phage therapy out of scope. This could be a thing for the new charity to look into. The existence of other emerging health tech that could also address microbial diseases could be seen as a reason to reduce the expected impact of developing new antibiotics. This was not explicitly modelled (other than applying a 4% discount rate, which should cover such things).
Under idea 2 (new tech for pandemics) we very briefly considered phage therapy. It got cut as an idea at a very early stage of our research, when we were considering which new technologies would have the biggest effect on future pandemics. This is not to say that it is not a good idea – I tend to describe CE's research as rigorous but not comprehensive, and I am sure that, unfortunately, good ideas get cut at the early stages of our prioritisation.
I hope that answers your question. Both reports should be made available in due course.
We also hope that any charity that begins life focusing on shifting market incentives for antibiotics could scale by moving on to policy advocacy and market shaping work for other key technologies. Technologies we were excited to see more advocacy for include platform DNA vaccine technology, UVC sterilisation, and point-of-care diagnostics.
Hi Nick, Great to hear from you and to get your on-the-ground feedback. I lead the research team at CE.
These are all really great points and I will make sure they are all noted in the implementation notes we produce for the (potential) founders.
All our ideas have implementation challenges, but we think that delivering on these ideas is achievable and we are excited to find and train up potential founders to work on them!!
---
One point of clarification, in case it is not clear: on kangaroo care we are recommending an approach of adding extra staff to healthcare facilities to offer kangaroo care support, rather than trying to get current staff to take on the additional burden of teaching kangaroo care. We hope and expect (based on our conversations with experts) that this approach can sidestep at least some of the implementation issues identified by GiveWell.
Great! It's good to see things changing :-) Thank you for the update!
Yeah, I somewhat agree this would be a challenge, and there is a trade-off between the time needed to do this well and carefully (as it would need to be done well and carefully) and other things that could be done.
I think it would surprise me a lot if the various issues were insurmountable. I am not an expert in how to publish public evaluations of organisations without upsetting those organisations or misleading people, but connected orgs like GiveWell do this frequently enough and must have learnt a thing or two about it in the past few years. To take one of the concerns you raise: if you are worried about people reading too much into the list and judging the organisations who requested the grants rather than the specific grants, you could publish the list in a pseudo-anonymised way, removing the names of organisations and the exact amounts of funding – sure, people could connect the dots, but it would help prevent misunderstanding and make it clearer that the judgement is of grants, not organisations.
Anyway, to answer your questions:
(All views my own not speaking for any org or for Charity Entrepreneurship etc)
Thanks for the useful post Holden.
I think it would be great to see the full tiered list published.
In global health and development, funders (i.e. OpenPhil and GiveWell) are very specific about the bar and exactly who they think is under it and who they think is over it. Recently, global development funders (well, GiveWell) have even actively invited open constructive criticism and debate about their decision making. It would be great to have the same level of transparency (and openness to challenge) for longtermist grantmaking.
Is there a plan to publish the full tiered list? If not, what's the reason / best case against having it public?
To flag some of the advantages:
Also, I wonder if we should try (if we can find the time) co-writing a post on giving and receiving critical feedback in EA. Maybe we diverge in views too much and it would be a train wreck of a post, but it could be an interesting exercise to try, maybe trying to pull out a theory of change. I do agree there are things that I think both I and the OP authors (and those responding to the OP) could do better.
@Buck – As a hopefully constructive point, I think you could have written a comment that served the same function but was less potentially off-putting by clearly separating a general critique of critical writing on the EA Forum from critiques of specific people (me or the OP author).
Thank you Buck that makes sense :-)
“the content/framing seems not very useful and I am sad about the effect it has on the discourse”
I think we very strongly disagree on this. I think critical posts like this have a very positive effect on discourse (in EA and elsewhere) and am happy with the framing of this post and a fair amount (although by no means all) of the content.
I think my belief here is rooted in quite strong lifetime experiences in favour of epistemic humility, experiences of human overconfidence (especially in the domain of doing good), positive experiences of learning from good-faith criticisms, and academic evidence that bringing more views into decision making leads to better decisions. (I also think there have been some positive changes made as a result of recent criticism contests.)
I think it would be extremely hard to change my mind on this. I can think of a few specific cases (to support your view) where I am very glad criticisms were dismissed (e.g. the effective animal advocacy movement not truly engaging with abolitionist animal advocate arguments), but this seems to be the exception rather than the norm. Maybe if my mind were changed on this, it would be through more such case studies of people doing good really effectively without investing in the kind of learning that comes from well-meaning criticisms.
“I would prefer an EA Forum without your critical writing on it, because I think your critical writing has similar problems to this post (for similar reasons to the comment Rohin made here), and I think that posts like this/yours are fairly unhelpful, distracting, and unpleasant.”
I think this is somewhat unfair. It seems unfair to describe this OP as "unpleasant": it appears to be clearly and impartially written and to go out of its way to make clear that it is not picking on individuals. Also, I feel like you have cherry-picked a post from my post history that was less well written; some of my critical writing was better received (like this). If you do find engaging with me unpleasant, I am sorry; I am open to feedback, so feel free to send me a DM with constructive thoughts.
Thank you so much for writing this. I think it is an excellent piece and it makes a really strong case for how longtermists should consider approaching policy. I agree with most of your conclusions here.
I have been working in the space for a number of years, advocating (with some limited successes) for a cost-effectiveness approach to government policy making on risks in the UK (and am a contributing author to the Future Proof report you cite). Interestingly, despite having made progress in the area, I am over time leaning more towards specific advocacy focused on known risks (e.g. on pandemic preparedness) than more general work on improving government spending on risks as a whole. I have a number of unpublished notes on how to assess the value of such work that might be useful, so I thought I would share them below.
I think there are three points my notes might helpfully add to your work:
Note: Some of this research was carried out for Charity Entrepreneurship and should be considered Charity Entrepreneurship work. This post is written in an independent capacity and does not represent the views of any employer.
1. The cost-benefit analysis here is not enough to suggest government action
I think it is worth putting some thought into how to interpret cost-benefit analyses and how a government policy maker might interpret and use them. Your conservative estimate suggests a benefit of $646 billion against a cost of $400 billion – a benefit-cost ratio (BCR) of about 1.6 to 1.
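Spelling out that arithmetic (nothing here beyond the two figures above):

$$\mathrm{BCR} = \frac{\text{benefits}}{\text{costs}} = \frac{\$646\text{ billion}}{\$400\text{ billion}} \approx 1.6$$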
Naively, a benefit-cost ratio of more than 1 to 1 suggests that a project is worth funding. However, given the overhead costs of government policy, governments' propensity to make even cost-effective projects go wrong, and public preferences for money in hand, it may be more appropriate to apply a higher bar for cost-effective government spending. I remember using a 3 to 1 bar, perhaps picked up when I worked in government, although I cannot find a source for this now.
According to https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5537512/ the average benefit-cost ratio of government investment into health programs is 8.3 to 1. I strongly expect there are many actions the US government could take to improve citizens' healthcare with a BCR in the 5–10 to 1 range. In comparison, 1.6 to 1 does not look like a priority.
Some Copenhagen Consensus analysis I have read treats interventions with robust evidence of benefits between 5 and 15 times higher than costs as "good" interventions.
So overall, if making the case to government or the public, I think a 1.6 to 1 BCR is not sufficient to suggest action. I would consider 3 to 1 to be a reasonable bar and 5 to 1 to be a good case for action.
2. On the other hand, the benefit-cost ratio is probably higher than your analysis suggests
As mentioned, you directly calculate a benefit-cost ratio of 1.6 to 1 (i.e. $646 billion to $400 billion).
Firstly, I note that, reading your workings, this is clearly a conservative estimate. I would be interested to see a midline estimate of the BCR too.
I made a separate estimate that I thought I would share, and it was a bit more optimistic than this. It suggested that, on the margin, the benefit-cost ratio (BCR) of additional spending on disaster preparedness is in the region of 10 to 1, maybe a bit below that. I copy my sources into an annex section below.
(That said, spending $400 billion is arguably more than "on the margin" and is a big jump in spending, so we might expect spending at that level to have a somewhat lower value. Of course, in practice I don't think advocates are going to get government to spend $400bn tomorrow, and a gradual ramp-up in spending is likely justified.)
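To illustrate why I would expect the average BCR to fall at that scale of spending, here is a minimal Python sketch. It assumes, purely for illustration (this is my assumption, not something from your analysis), that total benefits grow with the square root of total spending, calibrated so that $400bn of spending yields your conservative $646bn benefit estimate:

```python
# Illustrative sketch only: assumes total benefits scale with the square root
# of total preparedness spending (my assumption, not the original analysis),
# calibrated so that $400bn of spending yields a $646bn benefit. It shows how
# a modest average BCR at large scale can coexist with a much higher marginal
# BCR for the first billions spent.

def total_benefit(spend_bn: float, k: float) -> float:
    """Total benefit in $bn from spend_bn of preparedness spending."""
    return k * spend_bn ** 0.5

def marginal_bcr(spend_bn: float, k: float) -> float:
    """Approximate benefit of the next $1bn at a given spending level."""
    return total_benefit(spend_bn + 1, k) - total_benefit(spend_bn, k)

# Calibrate so that total_benefit(400) == 646 ($bn).
k = 646 / 400 ** 0.5

for spend in [1, 10, 100, 400]:
    avg = total_benefit(spend, k) / spend
    print(f"${spend}bn: average BCR ~{avg:.1f}:1, "
          f"marginal BCR ~{marginal_bcr(spend, k):.1f}:1")
```

The true returns curve is of course unknown; the point is only that a roughly 1.6 to 1 average at $400bn is quite consistent with returns of the order of 10 to 1 on the margin at much lower spending levels.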
3. A few reflections on political tractability and value
My experience (based on the UK) is that I expect governments to be relatively open to change and improvement in this area. I expect the technocratic elements of government to respond well to highlighting inconsistencies in process and decision making, and the UK government has committed to improvements in how it assesses risks. I expect governments to be more reticent to make changes that necessitate significant spending, or to put in place mechanisms and oversight that could hold them to account for not spending sufficiently on high-impact risks (and so help ensure future spending).
I am also becoming a bit more sceptical of the value of this kind of general longtermist work when compared to work focusing on known risks. Based on my analysis to date, I believe some of the more specific policy change ideas about preventing dangerous research or developing new technology to tackle pandemics (or AI regulation) are a bit more tractable and have a somewhat higher benefit-to-cost ratio than this more general work to increase spending on risks. That said, in policy you may want to have many avenues you are working on at once so as to capitalise on key opportunities, so these approaches should not be seen as mutually exclusive. Additionally, there is a case for more general system improvements from a more patient longtermist view, or from a view that places more weight on unknown unknown risks.
ANNEX: My estimate
On the value of general disaster preparedness
We can set a prior for the value of pandemic preparedness by looking at other disaster preparedness spending.
Real-world evidence. Most of the evidence for this comes from assessments of the value of physical infrastructure preparation for natural disasters, such as constructing buildings that can withstand floods. See table below.
| Source | Link | Estimated BCRs |
|---|---|---|
| Natural Hazard Mitigation Saves: 2019 Report | Link | 11:1 and 4:1 |
| If Mitigation Saves $6 Per Every $1 … (Gall and Friedland, 2020) | Link | 4:1 and 6½:1 |
Other estimates. There are also a number of estimates of benefit cost ratios:
| Source | Link | Estimated BCRs |
|---|---|---|
| Does mitigation save? Reviewing cost-benefit analyses of disaster risk reduction (Shreve and Kelman, 2014) | Link | – |
| Natural disasters challenge paper (Copenhagen Consensus, 2015) | Link | 1:1 and 60:1 |
Pandemic preparedness estimates
Other estimates. We found one example of an estimate of the value of preparing better for future pandemics.
| Source | Link |
|---|---|
| Not the last pandemic: … (Craven et al., 2021) | Link |
We also found three examples of estimates of the value of stockpiling for future pandemics.
| Source | Link |
|---|---|
| The Cost Effectiveness of Stockpiling Drugs, Vaccines and Other Health Resources for Pandemic … (Plans‑Rubió, 2020) | Link |
| Cost-Benefit of Stockpiling Drugs … (Balicer et al., 2005) | Link |
| () | Link |
Historical data and estimates suggest the value of increasing preparedness is decent but not very high, with estimated benefit-cost ratios (BCR) often around or just below 10:1.
How this changes with scale of the disaster
There is some reason to think that disaster preparedness is more cost-effective when targeted at worse disasters. Theoretically this makes sense, as disasters are heavy-tailed and most of the impact of preventing and mitigating disasters will be in preventing and mitigating the very worst disasters. This is also supported by models estimating the effect of pandemic preparedness, such as those discussed in this talk (Doohan and Hauck, 202?).
Pandemics affect more people than natural disasters, so we could expect a higher-than-average BCR. This is more relevant if we pick preparedness interventions that scale with the size of the disaster (an example of an intervention that does not have this property is stockpiling, for which the impact is capped by the size of the stockpile, not the size of the disaster).
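As a rough illustration of this point (the distribution and numbers below are toy assumptions of mine, not drawn from any of the sources above): if disaster losses are heavy-tailed, an intervention whose benefit scales with the size of the disaster derives far more of its expected value from the very worst events than a capped intervention such as a stockpile does.

```python
# Toy model: disaster losses drawn from a heavy-tailed Pareto(1.5)
# distribution. Compares how much of the expected benefit comes from the
# worst 1% of disasters for (a) an intervention whose benefit scales with
# the disaster (it mitigates a fixed fraction of losses) and (b) a capped
# intervention such as a stockpile, whose benefit per disaster is capped.

import random

random.seed(0)

N = 100_000
losses = sorted((random.paretovariate(1.5) for _ in range(N)), reverse=True)
worst_1pct = N // 100

scaling_benefits = [0.10 * loss for loss in losses]    # mitigates 10% of each disaster's losses
capped_benefits = [min(loss, 2.0) for loss in losses]  # covers at most 2 units of loss per disaster

for name, benefits in [("scaling", scaling_benefits), ("capped", capped_benefits)]:
    share = sum(benefits[:worst_1pct]) / sum(benefits)
    print(f"{name} intervention: {share:.0%} of expected benefit "
          f"comes from the worst 1% of disasters")
```

With these toy numbers, a much larger share of the scaling intervention's expected benefit comes from the worst 1% of disasters than is the case for the capped intervention, which is the intuition behind favouring interventions that scale with disaster size if the largest disasters matter most.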
However, overall I did not find much solid evidence to suggest that the BCR is higher for larger-scale disasters.