It's great to see more thoughtful posts proposing new global health and development interventions. Upvoted.
Just sharing as an FYI: one of the key papers you cite for the cost-effectiveness analysis issued a correction, and the revised estimate of DALYs averted is 149 thousand rather than 14.9 million. It looks like a factor-of-100 (two orders of magnitude) error crept in somewhere in their calculations.
Thanks - this is super interesting and I agree that hypertension is a promising cause area - and that taxes on unhealthy foods may be a promising intervention.
The estimate of how much an average 1 mg/day reduction in sodium consumption reduces the incidence of high systolic blood pressure in a single country, and hence the global disease burden of hypertension, relies on an extremely long and complicated chain of calculations, so there is a high degree of uncertainty here.
I agree with your assessment that this is an area of uncertainty. In particular, I think we need to be careful about assuming linear effects on BP and burden of disease based on a very small change in daily sodium intake. You've thought deeply about this problem, so would be keen to hear your thoughts on what I've sketched out below.
You estimated that a tax would reduce sodium consumption by 67 mg per day; a meta-analysis suggests that reducing dietary salt by a mean of 4.4 g per day leads to a mean reduction in systolic BP of 4.18 mm Hg, with a bigger drop (5.39 mm Hg) in hypertensive people. [Note: my best guess is that your estimate of sodium reduction is for elemental sodium, whereas these numbers are for salt (NaCl); 4.4 g of NaCl contains about 1.7 g of Na.] Based on this, I would say it's very generous to assume any more than a 0.5 mm Hg average reduction in systolic BP from 67 mg less sodium per day, and 0.5 mm Hg is well within measurement error. This may be a case where a very small effect multiplied by a huge number of people still has a huge aggregate effect, but I think more evidence would be helpful. In particular, can we find evidence that:
Maybe deriving additional estimates of the health impacts of a 67 mg/day reduction in sodium intake, using different reference classes, could reduce this uncertainty.
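To make the linear-scaling arithmetic above explicit, here's a rough back-of-envelope sketch. The figures come from the comment itself (the 67 mg/day tax estimate and the meta-analysis numbers); the linearity assumption is exactly the one in question, so treat the output as an upper-bound sanity check, not a prediction:

```python
# Back-of-envelope check of the linear-scaling assumption.
# Assumed inputs (taken from the comment above, not from the original post's model):
#   - the tax reduces sodium intake by 67 mg/day
#   - meta-analysis: -4.4 g/day of salt (NaCl) -> -4.18 mm Hg systolic BP

NA_FRACTION = 22.99 / 58.44   # molar mass Na / molar mass NaCl, ~0.393

salt_reduction_g = 4.4        # g NaCl per day in the meta-analysis
sodium_reduction_mg = salt_reduction_g * 1000 * NA_FRACTION  # ~1731 mg Na/day
bp_drop_mm_hg = 4.18          # mean systolic BP reduction observed

# Linear extrapolation down to the tax's estimated 67 mg/day sodium reduction:
tax_sodium_mg = 67
implied_bp_drop = bp_drop_mm_hg * tax_sodium_mg / sodium_reduction_mg

print(round(sodium_reduction_mg), round(implied_bp_drop, 2))  # 1731 0.16
```

Under pure linear scaling the implied average drop is roughly 0.16 mm Hg, which is why 0.5 mm Hg looks like a generous ceiling.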
Thanks for this feedback! This is exactly why I posted, so before I provide any specific responses to your points, please know that I appreciate all of the questions and suggestions and I'm already thinking of how they could be addressed in a future version of this proposal.
1. I appreciate your point that the key step in the theory of change is not clear - and I think this is not due to a gap in the data itself but instead due to a gap in my presentation of the evidence. The key supporting evidence is linked out from this statement:
"...My analysis of existing research studies shows that training HWs to properly care for newborn babies is likely to be highly cost-effective, with an average cost of $59 per DALY averted ($100 per DALY averted is sometimes cited as a benchmark for highly effective interventions)...."
The linked post cites six studies that show reductions in mortality due to HW training. While there are remaining reasons for skepticism, I think these six studies support this key step in the theory of change, at least for some types of training. Regarding your sub-points on point (1), I accept the feedback that we can and should provide more detail on the evaluation in a future version of this. The six studies provide pretty clear guidance on the type of data we would collect.
2. I agree that a roadmap of regions / countries / priority courses would be helpful to include and can add this to a future version. Thanks for the suggestion. We'd want to start with topics that have the strongest existing evidence base (such as neonatal care and management of childhood illness).
3. The dollar amount may seem high, but this is a technology development project. I think it will be very difficult to build a truly excellent learning platform that is tailored to this target audience without attracting top engineering talent, and that gets expensive. As I mentioned in the post, we've already done substantial piloting on a shoestring and I plan to continue to do that! I'll think further about whether we can present a tiered approach, with additional pilots done with an MVP.
Many thanks for reading and for your suggestions, which I've acted on! The title is now updated :).
Thanks! I have a few possible names but haven't picked one (and the associated website domain name) yet. The pilots described here recently wrapped up but I'd be happy to share a demo module hosted on our MVP that's focused on neonatal / child health. Please DM me if you're interested.
Thanks so much. This is a tour de force! I have one more suggestion about this model. I know that GiveWell has strong reasons to use its own metrics rather than DALYs or QALYs. The problem is that DALYs and QALYs are much more widely used in the academic literature.
My suggestion is that the model report estimated $/DALY averted in addition to (not instead of) its preferred unit of cost-effectiveness as a multiple of cash transfers. This would:
Even if the EA cost-effectiveness units are indisputably better, the benefits of being able to engage more directly with the research community seem to outweigh the costs of adding a few rows to the model.
Thanks, your points make a lot of sense to me! The case does seem to be stronger for R&D generally and it's helpful to know that you're not arguing for investment in a specific stage of research. I also agree that targeting existing interventions for improvement could be very high yield :).
This is very neat, thanks for sharing! Some comments.
First, the term “academic research” is used a lot in the text. Does this modeling speak to a need for more academic research or more research and development more generally?
Second, this modeling seems sensitive to assumptions about the efficacy of new interventions.
"...we found that, on average, investing at least 50% of the initial annual budget in scientific research is optimal even if the new interventions are only about half as cost-effective as the best existing intervention, on average…"
One could argue that new interventions are unlikely to be, on average, even half as effective as the best existing interventions, given that the best current interventions are recognized to be outliers (maybe even extreme outliers). Could you use some historical data to model average effectiveness of new interventions? There is a lot of cost-effectiveness data out there for public health interventions.
Thanks for the great post!
I completely agree and have been thinking about many of the same things. Many of the properties that make for-profit startups such a source of innovation (razor focus, fast decision-making, competition, rapid iteration) could also apply to nonprofit startups aiming to become highly effective. Here's a bit of a challenge that I see in this:
To extend your analogy to for-profit startups further, it seems there's a need for a VC-like ecosystem, with multiple rounds of funding in escalating amounts and milestones based on progress towards highly cost-effective benchmarks.
Also agreed that Charity Entrepreneurship is leading in this space. :)
Thanks for the post. Some quick comments!
"I think that people now matter more than people in the future."
This could be interpreted as a moral claim that, on an individual basis, a current person matters more than a counterfactual future person. Based on the rest of your post, I don't think you're claiming that at all. Instead you're making arguments about the uncertainty of the future.
I think a lot has been written about these claims around future uncertainty and those well-versed in longtermism have some compelling counterarguments. It would be nice to see a concise summary of the arguments for and against written in a way that's really accessible to an EA newcomer.