If OpenPhil expanded its Bayesian mindset into the Global Health & Development space, it might find appeal in some untapped opportunities (especially on the "Development" side).

When it comes to improving the well-being of the world’s most disadvantaged people, the EA community tends to move the most funding towards opportunities that are fairly well illuminated. By this I mean ones that have not only demonstrated impact, but that also leave us with few open questions about why, when, and where they work.

Take insecticide-treated mosquito nets: we know they block and kill mosquitoes that mainly bite at night and transmit a parasite. We know that this parasite can cause malaria, what malaria does to the body, and why it is particularly dangerous to children. We know that the impact of mosquito nets depends on the season, on the incidence of the parasite in the community (which is measurable), and of course on their adoption: the nets need to be unpacked and hung to be effective (also measurable). And there are entire organizations dedicated to making sure nets are procured, distributed, and hung by the millions. The mechanism is clear, the monitoring protocols obvious, and the health benefits beyond doubt. 

Now consider another opportunity (picked for illustration; I do not mean to overstate my due diligence). Skills for Effective Entrepreneurship Development (SEED) is a mini-MBA course that was designed by UC Berkeley and implemented with a few thousand Ugandan high school graduates by a nonprofit called Educate. Three and a half years in, a randomized evaluation estimates sixty cents in additional monthly income per dollar invested: the program seemed to pay for itself anew roughly every two months.1 If we assume that these effects have been constant since the time of the intervention, they imply a social rate of return on investment in excess of 700%. But along with these remarkable point estimates come lots of perplexing questions: What explains the difference from other entrepreneurship efforts?2 Why did the different SEED variants yield such similar results – was content even first order? Which needs did the program address for which students? What features should the trainers have, and in what ways does this depend on the student body? How would this program perform in lower-income, more remote, or less well-educated settings? What were the economic knock-on effects on people beyond the study? How might things change during a recession? Can the quality of implementation hold up at scale – and how would you know? Any excitement about SEED seems to rest upon lots of Jenga blocks. 
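To make the arithmetic behind these figures explicit, here is a minimal back-of-the-envelope calculation. The persistence assumption (constant monthly gains) is an extrapolation of the paper's point estimate, not a guarantee:

```python
# Back-of-the-envelope: annualized social return implied by the SEED estimate.
# Assumption (illustrative): $0.60 of additional participant income per month
# for every $1.00 of program cost, persisting at a constant level.

monthly_income_gain_per_dollar = 0.60   # reported point estimate
months_per_year = 12

annual_return = monthly_income_gain_per_dollar * months_per_year
print(f"Implied annual social ROI: {annual_return:.0%}")   # -> 720%

# Months until cumulative income gains equal the original investment:
payback_months = 1 / monthly_income_gain_per_dollar
print(f"Payback period: {payback_months:.2f} months")      # ~1.7 months
```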

At present, the EA community moves the vast majority of funding toward interventions that resemble mosquito nets (i.e., ones that are “simple” in that they mainly involve delivering a discrete product or asset, which is most typical in public health), and not the likes of SEED (i.e., complex interventions that involve intangibles, which is most typical in education and economic development). 

Why does EA giving in Global Health and Development correlate with “simplicity”? One reason may be that EA places much weight on health outcomes.3 But it also appears that EA prefers simpler interventions because it likes to be confident in what it funds. We can infer this from a benchmarking exercise: the assumptions required to make a marginal dollar invested in SEED less cost-effective than one invested in cash transfers or mosquito nets would have to be very pessimistic indeed. 
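To give a feel for that benchmarking exercise, here is a stylized breakeven calculation. The benchmark value is a hypothetical stand-in for a cash-transfer-style bar, not GiveWell's actual threshold:

```python
# How pessimistic would you have to be about SEED before a marginal dollar
# looked worse than a benchmark like cash transfers? All inputs hypothetical.

seed_annual_return = 7.2          # implied by the point estimate (see above)
benchmark_annual_return = 1.0     # stand-in bar, e.g. value per $ of cash transfers

# The multiplicative discount at which SEED merely matches the benchmark:
breakeven_discount = benchmark_annual_return / seed_annual_return
print(f"SEED's estimate must shrink to {breakeven_discount:.0%} of its "
      f"point estimate before it drops below the benchmark")
# -> roughly 14%: effects would have to be ~7x overstated (through bias,
#    decay, scale-up losses, etc.) before the comparison flips.
```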

I am not implying that an aversion to ambiguity is an oversight or mistake. The EA movement has achieved considerable impact by building confidence in and consensus around cost-effective giving opportunities, and this probably hinged on a commitment to objectivity and clarity. 

But OpenPhil is not representative of all EA: its hits-based approach allows it to make long-shot bets, and its Longtermism focus area just about maxes out on uncertainty and ambiguity.4 So OpenPhil might be in a position to find value in a cause area that allowed for more speculative investments within Global Health & Development. Here is the result of an incentivized straw poll I ran in my network of research contacts:

Say you want to donate in a way that - given your own judgment and expectations - maximizes expected impact on Global Health & Development outcomes. How would your donation compare to the grantmaking of the Effective Altruism (EA) movement in the Global Health & Development space?

Specifically, how “speculative” would your donation be compared to the typical grant dollar moved by EA to Global Health & Development? “More speculative” implies that you are more inclined to accept uncertainty, ambiguity, and potential blind spots. “Less speculative” implies that you are less inclined to accept uncertainty, ambiguity, and potential blind spots.

For more details on this data, consult the footnote.[1]

One way to be more speculative might be to fund more research and innovation5 (and the researcher-oriented poll above may be biased for that reason). But norms around the creation and use of development research may in fact be part of the problem. When we speak of "evidence", we usually refer to the results of null hypothesis tests in academic papers that were prepared to stand on their own. This ends up giving "simple" approaches a leg up.

People sleep under a net or don’t; and they contract a parasite or don’t. It's no wonder that frequentist statistics got us to fairly complete knowledge about why, when, and where mosquito nets work; it's no surprise that we have reached a point of consensus that available knowledge is actionable. 

Meanwhile, in the case of SEED - sure, we can estimate its average impact in a given setting, but it will probably have a different impact on each person who attends, and it could be implemented in any number of ways. The notion that “it works”, or even that “it works under conditions A and B”, will remain unattainable - even if we ran a hundred RCTs, the resulting evidence would not add up to something coherent enough to inspire complete confidence. Questions around participant heterogeneity, local adaptation, and implementation quality will continue to make or break the program. We have to come to terms with the fact that we will never achieve the kind of objective clarity about SEED that we already have about mosquito nets. Even so, it should probably be pursued further. 

I argue that pursuing such opportunities might become attractive to OpenPhil if it added a new epistemic framework in the Global Health & Development space – one that is already OpenPhil’s second nature in its Longtermist focus area, where uncertainty is entirely inescapable.6 Simply put, this would involve adopting a more “Bayesian” perspective, and taking a more activist stance in navigating the exploration-exploitation trade-off. Here are some potential ingredients of such a strategy: 

  • When developing cost-effectiveness models, don’t be put off by opportunities that come with lots of uncertain model parameters; populate them with your priors as placeholders (a minimal sketch of this follows the list).
  • Think of the body of knowledge not only as a set of papers, but also as a set of mental models and beliefs that have been honed by research. Periodically collapse this body of knowledge into unidimensional predictions and prescriptions to improve the placeholders in your cost-effectiveness models. This can be done with much greater rigor than the poll above: backtested prediction data will increasingly make it possible to extract empirically validated respondent weights and optimize for wisdom-of-crowd effects (see the second sketch after this list).7,8
  • Think of research and implementation not as separate steps, but as an opportunity to have some impact right away while simultaneously learning how to become even more impactful in the future (see the third sketch after this list).9 To integrate research more closely with implementation and scaling, bolster organizations that are committed to working at the intersection (akin to a university hospital or a tech firm).
  • In cases where it seems important to stress-test specific assumptions or model parameters (for example, because they appear pivotal, can be directly measured, and yet lack predictive consensus), initiate heavily coordinated research efforts involving large-scale, multi-site replication studies in partnership with specialized research initiatives.10
  • Consider a range of possible endgames and off-ramps, including letting some leads fizzle out; adapting others over time; de-risking a few to the point that they could become consensus top charities; or leveraging emerging insights to inform policy design. Where inputs are unobservable but outcomes measurable, payment-by-results schemes can be an option as well.
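To illustrate the first bullet above, here is a minimal sketch of a cost-effectiveness model whose uncertain parameters are populated with explicit priors rather than left blank. All distributions and numbers are hypothetical placeholders, not estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # Monte Carlo draws

# Hypothetical priors over uncertain model parameters (placeholders, not data):
effect_persistence_years = rng.lognormal(mean=np.log(3), sigma=0.5, size=N)
monthly_gain_per_dollar  = rng.normal(loc=0.60, scale=0.25, size=N).clip(0)
scaleup_quality          = rng.beta(a=4, b=2, size=N)  # share of trial-quality impact retained

# Cost-effectiveness: total income gain per program dollar.
annual_gain = monthly_gain_per_dollar * 12 * scaleup_quality
total_gain  = annual_gain * effect_persistence_years

print(f"mean benefit per $:     {total_gain.mean():.2f}")
print(f"P(beats 1:1 benchmark): {(total_gain > 1).mean():.0%}")
print(f"5th-95th percentile:    {np.percentile(total_gain, [5, 95])}")
```

The point is the workflow rather than the outputs: uncertain parameters stay in the model as distributions, and subsequent research or expert elicitation tightens them, instead of deciding whether a model can exist at all.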
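The second bullet (extracting empirically validated respondent weights) can be made concrete with a toy example: forecasters who scored better on backtested questions receive more weight when current predictions are pooled. Everything here, including the scoring and weighting rules, is schematic:

```python
import numpy as np

# Backtest: each forecaster's past predictions vs. realized outcomes (hypothetical).
past_predictions = np.array([
    [0.7, 0.4, 0.9],   # forecaster A
    [0.5, 0.5, 0.5],   # forecaster B (uninformative)
    [0.8, 0.2, 0.95],  # forecaster C
])
realized = np.array([1, 0, 1])  # what actually happened

# Score each forecaster by Brier score (lower = better calibrated).
brier = ((past_predictions - realized) ** 2).mean(axis=1)

# Convert scores to weights: inverse-Brier, normalized to sum to one.
weights = 1 / brier
weights /= weights.sum()

# Pool today's predictions on a new question with those weights.
new_predictions = np.array([0.6, 0.5, 0.75])
pooled = weights @ new_predictions
print(f"weights: {np.round(weights, 2)}, pooled forecast: {pooled:.2f}")
```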
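And the third bullet's learning-while-doing framing can be operationalized with adaptive treatment assignment in the spirit of footnote 9. The sketch below uses Thompson sampling: each new participant is assigned to program variants in proportion to the current posterior probability that each variant is best, so the program delivers impact while it learns. The variant qualities are simulated, not real:

```python
import numpy as np

rng = np.random.default_rng(1)
true_success = [0.30, 0.45, 0.40]       # unknown-in-practice variant quality (simulated)
alpha = np.ones(3)                      # Beta(1,1) priors over each variant's
beta  = np.ones(3)                      # probability of a successful outcome

for participant in range(2_000):
    # Thompson sampling: draw from each posterior, assign the best draw.
    draws = rng.beta(alpha, beta)
    arm = int(np.argmax(draws))
    outcome = rng.random() < true_success[arm]  # observe a binary outcome
    alpha[arm] += outcome
    beta[arm]  += 1 - outcome

posterior_means = alpha / (alpha + beta)
print("posterior means:", np.round(posterior_means, 2))
print("assignments per variant:", (alpha + beta - 2).astype(int))
```

Run long enough, assignment concentrates on the better variants while still occasionally exploring the others, which is exactly the exploration-exploitation balance the bullet describes.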

I do not believe that any of these ideas currently elude OpenPhil, nor am I making the case that they represent an inherently superior way of funding Global Health and Development. But I think that operationalizing some of them might allow OpenPhil to get more comfortable with educational and economic development interventions. Clearly, there exist opportunities here that - while they may be fragile - come with estimated social rates of return far exceeding what a philanthropist can expect to earn in financial markets. Choosing "spending" over "saving" seems warranted. 

Whether this would lead to a new cause area within Global Health & Development, or involve an expansion of the Research, Experimentation, and Exploration Portfolio within Global Health & Development, may just be semantics.

Footnotes and Citations

  1. L Chioda, D Contreras-Loya, P Gertler, D Carney (2021): Making Entrepreneurs: Returns to Training Youth in Hard Versus Soft Business Skills. NBER Working Paper 28845
  2. David McKenzie (2021): Small Business Training to Improve Management Practices in Developing Countries: Re-Assessing the Evidence for ‘Training Doesn’t Work’. Oxford Review of Economic Policy 37(2)
  3. GiveWell (2017): Approaches to Moral Weights: How GiveWell Compares to Other Actors. GiveWell Blog Post
  4. H Karnofsky (2016): Hits Based Giving. Open Philanthropy Project Blog Post
  5. S Buck (2022): Why Effective Altruists Should Put a Higher Priority on Funding Academic Research. EA Forum Blog Post
  6. T Davidson (2021): Semi-Informative Priors over AI Timelines. Open Philanthropy Project Blog Post
  7. S DellaVigna, D Pope, E Vivalt (2019): Predict Science to Improve Science. Science 366(6464)
  8. N Otis (2022): Policy Choice and the Wisdom of Crowds. mimeo
  9. M Kasy, A Sautmann (2021): Adaptive Treatment Assignment in Experiments for Policy Choice. Econometrica 89(1)
  10. Consider demonstration work by the Yale Research Initiative on Innovation & Scale or the EGAP Metaketa Initiative
[1] I emailed 100 first-degree contacts in my network who are professionally active in development & society and hold or pursue a doctoral degree. I shared the above question with them and noted that de-identified responses would be shared on the Effective Altruism Forum. If they responded within 48h, I would move $100 to one of three charitable causes – specifically, the one that seemed to best match their judgment. (If the answer was “Don’t know / can’t say”, a random one would be selected.) The causes were not identified ex ante to avoid reputational confounds. The response rate was 76%. With the support of a matching gift program, I ended up moving $1.1k to Against Malaria Foundation, $1.6k to GiveWell, and $4.9k to Educate. 

Comments (4)

Hi Richard, 

I think you've identified a problem in the funding space, and I've had numerous conversations with others about this. A couple of comments:

  1. As mentioned in another comment, I think that Open Phil's Global Health and Development team is evolving to fill some of this gap. But they have certain issue areas of concentration, and also have a limited team evaluating grants, so I think they aren't well-suited to identifying high-impact opportunities in all areas (especially small grants). 
  2. I think the right venue for this would be EA Funds' 'Global Health and Development Fund'. Currently this fund is managed by Elie Hassenfeld and GiveWell staff, which I think is a missed opportunity to provide a venue for more high-risk opportunities. While this fund has disbursed to some more 'speculative' orgs (like GCD, IPA), the most recent decision was to give $4.2 million to the Against Malaria Foundation in Jan 2022. It seems like they don't give many small grants either. I personally think it would be great if this fund had different managers, who explicitly looked for funding opportunities that are high impact in expectation but don't fit into the processes and priorities of Open Phil or GiveWell. 

Hi Dan. I think GiveWell deserves credit for managing to move huge retail donor $s to evidence-based causes, using their high-clarity, high-objectivity approach. 

Your approach at Giving Green seems to test an interesting alternative. You encourage retail donors to give to climate change mitigation policy advocacy opportunities, which you perceive as higher expected impact than carbon offsets, yet you offer recommendations on both. Clearly the policy angle is much more "complex" and can't be assessed without leaning on a bunch of assumptions about how politics & other complex systems work. 

But I think it remains to be seen whether there is significant retail donor demand for this kind of advice. It's one thing to gain confidence in somebody's analysis on a simple matter like mosquito nets; it's another to trust somebody's assumptions and worldviews about a complex matter like climate change mitigation policy advocacy. 

OpenPhil might be in a position to expand EA’s expected impact if it added a cause area that allowed for more speculative investments in Global Health & Development.

My impression is that Open Philanthropy's Global Health and Development team already does this? For example, OP has focus areas on Global aid policy, Scientific research, and South Asian air quality, areas which are inherently risky/uncertain.

They have also taken a hits-based approach philosophically, and this is what distinguishes them from GiveWell - see e.g.

Hits. We are explicitly pursuing a hits-based approach to philanthropy with much of this work, and accordingly might expect just one or two “hits” from our portfolio to carry the whole. In particular, if one or two of our large science grants ended up 10x more cost-effective than GiveWell’s top charities, our portfolio to date would cumulatively come out ahead. In fact, the dollar-weighted average of the 33 BOTECs we collected above is (modestly) above the 1,000x bar, reflecting our ex ante assessment of that possibility. But the concerns about the informational value of those BOTECs remain, and most of our grants seem noticeably less likely to deliver such “hits”.

[Reposting my comment here from previous version]

Have you seen this?
