Recently I ran into a volunteer for UNICEF who was collecting donations to help malnourished children. He explained why child wasting is a serious problem and how there are cheap ways to help children suffering from it (the UNICEF website has some information on child wasting, and specifically on the treatment of wasting using simplified approaches, in case you are interested).

Since I happen to have taken the Giving What We Can pledge and have read quite a bit about comparing charities, I asked what evidence there is comparing this intervention to, say, protecting people from malaria with bednets or giving cash directly to very poor people. The response I got was quite specific: the volunteer claimed that UNICEF can save a life with just 1€ a day over an average period of 7 months. If this claim is true, it means they can save a life for about 210€, far less than the more than $3,000 that GiveWell estimates is needed for AMF to save one life. These numbers probably should not be compared directly, but I am still curious why there can be over an order of magnitude difference between the two (see the rough arithmetic after the list below). So, to practice my critical thinking on these kinds of questions, I made a list of possible explanations for the difference:

  1. The UNICEF campaign has little room for additional funding.
  2. The program would be funded anyway from other sources (e.g. governments).
  3. The 1€/day figure might not include all the costs.
  4. Some of the children who receive the food supplements might die of malnutrition anyway.
  5. Only some of the children who receive the food supplements would have died without them.
  6. Children who are saved from malnutrition could still die of other causes.
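To make the size of the gap concrete, here is the arithmetic behind it, plus an illustration of how a couple of the factors above could compound (a rough sketch with made-up adjustment factors, not real estimates):

```python
# Rough arithmetic behind the comparison (illustrative only; exchange rates ignored).
unicef_claim = 1 * 30 * 7    # 1 EUR/day for roughly 7 months ~= 210 EUR per life saved
givewell_amf = 3000          # GiveWell's rough lower-bound estimate for AMF, in USD

print(f"Implied gap: about {givewell_amf / unicef_claim:.0f}x")  # ~14x

# The explanations above compound multiplicatively. For example, if the quoted price
# covered only a third of the true costs (point 3) and only 1 in 5 treated children
# would otherwise have died (point 5), the implied cost per life would rise ~15x.
print(f"Adjusted cost per life: about {unicef_claim * 3 * 5} EUR")  # 3150 EUR
```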

Obviously I do not have the time or the resources of GiveWell, so it is hard to determine how much each of these explanations contributes to the overall picture, or whether there are others that I missed. Unfortunately, there does not seem to be much information on this question from GiveWell (or other EA organizations) either. Looking at the GiveWell website, the most I could find is this blog post on mega-charities from 2011, which argues that mega-charities like UNICEF have too many different campaigns running simultaneously, and that they do not have the transparency required for a proper evaluation. The first argument sounds weak to me: if there are different campaigns, can you not just evaluate those individual campaigns, or at least the most promising ones? The second point about transparency is a real problem, but there is also a risk of measurability bias if we never even consider less transparent charities.

I would very much like to have a more convincing argument for why these kinds of charities are not rated. If nothing else, it would be useful when talking with people who currently donate to them, or who try to convince me to do so. Perhaps the reason is just a lack of resources at GiveWell, or perhaps there is research on this that I simply couldn't find. Either way, I believe the current state of affairs does not make a convincing case for why the biggest EA evaluator barely even mentions one of the largest and most respected charity organizations.

[Comment: I'm not new here but I'm mostly a lurker on this forum. I'm open to criticism on my writing style and epistemics as long as you're kind!]

Comments

I agree with all the comments below, but just to add further clarity from my perspective as a doctor who has worked a little with malnutrition in Uganda.

  1. This claim will be wrong. No reasonable cost estimate for malnutrition treatment will achieve that degree of cost-effectiveness, unless perhaps they have a new technology or product outside of the current treatment paradigm. It's just as likely the individual got it wrong as that they were told to say it by UNICEF.

To explain (using basic hack estimates) why this is probably so wrong, here's some basic math. They will likely have underestimated the cost of treatment and overestimated the effect:

  • I'm guessing they claim to treat one child with malnutrition for $1 a day. But this would barely cover the cost of the food, let alone the cost of the medical staff and medications needed. Perhaps they only include the cost of UNICEF's contribution to the food, not the cost of medical care, tests, and medications, which are borne by the public health system and/or the patient's family.

    I'm going to pretty wildly estimate the real cost of treating malnutrition at $4-10 daily (I'm sure GiveWell and others have more accurate estimates).
  • I'm guessing they are assuming that they, and they alone, save every child they treat, not accounting for at least three factors: some children who are treated will still die, some would have been treated to varying extents without UNICEF's intervention, and some would have survived without the treatment. I'm going to very generously assume that UNICEF's intervention saves the lives of between 1 in 5 and 1 in 20 of the kids that they support.

    These factors alone could make an order of magnitude difference in their cost-effectiveness estimate, which, to be honest, I doubt they have really done (see the rough calculation at the end of this comment). More likely someone followed basic flawed logic like:

    "We contribute $1 worth of food for each kid in this program which must save that child's life"
  2. UNICEF, in my personal experience here in Gulu, Northern Uganda, is a classic "White Landcruiser" NGO and shows the classic signs of being an inefficient charity, paying a lot of office workers huge salaries while claiming to "strengthen government programmes" in ways that can't really be measured and which I usually don't believe. I'm actually a big fan of activism and lobbying as a potentially cost-effective method, but they don't do activism and lobbying, they just cosy up alongside.
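To put rough numbers on the guesses above (mine, not official UNICEF or GiveWell figures), here is the back-of-envelope calculation I have in mind:

```python
# Back-of-envelope cost per life saved, using the guessed inputs from this comment
# (not official UNICEF or GiveWell figures).
treatment_days = 7 * 30  # roughly 7 months of treatment

for daily_cost, lives_saved_per_child in [(4, 1 / 5), (10, 1 / 20)]:
    cost_per_life = daily_cost * treatment_days / lives_saved_per_child
    print(f"${daily_cost}/day, {lives_saved_per_child:.2f} lives saved per child treated"
          f" -> roughly ${cost_per_life:,.0f} per life saved")

# Prints roughly $4,200 (optimistic) to $42,000 (pessimistic) per life saved,
# versus the ~$210 implied by the "$1 a day for 7 months" claim.
```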

Thank you for this detailed reply! This agrees with my intuition that the numbers the volunteer gave me were not at all realistic. My main goal in writing this post was not so much to question the effectiveness itself, but to bring to the community's attention that there is value in making this kind of information (such as what you write in this comment) publicly available, instead of the current silence on mega-charities.

Nice one Jesper!

I completely agree there would be enormous value in making this kind of info public. The BINGOs (big NGOs) know that too, but they also know that if they publicly estimated the cost-effectiveness of their interventions, it might make for pretty sombre reading, as their cost-effectiveness might be orders of magnitude lower than that of many other orgs, and I'm not just talking about the crème de la crème GiveWell ones.

There's also the problem that they would probably expose that some of their most public "marquee" interventions are less effective than some of their lesser-known interventions, even within the same BINGO. For example, if they found out that the small amount of money they spent on deworming to prevent river blindness was far more cost-effective than the more heart-tugging malnutrition intervention they are advertising, how would they respond to that?

Unfortunately, it is most likely in the BINGOs' best interests NOT to do (or at least not to publish) cost-effectiveness analyses of their interventions. It is a rational decision on their part. All they might achieve is more scrutiny of how low-impact their interventions are, and perhaps even less funding as a result.

That's why they don't publish these kinds of analyses, not because it's that hard to do: they have the staffing and expertise. Small orgs like ours and Lafiya Nigeria have had a go even without specific expertise. I'm somewhat surprised UNICEF dipped their toe into publicly sharing any kind of cost-effectiveness figure here, which we possibly should be giving them some credit for. At least they kind of tried?

I realise this sounds like it's dripping in cynicism, but I genuinely think it explains in large part why the BINGOs don't publish this kind of info and instead go full ostrich.

Hi Nick,

I really appreciate your insight here. I've been thinking lately about lobbying the IRS in the US to require cost-effectiveness disclosures in yearly reports. There are a couple of concerns I have...

1: whether it is even possible to convince the IRS to add cost-effectiveness disclosures to the Form 990

2: whether orgs have the expertise/capacity to evaluate their own programs

3: the disclosure requirement would have to be VERY vague to allow orgs to disclose what they think is most appropriate

4: lack of oversight means these disclosures could easily be exaggerated

 

Benefits:

  1. For a given outcome, effectiveness could be compared across orgs, something we have never been able to do without folks like GiveWell, which is why things like the program ratio (ew) are used by many donors.
  2. What gets measured gets managed, so this will likely lead to orgs operating more effectively overall.

The IRS and US charity regulators do a lousy job of preventing charitable scams and near-scams. I don't see any likely universe in which they have the bandwidth to decide on a cost-effectiveness methodology that works for 1.5 million US charities doing an extremely wide range of activities.

Very likely it would be so loose that all charities could give themselves high marks... and if it were somehow not loose, there's no clear reason to think the resulting methodology would look anything like EA's.

After all, it's a political decision (my guess is that it would require legislation by Congress, not just Treasury regulations: medium confidence as a legal matter without wading through the tax code, high confidence that the IRS would never do this without a Congressional directive).

Fair points, and the idea is certainly a massive longshot.

It's unclear to me exactly how the decision for the big Form 990 redesign in 2007 was made, and whether Congress was involved at all. My guess is that a change like this could be made by the IRS alone.

I am FAR from an expert on evaluation, but even if orgs just reported outputs (X people helped, Y houses built, etc.), that information would be immensely useful. GuideStar is allowing orgs to disclose output metrics to get a Platinum Seal, but this is a voluntary disclosure. Form 990 could use a similar library of metrics for orgs to select from and make disclosure mandatory. I don't think the form would be like EA at all, but the data would be extremely useful for donors/orgs trying to actually evaluate charity effectiveness.

At the end of the day, orgs may just completely BS the outputs, so the idea certainly needs some work. Maybe encourage audits of outcomes?

I read an org's 990 before making a non-trivial donation, and my guess from your bio is that you do too. But I wonder how many people (1) would take the time to carefully read the 990, (2) have enough methodological sophistication to see through at least moderate levels of obfuscation, but (3) are not significant enough donors that they feel (or are) empowered to call and ask someone to provide the relevant information.

Thanks Kyle, I like the idea of the IRS cost-effectiveness requirement but agree with Jason that practically it would never work. Even if it came in, it would be so loose as to be meaningless.

This is a great question! There is a real lack of good cost-effectiveness estimates for large multilaterals such as UNICEF. The problem is that they are extremely difficult to create, for the reasons outlined in the GiveWell article you linked.

Different vaccine programs carried out by GAVI, for example, vary massively in cost-effectiveness. HPV vaccines don't look as cost-effective as rotavirus vaccines, so depending on where additional funding is spent, the cost-effectiveness will vary quite a bit!

At aidpolicy.org we have been toying with a ranking of multilaterals on $/DALY in the style of GiveWell, but not only would it be a massive undertaking, the resulting estimates would have such high error bars that we worry nobody would take them seriously.

There are some rankings, such as CGD's QuODA, which can give you a sense of the relative effectiveness of multilaterals (for $/DALY purposes I would primarily look at their prioritization and evaluation criteria), but you won't be able to use QuODA to compare a multilateral with GiveWell's charities.

I'm near certain the 1€-a-day-for-7-months claim is incorrect, or at least calculated with far fewer caveats than GiveWell's CEAs. My best guess is that UNICEF is significantly less cost-effective than GiveWell's charities. Between any mega-charity and GiveWell's Maximum Impact Fund, I would recommend GiveWell for individual donors.

As @freedomandutility points out, the question GiveWell is trying to answer is: "What is the most impact you, an individual, can have on the margin with your donations?" The answer is not necessarily going to be the same for a government with ten billion to spend. Even a single medium-sized government could cover GiveWell's entire funding gap and have plenty left over. Finding something as cost-effective as GiveWell's recommendations that can effectively absorb $100b is not easy!

I don't say this to justify the current system; I believe governments and multilaterals alike are doing a less-than-stellar job with their development efforts. Were a government to actually fully fund GiveWell, GiveWell should just lower its bar and recommend additional charities.

One idea I've been toying with is for individual donors to donate to subnational government agencies for capacity building in low-income countries (e.g., donate to the public health department of a city). It has been exceedingly easy to donate to causes like the recent conflict in Ukraine, or to donate to the US government, but there don't seem to be many opportunities for individual donors to give to subnational government agencies. I'm not sure how to go about implementing this idea, but I think it could be highly effective if done correctly.

Thanks George and Jason, all good points.

One other small point I will add is that this already happens a LOT, through a few mechanisms, including (this is just what I've seen in Uganda):

  1. Supplementary funding for programs: for example, AMF gives money to local government health departments to help them distribute nets.
  2. Results-based funding for government health centers: for example, paying local government providers money for every delivery they do.
  3. Straight programme funding: organisations like World Vision and Save the Children sometimes deposit money in local government accounts for the implementation and supervision of health programs.

I would say this stuff is huge bikkies, but it is ongoing. Personally, here in Northern Uganda, I'm generally not a big fan of this approach (inefficiency, corruption, money just going to supplement already largish salaries), but the idea isn't bad. In other places it might work better.

This is an interesting idea, but it would be challenging to ensure that the subnational government didn't reduce its own spending. Maybe it would work with some sort of capital expense that wouldn't counterfactually occur, at least if it has a neutral effect on costs going forward (a productivity-enhancing capital expense could even lead to reduced operational expenses for the same level of public health services).

Ah, OK. That's a fair point; there is a substitution effect. My main intuition here, though, is that extremely effective NGOs like BRAC basically provide a parallel public health / social safety net for people, yet oftentimes what would be great is if the government itself were able to provide these services. There is a substitution effect no matter what organization you donate to. For example, I would bet money that public health departments where the Against Malaria Foundation is more active counterfactually spend less on malaria prevention than they otherwise would.

For example, here's an excerpt from Stefan Dercon's recent book, Gambling on Development:

Success in delivering effective health services stands out, and although the government expanded services, the most dynamism at scale was offered by NGOs. The role of BRAC (originally the Bangladesh Rural Advancement Committee) was pivotal. In 1990, it developed a model of community health workers, some paid but many volunteers, who offered advice but were equipped with basic health and sanitary products they were allowed to sell. By 2005, BRAC workers were outnumbering government community health workers. With other NGOs following suit, more than three-quarters of health workers are now supplied by NGOs. BRAC alone reached up to 110 million people with health information and basic services, such as detecting the vast majority of malaria and tuberculosis cases in the country.

As for your example of counterfactual funding: we could gauge what the government was spending on malaria prevention before AMF started operations in the area, and what it was spending after. If pre-AMF spending on malaria prevention in an area was low, that sets a ceiling on how much AMF spending could be crowding out local government spending in that area. I think GiveWell tries to account for crowding out of local funding.

You could give money to the subnational government earmarked for a specific purpose that you're confident the government wouldn't have counterfactually funded. However, that burdens the developing country's public health services with managing your earmark and those of 100 other donors, and potentially destroys some of the advantages of working within the public system.

Perhaps you could try a fancier earmark, conditioning a grant for more health workers on the government funding as many workers as it had before. But that's going to require even more monitoring, and you may also be locking the government into spending its own money in a suboptimal way just to meet your grant terms.

If you don't earmark to something that wouldn't have otherwise been funded, you risk an equivalent reduction in public spending and the net effect of your donation going to better roads or something (not trivial, but just a general donation to the government). That's a 100 percent slippage.

So while I agree that there's likely some substitution effect in all cases, the magnitude of that effect (as well as the administrative difficulty and cost of mitigating the risk) could vary by an order of magnitude.

That's not to say you couldn't find a way to do what you're suggesting without incurring more-than-AMF levels of substitution effects . . . only that I think it would be rather challenging.

GiveWell has a 2021 post Why malnutrition treatment is one of our top research priorities, which includes a rough estimate of "a cost of about $2,000 to $18,000 per death averted" through treating "otherwise untreated episodes of malnutrition in sub-Saharan Africa." You can click through to the footnotes and the spreadsheets for more details on how they calculated that.

Hi, Jesper,

Thank you for this post, and apologies that it took a while for us to respond!

We agree that more public information clarifying the value of donating to these large charities would be helpful. One thing that has changed about GiveWell since the 2011 blog post is that we now have a much larger staff and have gained more research experience, so we have more capacity to investigate the complicated questions that working with very large charities can bring up. We're now more open to investigating opportunities within "mega-charities" than we were previously.

One factor that we consider whenever we make a grant is funging, or the possibility that a grant from GiveWell will cause other actors to allocate their funding differently. If a program gets money from GiveWell, another funder that would have supported that program might then decide to fund a different program that's less cost-effective, reducing the impact of our funding. Or, the organization that runs the program could decide to move some of its unrestricted funding to another of its programs that's less impactful. We would want to probe the possibility of the latter scenario as part of any investigation into a large organization that runs many programs.

We've spent a significant amount of time researching malnutrition treatment programs in the last few years, and made multiple grants, including to the large charity International Rescue Committee (IRC) and the smaller Alliance for International Medical Action (ALIMA). In late 2021, we published a blog post about why malnutrition treatment programs seemed extremely promising. But, although we did recommend grants for these programs, we have found it challenging to model their cost-effectiveness. In particular, we don't have a clear sense from studies of how many deaths they prevent, due to ethical considerations limiting the research that can be done—it's (justifiably) unethical to withhold malnutrition treatment, so it's not possible to conduct a true randomized controlled trial of treatment vs. no treatment.

After conducting extensive internal research, plus hiring a couple of external experts to do their own analysis, we believe some malnutrition treatment programs are in the range of cost-effectiveness of programs we would consider directing funding to—i.e., similar to that of our top charities, which we estimate can save a life for roughly $3,500-$5,500. We still have major uncertainties about parts of our cost-effectiveness analyses, which we're unlikely to resolve. But we think we may be able to reduce our uncertainty in other areas, and we're moving forward with work on those aspects of our model. Simultaneously, we're still investigating specific charities' programs (in specific locations) as potential giving opportunities. 

All that said, like you, we would be very surprised if the true cost to save a life (for any program, not just malnutrition treatment) were on the order of $210. Our cost per life saved estimates include all costs of running the program, including non-philanthropic costs (such as those covered by the government), and attempt to account for the other factors you mention, such as the fact that not all treated children would otherwise die from malnutrition and the likelihood that another funder would support this program if we didn't.

If you're curious to learn more, you can read a page about one of the grants we made to ALIMA here, and our most recent malnutrition treatment intervention report here.

I hope that's helpful!

Best,

Miranda Kaplan

GiveWell Communications Associate


 

If people care, maybe I can look into this more seriously and write up something longer, but I find it quite unlikely that their claim is correct. I think many of your numbered points are likely correct, but I bet point 3 is significant. CEA is tough to do well, and easy to shape.

That said, wasting really is a serious concern and might be quite cheap to treat, so if UNICEF were going to be highly cost-effective anywhere, it might be here.

That's a good point; like you say, I suspect helping to treat wasting would be among UNICEF's more cost-effective interventions.

I wouldn't necessarily advise looking into it unless you can find the number from the OP published somewhere. GiveWell have indeed already done their own analysis, which is likely better than you or I could manage. We don't have strong evidence yet (I don't think) that it is an official UNICEF number.

FWIW, I have heard that the Gates Foundation's work on TB is more cost-effective than many EA-recommended charities, but EA doesn't recommend TB interventions because of factors like room for more funding, replaceability of donations by governments and larger funders, etc.

So maybe this intervention is in a similar category?

FYI, Open Philanthropy recently regranted $40 million to the Gates Foundation's TB work, so I wouldn't say that EA "doesn't recommend" TB interventions. 

However, I don't know if there are GiveWell-competitive options for individual donors in TB, or whether the people who chose the OP regrant would recommend Gates Philanthropy Partners as an option for individuals (I don't see a way to target donations to GPP more specifically, so it seems like you may just be investing in their entire portfolio, which is presumably worse than their TB-focused work on average).

I work at Open Phil, but this comment doesn't necessarily reflect Open Phil's views.

One fundamental issue is that they aren't providing evidence for their claims about cost-effectiveness.

The response I got was quite specific: the volunteer claimed that UNICEF can save a life with just 1€ a day for an average period of 7 months.

If they had or have any reference, that could be evaluated. As it stands, it sounds like that's the non-counterfactual treatment cost for successful cases, while also ignoring overhead and administrative costs.

Hey, thanks for this post. It raises a lot of issues I've been wondering about myself, though in the context of other charities, and I think your intuitions are right.

On the other hand, as far as UNICEF itself is concerned, it's not just a charity; it's primarily a UN agency and shares the UN's overall goals. So in addition to its child-focused mandate, it works on the broader UN agenda, including political stability and peacebuilding. These are activities whose cost-effectiveness is difficult to assess.

But indeed, for too many of its programmes (especially the most complex ones), UNICEF itself does not have much data on effectiveness, and sometimes not even on the costs themselves. I can see several reasons for that:

1. UNICEF more often than other organisations supports national governments and channels its funds into specific ministries. This has the advantage of stabilising those governments and strengthening existing institutions rather than creating parallel systems. The downside is the loss of some (and often almost all) control over how the money is spent. Another consequence is the reluctance to share with the public information about the funds that go to non-democratic regimes etc. All this is not conducive to financial transparency. 

2. UNICEF works largely in the area of humanitarian aid (and not development aid), unlike most of GiveWell's charities. This has two basic consequences:
- The areas in which it operates are unstable, making it difficult to conduct counterfactual cost-effectiveness analyses. If I invest in capacity-building in a government, and then there is a coup d'état there, and then there is a currency devaluation and public officials leave, it is difficult to see the benefits of my actions. If I prepare an educational programme in refugee camps, but then its inhabitants are further displaced, then a fire breaks out destroying the infrastructure, and then all the aid staff are forced to leave the country, the cost-effectiveness of my endeavours is necessarily limited.
- Initiatives that are geared towards ad hoc and makeshift solutions are necessarily less cost-effective than investments in sustainable infrastructure and long-term reforms.

3. In line with the principle of inertia in an overgrown bureaucracy, new practices are adopted with a time lag, and in many regional offices these include the monitoring and economic analysis of operations. Indeed, the lack of attention to numbers is downright shocking in some places, which is to some extent a consequence of the approach of the donors themselves.

Does this mean that it is not worth directing funds (or rather investing aid) to volatile areas? As a rule, financial markets avoid such regions. Should aid do the same? These are questions to which there is no single answer. Nevertheless, it would be wonderful to see UNICEF one day on the path to full financial transparency and clear communication regarding the (sometimes necessary) trade-offs of its key decisions. And it would be great to have more discussion in EA around the humanitarian sector, in comparison to development (but I'm quite new to the movement and maybe that discussion has already taken place).
 

I want to raise another argument against donating to mega-charities, in favour of those that we are confident are highly cost-effective. Mega-charities, by their nature, run many programs in many locations. In effect, they draw multiple times from the distribution of interventions by effectiveness. By the law of large numbers, the average of any such sample, if not heavily conditioned on high cost-effectiveness, will be close to the average of the underlying distribution, and the larger the sample, the closer we can expect it to be. For an individual donor who cannot reasonably choose to support only a single intervention at a mega-charity, we should expect that a donation contributes towards their average cost-effectiveness. The same goes for directed donations, which suffer from high fungibility, as GiveWell mention in their comment above. In other words, it's difficult to be one of the very best on average if you are doing lots of different stuff. Even if some of the interventions you run are really effective, your average effectiveness will be dragged down by the others.

Thus, from a donor perspective, mega-charities are similar to index funds: you aren't likely to go very wrong by supporting them, and the average ROI is close to the average of all relevant interventions. However, you cannot expect to get the highest ROI by supporting mega-charities. For that, you need an organisation that specialises in one, or a few, interventions from the very top of the distribution.
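Here is a toy simulation of that averaging effect (entirely made-up numbers, just to illustrate the shape of the argument):

```python
import random

random.seed(0)

# Toy model: per-program cost-effectiveness is heavy-tailed (made-up lognormal
# "value per dollar" units), so a few programs are far better than the typical one.
programs = [random.lognormvariate(0, 1.5) for _ in range(100_000)]

# A mega-charity effectively averages over many programs drawn from this pool...
mega_charity_average = sum(random.sample(programs, 200)) / 200

# ...while a specialised, carefully evaluated charity runs a program near the top.
top_program = sorted(programs)[int(0.999 * len(programs))]  # ~99.9th percentile

print(f"mega-charity average: {mega_charity_average:.1f}, top program: {top_program:.1f}")
# The average lands near the population mean; the top program is more than
# an order of magnitude higher.
```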

I think it's a good sentiment, but I strongly disagree with one aspect of this.

I think you can go very wrong by supporting mega-charities.

Mega-charities often do lots of things really badly, so they aren't really like index funds. In the charity field I don't see why diversification would mean you would close in on the average. More likely, the quality of all your interventions will worsen and you will do worse overall.

Especially if you are just chasing the money like most mega-charities: as you move into more and more areas you have less expertise in, your quality is likely to continue to deteriorate rather than revert to the mean.
