
Trying to craft an answer; I may update this as I think some more. Of course, this answer is crafted from my perspective on what is 'best'. You will probably want to consult with your friend about what they value (as well as discussing this issue with them to help them carefully consider their priorities).


I suspect that MSF is among the best at the institutional 'laying the groundwork' needed to permit relief in these specific disaster cases, but I don't have hard evidence. (E.g., the latest GiveWell evaluation of MSF was in 2012.)

Note that (I recall reading discussions of this) the work of an organization like MSF is particularly hard to analyze with a GiveWell-style CEA, for two reasons:

... they do many things, and it's hard to isolate costs or earmark funds for specific activities, and

... much of their work has benefits that are harder to measure immediately, such as providing an environment that permits further aid and an international presence, and (maybe) building institutions.

That's not to say we shouldn't try to measure this; I think we should!

Open Philanthropy's choices

I defer to the careful judgment and research of Open Phil, who recently gave:

$20 million to the International Rescue Committee (IRC) for malnutrition treatment in Chad, Niger, Somalia, Burkina Faso, and the Democratic Republic of the Congo and $7 million to The Alliance for International Medical Action (ALIMA) for malnutrition treatment in Chad. This is a new intervention for GiveWell, and they wrote about the scale of the need, evidence base, and open questions here.

Regular malnutrition is not the same thing as famine relief, I think, but there may be some overlap.

This is why we need rigorous evaluation of a wider set of charities/causes

(One of my hobby horses)

As I've argued before, the case you present offers an example of one reason (of several reasons) why we should fund and do systematic measurement and evaluation of 'harder to evaluate' charities and causes.

I hope that the use of Fermi estimation, involving meta-analysis of existing (limited) evidence, calibrated judgment where evidence is lacking, quantified uncertainty, and Monte Carlo estimation, will enable this more. See Hazelfire/QURI work here. I also hope that SoGive can move in this direction. And I'm interested in fostering multiple independent evaluations of the same programs and charities to get a sense of our reliability here.
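To make the approach concrete, here is a minimal sketch of a Monte Carlo Fermi estimate with quantified uncertainty. All numbers and parameter names are illustrative placeholders I made up, not real figures for MSF or any other charity; tools like Squiggle/QURI's stack do this far more expressively.

```python
import random

random.seed(0)  # reproducible illustration

def monte_carlo_cost_effectiveness(n=100_000):
    """Toy Fermi estimate: cost per outcome under uncertainty.

    Each input is drawn from a lognormal distribution to encode
    multiplicative (order-of-magnitude) uncertainty. The parameters
    below are hypothetical placeholders, not real charity data.
    """
    samples = []
    for _ in range(n):
        cost_per_treatment = random.lognormvariate(4.0, 0.5)    # median ~ $55
        treatments_per_outcome = random.lognormvariate(4.6, 0.7)  # median ~ 100
        samples.append(cost_per_treatment * treatments_per_outcome)
    samples.sort()
    # Report a median with a 90% interval instead of a point estimate.
    return {
        "p5": samples[int(n * 0.05)],
        "median": samples[n // 2],
        "p95": samples[int(n * 0.95)],
    }

est = monte_carlo_cost_effectiveness()
```

The point of reporting the 5th/95th percentiles rather than a single number is exactly the "quantified uncertainty" idea above: two charities with the same median cost per outcome can have very different interval widths, and that difference should inform funding decisions.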