
In connection with ongoing outreach work in EA Israel, an upcoming Charity Effectiveness Prize, and our local charity-evaluation work, I've become interested in how the evaluation process affects the charities under evaluation.

I can (sort of) quantify and understand the direct impact of the recommendation on the top charities. I can also (sort of) imagine the kind of impact the recommendation process has on popularizing cost-effectiveness (although I'd love to read a detailed report on the topic).

What I'd like to understand better at the moment are the more general questions around how additional evidence leads to higher performance:

  1. How do self-reflecting charities adapt to new evidence? GiveDirectly, for example, performs a lot of RCTs on direct cash transfers. I'd be very interested in examples of GiveDirectly and other charities making strategic or practical changes due to new evidence, and examples of randomized trials or other analyses being performed to inform high-level decisions. Are there examples of nonprofits that completely started over when their interventions proved ineffective? I'd also be interested to learn more about what charities recommended by evaluation orgs learn from the evaluation process itself, and whether it helps them improve (or causes harm).
  2. How are charities that aren't empirically grounded affected by charity evaluation? Most charities, unfortunately, do not perform RCTs or otherwise invest in gathering evidence about the impact of their actions. Generally speaking, how do such charities respond to evidence or claims of (in)effectiveness? How reasonable is it to expect charities without a strong self-evaluation history to improve as a result of an analysis performed later on? Are there examples where organizations like GiveWell and ACE successfully helped improve the performance of the charities they investigated?

Answers
I can only address one of your points from question 1. Evidence Action abandoned at least one of its interventions after it proved to be ineffective in light of new evidence. If I remember correctly, it was in Bangladesh.

Thank you!

I searched and found this post describing it. The summary:

Evidence Action is terminating the No Lean Season program, which was designed to increase household food consumption and income by providing travel subsidies for seasonal migration by poor rural laborers in Bangladesh, and was based on multiple rounds of rigorous research showing positive effects of the intervention. This is an important decision for Evidence Action, and we want to share the rationale behind it.  

Two factors led to this, including the disappointing 2017 evidence on ...
smaq: Thank you. Yes, this is exactly what I was referring to.