According to the announcement on their blog (heard through Catherine Low).

They seem to be acknowledging the importance of cost-effectiveness now:

Why Cost-effectiveness?
Take a simple thought exercise: A program has a limited budget of $100,000 to improve literacy in a community. It can choose between two approaches to do so: one that can boost literacy by a grade level for 100 students and a second that can also boost literacy by a grade level but for 200 students. All else equal, a sensible program administrator would choose the second, as of course it reaches twice as many students. This is a cost-effectiveness decision. We have limited resources and unlimited needs. Cost-effectiveness is a decision tool that makes those resources go further - helping more people in more ways.

However, their criteria still include:

"Direct" criterion: At least two-thirds of the nonprofit's activities (as measured by percent of total program service expenses) are directly delivered to beneficiaries and reasonable to expect impact measurement for. Many nonprofits work one or more steps removed from beneficiaries, such as by conducting research, advocating for policy change or making grants to other organizations. We do not yet have a method for consistently estimating the impact of these nonprofits, and so have excluded them from the Impact & Results beacon at this time.

It will be interesting to see what the outcomes of this are. At first guess, I imagine they'll be mixed.


In July, Charity Navigator announced their new nonprofit rating system that they call Encompass. This system looks at four “beacons” to determine their rating of each charity. One of these beacons is Impact & Results. At the time, they did not specify how they would evaluate this beacon. Yesterday's latest post from them finally sets down the initial methodology they will use.

Some basic takeaways:

  • At the same time as releasing their new rating system, they intend to increase the number of charities they rate from 9,000 to 160,000. Clearly, most of the ratings must be automated to do this, so only a vanishingly small proportion of charities will be evaluated on the basis of Impact & Results. It's not clear how they will prioritize which charities get rated in this beacon in the future, but they're starting with this list of cause areas and they have a sign-up form for charities that want to be evaluated on Impact & Results.
  • They will not compare across cause areas, nor in some cases even across intervention types. Their system looks at cost-effectiveness only within a cause area in some cases, and only within a single intervention type in others. For example, they may rate a charity as the most highly cost-effective provider of emergency shelters for the homeless population. There will be no indication whatsoever that a cataract surgery charity scoring 100 points on Impact & Results might be more effective than a Veterans Disability Benefits charity scoring 100 points on Impact & Results.
  • Within each cause area, they give four possible scores for Impact & Results: 0 points for charities without publicly available data, 50 points if they provide data but are determined to be inefficient, 75 points if they are found to be effective, and 100 points if they are found to be highly effective. Assuming they successfully find the most highly effective charities, this would give the likely incorrect appearance that the most highly effective charities are only a third better than charities that just barely do better than breaking even. It's also not clear what percentage of charities within a cause area may be simultaneously rated at 100 points in the Impact & Results beacon.
  • Within each cause area, they use vastly simplified calculations to determine impact. For example, when it comes to emergency shelters, they assume that all beds are equally good; they disregard counterfactual beds that would be available if the charity did not exist; they give full marks for providing a bed, even if other beds were available at the time; and when determining costs, if the charity doesn't specify the cost of providing a bed, they instead use the average cost as reported in HUD's Housing Inventory Count dataset. While I believe these are humongous assumptions to be making, I don't necessarily think these simplifications are bad considering their goal; if they're serious about analyzing hundreds of thousands of charities, then they have to make simplifications somewhere. [EDIT 16 Oct: Elijah Goldberg of ImpactMatters clarifies in a comment below that this bullet point may be misleading.]

They have noted that they are looking into alternative methodologies for the future.

The system that Charity Navigator is using for its Impact & Results beacon was acquired from ImpactMatters, which was previously discussed on the EA Forum.

Hi Eric, thanks for your note! Happy to provide some more context on a few things:

  • You're right, the 160,000 include an analysis of finance & accountability that is automated off of 990s. The Impact & Results is not automated. Honestly, the key barrier to "scale" here is smart labor (a team of 3 has been working on this). Certainly in typical EA terms, many of the nonprofits that are analyzed are not the most cost-effective. But we also know that standard EA nonprofits are a fraction of the $300 bil nonprofit sector, and there is a portion of that money that has high intra-cause elasticity but low inter-cause elasticity. Impact analysis could be a way of shifting that money, yielding very cost-effective returns (again, ImpactMatters spent half a million or so last year to rate $15 billion in nonprofit spending. How much did we actually move? Probably not a lot. But hopefully this acquisition changes that, and we'll be running experiments over the next year to figure that out).
    • If anyone is looking to pitch in on the cost-effectiveness analysis, we're looking to build a small volunteer team - more
  • True! But the beauty is (as we see it) that now there is actually a largeish raw dataset that donors can use to apply their own weighting and build benefit-cost analyses. The barrier to benefit/cost has never been the b/c methodology ... but the raw CEA estimates to feed in
  • I'm not sure what "incorrect" means in this context. Fwiw, we are working on moving to a continuous scale that may address some of your critiques. But I don't think that anyone believes that cardinal rankings are actually that much use in the space.
  • Got to disagree! Sorry, but this is incorrect - maybe not the claim about oversimplification, but your summary of our methodology certainly is. Some nonprofits run an emergency shelter as well as other shelter or housing programs, such as transitional housing or permanent supportive housing. Wherever possible, we exclude from our calculation the value of these non-emergency shelter services as well as the costs of providing them. If the nonprofit has not separated out programmatic costs in this way, we apply a standard cost adjustment. The cost adjustment is calculated using HUD’s Housing Inventory Count dataset. The Housing Inventory Count dataset reports the number of individuals sheltered by each nonprofit on a single night in January, broken out by six types of shelter and housing programs, including emergency shelter. This allows us to calculate the number of individuals sheltered as part of a nonprofit’s emergency shelter program as a percentage of the total individuals it sheltered across all programs. We then multiply the proportion by total programmatic costs, yielding an estimate of costs associated only with the nonprofit’s emergency shelter program. See Reference Manual on Data Analysis for more details on this calculation.
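The cost adjustment Elijah describes (apportioning total programmatic costs by each program's share of sheltered individuals in the Housing Inventory Count) reduces to one multiplication. All numbers below are hypothetical:

```python
# Sketch of the HUD-based cost adjustment described above, with made-up figures.
# Housing Inventory Count: individuals sheltered by this nonprofit on a single
# January night, broken out by program type (three of HUD's categories shown).
sheltered_by_program = {
    "emergency_shelter": 120,
    "transitional_housing": 60,
    "permanent_supportive_housing": 20,
}
total_programmatic_costs = 1_000_000  # dollars, from the nonprofit's financials

# Emergency shelter's share of all individuals sheltered across programs...
emergency_share = sheltered_by_program["emergency_shelter"] / sum(
    sheltered_by_program.values()
)
# ...times total programmatic costs yields the estimated cost of the
# emergency shelter program alone.
emergency_shelter_costs = emergency_share * total_programmatic_costs
```

With these numbers, 120 of 200 sheltered individuals (60%) are in emergency shelter, so $600,000 of costs would be attributed to that program.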

Thanks for engaging here Elijah and thanks for your hard work. It means a lot to me and I am sure many others here. 

This was an excellent comment and saved me a lot of time I'd otherwise have spent reading the methodology in full. Thank you for posting it!

I have some comments following up on this in this shortform here. (By the way, I wrote that before seeing your post)

So far the outcomes don't seem great to me, but I think there is still room for things to improve. I hope to keep at this.

Thanks for writing this piece.

And good on Charity Navigator for the change. I hope it works out well for them and that more effective charities get more donations as a result.