
We aim to maximize our impact. That means we focus on directing funds as cost-effectively as we can. Rather than recommending a long list of potential giving options, we focus on finding the organizations that save or improve lives the most per dollar.[1]

Going forward, we will no longer publish a list of standout charities alongside our list of top charities. We think our standout charities are excellent, but we believe donors should support top charities.[2]

Removing standout charities will lead our website to better reflect our recommendations for donors. We hope it will reduce confusion about the difference between top and standout charities and help us direct funding as cost-effectively as possible.

We continue to see the nine standout charities we’ve shared as very strong organizations. This decision doesn’t in any way reflect changes in our evaluation of their programs.

What are standout charities?

We define standout charities as follows:

Standout charities “support programs that may be extremely cost-effective and are evidence-backed. We do not feel as confident in the impact of these organizations as we do in our top charities. However, we have reviewed their work and believe these groups stand out from the vast majority of organizations we have considered in terms of the evidence base for the program they support, their transparency, and their potential cost-effectiveness.”

In other words, we expect that funds directed to top charities are more likely to have a significant impact than those directed to standout charities. We created the standout charity designation to recognize organizations we reviewed that didn’t quite meet our criteria to be top charities, but were very good relative to most. We also hoped the designation would incentivize organizations to engage in our intensive review process.[3]

Confusion between top charities and standout charities

However, we’ve realized that it’s confusing to have two different designations for organizations on our website.[4] Our recommendation for donors is (and always has been) to give to top charities. Our number-one recommendation for GiveWell donors who want to do as much good as possible is to give to our Maximum Impact Fund, which is allocated to the most cost-effective funding opportunities among our top charities. We don’t allocate the Maximum Impact Fund to standout charities.

Maintaining a list of standout charities for donors is not consistent with our goal of directing funds as cost-effectively as possible.

No changes in our evaluation of standout charities

We made this decision by thinking through how we can communicate more clearly—it wasn’t spurred by any change whatsoever in our views of the standout charities we’ve featured.

Going forward

We think our standout charities are doing great work, even though we’re discontinuing the “standout charity” designation. We’ve recommended that Open Philanthropy make a $100,000 exit grant to each standout charity on our list.

We’re no longer accepting donations for standout charities. We’re contacting donors who have recurring donations set up for our standout charities. If you have an open recurring donation and you haven’t heard from us, please contact us to make sure we accommodate your preferences for cancelling or redirecting your donations.

If you’d like to continue to donate to any of the standout charities, you can do so at the following links. (Note: the links below show tax-deductible options for donors based in the United States. If you’re donating from another country and interested in information on tax-deductibility, please check each organization’s website or contact it directly.)

If you have any questions about your donations, please don’t hesitate to contact us at donations@givewell.org.


  1. We focus on providing a short list of impact-maximizing options that we have intensely vetted. We don’t aim to recommend a long list of potential options for donors. ↩︎

  2. For example, in a 2019 blog post on standout charities ("What are standout charities?"), we wrote: “We don’t advise giving to our standout charities over our top charities because we believe that top charities have a greater impact per dollar donated. By definition, top charities have cleared a higher bar of review from GiveWell.” ↩︎

  3. We discussed this in "What are standout charities?" ↩︎

  4. In the 2019 blog post referenced above, we wrote: “The standout charity designation, though valuable for the reasons mentioned above, has created communication challenges for us. People who rely on our recommendations to make donations have expressed confusion about how our view of standout charities compares to that of top charities.” ↩︎

Comments

This line of reasoning seems sensible to me. However, it does raise the following question: will GiveWell also stop recommending GiveDirectly, given that, by your own cost-effectiveness numbers, it's 10-20x less cost-effective than basically all your other recommendations? And, if not, why not?

I can understand the importance of having some variety of options to recommend donors, which necessitates recommending some things that are worse than others, but 10x worse seems to leave quite a lot of value on the table. Hence, I'd be curious to hear the rationale.

I'll post Catherine's reply and then raise a couple of issues:
 

Thanks for your question. You’re right that we model GiveDirectly as the least cost-effective top charity on our list, and we prioritize directing funds to other top charities (e.g. through the Maximum Impact Fund). GiveDirectly is the benchmark against which we compare the cost-effectiveness of other opportunities we might fund.

As we write in the post above, standout charities were defined as those that “support programs that may be extremely cost-effective and are evidence-backed” but “we do not feel as confident in the impact of these organizations as we do in our top charities.”

Our level of confidence, rather than their estimated cost-effectiveness, is the key difference between our recommendation of GiveDirectly and standout charities.

We consider the evidence of GiveDirectly’s impact to be exceptionally strong. We’re not sure that our standout charities were less cost-effective than GiveDirectly (in fact, as we wrote, some may be extremely cost-effective), but we felt less confident in making that assessment, based on the more limited evidence in support of their impact, as well as our more limited engagement with them.

 

I don't see a justification here for keeping GiveDirectly in the list. Okay, there are charities GiveWell is 'confident' in and those that they aren't, and GiveDirectly, like the other top picks, is in the first category. But this still raises the question of why to recommend GiveDirectly at all. Indeed, it's arguably more puzzling: if you think there's basically no chance A is better than B, why advocate for A? At least if you think A might be better than B, you could defend recommending A on the grounds that there's a chance it is: if someone believes X, Y, and Z, they might sensibly believe it's better.

The other thing that puzzles me about this response is its seemingly non-standard approach to expected value reasoning. Suppose you can do G, which has a 100% chance of doing one 'unit' of good, or H, which has a 50% chance of doing 3 'units' of good. I say you should pick H because, in expectation, it's better, even though you're not sure it will be better. 
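The toy comparison above can be made concrete with a few lines of code. This is just an illustration of the comment's hypothetical numbers (one certain unit of good vs. a 50% chance of three units), not anyone's actual cost-effectiveness model:

```python
# Toy expected-value comparison from the comment above.
# Option G: 100% chance of 1 unit of good.
# Option H: 50% chance of 3 units of good.

def expected_value(prob, units):
    """Expected units of good: probability of success times payoff."""
    return prob * units

ev_g = expected_value(1.0, 1)  # 1.0
ev_h = expected_value(0.5, 3)  # 1.5

# H has the higher expected value, even though its outcome is less certain.
```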

Where might having less evidence fit into this?

One approach to dealing with different levels of evidence is to discount the 'naive' expected value of the intervention, that is, the one you get from taking the evidence at face value. Why, and by how much, should you discount? You reduce the naive estimate to what you expect you would conclude its actual expected value was if you had better information. For instance, suppose one intervention's RCTs have much smaller samples, and you know that effect sizes tend to shrink when interventions are run with larger samples (they are harder to implement at scale, etc.). That gives you both a reason to discount and a sense of how much. Once you've done this, you have the 'sophisticated' expected values. Then you do the thing with the higher 'sophisticated' expected value.
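The discounting move described above can be sketched as follows. The function name and the numbers are purely hypothetical, chosen only to show how a weaker-evidence option can still come out ahead after its naive estimate is discounted:

```python
# Hypothetical sketch of discounting a 'naive' expected-value estimate
# to a 'sophisticated' one, per the comment above. Numbers are made up.

def sophisticated_ev(naive_ev, survival_fraction):
    """Discount a naive estimate by the fraction of it you expect
    would survive better information (e.g. larger, well-run RCTs)."""
    return naive_ev * survival_fraction

# Strong evidence: little discounting needed.
strong_evidence = sophisticated_ev(10.0, 0.9)
# Weak evidence: heavy discounting, but a much higher naive estimate.
weak_evidence = sophisticated_ev(30.0, 0.4)

# After discounting, the weak-evidence option can still have the
# higher sophisticated expected value (roughly 12 vs. roughly 9 here).
```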

Hence, I don't see why lower ('naive') cost-effectiveness should stop someone from recommending something.

My major concern: Making this less prominent further increases the 'gap in the market' for people who could be convinced to care about effectiveness, but are not eager to give in these particular categories. The GiveWell Top 9 charities do 6 things, and other than GiveDirectly all of these are health interventions.

Which is great.

But we all know many people and orgs that say

  • 'we want to give to something education (or female, or food...) related',
  • 'we already supported mosquito nets and vitamins, what else can we do?', or
  • 'we need to support an organization that has been around a long time (or is top-rated on Charity Navigator, or is associated with some country or religion)'.

To some extent, I agree this is misguided.

But at the moment these people often end up giving to something we know to be much less effective (like giving science equipment to schools in New York, or supporting food banks in London).

I think we need a credible rating to give to people, to be able to discern between, e.g., the Fistula Foundation and St. Jude's, or between Donors Choose and Development Media International, rather than have these people donate to something we have good reason to think is many orders of magnitude less efficient.

There is no place these people can go to. Impact Matters offered something in this direction, but they were taken over by Charity Navigator, and the integration does not look promising (at least not yet).

SoGive does a lot of things right, and Sanjay is EA-aligned, but its reach is limited (UK-focused) and it concentrates on outputs rather than impact, in a sense.

So I think we are missing the chance to positively influence a huge amount of funds if we don't have a 'real impact-based charity rating' that

  • includes a somewhat more diverse and mainstream list of charities,
  • compares among charities that are not necessarily at the very top, and
  • covers cases where the nature of the intervention is somewhat harder to assess (or the charity runs more than one intervention, some of which may even be in the top categories).

While well-intended, I fear that de-emphasizing standout charities is a step in the wrong direction.

There is an adjacent issue: the difference in perspective might stem not from somewhat arbitrary restrictions like those in your examples, but from weighing different moral goods (e.g., the value of older vs. younger lives) differently.

I agree. Your issue is relevant to more sophisticated EA-adjacent types too. I would love to have/build a simple app (R-shiny) that allows the user to input these concerns and tailor their analysis.

The main issue I find with this is that those "standout" charities should be treated as possible future top charities, and not publishing an equivalent list might make it harder for them to become so.

My understanding is that most of these standout charities are not likely to become future top charities within the next few years. 

GiveWell instead gives incubation grants to organizations/projects that would support the development of future top charities, and the organizations they grant to are usually not their standout charities (except for their 2020 grant to DMI and a 2017 grant to Zusha!). Some organizations they have given incubation grants to are Fortify Health, Pure Earth, and Centre for Pesticide Suicide Prevention.

I'm not sure if it's necessary for GiveWell to make a list of these potential future top charities. Maybe just looking at which organizations they give incubation grants to might serve the same purpose?

If the issue was donor confusion, why not keep the list but rename it 'Good but not Great Charities' or similar?