One of the roles of Giving What We Can (GWWC) is to help its members and other interested people figure out where to give. If you go to their site and click "start giving" they list charitable funds, including GiveWell's All Grants Fund and the EA Infrastructure Fund, both of which our family donated to in 2022. Showing funds first reflects their view (and mine!) that most people should donate via funds: a grantmaker with time and resources to determine where money is most needed can generally do a better job allocating funds than an individual.

One of the big downsides of donating via a fund, however, is that you have to trust that its grantmakers are allocating your money in line with what you would want. Perhaps they have different values or ways of thinking about uncertainty, risk, and evidence. Or perhaps they're just not very good at their job. One of GWWC's roles is evaluating these funds, helping people figure out who to trust, but if you're more skeptical GWWC also recommends individual charities.

They maintain a list of charity evaluators they trust, and if one of those evaluators recommends a charity then GWWC will list it prominently on their site and badge it as "top rated". You can see these on GWWC's donating page if you scroll down past the funds.

There was recently some discussion on the EA Forum around one of these evaluators, Founders Pledge, and one of their recommended charities, StrongMinds. In March 2019, Founders Pledge performed a detailed investigation of StrongMinds, decided that their work treating depression in low-income countries was highly cost-effective, and wrote up a public evaluation explaining this decision (summary, details). GWWC then listed StrongMinds as a top-rated charity. All makes sense.

While Founders Pledge has continued to follow StrongMinds' work and stands by their recommendation, they haven't had the resources to update their public review. Since Founders Pledge continues to recommend StrongMinds, GWWC continues to consider it a top-rated charity.

This is not a great situation: if you want to be giving to individual charities because you don't trust grantmakers deciding privately what most needs funding, you don't want to be taking Founders Pledge's word that StrongMinds is still a highly cost-effective opportunity. How has their funding outlook changed over the last nearly four years? Have there been more recent studies on their work or on this kind of intervention?

A case with even less public information is Suvita. GWWC says they recommend Suvita because Founders Pledge's Global Health and Development Fund made a grant there in July 2022. GWWC links to that fund's Q2 2022 grants writeup which has a single paragraph on Suvita.

I think what Founders Pledge is doing here is fine; this is a reasonable level of transparency for a fund making a $50k grant. On the other hand, for a charity that GWWC is promoting directly to donors it's very little to back up a designation of "top rated".

(After sharing a draft of this post with Founders Pledge they linked me to a more detailed writeup on Suvita, but it isn't currently linked from the rest of their site or from GWWC.)

On the EA Forum I proposed that one of GWWC's requirements for endorsing recommendations from their trusted evaluators be that they're supported by current public evaluations. In the case of StrongMinds, once Founders Pledge's public evaluation became stale GWWC could have removed the "top rated" badge. GWWC's response was that they thought their current policy was correct because "our goal is primarily to provide guidance on what are the best places to give to according to a variety of worldviews, rather than what are the best explainable/publicly documented places to give."

In this case, I don't think this should be their goal. The biggest advantage I see to GWWC pointing people to specific charities, not just funds, is that this simpler approach supports people in directing their money effectively even if they don't trust the private decisions of evaluators. This doesn't work without recommendations being backed by reasonably detailed public current evaluations.

Note that this doesn't require that most donors read the evaluations: lower-trust donors still (rightly!) understand that if an evaluator has written up a public case, they're much less likely to end up funding work that's pretty different from what they thought they were funding. On the other hand, there are several reasons why a donor willing to take an evaluator's word for how effective a charity is might still prefer to donate to an individual charity instead of a fund:

  • Taxes. Donations to, for example, StrongMinds are tax-advantaged in 22 countries while donations via the EA Funds platform are only tax-advantaged in 3. If the fund is planning on granting to charity X this year, then you donating to X has similar effects to donating to the fund.

  • Preference adjustments. Perhaps you agree with a fund in general, but you think they value averting deaths too highly relative to improving already existing lives. By donating to one of the charities they typically fund that focuses on the latter you might shift the distribution of funds in that direction. Or maybe not; your donation also has the effect of decreasing how much additional funding the charity needs, and the fund might allocate more elsewhere.

  • Ops skepticism. When you donate through a fund, in addition to trusting the grantmakers to make good decisions, you're also trusting the fund's operations staff to handle the money properly and trusting that your money won't get caught up in unrelated legal trouble. Donating directly to a charity avoids these risks.

These are real concerns, but they're the kind of concerns sophisticated and committed donors are likely to have. These are the kind of people who are much less likely to put a lot of weight on a "top rated" badge, or to be on the fence about whether to donate. Supporting donors in these kinds of situations is good, but that mostly just requires listing the charities, not marking them as "top rated". Overall, I still think limiting the "top rated" badge and promotion to charities that have current public evaluations is the right choice for GWWC.


Disclosure: my wife used to be President of GWWC, but I haven't run this post by her and I don't know what she thinks of this proposal. I sent a draft of this post to GWWC and Founders Pledge; thanks to Sjir at GWWC for discussion on the Forum that led to this piece, and to Matt at Founders Pledge for his quick responses.

Comments

Thanks again for this suggestion, Jeff! However, for reasons mostly outlined in my comment here (under (4)), GWWC's position remains that we should not restrict charity recommendations only to those that have a recent public evaluation available. I'd be interested in any more arguments coming out of this discussion that would update our view though, and these could feed into a revision of our inclusion criteria later this year.

There's one thing I'd like to add - based on the emphasis of your new post: as you mention, there are multiple reasons why people choose to donate to charities over funds, even while we generally think that donating to funds will be the higher-impact option. I think I have lower credence than you seem to have in "not trusting funds" being the most prominent one, but even if it is, I don't think the current situation is problematic for donors for whom this is the main reason: those donors can easily see whether a particular top-rated charity has a recent public evaluation available (e.g. this will be highlighted on its charity page on the GWWC website), and adjust their decisions accordingly. By keeping the current policy, the "top-rated" label remains representative of where we expect money will actually do the most good, rather than it being adjusted for a subgroup of donors who have a lower trust in funds.

(As an aside, I don't see why the other reasons you mention for giving to charities (e.g. tax deductibility) would be more characteristic of "sophisticated and committed" donors than having a view on whether or not to trust particular evaluators/funds)

Could you define what counts as sufficiently "current" in your view?

I am a bit concerned that there are already practices that significantly favor larger organizations, such as GiveWell only considering orgs with tons of room for more funding for its top charity status. It's not cost-effective for evaluators to devote considerable resources to updating evaluations on midsize organizations very often. And there are downsides to putting all the eggs in a few baskets, which I fear a demanding currentness requirement would promote.

I think I would ask the recommending evaluator to affirm every X years that there are no known significant/material changes from the public evaluation, and have an absolute sunset after Y years -- but X and Y would depend to some extent on current organization size. Otherwise, I would model a decay function based on the date of the last public evaluation: the assumed cost-effectiveness decreases over time as the amount of potential information not in the public analysis grows, and removal is triggered when the adjusted effectiveness no longer meets GWWC's bar.
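
As a rough illustration of the decay-function option, here's a minimal Python sketch. The half-life, the baseline cost-effectiveness number, and the bar are hypothetical placeholders rather than figures GWWC or any evaluator actually uses; the point is just that staleness can be turned into a mechanical discount with a clear removal trigger.

```python
# Hypothetical sketch of the decay-function idea: discount a published
# cost-effectiveness estimate as it ages, and drop the "top rated" badge
# once the discounted estimate falls below the bar. All numbers are made up.
from datetime import date
from typing import Optional


def adjusted_cost_effectiveness(baseline: float, evaluation_date: date,
                                half_life_years: float,
                                today: Optional[date] = None) -> float:
    """Apply an exponential discount based on the age of the public evaluation."""
    today = today or date.today()
    age_years = (today - evaluation_date).days / 365.25
    return baseline * 0.5 ** (age_years / half_life_years)


def still_top_rated(baseline: float, evaluation_date: date,
                    half_life_years: float, bar: float) -> bool:
    """Keep the badge only while the age-adjusted estimate still clears the bar."""
    return adjusted_cost_effectiveness(
        baseline, evaluation_date, half_life_years) >= bar


# Example: an evaluation published in March 2019, assuming a 3-year half-life
# and a bar of 4 (in whatever cost-effectiveness units the evaluator uses).
print(still_top_rated(baseline=10.0, evaluation_date=date(2019, 3, 1),
                      half_life_years=3.0, bar=4.0))
```

The "affirm every X years" check from the first option could act as a reset here: a fresh affirmation or re-evaluation moves the evaluation date forward and restores the undiscounted estimate.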

In general, I think it makes more sense for smaller organizations to be supported via funds, where grantmakers can engage directly with organization leadership.

On staleness, I would go the other way: a review of a small, agile, or quickly growing organization goes stale a lot faster than a larger and more stable one.

I like your first idea in theory, but I think you have to have enough varied funds in place first. Of the four recommended funds in Global Health & Development, all are GiveWell at their core, and even the GiveWell All Grants Fund directs an estimated 75 percent to GiveWell top charities per https://www.givewell.org/research/all-grants. So any organization that doesn't score well on GiveWell's system is going to get shut out entirely under that approach. This is absolutely not a criticism of GiveWell, which does what it is intended to do very well.

On your second point, I also agree in theory -- I think SM's growth in room for funding is one reason I qualitatively find the public report a bit stale. But the question of how quickly to age these out loops back, in part, to whether the funds are diverse enough and willing enough to fund a range of small/midsized organizations.

I'll just add that from SoGive's perspective, this proposal would work. We have various views on charities, but only the ones which are in the public domain are robustly thought through enough that we would want an independent group like GWWC to pick them up.

The publication process forces us to think carefully about our claims and be sure that we stand by them.

(I appreciate that Sjir has made a number of other points, and I'm not claiming to answer this from every perspective)

SoGive is not currently on GWWC's list of evaluators -- GWWC plans to look into us in 2023.

"One of the roles of Giving What We Can (GWWC) is to help its members and other interested people figure out where to give." Is this a recent addition to the GWWC mission statement? I've been a member for a while and wasn't under the impression that GWWC was in the business of doing charity evaluations or meta-charity/fund evaluations. I assumed GWWC always emphasized the pledge, why to give, how much to give, but not saying much about where to give beyond pointing to GiveWell or whatever. If a big component of GWWC has always been about where to give, I must have missed that. Has GWWC emphasized the where to give piece more in recent years?

I think the current thinking is: "Evaluating the evaluators": GWWC's research direction

Initially GWWC did their own charity evaluation, and had some public disagreements with GiveWell (ex: GWWC's 2014 Why we (still) don’t recommend GiveDirectly). Sometime around 2016 (compare team archive snapshots in mid-2016 with mid-2017) GWWC disbanded their research department and then stopped having full-time staff. In 2020 Luke took over GWWC leadership, and my interpretation is that the "evaluating the evaluators" direction was started under Luke.

EDIT: I had tried to find a GWWC blog post about getting out of research, but it turns out it was a CEA post:

Our research wasn’t able to add enough value beyond GiveWell and the Open Philanthropy Project. Our model involved conducting research into areas that GiveWell/Open Philanthropy Project had not fully explored and were unlikely to explore anytime soon. Our team’s areas of expertise overlapped considerably with those of GiveWell/Open Philanthropy Project, however. Without venturing well beyond our areas of expertise, there were fewer opportunities to provide value here than we expected. Although this might not always be the case, we believe that research that is within the focus areas of GiveWell/Open Philanthropy Project is most efficiently conducted within those organizations. James Snowden, formerly of our philanthropic advising team, will be joining GiveWell, where we believe his research will have greater impact.

https://www.centreforeffectivealtruism.org/blog/cea-strategic-update

Yep - Jeff's pretty much captured it all here.

GWWC's mission is to "make giving effectively and significantly a cultural norm" and the pledge plays a big part in that, as does advocating for and educating about effective giving.

Supporting donors/members in giving effectively has always been a part of GWWC but what that's looked like has changed over the years (from very detailed charity evaluation through to just linking off to GiveWell/ACE/EA Funds when there was no one working full time on GWWC).

Thanks for the clarification!

I took the pledge in 2016, which coincided with the research department being disbanded per Jeff's comment. I think that explains why I perceived GWWC as not being in the business of doing evaluations. Glad to see "evaluate the evaluators" is working its way back in.
