
The Giving What We Can research team is excited to share the results of our first round of evaluations of charity evaluators and grantmakers! After announcing our plans for a new research direction last year, we have now completed five[1] evaluations that will inform our donation recommendations for this giving season. There are substantial limitations to these evaluations, but we nevertheless think that they are a significant improvement on the status quo, in which there were no independent evaluations of evaluators’ work. We plan to continue to evaluate evaluators, extending the list beyond the five we’ve covered so far, improving our methodology, and regularly renewing our existing evaluations.

In this post, we share the key takeaways from each of these evaluations, and link to the full reports. [EDIT 27 November] Our website has now been updated to reflect the new fund and charity recommendations that came out of these evaluations.  We shared these reports here in advance of our website update so those interested had time to read them and could ask questions before our AMA on the 27th and 28th of November. Please also see our website for more context on why and how we evaluate evaluators.

One other exciting (and related) announcement: [EDIT 27 November] we’ve now launched our new GWWC cause area funds! These funds (which you’ll see referenced in the reports) will make grants based on our latest evaluations of evaluators, advised by the evaluators we end up working with.[2] We are launching them to provide a strong and easy default donation option for donors, and one that will stay up-to-date over time (i.e., donors can set up a recurring donation to these funds knowing that it will always be allocated based on GWWC’s latest research). We will still encourage donors to donate directly to funds and charities recommended by evaluators if they prefer to select specific programs, and will continue to host a broader variety of promising (but not currently recommended) programs on our donation platform as well.

We look forward to your questions and comments, and in particular to engaging with you in our AMA! (Please note that we may not be able to reply to many comments until then, as we are finalising the website updates and some of us will be on leave.)

Global health and wellbeing

GiveWell (GW)

Based on our evaluation, we’ve decided to continue to rely on GW’s charity recommendations and to ask GW to advise our new GWWC Global Health and Wellbeing Fund.

Some takeaways that inform this decision include:

  • GW’s overall processes for charity recommendations and grantmaking are generally very strong, reflecting a lot of best practices in finding and funding the most cost-effective opportunities.
  • GW’s cost-effectiveness analyses stood up to our quality checks. We thought its work was remarkably evenhanded (we never got the impression that the evaluations were exaggerated), and we generally found only minor issues in the substance of its reasoning, though we did find issues with how well this reasoning was presented and explained. 
  • We found it noteworthy how much subjective judgement plays a role in its work, especially with how GW compares different outcomes (like saving and improving lives), and also in some key parameters in its cost-effectiveness analyses supporting deworming. We think reasonable people could come to different conclusions than GW does in some cases, but we think GW’s approach is sufficiently well justified overall for our purposes. 

For more, please see the evaluation report.

Happier Lives Institute (HLI)

We stopped this evaluation short of finishing it, because we thought the costs of finalising it outweighed the potential benefits at this stage. 

For more on this decision and on what we did learn about HLI, please see the evaluation report.

Animal welfare

EA Funds’ Animal Welfare Fund (AWF)

Based on our evaluation, we’ve decided to recommend the AWF as a top-rated fund and to allocate half of our new GWWC Effective Animal Advocacy Fund’s budget to the AWF.

The key findings informing our recommendation of the AWF are:

  • Its recent marginal grants and overall grant decision-making look to be of sufficiently high quality.
  • We expect the AWF will have significant room to fund grants at or above the quality of its recent marginal grants.
  • We don’t know of any clearly better alternative donation options in animal welfare. 

We did find what we think to be significant room for improvement in some of the AWF’s grantmaking reasoning and value-add to its grantees beyond funding — much of which the AWF acknowledges and is planning to address. However, we don’t think this room for improvement affects the AWF’s position as being — to our knowledge — among the best places to recommend to donors.

For more, please see the evaluation report.

Animal Charity Evaluators (ACE)

Based on our evaluation, we’ve decided to not currently rely on ACE’s charity recommendations nor to recommend ACE’s Movement Grants programme (MG) as a top-rated fund. However, we still think ACE’s funds and recommendations are worth considering for impact-focused donors and we will continue to host them on the GWWC donation platform. We’ve also decided to recommend the work of one of ACE’s recommendations — The Humane League (THL) — on corporate campaigns for chicken welfare as a top-rated program, and plan to allocate half of the GWWC Effective Animal Advocacy Fund’s budget to it until our next review in animal welfare.

Our key findings informing these decisions are:

  • When compared with the AWF, we think ACE’s Movement Grants fund (MG) performs slightly less strongly on several proxies we looked into for the marginal cost-effectiveness of its grants.
    • We therefore think the AWF is currently a slightly better donation option for the impact-focused donor. However, we are open to part of this difference being explained by reasonable disagreements on optimal grantmaking strategy. Moreover, if the AWF had not been available as a better alternative by our criteria, we might have recommended MG upon further consideration. We think MG will become more competitive with the AWF, according to our criteria, if it succeeds in implementing the improvements that ACE has planned and in moving closer to the vision for MG that ACE has shared with us. 
  • ACE’s charity evaluations process does not currently measure marginal cost-effectiveness to a sufficient extent for us to directly rely on the resulting charity recommendations. 
    • We see some reasons to be hopeful this will change in future evaluations, and still think ACE’s recommendations are worth considering for impact-focused donors. We also expect the gain in impact from giving to any ACE-recommended charity over giving to a random animal welfare charity is much larger than any potential further gain from giving to the AWF or THL’s corporate campaigns over any (other) ACE-recommended charity, and note that we haven’t evaluated ACE’s recommended charities individually, but only ACE’s evaluation process.
  • THL’s corporate campaign work for chicken welfare is plausibly a highly cost-effective donation opportunity. 
    • This assessment is not based on a direct investigation by the GWWC research team, but supported by four separate pieces of evidence, one of which is ACE’s recommendation. 

We decided not to make an explicit comparison between THL’s corporate campaign work and the AWF in terms of their marginal cost-effectiveness, as we thought we would be unlikely to find a justifiable difference between the two in the limited time we had available. We decided to recommend both as top-rated options and plan to have our GWWC Effective Animal Advocacy Fund allocate half of its disbursements to THL’s program and to ask the AWF to advise the other half.

For more, please see the evaluation report.

Reducing global catastrophic risks

EA Funds’ Long-Term Future Fund (LTFF)

Based on our evaluation, we’ve decided to recommend the LTFF as a top-rated fund and to allocate half of our new GWWC Risks and Resilience Fund’s budget to the LTFF.

The key findings informing our recommendation of the LTFF are:

  • The LTFF has high-quality applicants to make grants to, and has a good basic process for selecting among those.
  • The LTFF’s significant room for funding makes it more likely that donations to it will be cost-effective.
  • We don’t know of any clearly better alternative donation option in reducing GCRs.[3]

We also found some areas where we think the LTFF could significantly improve:

  • Improving the quantity, diversity, and quality of its recorded grant reasoning.
  • Improving its response time for grant applications.

The issues we identified seem to mainly be a result of LTFF’s difficulty in maintaining and scaling its grantmaking capacity to match a significant increase in funding applications. This is something the LTFF is aware of and working to address. 

We found no clear, justifiable reasons for the donor’s extra dollar to be better spent at the LTFF than at Longview’s Longtermism Fund (or vice versa). As a result, we recommend both as top-rated funds and plan to allocate half of the budget of our Risks and Resilience Fund to each until our next evaluation. We did outline several differences between the LTFF and the LLF so motivated donors can decide for themselves which they think fits their values and starting assumptions best.

For more, please see the evaluation report.

Longview’s Longtermism Fund (LLF)

Based on our evaluation, we’ve decided to recommend the LLF as a top-rated fund and to allocate half of our new GWWC Risks and Resilience Fund’s budget to the LLF.

The key findings informing our recommendation of the LLF are:

  • Longview has solid grantmaking processes in place to find highly cost-effective funding opportunities.
  • In the grants we evaluated, we generally saw these processes working as intended, which makes us optimistic about the cost-effectiveness of the grants. 
  • The scope and structure of the LLF is — by design — consistent with what we are looking for with our Risks and Resilience Fund: a fund that makes grants that are relevant and understandable to a wide variety of donors looking to reduce global catastrophic risks.
  • We don’t know of any clearly better alternative donation option in reducing GCRs.[3]

We found no clear, justifiable reasons for the donor’s extra dollar to be better spent at the LLF than at the EA Long-Term Future Fund (or vice versa). As a result, we recommend both as top-rated funds and plan to allocate half of the budget of our Risks and Resilience Fund to each until our next evaluation. We did outline several differences between the LLF and the LTFF so motivated donors can decide for themselves which they think fits their values and starting assumptions best.

For more, please see the evaluation report.


 

  1. ^

    We decided not to complete the HLI evaluation: more on that in the report.

  2. ^

    Note that because GWWC currently doesn’t do individual charity evaluations, in nearly all cases “being advised” will simplify to us granting to the funds and recommendations of the evaluators we have selected based on our evaluations. We have chosen this phrasing because we may want to make exceptions (our recommendation of THL’s corporate campaigns can be seen as one) and we want to retain flexibility to update our Funds’ strategy over time.

  3. ^

    Note that we haven’t yet evaluated Founders Pledge’s Global Catastrophic Risk Fund, but aim to do so next year.

Comments

Removed. 

[This comment is no longer endorsed by its author]

First of all, thank you for the extensive comments!

I can give more context during our AMA next week if helpful (I won't have much time to engage in the coming few days unfortunately), but wanted to just quickly react to avoid a misunderstanding about our views here. I've copy-pasted from the relevant section from the report below:

To be clear, there are strong limitations to this recommendation:

  • We didn’t ourselves evaluate THL’s work directly, nor did we compare it to other charities (e.g., ACE’s other recommendations).
  • The availability of evidence here may be high relative to other interventions in animal welfare, but is still low compared to interventions we recommend in global health and wellbeing. We haven’t directly evaluated Open Philanthropy, Rethink Priorities, or Founders Pledge as evaluators. 
  • We have questions about the external validity of the evidence for corporate campaigns, i.e. whether they are as cost-effective when applied in new contexts (e.g. low- and middle-income countries in Africa) as they seem to have been where the initial evidence was collected (mainly in the US and Europe). 
  • We also have questions about the extent to which the evidence for corporate campaigns is out of date, as the Founders Pledge and Rethink Priorities reports are from more than four years ago and we would expect there to be diminishing returns to corporate campaigns over time, as the “low-hanging fruits” in terms of cost-effectiveness are picked first.

Taken together, all of this means we expect funding THL’s current global corporate campaigns to be (much) less cost-effective than the corporate campaigns in 2016-2017, which were evaluated in those reports.^1 

^1 It is worth noting that Open Philanthropy confirmed to us that it thinks so as well: its referral is not a claim that funding THL’s corporate campaigns will be exactly as cost-effective as it probably was a couple of years ago, when THL achieved big wins on a small budget, but a claim that funding them is likely still among the most cost-effective options in the space, and that THL can productively use a lot of extra funding without strongly diminishing marginal returns to funding currently provided.

So in short, we share your impression that THL's work is (much) less cost-effective than it was a few years ago. We are aware of Open Phil's views on this, and their referral of THL's work to us took these diminished expected returns into account. The FP and RP reports weigh (much) less heavily in our recommendation of THL's current work than ACE's and OP's recommendations, but we think those reports still provide a useful (and publicly accessible) reference on corporate campaigns as an intervention more generally.
 

Agree with lots of the above. 

It also just seems very bizarre that GWWC's animal fund pays out half to EA AWF and half to THL. Surely if you thought that EA AWF was a good evaluator or donation opportunity for donors, you would just let them manage the entirety of the fund? As then EA AWF would be able to distribute to THL if they actually thought THL was the most effective use of funds on the margin. And if not, even better, as they can give to more effective opportunities.

Also responding to the below points in your ACE evaluation report

We also think that recommending at least one competitive alternative to the AWF in the animal welfare space — if we transparently and justifiably can — is valuable. 

I'm also curious why you felt the need to recommend at least one competitive alternative to the AWF, when the AWF itself is a fairly diversified fund? Arguably, you marked ACE down for similar reasoning in your evaluation of their Movement Grants (that they were spreading their grants across many groups rather than focusing mostly on the most effective groups)

However, we still think funding them [THL] is likely highly cost-effective, and the most justifiable charity recommendation we can currently make, based on the available evidence and our limited time.

We decided not to make an explicit comparison between THL’s corporate campaign work and the AWF in terms of their marginal cost-effectiveness, as we thought we would be unlikely to find a justifiable difference between the two in the limited time we had available, including because the types of evidence we have for each are so different.

Statements like this make me worry that this evaluation focused too much on the certainty of some positive impact, rather than maximising expected impact (i.e. measurability bias). As mentioned in the comment above, you would struggle to find many experienced animal advocates who would confidently recommend THL as the single best marginal giving opportunity. In reality, they would likely either advocate for a spread of groups using different approaches or just simply give to a fund (e.g. EA AWF or ACE). 

Thanks for your comments and questions, James.

Surely if you thought that EA AWF was a good evaluator or donation opportunity for donors, you would just let them manage the entirety of the fund? As then EA AWF would be able to distribute to THL if they actually thought THL was the most effective use of funds on the margin. And if not, even better, as they can give to more effective opportunities.

The short answer is "no": we don't think we can currently justify the claim that giving to the AWF is better than giving to THL's corporate campaigns, or vice versa. We did indeed conclude from our evaluation that the AWF can likely use marginal funds cost-effectively, but that isn't the same as deferring to them on all fronts (including because we also found significant room for improvement, as explained in the AWF report), nor does it imply the AWF is better at allocating extra capital than THL is.

I'm also curious why you felt the need to recommend at least one competitive alternative to the AWF, when the AWF itself is a fairly diversified fund? Arguably, you marked ACE down for similar reasoning in your evaluation of their Movement Grants (that they were spreading their grants across many groups rather than focusing mostly on the most effective groups)

Our goal is to provide recommendations that help donors maximise their impact from the perspective of a variety of worldviews, and it's in that light that we decided to also recommend THL's corporate campaigns. Consider that someone else could have made an (I think justifiable) comment that is entirely the opposite of yours: that we should only recommend THL, because for THL we actually have some independent evidence of the intervention working and being highly cost-effective, which is lacking for many if not most of the projects the AWF funds (given the early stage of the AW charity evaluation space).

We criticized ACE MG not for making grants to multiple groups but for doing so at the seeming expense of expected impact. As mentioned above, we don't think THL's corporate campaigns are a worse donation opportunity than the AWF, and we think there may be donors who think it's more cost-effective in expectation, for instance because they put less weight on the individual judgement of grantmakers (or on our judgement in evaluating AWF to be a good donation opportunity!) and think the publicly available evidence for THL is stronger.

Statements like this make me worry that this evaluation focused too much on the certainty of some positive impact, rather than maximising expected impact (i.e. measurability bias).

I think you're right to worry about this - I do as well! - as I would say there is some implicit measurability bias in our recommendations. Most notably, we ended up recommending THL's corporate campaigns over other ACE recommendations not because we have strong evidence that they are a better donation opportunity than any other individual ACE recommendation, but because they are the only one where we think we have sufficient evidence to justify recommending them.

However, this is importantly different from us prioritising certainty of some positive impact over maximum expected impact: THL's corporate campaigns are our best-guess donation opportunity to maximise expected impact (alongside the AWF). If we thought we could have easily justified any one of ACE's other recommendations was better - or even just as good - from that perspective, we would have recommended them, but we currently can't. And please note that "justifying" here isn't about finding "certainty of positive impact": we are looking for the expected value case (as we do for the AWF and our other recommendations as well).

As mentioned in the comment above, you would struggle to find many experienced animal advocates who would confidently recommend THL as the single best marginal giving opportunity. In reality, they would likely either advocate for a spread of groups using different approaches or just simply give to a fund (e.g. EA AWF or ACE).

This is a much stronger claim than we are making (THL's corporate campaigns being the "single best marginal giving opportunity"): we think it's one of the two best donation opportunities we can, from the information we have available, recommend to a broad set of donors to maximise their expected impact. We are not claiming that nobody could do better (certainly by their individual values/worldview!), and encourage donors to do their own (further) research if they have the time and expertise available. This is also why we host a broader selection of promising programs donors can look into and support on our donation platform.

THL's corporate campaigns are our best-guess donation opportunity to maximise expected impact (alongside the AWF). If we thought we could have easily justified any one of ACE's other recommendations was better - or even just as good - from that perspective, we would have recommended them, but we currently can't. And please note that "justifying" here isn't about finding "certainty of positive impact": we are looking for the expected value case (as we do for the AWF and our other recommendations as well).

Based on your paragraph below from the ACE Report, I'm inferring that you only looked at three (out of 11) ACE recommendations, which only included charities evaluated in 2023, rather than 2022? So by default, GFI, Sinergia, Fish Welfare Initiative, Kafessiz and DVF were all excluded from potentially being identified (which seems illogical, as there is no obvious reason to think that charities evaluated in 2022 would be less cost-effective).[1]

ACE helpfully — and on very short notice — provided us with private documentation to elaborate on the cases for three of its 2023 charity recommendations [emphasis mine]. Unfortunately — potentially in part because of time constraints ACE had  — we still didn’t find these cases to provide enough evidence on the marginal cost-effectiveness of the charities to justify relying on them for our recommendations.

Given you only looked at three of the ACE 2023 recommendations (and you didn't say which ones), I'm wondering how you can make such a strong claim for all of ACE's recommended charities?

If we thought we could have easily justified any one of ACE's other recommendations [emphasis mine] was better - or even just as good - from that perspective, we would have recommended them, but we currently can't. 

On a slightly unrelated point: For the referral from OP, I would be curious to hear if you asked them "What is the most cost-effective marginal giving opportunity for farmed animal welfare" (to which they replied THL's corporate campaigns) or something closer to "Do you think THL is a cost-effective giving opportunity on the margin?"

This is a much stronger claim than we are making (THL's corporate campaigns being the "single best marginal giving opportunity"): we think it's one of the two best donation opportunities we can, from the information we have available, recommend to a broad set of donors to maximise their expected impact.

Fair enough! I should have said "One of the top 2 marginal giving opportunities" but I still think I stand by my point that many experienced animal advocates would disagree with this claim, and it's not clear that your charity recommendation work has sufficient depth to challenge that (e.g. you didn't evaluate groups yourself), in which case it's not clear why folks should defer to you over subject-matter experts (e.g. AWF, OP or ACE).

 

  1. ^

    You might say there is weaker evidence of their cost-effectiveness as it's been a year since they were evaluated but since you said you focused on the expected value case rather than certainty of positive impact, I assume this wasn't your issue.

So by default, GFI, Sinergia, Fish Welfare Initiative, Kafessiz and DVF were all excluded from potentially being identified (which seems illogical, as there is no obvious reason to think that charities evaluated in 2022 would be less cost-effective)

Yes, they were, as were any charities other than the three we asked ACE to send us more information on (based on where they thought they could make the strongest case by our lights). Among those, we think ACE provided the strongest case for THL's corporate campaigns, and with the additional referral from Open Phil + the existing public reports by FP and RP on corporate campaigns, we think this is enough to justify a recommendation. This is what I meant by there indeed being a measurability bias in our recommendation (which we think is a bullet worth biting here!): we ended up recommending THL in large part because there was sufficient evidence of cost-effectiveness readily and publicly available. We don't have the same evidence for any of these other charities, so they could in principle be as or even more cost-effective than THL (but also much less!), and without the evidence to support their case we don't (yet) feel justified recommending them. We don't have capacity to directly evaluate individual charities ourselves (including THL!), but continue to host many promising charities on our donation platform, so donors who have time to look into them further can choose to support them.

To put this differently, the choice for us wasn't between "evaluating all of ACE's recommendations" and "evaluating only THL / three charities" (as we didn't have capacity to do any individual charity evaluations). The choice for us was between "only recommending the AWF" and "recommending both the AWF and THL's corporate campaigns" because there happened to already be sufficiently strong evidence/evaluations available for THL's corporate campaigns. For reasons explained earlier, we stand by our decision to prefer the latter over the former, even though that means that many other promising charities don't have a chance to be recommended at this point (but note that this is the case in charity evaluation across cause areas!).

Given you only looked at three of the ACE 2023 recommendations (and you didn't say which ones), I'm wondering how you can make such a strong claim for all of ACE's recommended charities?

Could you clarify which "strong claim for all of ACE's recommended charities" you are referring to? From the executive summary of our report on ACE:

We also expect the gain in impact from giving to any ACE-recommended charity over giving to a random animal welfare charity is much larger than any potential further gain from giving to the AWF or THL’s corporate campaigns over any (other) ACE-recommended charity, and note that we haven’t evaluated ACE’s recommended charities individually, but only ACE’s evaluation process.

On a slightly unrelated point: For the referral from OP, I would be curious to hear if you asked them "What is the most cost-effective marginal giving opportunity for farmed animal welfare" (to which they replied THL's corporate campaigns) or something closer to "Do you think THL is a cost-effective giving opportunity on the margin?"

The latter, because a referral by OP on its own wouldn't have been sufficient for us to make a recommendation (as we haven't evaluated OP): for recommending THL's corporate campaigns, we really relied on these four separate pieces of evidence being available.

I should have said "One of the top 2 marginal giving opportunities" but I still think I stand by my point that many experienced animal advocates would disagree with this claim, and it's not clear that your charity recommendation work has sufficient depth to challenge that (e.g. you didn't evaluate groups yourself), in which case it's not clear why folks should defer to you over subject-matter experts (e.g. AWF, OP or ACE).

We're not even claiming it is one of the top 2 marginal giving opportunities, just that it is the best recommendation we can make to donors based on the information available to us from evaluators. If you could point us to any alternative well-justified recommendations/evaluators for us to evaluate, we'd be all ears.

And we don't claim people should defer to us directly on charity evaluations (again, we don't currently do these ourselves!). Ultimately, our recommendations (including THL!) are based on the recommendations of the subject-matter experts you reference. The purpose of our evaluations and reports is to help donors make better decisions based on the recommendations and information these experts provide.

First, we want to sincerely thank Giving What We Can for running this “Evaluating the Evaluators” exercise. We recognize that we have set ourselves a difficult task, compounded by the fact that we’re the only organization doing what we do. Therefore, receiving this kind of feedback is both very rare and very welcome. There’s a great deal in GWWC’s report that will help us improve our processes for 2024, which ultimately means more animals will be helped and spared. While we were disappointed that GWWC has decided not to defer to our recommendations this year or recommend our Movement Grants program as a top-rated fund, we were heartened by the positive points in their report and their optimism about ACE’s future, and look forward to receiving further helpful feedback in a future evaluation from them. We were also delighted that GWWC recommended the EA Animal Welfare Fund as an effective giving opportunity.

Second, as an organization that values transparency and seeks to be open about our own limitations, we appreciated GWWC’s same openness about the limitations to, and uncertainties around, their evaluation. As they noted, this included limitations to their animal welfare expertise, the early stage of the charity evaluation space in animal welfare, and the time constraints forcing them to take a minimum viable product approach to this evaluation. The bulk of this year’s process also coincided with the culmination of our charity recommendation decisions—which, combined with GWWC’s demanding deadlines, made some aspects of the process challenging for us. GWWC fully recognized this, and we are confident that any future evaluation exercise will be even more helpful than this year’s.  

Third, we were reassured that much of GWWC’s constructive feedback aligns with ACE’s own self-identified areas for improvement. For example, we agree that we need to continuously assess whether we want to give out fewer, larger Movement Grants than we do currently. We also agree that we should be more strategic in using the valuable feedback we get from our Movement Grants grantees to inform our own views on priority tactics and translate this into useful information for the broader animal advocacy movement.

We are also continuously working toward improvements to our charity evaluation methods, such as how to more sensitively capture differences in scope. As in previous years, we will be conducting a thorough review of the top-priority improvements to make to next year’s Movement Grants and Charity Evaluations programs. We will certainly draw on GWWC’s feedback for this while also acknowledging that capacity constraints will inevitably make it impossible to make all of the improvements we would like. 

Fourth, there are some elements of GWWC’s report that we did not fully agree with. For example, while we agree that there’s plenty of opportunity for improvements to our Cost Effectiveness model to ensure that it reflects the cost effectiveness of charities’ achievements as accurately as possible, we would like to highlight that this year’s model is the result of considerable research, external guidance, and exploration of alternatives. We built this year’s model systematically to try to capture the most important aspects of what makes achievements impactful, based on empirical evidence wherever possible, and the quantitative metric closest to impact on animals that we could access for all charities’ achievements (e.g., the number of people reached per dollar for an educational campaign). We consulted with several external experts on how to best combine these scores into a single score and went through several iterations to ensure that the scores held up in confidence checks. We also think it’s likely that GWWC is overestimating how easy it is to deliver on their recommendation of reliably estimating the “marginal cost effectiveness of a dollar spent on the charity, based on the charity’s specific context.” In some past rounds of our Charity Evaluations program, ACE carried out back-of-the-envelope-calculation (BOTEC)-type cost-effectiveness modeling using Guesstimate, which aligns with GWWC’s recommendation. However, ACE then decided to change this approach for the reasons outlined here. We continuously review this decision and are open to reintroducing elements of our past approach if we determine it would be valuable for more effectively advancing our theory of change. 

As another example of disagreement, GWWC noted that they would prefer our Movement Grants program to focus exclusively on national or international projects rather than the types of regional projects we have sometimes funded. However, given that one of the aims of the Movement Grants program is to build up the movement in priority regions with a relatively small animal advocacy movement, and that for some regions, we receive significantly more applications for regional projects than for high-quality, tractable, national-level applications, we expect that we will continue funding some regional projects that we consider particularly promising. Relatedly, GWWC disagrees with our view that funding projects in countries with very little animal advocacy representation should be a key part of ensuring the animal advocacy movement’s long-term success, noting that others may reasonably be more sympathetic to this view.

Fifth, we share GWWC’s commitment to prioritizing marginal funding to the projects where it will be the most cost effective. However, elements of ACE’s work—none of which are unique to us—make this particularly complex to achieve in practice. With our charity evaluations, for example, we only re-evaluate charities every two years, and we do not directly disburse most of the funding that we influence to our Recommended Charities. As such, ACE does not regularly calculate cost effectiveness over a range of possible allocations and distribute funding only to those above a particular effectiveness bar; instead, we recommend a set of charities that we are confident will put additional funding toward effective use to help animals over a longer time horizon without our oversight. We make this decision based on all of our evaluation criteria, with a strong focus on Cost Effectiveness (which examines the effectiveness of past work) and Room For More Funding (which assesses whether a charity’s planned uses for funding over the next two years will be roughly similar to their past work).

It is also worth noting that because we have shifted to one recommendation level for our charities this year (as opposed to the previous Top Charities/Standout Charities distinction), we plan to develop a new decision-making process that better accounts for the marginal cost effectiveness of funding disbursed from ACE’s Recommended Charity Fund. This will better enable us to leverage our grantmaking role in addition to our recommendation role.  

For our Movement Grants—especially smaller projects, projects benefiting species for whom few interventions have been tried, and projects in regions where the movement is particularly small, so there’s particularly little evidence—it is not currently possible to sufficiently investigate each project application to make meaningful cost-effectiveness estimates. We, therefore, rely on proxies such as the coherence of an applicant’s theory of change, the priority of their focus animal groups, and the neglectedness of the region in which they operate.

Additionally, because we believe that supporting a range of different approaches is essential for an effective animal advocacy ecosystem (and because our recommendations influence donor and public opinion), we want to feature a plurality of approaches with a strong potential for impact. Because of the high uncertainty about the most effective ways to help animals, and because the different interventions reinforce and facilitate each other, we think supporting a range of approaches is both necessary and beneficial for the animal advocacy ecosystem as a whole. We view this as preferable to diverting funding near-exclusively to charities and programs with the most convincing shorter-term theories of change, which runs the risk of dismissing potentially pivotal interventions due to measurability bias.

Sixth, we are glad that GWWC chose to recommend The Humane League (THL) based in part on our evaluation. As we note in our 2023 review, we view giving to THL as an excellent opportunity to support initiatives that create the most positive change for animals. At the same time, we are disappointed that GWWC chose to only recommend one of our Recommended Charities and to restrict grants for their corporate campaign work. After months of evaluation, we are confident that all of our Recommended Charities represent highly promising giving opportunities. We are also convinced of the need for a pluralistic and resilient movement incorporating a range of effective tactics toward different outcomes to achieve wellbeing for as many animals as soon as possible globally. 

We also want to make clear that while GWWC’s decision might imply that THL represents a superior giving opportunity compared to our other recommended charities, this is not a view that ACE shares. We view all of our recommended charities, including THL, as highly impactful giving opportunities. Following GWWC’s initial conclusion that they weren’t going to defer to our overall charity recommendations this year, we would have welcomed the opportunity to provide supporting materials for more than three of our Recommended Charities and to have had more time to do so.  

Lastly, and most importantly, we congratulate, once more, the latest additions to our list of Recommended Charities. We will diligently continue working to ensure that our approach to evaluation and our methods capture the full extent of charities’ work as accurately as possible, and we expect these improvements to be an ongoing endeavor that will continue for as long as ACE exists. At the same time, following months of research, preparation, and evaluation, we are confident that Kafessiz Türkiye, Dansk Vegetarisk Forening, Faunalytics, Fish Welfare Initiative, Good Food Institute, Legal Impact for Chickens, New Roots Institute, Shrimp Welfare Project, Sinergia Animal, The Humane League, and Wild Animal Initiative all do incredible work and represent extremely promising giving opportunities. We are pleased that GWWC’s report recognizes this, both through their direct recommendation of our Recommended Charity The Humane League, and through their strong recommendation to impact-maximizing donors to give to ACE’s recommended charities over the average animal welfare charity.

-ACE Team

It's worth pointing out that ACE's estimates/models (mostly weighted factor models, including ACE's versions of Scale-Tractability-Neglectedness, or STN) are often already pretty close to being BOTECs, but aren't quite BOTECs. I'd guess the smallest fixes to make them more scope-sensitive are to just turn them into BOTECs, or turn whatever parts of them you can into BOTECs[1], whenever that's not too much extra work. BOTECs and other quantitative models force you to pick factors, and scale and combine them in ways that are more scope-sensitive.

 

For the Cost Effectiveness criterion, ACE makes judgements about the quality of charities' achievements with Achievement Quality Scores. For corporate outreach and producer outreach, ACE already scores factors from which direct average impact BOTECs could pretty easily be done with some small changes, which I'd recommend:

  1. Score "Scale (1-7)" = "How many locations and animals are estimated to be affected by the commitments/campaign, if successful?" in terms of the number of animals (or animal life-years) per year of counterfactual impact instead of 1-7.
  2. Ideally, "Impact on animals (1-7)" should be scored quantitatively using Welfare Footprint Project's approach (some rougher estimates here and here) instead of 1-7, but this is a lower priority than other changes. Welfare improvements per animal or per year of animal life can probably vary much more than 7 times, though, and can end up negative instead, so I'd probably at least adjust the range to be symmetric around 0 and let researchers select 0 or values very close to it.
  3. The BOTEC is then just the product of "Impact on animals (1-7)" (the average[2] welfare improvement with successful implementation), "Scale", "Likelihood of implementation (%)", expected welfare range and the number of years of counterfactual impact (until similar welfare improvements for the animals would have happened anyway and made these redundant). Similar BOTECs could be done for the direct impacts of other interventions (see the sketch below).
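To make the arithmetic concrete, here's a minimal sketch of that product in code. All parameter names and numbers are invented for illustration only, not ACE's or GWWC's figures; dividing the result by the spending attributed to the achievement would then give a rough per-dollar figure.

```python
# Rough sketch of the per-achievement BOTEC described in point 3 above.
# All values below are made up for demonstration, not ACE's or GWWC's figures.

def achievement_botec(
    welfare_improvement_per_animal_year,  # replaces "Impact on animals (1-7)"; can be negative
    animals_affected_per_year,            # replaces "Scale (1-7)"
    prob_implementation,                  # "Likelihood of implementation (%)" as a fraction
    welfare_range,                        # expected welfare range of the species (relative units)
    years_of_counterfactual_impact,       # until similar improvements would have happened anyway
):
    """Expected welfare-range-weighted improvement from one achievement."""
    return (
        welfare_improvement_per_animal_year
        * animals_affected_per_year
        * prob_implementation
        * welfare_range
        * years_of_counterfactual_impact
    )

# Example: a hypothetical corporate welfare commitment (all numbers invented).
expected_impact = achievement_botec(
    welfare_improvement_per_animal_year=0.2,
    animals_affected_per_year=1_000_000,
    prob_implementation=0.6,
    welfare_range=0.3,
    years_of_counterfactual_impact=5,
)
print(f"Expected impact (arbitrary welfare units): {expected_impact:,.0f}")
```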

For groups like Faunalytics that aim to impact decision-making or funding in the near term through research, ACE could also highlight some of the most important decisions that have been (or are not too unlikely to be) informed by their research, so that we can independently judge how they compare to corporate outreach or other interventions. ACE could also use RP's model or something similar to get impact BOTECs to make comparisons with more direct work.

For other charities, ACE could also think about how to turn the models into BOTECs or quantitative models of important outcomes. These can be intermediate outcomes or outputs that aren't necessarily comparable across all interventions, if impact for animals is too speculative, but the potential upside is high enough and the potential downside small enough.[1]

 

For the Impact Potential criterion, ACE uses STN a lot and cites the 80,000 Hours article where 80,000 Hours explains how to get a BOTEC by interpreting and scoring the factors in specific ways. ACE could just follow that procedure and then the STN estimates would be BOTECs.
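For reference, here's my reading of how that procedure turns the three factors into a BOTEC (my own restatement of the 80,000 Hours framework, not ACE's or GWWC's formulation): each factor is defined as a ratio, so the intermediate units cancel and the product is a marginal cost-effectiveness estimate.

```latex
% My restatement of the 80,000 Hours procedure (an assumption, not ACE's or GWWC's framing):
% each factor is a ratio, so units cancel and the product is a marginal estimate.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\[
\underbrace{\frac{\text{good done}}{\text{\% of problem solved}}}_{\text{Scale}}
\times
\underbrace{\frac{\text{\% of problem solved}}{\text{\% increase in resources}}}_{\text{Tractability}}
\times
\underbrace{\frac{\text{\% increase in resources}}{\text{extra dollar}}}_{\text{Neglectedness}}
= \frac{\text{good done}}{\text{extra dollar}}
\]
\end{document}
```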

That being said, STN is really easy to misapply generally (e.g. various critiques here), and I'd be careful about relying on it even if you were to follow 80,000 Hours' procedure to get BOTECs. For example, only a tiny share of a huge but relatively intractable problem, like wild animal welfare/suffering, may be at all tractable, so it's easy to overestimate the combination of Scale and Tractability in those cases. See also Joey's Why we look at the limiting factor instead of the problem scale and Saulius's Why I No Longer Prioritize Wild Animal Welfare. STN can be useful for guiding what to investigate further and filtering charities for review, but I'd probably go for BOTECs done other ways, like above to replace Achievement Quality Scores, and with more detailed theories of change.

  1. ^

    For example, you could do a BOTEC of the number of additional engagement-weighted animal advocates, which could be part of a BOTEC for impact on animals, but going from engagement-weighted animal advocates to animals could be too speculative, so you stop at engagement-weighted animal advocates. This could be refined further, weighing by country scores.

  2. ^

    Per animal or per animal life-year, to match Scale.

  3. ^

It seems ACE did so for the Scale factor, but gave no specific quantitative interpretation for the others.

Hi Michael, thanks a lot for the helpful comments, and for taking the time to be so thorough in your feedback. We've been thinking a lot about how to produce proxies for impact that can be meaningfully compared with one another, with BOTECs being one possible way to help achieve that, so it's really useful to get your views. We'll talk these through as a team as we consider improvements to our process for the coming years.

- Max

Thank you! As we mention in the report, we're grateful for how you've engaged with our evaluations process, and I think this comment is a good illustration of the open, constructive and collaborative attitude you've had throughout it. We look forward to re-evaluating ACE's work next year, and in the meantime remain excited to host many of ACE's funds and recommendations on our donation platform as promising opportunities for donors to consider.

[comment deleted]

I'm very excited to see this. To be honest, when I first heard of the "evaluate the evaluators" project, I was very skeptical and thought it would just be a rubber stamp on the EA ecosystem in a way that would play well for social media and attract donations.

I definitely was wrong!

It's good to see that there actually was substantive meta-evaluation here and that the GWWC meta-evaluators did not pull punches!

Thank you, Peter, we're obviously very happy to hear this!

I’ve decided to curate this post. An evaluation of the evaluators, written up by an independent organisation, is a valuable resource for donors and the evaluators themselves. Even though EA charity evaluators are often pretty good at marking their own homework, I’d be happy to see it no longer being a necessity. This project gets us a step closer.

Additionally, I think the response from the evaluated organisations is wonderful. I’d like to highlight ACE’s response which exemplifies truth seeking and collaboration, while pushing back on some of the substance of the report, and making their reasoning clear in doing so.

Thank you, Toby! We appreciate the positive feedback and definitely share your thoughts about the value of this exercise.

- Max

Hi GWWC folks! Just wanted to extend a hearty thanks to you on behalf of the Happier Lives Institute. We appreciate that you looked into us and we respect your reasons for wanting to come back to us later. 

Naturally, we're doing our best to make this work out-of-date, make your concerns obsolete, and give you reasons to review our output. We've just dropped our updated psychotherapy report and 2023 giving season recommendations (which keeps StrongMinds and adds AMF!), and I hope you'll enjoy both.

Thanks Peter, and we'd of course like to extend the thanks back to HLI for being such an excellent collaborator here! Congratulations on publishing your new research. I'm eager to read more about it over the coming weeks and hopefully to dive into it in more detail next year in our next round of evaluations. 

Bumping that GWWC's research team is running an AMA here. They will respond to questions there until Tuesday 28th 9pm UTC. 

Great to read this. Did you evaluate The Life You Can Save org as part of this process? Can you share any feedback if so?

Hi Rebecca — we did not look into The Life You Can Save for this round. As shared here, we only looked into the six evaluators/funds listed in this post, and in our "Why and how GWWC evaluates evaluators" we shared how we decided which evaluators to prioritise. It's too soon to say which evaluators we'll look into next, though we can share that our current inclination is that looking into Founders Pledge's research and expanding the cause areas we include (like climate change, or "meta" work) are particularly high priorities.

What a great post, thank you so much for doing this important work. 

I'm interested to know why you chose to "still think ACE’s funds and recommendations are worth considering for impact-focused donors and we will continue to host them on the GWWC donation platform" and later say "ACE’s charity evaluation process does not currently measure marginal cost-effectiveness to a sufficient extent for us to rely directly on the resulting charity recommendations". I understand that there may be hope for the future but right now if the role of EA is to nudge people to the opportunities that have the highest marginal impact per dollar, shouldn't GWWC focus exclusively on EAWF, or are you saying there is something lacking in the analysis here?

Would appreciate some clarification

Thanks for your question!

The important nuance here is that while we did not think ACE's current charity evaluation process measures marginal cost-effectiveness to a sufficient extent to directly rely on ACE's recommendations, that isn't the same as the (stronger) claim that its recommendations are necessarily worse donation opportunities than the AWF or THL's corporate campaigns, and it also isn't the same as claiming that ACE's process doesn't track marginal cost-effectiveness at all.

We can't say confidently how ACE's (other) recommendations compare to the AWF or THL's corporate campaigns, as we haven't individually evaluated and compared them. So we want to offer donors who have the time and expertise to look into these promising individual charities the opportunity to do so and potentially donate to them if they find them to be maximising impact by their worldview, as we do for many more charities and funds on our platform that we can't currently justify recommending (for instance because they haven't been evaluated (yet)).

You may also be interested in our answer to this somewhat related question under the AMA post.

Hi Sjir, Alana and Michael,

Thanks for the update on this valuable project!

Based on our evaluation, we’ve decided to continue to rely on GW’s charity recommendations and to ask GW to advise our new GWWC Global Health and Wellbeing Fund.

It is understandable that you asked GW to advise your new GWWC Global Health and Wellbeing Fund instead of directing the donations it received to EA Funds' Global Health and Development Fund, as this is also advised by GW. However, this arguably means there will be a significant correlation between the grants made by four funds:

  • GWWC's Global Health and Wellbeing Fund
  • EA Funds' Global Health and Development Fund
  • GW's Top Charities Fund
  • GW's All Grants Fund

Have you considered allocating the donations made to the GWWC Global Health and Wellbeing Fund to GW's funds? As an aside, it is also unclear to me what the added value of EA Funds' Global Health and Development Fund is.

There is definitely substantial overlap between the four funds you listed, especially between GWWC's fund and EA Funds'. In principle, it doesn't have to be this way:

  • GWWC's Global Health and Wellbeing Fund could potentially grant based on evaluations other than GW's (e.g., from Founders Pledge or the Happier Lives Institute, depending on how our subsequent evaluations go).
  • EA Funds' Global Health and Development Fund could similarly appoint new advisors, or change its scope. But I can't speak on behalf of EA Funds!
  • GW's Top Charities Fund and All Grants Fund do make different grants, with the latter having a wider scope, but there is overlap.  

Have you considered allocating the donations made to the GWWC Global Health and Wellbeing Fund to GW's funds?

We expect that in effect this will be what happens. That is, we expect GW to advise our fund as a proxy for the All Grants Fund. Operationally, it's better for us to directly grant to the organisations based on GW's advice (rather than, for example, sending the money to GW to regrant it) so that, among other reasons, charities can receive the money sooner. We already have this process set up for donations made to GW's funds on our platform. This means that, at least right now, giving to either the All Grants Fund or our cause area fund will have the same effect. But as above, this could change based on future evaluations of evaluators, which we see as a feature for donors who want to set up recurring donations to track our latest research.

Thanks for clarifying, Michael! Your approach makes sense to me. On the other hand, the value of EA Funds' Global Health and Development Fund in its current form remains unclear to me.

If I remember correctly, GHDF predated GW's creation of the Top Charities Fund and the All Grants Fund. In addition, I think GiveWell UK is of fairly recent origin, so EA Funds would have offered UK tax advantages that were not then (at least readily) available through GiveWell. So I think at least some of the original advantages of GHDF may have become much less significant with subsequent developments at GiveWell?

Hi Jason,

If I remember correctly, GHDF predated GW's creation of the Top Charities Fund and the All Grants Fund.

The All Grants Fund was launched in August 2022, and GW's Maximum Impact Fund was renamed to the Top Charities Fund one month later. GHDF made its first grant in 2017. The Maximum Impact Fund had been making grants since 2014.

In addition, I think GiveWell UK is of fairly recent origin, so EA Funds would have offered UK tax advantages that were not then (at least readily) available through GiveWell.

Good point! GiveWell UK was launched in August 2022.

So I think at least some of the original advantages of GHDF may have become much less significant with subsequent developments at GiveWell?

I think so.

At Giv Effektivt (Denmark), we're looking towards expanding from Global Health to multiple cause areas by next summer (2024), probably starting with a limited set of options. Your work here will play an important role in those decisions. Thanks!

A fun meta-reflection: where will this chain of evaluations stop? Will there be an evaluation of evaluator-evaluators? Evaluations all the way down? I guess whatever goes down in the comments here will be exactly that.

That's great to hear Jonas, please let us know if we can do anything else to help! As mentioned in our reports and back when we announced the project, part of the motivation for doing this work is to support other effective giving organisations like Giv Effektivt to be able to make more informed decisions on their recommendations.

And yes, agree that these comments provide a bit of the next layer... let's see where it stops!

For the other recommended funds on the GWWC website, will you be evaluating the EA Infrastructure Fund, Founders Pledge Climate Fund, and the Founders Pledge Patient Philanthropy Fund? What will happen to their current recommended status in the meantime?

Also, did you evaluate GW's Top Charities Fund or All Grants Fund?

Re Top Charities vs All Grants, you can read about it in the linked evaluation of GiveWell: https://docs.google.com/document/d/1rn1d69KR3zfVzZaRZvSZNp1vcDap1hudzxngNqeFHuc/edit. As I read it, it's both - and an evaluation of GiveWell as an advisor overall.

GWWC picked Deworming as a probe into the difference between All Grants and Top Charities, and came away with the expected conclusion. Paraphrased: "AGF recommendations are highly impactful in expectation but with a wider outcome space (higher uncertainty either way)".

In this round of evaluations, we only looked into Animal Charity Evaluators, GiveWell, Happier Lives Institute, EA Funds' Animal Welfare Fund and Long-Term Future Fund, and Longview's Emerging Challenges Fund. In future evaluations, we would like to look into Founders Pledge's work, climate change more generally, and other evaluators. It's too soon to commit to which, and in which order, just yet.

Also, did you evaluate GW's Top Charities Fund or All Grants Fund?

Jonas' reply is spot on here — we essentially looked into both and to GW more generally.

I'm sympathetic to the problem of measuring "marginal cost effectiveness" for ACE's movement grants; that does seem difficult and at risk of measurement bias. But let's change the topic for now:

I'm curious how much thought has gone into the dilemma here of "spread movement infrastructure broadly everywhere" vs. "concentrate it in particular social groups/geographies/municipalities/political levers/corporate targets"?

There's an extent to which establishing footholds widely is important: maintaining the universality of the movement and maybe benefiting from a wide diffusion of tractable goals.

But on the other hand, it seems to me that concentration of funds may benefit from possible social tipping points? 

  • Protests and leafleting both seem more likely to happen with, say, 20 individuals, 5 of whom show up on a given day.
  • Likewise, the salience of such acts seems somewhat predictable - 5 fill a sidewalk, 20 might be enough to pressure a particular target, 100 enough for a march/parade.
  • "Friend group", "social scene", "group identity", all seem dependent on different scales.
  •  Social media effectiveness might require 10,000 viewers.
  • Major (positive) media attention might require 250 moderately dedicated individuals. 
  • Signature collecting clearly benefits from concentration, as do "letters to your representatives".
  • More abstractly, there are possible tipping points for achieving clear social consensus on topics, for what ideas are conformed to rather than against, for how many people are needed to veto dinner party decisions, and so on.

Obviously this isn't remotely an empirical matter: there are no objective numbers to refer to, and I made these up. And I'm of course equivocating between $$$ and social network building; there's plenty of room for particularly capable organizations and individuals to dominate. 

But what if we concentrated millions in groups in Berkeley, California? What if hundreds of thousands went to fueling activism against one single provision in a particular bill? What about 10x-ing the funding of a single university group? 

I may not be "on the ground" enough to get a good sense of actual movement dynamics, so feel free to discount what I've said on that basis. And maybe I'm just naive as to the amount of funds that can actually be productively used by a given group?

But it seems like one essential problem of the movement is that it is drowning in just causes. Random barn fire happens, 10k cows burn alive - what now? 

I agree with GWWC's case that AWF may be overestimating the value of addressing geographic neglectedness. But I'm wondering if the problem is wider? Could animal movement-building giving in general do with a bit more concentration? EA has been great at pioneering new, neglected frontiers in animal welfare; but perhaps there's a point where we should shy away from novelty and bet everything on a few choice picks?[1]

 

  1. ^

    I realize that the emphasis on The Humane League could be construed as doing just this, but I have a vague notion that "corporate campaigning" isn't exactly a movement-building tactic and so may have more of a ceiling on possible benefits? Very uncertain on this.

I really love this summary of GiveWell; it resonates with how I feel about their great work, and brings out a couple of nuances as well. Nice one!

Thanks Nick! It was really illuminating for me personally to look under the hood of GW, and I'm glad you appreciated our summary of the work. 

Hi,

Do you have any thoughts on whether AWF, LTFF and Longview’s Emerging Challenges Fund (ECF) should publish more information about the rationale behind their grants?

With respect to AWF, you say:

The AWF has a section on Grantmaking and Impact on its website in which it lists several past grantees and their type of work, but it doesn’t go into detail on what these organisations achieved or what they achieved with the grants they received from the AWF in particular. We don’t think it would be worth it for the AWF to do a follow-up impact evaluation on each grant it makes, but we think it could at least provide a few more detailed examples of successful past grants, to illustrate to donors what their donations may lead to.

I agree doing follow-up impact evaluations of all grants is not needed, but I also think it would be useful to know about the cases for the grants at the time they were made. You note LTFF shares such cases, but only for a tiny minority of the grants, and follow-up impact evaluations are essentially absent too.

ECF has grant write-ups of a few paragraphs, but you note that "Longview has solid grantmaking processes in place to find highly cost-effective funding opportunities", so maybe it could share more without much additional work?

Thanks for your work on this!

I am interested in how you would prioritise between ACE's Movement Grants (MG) and their recommended charities. What would you recommend, if you had to recommend one of them, and why? From how I read your analysis, it seems that you think that MG are the better option. Do I read that correctly?

Hi Moritz, yes: if you ask me personally, I would currently lean towards recommending MG over a randomly picked ACE-recommended charity, though I'm far from confident in this, and it's not a claim I would be able to justify to the extent we usually want to justify our recommendations as GWWC. It's mainly based on my view that the difference between the AWF and MG is fairly small (both are broadly trying to make cost-effective grants and are getting promising applications on the margin), whereas our criticism of ACE's charity evaluation process more fundamentally challenges its ability to surface highly cost-effective donation opportunities on the margin (though I also don't want to overstate our conclusion there). I would furthermore guess that MG is/will be more funding-constrained relative to its aims/applications than most of ACE's individual charity recommendations. (But really, this is a guess: note that I haven't looked into the charity recommendations individually!)

Thanks for your perspective and transparency Sjir! That seems reasonable from my prior perspective and how I read your report.

Hi,

I appreciate THL has room for more funding. You say in the report on Animal Charity Evaluators that:

A direct referral from Open Philanthropy’s Farm Animal Welfare team — the largest funder in the impact-focused animal welfare space — on THL indeed currently being funding-constrained, i.e. that it has ample room to cost-effectively use marginal funds on corporate campaigns and that there aren’t strong diminishing returns to providing THL with extra funding.

However, Open Philanthropy (OP), which granted 8.3 M$ to THL in 2023, presumably wants to fund THL up to a certain point. Other donors donating to THL could simply mean OP has to donate less. So I wonder whether donating to THL has the same effect as donating to OP. If this is so, donating to THL would be equivalent to mostly supporting human welfare interventions. Based on OP's grants data on 17 February 2024, only 9.98 % of the money granted by OP has gone to animal welfare interventions.
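To make the possible funging effect concrete, here is a minimal back-of-the-envelope sketch (my illustration, not from the report or the comment above), assuming hypothetically that OP fully offsets outside donations to THL and reallocates the freed-up funds across its overall portfolio in roughly its historical proportions; all figures other than the 9.98% share quoted above are illustrative.

```python
# Hypothetical sketch of the funging worry: if OP fully offsets outside
# donations to THL, a marginal donation mostly ends up supporting OP's
# overall portfolio rather than animal welfare specifically.

donation_to_thl = 100.0    # illustrative marginal donation, in dollars
op_offset_rate = 1.0       # assumed fraction of the donation OP offsets
op_animal_share = 0.0998   # share of OP grants going to animal welfare (figure cited above)

freed_op_funds = donation_to_thl * op_offset_rate

# Money that still ends up supporting animal welfare: the un-offset part of
# the donation plus the animal-welfare share of the funds OP redirects.
animal_welfare_effect = (donation_to_thl - freed_op_funds) + freed_op_funds * op_animal_share
other_causes_effect = freed_op_funds * (1 - op_animal_share)

print(f"Effective support for animal welfare: ${animal_welfare_effect:.2f}")
print(f"Effective support for OP's other cause areas: ${other_causes_effect:.2f}")
# With full offsetting, a $100 donation supports animal welfare by only
# ~$10 in expectation, which is the concern described above; with no
# offsetting (op_offset_rate = 0), the full $100 stays with THL.
```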

I guess donations to AWF do not suffer as much from the above.

Update. Lewis Bollard, program director of farm animal welfare at Open Philanthropy, shared his thoughts on the above. My understanding is that one should basically not worry much about displacing OP's grants.

Hello,

We conducted a survey of 16 fundraising organisations to help us decide which evaluators to prioritise investigating.

Would it be possible to share more details about this survey (e.g. identifying the 16 organisations involved)?

Hi Vasco — not all organisations gave permission to have their names shared, but the 16 include many of the fundraising organisations on this list.

Executive summary: Giving What We Can (GWWC) evaluated five organizations focused on charity evaluation and grantmaking, assessing their processes and making recommendations on which to rely on for donations.

Key points:

  1. GWWC will continue relying on GiveWell for global health recommendations and ask them to advise a new GWWC fund. GiveWell's processes are strong, though judgement plays a role in comparisons and some analyses.
  2. The evaluation of Happier Lives Institute was stopped as the costs were deemed to outweigh the benefits.
  3. The EA Funds' Animal Welfare Fund is recommended as a top fund option based on grant quality and room for more funding.
  4. Animal Charity Evaluators is not recommended for charity recommendations currently. One program by The Humane League is recommended based on multiple sources.
  5. The EA Funds' Long-Term Future Fund and the Longview Philanthropy Longtermism Fund are both recommended top funds for reducing global catastrophic risks, with GWWC allocating to both. Each fund has some areas for improvement.


This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
