We’re the research team at Giving What We Can

Ask us anything!

We’ll be answering questions Monday the 27th from 2pm UTC until Tuesday the 28th at 9pm UTC. 

Update 28 November 6.20pm UTC: thank you for all the great questions! We've answered most of them by now, and plan to continue answering questions for a bit longer, probably until tomorrow morning ~5am UTC.
 

Please post your questions as comments to this post, to the post on our evaluations of evaluators, or to the post on our recommendations and cause area funds. And please upvote the questions you’d like us to answer most. We’ll do our best to answer as many as we can, though we can’t guarantee we’ll be able to answer all of them.

In addition to discussing our new reports, recommendations and funds, we are happy to answer any questions you may have about our research plans for next year, about the impact evaluation we did earlier this year, about GWWC more broadly, or about anything else you are interested in!


 


How large is the comparison class for charity evaluators? When you are defining best practice for a charity evaluator, do you have a small set of charity evaluators in mind, or do you find examples in analogous evaluation projects and platforms (perhaps some that are nothing to do with EA, or nothing to do with charity in general)?

There is a relatively small comparison class here; we often say we’re focused on “impact-focused” evaluators. Here is our database of evaluators we know of that we might consider in this reference class. In the medium to long run, however, we could imagine there being value in investigating an evaluator completely outside EA, with a very different approach. There could be valuable lessons in best practice, and other insights, that would make this worthwhile. I expect we probably won’t prioritise this until we have looked into more impact-focused evaluators.

I'm just commenting here, but it's in reference to the "evaluating the evaluators" post and the comments below. There is some confusion as to where GWWC's recommendation of THL's corporate campaign work is coming from. Is there some strong evidence for this that was not published? Why is THL considered but no other direct charities?

If we were going off the conclusions of the "evaluating the evaluators" project, shouldn't GWWC just be recommending EAWF?

Thanks for all your hard work on this :) 

Thanks Lauren for your question, and thanks Vasco for helping to answer it! I've replied to the comment under the post on our evaluations that I believe you're referring to, and am happy to elaborate on any part of my answer there (and what's in the report / what Vasco shared) if helpful.

Hi Lauren,

You and other interested readers may want to check this section of GWWC's evaluation of ACE. Here is the part of the section I see as most relevant to your comment:

ACE helpfully — and on very short notice — provided us with private documentation to elaborate on the cases for three of its 2023 charity recommendations. Unfortunately — potentially in part because of time constraints ACE had — we still didn’t find these cases to provide enough evidence on the marginal cost-effectiveness of the charities to justify relying on them for our recommendations. However, for one of the charities recommended by ACE — The Humane League — we reasoned that a strong enough case would be made by combining ACE’s recommendation with further evidence in favour of the marginal cost-effectiveness of THL’s work on corporate campaigns for chicken welfare:

  • A 2018 evaluation by Founders Pledge of THL’s corporate campaigns’ cost-effectiveness at that point in time.
  • A 2019 review by Rethink Priorities of the cost-effectiveness of corporate campaigns for chicken welfare more generally.
  • A direct referral from Open Philanthropy’s Farm Animal Welfare team — the largest funder in the impact-focused animal welfare space — on THL indeed currently being funding-constrained, i.e. that it has ample room to cost-effectively use marginal funds on corporate campaigns and that there aren’t strong diminishing returns to providing THL with extra funding.

To be clear, there are strong limitations to this recommendation: 

  • We didn’t ourselves evaluate THL’s work directly, nor did we compare it to other charities (e.g., ACE’s other recommendations).
  • The availability of evidence here may be high relative to other interventions in animal welfare, but is still low compared to interventions we recommend in global health and wellbeing.
  • We haven’t directly evaluated Open Philanthropy, Rethink Priorities, or Founders Pledge as evaluators. 
  • We have questions about the external validity of the evidence for corporate campaigns, i.e. whether they are as cost-effective when applied in new contexts (e.g. low- and middle-income countries in Africa) as they seem to have been where the initial evidence was collected (mainly in the US and Europe).
  • We also have questions about the extent to which the evidence for corporate campaigns is out of date, as the Founders Pledge and Rethink Priorities reports are from more than four years ago and we would expect there to be diminishing returns to corporate campaigns over time, as the “low-hanging fruits” in terms of cost-effectiveness are picked first.


Taken together, all of this means we expect funding THL’s current global corporate campaigns to be (much) less cost-effective than the corporate campaigns in 2016–2017, which were evaluated in those reports. However, we still think funding them is likely highly cost-effective, and the most justifiable charity recommendation we can currently make, based on the available evidence and our limited time. We also think that recommending at least one competitive alternative to the AWF in the animal welfare space — if we transparently and justifiably can — is valuable. We hence decided to make this exception to our approach of evaluating only evaluators and recommend an individual charity, motivated by our overarching principles of usefulness, justifiability, and transparency.

Hi Sjir, Alana and Michael,

Thanks for all your work on evaluating the evaluators! I think this is a very valuable project.

Some questions:

  • If someone wanting to donate $1M who was not pre-committed to any particular area asked for your advice on which of your recommended funds is most cost-effective, and wanted to completely defer to you without engaging in the decision process, what would you say?
  • Are you confident that donating to the Animal Welfare Fund (AWF) is less than 10 times as cost-effective as donating to GiveWell's Top Charities Fund (TCF)? If not, have you considered investigating this?
  • If you thought donating to the AWF was over 10 times as cost-effective as donating to TCF (you may actually agree/disagree with this), would you still recommend the latter (relatedly)? If so, would you disclaim that your best guess was that AWF was significantly more cost-effective than TCF?
  • Are you confident that donating to TCF is beneficial accounting for effects on animals? If not, have you considered investigating this? I did not find "animal" nor "meat" in your evaluation of GiveWell.
  • If you thought donating to TCF resulted in a net decrease in welfare due to the meat-eater problem (you may actually agree/disagree with this), would you still recommend it? If so, would you disclaim that your best guess was that TCF resulted in a net decrease in welfare, but that you recommended it for other reasons?

I appreciate these are difficult questions, and my understanding is that GWWC has broadly been thinking about them along the lines of Open Phil's worldview diversification approach. However, I also think Open Phil has not been transparent about its application in the context of prioritisation between human- and animal-focused interventions. It would be great if you could be transparent about your process.

In any case, regardless of whether you look into the above or not, evaluating the evaluators is still useful!

Hi Vasco, thanks for your questions!

I’ll answer what I see as the core of your questions before providing some quick responses to each individually. 

As you suggest, our approach is very similar to Open Philanthropy’s worldview diversification. One way of looking at it is that we want to provide donation recommendations that maximise cost-effectiveness from the perspective of a particular worldview. We think it makes sense to add another constraint to this, which is that we prioritise providing advice for the more plausible worldviews that are consistent with our approach (i.e., focusing on outcomes, having a degree of impartiality, and wanting to rely on evidence and reason).

I’ll share how this works with an example. The “global health and wellbeing” cause area contains recommendations that appeal to people with some combination of the following beliefs:

  1. We should prioritise helping people over animals
  2. Some scepticism about highly theoretical theories of change, and a preference for donating to charities whose impact is supported by evidence
  3. It’s very valuable to save a life
  4. It’s very valuable to improve someone’s income

People may donate to the cause area holding only some of these beliefs, or perhaps none of them but another motivation not listed here. Perhaps they have more granular beliefs on top of these, which means they might only be interested in a subset of the fund (e.g., focusing on charities that improve lives rather than save them).

Many of your questions seem to suggest that, when we account for consumption of animal products, (3) and (4) are not so plausible. I suspect that this is among the strongest critiques of worldviews that would support GHW. I have my own views about it (as would my colleagues), but from a “GWWC” perspective, we don’t feel confident enough in this argument to use it as a basis not to support this kind of work. In other words, we think the worldviews that would want to give to GHW are sufficiently plausible.

I acknowledge there’s a question-begging element to this response: I take it your point is, why is it sufficiently plausible, and who decides this? Unfortunately, we have to acknowledge that we don’t have a strong justification here. It’s a subjective judgement formed by the research team, informed by existing cause prioritisation work from other organisations. We don’t feel well-placed to do this work directly (for much the same reason as we need to evaluate evaluators rather than doing charity evaluation ourselves). We would be open to investigating these questions further by speaking with organisations engaged in this kind of cause prioritisation — we’d love to have a more thoughtful and justified approach to cause prioritisation. In other words, I think you’re pushing on the right place (and hence this answer isn’t particularly satisfying).

More generally, we’re all too aware that there are only two of us working directly to decide our recommendations, and we are reluctant to use our own personal worldviews in highly contested areas to determine them. Of course, it has to happen to some degree (and we aim to be transparent about it). For example, if I were to donate today, I would likely give 100% of my donations to our Risks and Resilience Fund. I have my reasons, and think I’m making the right decision according to my own views, but I’m aware others would disagree with me, and in my role I need to make decisions about our recommendations through the lens of commonly held worldviews I disagree with.

I’ll now go through your questions individually:

If someone wanting to donate $1M who was not pre-committed to any particular area asked for your advice on which of your recommended funds is most cost-effective, and wanted to completely defer to you without engaging in the decision process, what would you say?

We’d likely suggest donating to our cause area funds via the “all cause bundle”, splitting the allocation equally between the three areas. This is our default “set-and-forget” option, which seems compelling from the perspective of wanting to give a fraction of one’s donations to causes that are maximally effective from particular worldviews. This is not the optimal allocation under moral uncertainty (on this approach, the different worldviews could ‘trade’ and increase their combined impact); we haven’t prioritised trying to find such an optimised portfolio for this purpose. It’d be an interesting project, and we’d encourage anyone to take it on and share it on the Forum and with us!
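
For anyone tempted to take that project on, here is a minimal, purely illustrative sketch of what “optimising” could look like. Everything in it is invented: the worldview labels, the cost-effectiveness numbers, and the choice of Nash welfare (the product of the worldviews’ utilities) as one simple stand-in for the outcome worldviews might reach if they could “trade”. It is not GWWC’s method.

```python
# A toy sketch, not GWWC's method: comparing an equal three-way split
# across cause area funds with a portfolio "optimised" under moral
# uncertainty. All numbers and labels are invented for illustration.

FUNDS = ["GHW", "AW", "R&R"]

# Hypothetical cost-effectiveness of each fund under each worldview
# (arbitrary units of good per dollar).
VALUES = {
    "human-centric":    {"GHW": 1.0, "AW": 0.1, "R&R": 0.2},
    "animal-inclusive": {"GHW": 0.2, "AW": 1.0, "R&R": 0.2},
    "longtermist":      {"GHW": 0.1, "AW": 0.1, "R&R": 1.0},
}

def utilities(alloc):
    """Utility each worldview assigns to an allocation (fund -> share)."""
    return {w: sum(alloc[f] * v[f] for f in FUNDS) for w, v in VALUES.items()}

def nash_welfare(alloc):
    """Product of worldview utilities: one stand-in for a 'traded' outcome,
    since it is only high when no worldview is left badly off."""
    product = 1.0
    for u in utilities(alloc).values():
        product *= u
    return product

# Brute-force search over allocations in 1% steps that sum to 100%.
best_alloc, best_score = None, -1.0
for g in range(101):
    for a in range(101 - g):
        r = 100 - g - a
        alloc = {"GHW": g / 100, "AW": a / 100, "R&R": r / 100}
        score = nash_welfare(alloc)
        if score > best_score:
            best_alloc, best_score = alloc, score

equal_split = {f: 1 / 3 for f in FUNDS}
print("Equal split utilities: ", utilities(equal_split))
print("Nash-optimal split:    ", best_alloc)
print("Nash-optimal utilities:", utilities(best_alloc))
```

Swapping in other aggregation rules (e.g., expected value under credences over worldviews, or maximin) will generally yield quite different portfolios, which is itself a useful illustration of how much any “optimal” allocation depends on contestable modelling choices.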

Are you confident that donating to the Animal Welfare Fund (AWF) is less than 10 times as cost-effective as donating to GiveWell's Top Charities Fund (TCF)? If not, have you considered investigating this?

We are not confident. This is going to depend on how you value animals compared to humans; we’re also not sure exactly how cost-effective the AWF is (just that it is the best option we know of in a cause area we think is generally important, tractable and neglected).

If you thought donating to the AWF was over 10 times as cost-effective as donating to TCF (you may actually agree/disagree with this), would you still recommend the latter (relatedly)? If so, would you disclaim that your best guess was that AWF was significantly more cost-effective than TCF?

If we thought there wasn’t a sufficiently plausible worldview whereby TCF was the best option we knew of, we would not recommend it. 

Are you confident that donating to TCF is beneficial accounting for effects on animals? If not, have you considered investigating this? I did not find "animal" nor "meat" in your evaluation of GiveWell.

We did not consider this, and so do not have a considered answer. I think this would be something we would be interested in considering in our next investigation.

If you thought donating to TCF resulted in a net decrease in welfare due to the meat-eater problem (you may actually agree/disagree with this), would you still recommend it? If so, would you disclaim that your best guess was that TCF resulted in a net decrease in welfare, but that you recommended it for other reasons?

As above, we would not recommend it if we didn’t think there was a sufficiently strong worldview by which TCF was the best option we knew of. This could be because of a combination of the meat-eater problem and a view that it’s just not plausible to discount animals. It’s an interesting question, but it’s also one where I’m not sure our comparative advantage lies in coming to a view on it (though perhaps, just as we did with the view that GW should focus on economic progress, we could still discuss it in our evaluation).

Thanks for the thoughtful reply, and being transparent about your approach, Michael! Strongly upvoted.

To better reflect how your different recommendations are linked to particular worldviews, I think it would be good to change the name of your area/fund "global health and wellbeing" to "global human health and wellbeing" (I would also drop "health", as it is included in "wellbeing"). Another reason for this is that Open Phil's area "global health and wellbeing" encompasses both human and animal welfare.

We did not consider this [GiveWell's top charities effects on animals], and so do not have a considered answer. I think this would be something we would be interested in considering in our next investigation.

I think it would be great if you looked into this at least a little.

I acknowledge there’s a question-begging element to this response: I take it your point is, why is it sufficiently plausible, and who decides this? Unfortunately, we have to acknowledge that we don’t have a strong justification here. It’s a subjective judgement formed by the research team, informed by existing cause prioritisation work from other organisations.

I think it makes sense that GWWC's recommendations are informed by the research team. However, I wonder how much of your and Sjir's views are being driven by path dependence. GWWC's pledge donations from 2020 to 2022 towards improving human wellbeing were 9.29 (= 0.65/0.07) times those towards improving animal welfare. Given this, I worry you may hesitate to recommend interventions in animal welfare over human welfare even if you found it much more plausible that both areas should be assessed under the same (impartial welfarist) worldview. This might be a common issue across evaluators. Maybe some popular evaluators realised at some point that rating charities by overhead was not a plausible worldview, but meanwhile they had built a reputation for assessing them along that metric, and had influenced significant donations based on such rankings, so they continued to produce them. I hope GWWC remains attentive to this.

Thanks Vasco, this is good feedback.

To better reflect how your different recommendations are linked to particular worldviews, I think it would be good to change the name of your area/fund "global health and wellbeing" to "global human health and wellbeing"

We considered a wide variety of names, and after some deliberation (and a survey or two), we landed on "global health and wellbeing" because we think it strikes a good balance between accurate and compelling. I agree with some of the limitations you outlined, and like your alternative suggestion, especially from the researcher's point of view that I'm most focused on. I'll share this with the team, but I expect there would be too much cost to switching at this point.

However, I wonder how much of your and Sjir's views are being driven by path dependence. [...] Given this, I worry you may hesitate to recommend interventions in animal welfare over human welfare even if you found it much more plausible that both areas should be assessed under the same (impartial welfarist) worldview.

It's a bit tricky to respond to this, having not (at least yet) done an analysis comparing animal versus human interventions. But if/when we do, I agree it would be important to be aware of the incentives you mentioned, and to avoid making decisions based on path dependencies rather than high-quality research. More generally, a good part of our motivation for this project was to help create better incentives for the effective giving ecosystem. So we'd see coming to difficult decisions on cause-prioritisation, if we thought they were justified, as very much within the scope of our work and a way it could add value.

Thanks, Michael!

More generally, a good part of our motivation for this project was to help create better incentives for the effective giving ecosystem. So we'd see coming to difficult decisions on cause-prioritisation, if we thought they were justified, as very much within the scope of our work and a way it could add value.

Makes sense!

(The actual question is in bold; the rest is background before and a potential recommendation on how to communicate your project's scope after).

Regarding "evaluating the evaluators": it seems to me that there are two main types of charity evaluators out there. Some seek to identify the single best use of donor funds ("best-charity evaluators"). The funds are the pure example of this, but GiveWell fits well into this camp after it removed GiveDirectly from being a top charity and eliminated the standout charities. I think GW would probably say that a donation to any of the four top charities could plausibly be the highest-impact use of one's donations, depending on circumstances and imprecision in models. 

Other organizations seek to present donors with a wider range of high-effectiveness options, without implying that each could plausibly be the best possible use of donors' money ("great-charity evaluators"). Donors will need to consult their values and do more of their own research. These organizations often serve the important purpose of making more donations tax-deductible and saving effective charities the hassle of incorporating and applying in their country of operation. The Life You Can Save is the most obvious example, although it's not clear to me how much their listings are based on their own evaluations vs. deference to trusted evaluators.

In my view, both types of recommenders play an important role in the effective giving ecosystem -- but I appreciate why focusing on best-charity evaluators is consistent with GWWC's goals for this project.

A footnote in your HLI report makes it sound like you are mainly evaluating against the standards of a best-charity evaluator. Is that an accurate characterization? The quote is:

At the beginning of our evaluating the evaluators project, we considered the relevant bar as to whether the evaluator “reliably recommended and/or granted to the most cost-effective funding opportunities based on a sufficiently plausible worldview.” We received pushback from external reviewers that, taken literally, this was an overly onerous bar that no evaluator could claim to meet. The issue was that we were using “most cost-effective” in a confusing way. We intended it to mean (roughly): “compared to other available recommendations or funding opportunities that we have evaluated, there is no clear superior alternative that seems more cost-effective, ex-ante” Given this is a bit of a mouthful, we decided to explain our approach as we’ve done here: selecting evaluators that we think are best able to help donors maximise their impact.

I have no concerns with that approach, but would recommend making it clear in posts and webpages that you are evaluating best-charity evaluators under standards appropriate for best-charity evaluators. You might also explicitly state that you don't intend to evaluate great-charity recommenders at least at this time. I think one of the potential pitfalls of an evaluating-the-evaluators project is that people might draw inaccurate inferences from the absence of a major organization from your list. So you might say something like: "Note that we do not evaluate organizations (such as TLYCS) that recommend a broad range of charities without a conclusion that the recommended charity is plausibly the most effective option for donors."

This is a really insightful question! 

I think it’s fair to characterise our evaluations as looking for the “best” charity recommendations, rather than the best charity evaluators, or recommendations that reach a particular standard but are not the best. Though we’re looking to recommend the best charities, we don’t think this means that there’s no value in looking into “great-charity evaluators”, as you called them. We don’t have an all-or-nothing approach when looking into an evaluator’s work and recommendations, and can choose to only include the recommendations from that evaluator that meet our potentially higher standard. This means that, so long as it’s possible some of the recommendations of a “great-charity evaluator” are the best by a particular worldview, we’d see value in looking into them.

In one sense, this increases the bar for our evaluations, but in another it also means an evaluator’s recommendations might be the best even if we weren’t particularly impressed by the quality of the work. For example, suppose there was a cause area with only one evaluator: the threshold for that evaluator’s recommendations being the best may well be that they are doing a sufficiently good job that there is a sufficiently plausible worldview by which donating via their recommendations is still a donor’s best option (i.e., compared to donating via the best evaluator in another area).

It’s too early to commit to how we will approach future evaluations; however, we currently lean towards sticking with the core idea of focusing on helping donors “maximise” expected cost-effectiveness, rather than “maximising” the number of donors giving cost-effectively / providing a variety of “great-but-not-best” options.

You might also explicitly state that you don't intend to evaluate great-charity recommenders at least at this time.

As above, we would see value in looking at charity evaluators who take an approach of recommending everything above a minimum standard, but we would only look to follow the recommendations we thought were the best (...by some sufficiently plausible worldview). 

but would recommend making it clear in posts and webpages that you are evaluating best-charity evaluators under standards appropriate for best-charity evaluators

I’d be interested in where you think we could improve our communications here. Part of the challenge we’ve faced is that we want to be careful not to overstate our work. For example, “we only provide recommendations from the best evaluators we know of and have looked into” is accurate, but “we only provide recommendations from the best evaluators” is not (because there are evaluators we haven’t looked into yet). Another challenge is to not overly qualify everything we say, to the point of being confusing and inaccessible to regular donors. Still, after scrolling through some of our content, I think we could find a way to thread this needle better, as it is an important distinction to emphasise — we also don’t want to understate our work!

What are all the measurements of charities (e.g., QALYs per dollar, lives saved per dollar, CO2 removed from the atmosphere per dollar, etc.) that you have publicly available?

Hi wes R, I'll answer your questions in this comment!

The impact measurements varied greatly by evaluator. For example, GW makes decisions using its “moral weights” (which primarily measure consumption and health outcomes, though I don’t believe in a way that neatly reduces to QALYs). Meanwhile, HLI uses “WELLBYs”. Other evaluators used different measurements at different times, or relied on subjective scores of cost-effectiveness. You can read more about these in our evaluations (linked here). (A toy illustration of why such metrics aren’t directly comparable follows the list below.)

I’m not sure we have much in the way of a generalised view of which metrics we think should be used or not. In general:

  • These metrics should help support making more cost-effective recommendations and grants.
  • To the extent they do, we’re happy to see them!
  • In some cases, metrics might end up forcing over-precision in a way that is not particularly helpful. In these cases, we think it could be more sensible to take a more subjective approach.
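
As a purely hypothetical illustration of that comparability problem (all numbers below are invented and are not from any evaluator's work), the same grant can look quite different depending on which per-dollar metric you divide by:

```python
# Invented outcomes for a single hypothetical $100,000 grant.
grant_cost = 100_000  # dollars

outcomes = {
    "QALYs gained": 250,
    "WELLBYs gained": 900,
    "lives saved": 5,
}

# Each metric divides a different outcome by the same cost, so the
# resulting per-dollar figures live on different scales and can't be
# compared without further assumptions (e.g., moral weights converting
# lives saved into QALYs, or QALYs into WELLBYs).
for metric, amount in outcomes.items():
    print(f"{metric} per $1,000 donated: {1000 * amount / grant_cost:.2f}")
```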

Hope that helps!

Where can I find them (please be specific)?

Which ones do you measure that are not publicly available?

Why aren't they publicly available?

Do you plan to change what you measure? 

If so, how and why?

 How do these measurements impact your decisions?

(I separated these questions so that people can upvote and downvote each question separately)

Do you plan to change how these measurements impact your decisions?

How did you choose the set of evaluators to evaluate -- for instance, why evaluate LTFF and LLF over FP's GCR fund? Were there other evaluators considered for the process but not evaluated?

Thanks for your question! We explain the general principles we used to choose which evaluator to investigate here, and go into our specific considerations for each evaluator in their evaluation reports.

For FP's GCR Fund compared to LTFF and LLF specifically, some of the main considerations were (1) our donors had so far been donating most to the LTFF, so the stakes were higher there, and (2) Longview was one of the most-named options by other effective giving organisations as an evaluator they weren't relying on yet but were interested in learning more about.

And yes there are other evaluators we've considered and are considering for future evaluations, some of which we mention throughout the reports. See here for an overview of the impact-focused evaluators making publicly available recommendations that we are currently aware of, and which we may consider in our next iterations of this project.

Can you say more precisely what it means for a fund to be recommended? For instance, how should a donor compare giving to one of the "recommended funds" to giving to a specific charity or project directly? (and by extension one of GWWC's new funds over a specific charity)

We explain how we view funds vs charities more generally here.

And for the GWWC cause area funds we answer your question for each individual fund on their page, e.g. here for the Global Health and Wellbeing Fund, under "How does donating to this fund compare to similar giving opportunities?".
