We’re the research team at Giving What We Can:
Ask us anything!
We’ll be answering questions Monday the 27th from 2pm UTC until Tuesday the 28th at 9pm UTC.
Update 28 November 6.20pm UTC: thank you for all the great questions! We've answered most of them by now, and plan to continue answering questions for a bit longer, probably until tomorrow morning ~5am UTC.
Please post your questions as comments to this post, to the post on our evaluations of evaluators, or to the post on our recommendations and cause area funds. And please upvote the questions you’d like us to answer most. We’ll do our best to answer as many as we can, though we can’t guarantee we’ll be able to answer all of them.
In addition to discussing our new reports, recommendations and funds, we are happy to answer any questions you may have about our research plans for next year, about the impact evaluation we did earlier this year, about GWWC more broadly, or about anything else you are interested in!
(The actual question is in bold; the rest is background before and a potential recommendation on how to communicate your project's scope after).
Regarding "evaluating the evaluators": it seems to me that there are two main types of charity evaluators out there. Some seek to identify the single best use of donor funds ("best-charity evaluators"). The funds are the purest example of this, but GiveWell fits well into this camp after it removed GiveDirectly from being a top charity and eliminated the standout charities. I think GiveWell would probably say that a donation to any of the four top charities could plausibly be the highest-impact use of one's donations, depending on circumstances and imprecision in models.
Other organizations seek to present donors with a wider range of high-effectiveness options, without implying that each could plausibly be the best possible use of donors' money ("great-charity evaluators"). Donors will need to consult their values and do more of their own research. These organizations often serve the important purpose of making more donations tax-deductible and saving effective charities the hassle of incorporating and applying in each country of operation. The Life You Can Save is the most obvious example, although it's not clear to me how much their listings are based on their own evaluations vs. deference to trusted evaluators.
In my view, both types of recommenders play an important role in the effective giving ecosystem -- but I appreciate why focusing on best-charity evaluators is consistent with GWWC's goals for this project.
A footnote in your HLI report makes it sound like you are mainly evaluating against the standards of a best-charity evaluator. Is that an accurate characterization? The quote is:
I have no concerns with that approach, but would recommend making it clear in posts and webpages that you are evaluating best-charity evaluators under standards appropriate for best-charity evaluators. You might also explicitly state that you don't intend to evaluate great-charity recommenders, at least at this time. I think one of the potential pitfalls of an evaluating-the-evaluators project is that people might draw inaccurate inferences from the absence of a major organization from your list. So you might say something like: "Note that we do not evaluate organizations (such as TLYCS) that recommend a broad range of charities without concluding that each recommended charity is plausibly the most effective option for donors."
This is a really insightful question!
I think it’s fair to characterise our evaluations as looking for the “best” charity recommendations, rather than the best charity evaluators, or recommendations that reach a particular standard but that are not the best. Though we’re looking to recommend the best charities, we don’t think this means that there’s no value in looking into “great-charity evaluators”, as you called them. We don’t have an all-or-nothing approach when looking into an evaluator’s work and recommendations, and can choose to only in...