We’re the research team at Giving What We Can: Ask us anything!
We’ll be answering questions Monday the 27th from 2pm UTC until Tuesday the 28th at 9pm UTC.
Update 28 November 6.20pm UTC: thank you for all the great questions! We've answered most of them by now, and plan to continue answering questions for a bit longer, probably until tomorrow morning ~5am UTC.
Please post your questions as comments to this post, to the post on our evaluations of evaluators, or to the post on our recommendations and cause area funds. And please upvote the questions you’d like us to answer most. We’ll do our best to answer as many as we can, though we can’t guarantee we’ll get to all of them.
In addition to discussing our new reports, recommendations and funds, we are happy to answer any questions you may have about our research plans for next year, about the impact evaluation we did earlier this year, about GWWC more broadly, or about anything else you are interested in!
How large is the comparison class for charity evaluators? When you are defining best practice for a charity evaluator, do you have a small set of charity evaluators in mind, or do you find examples in analogous evaluation projects and platforms (perhaps some that have nothing to do with EA, or even with charity at all)?
There is a relatively small comparison class here; we often say we’re focused on “impact-focused” evaluators. Here is our database of evaluators we know of that we might consider in this reference class. In the medium to long run, however, we could imagine there being value in investigating an evaluator entirely outside EA, with a very different approach: there could be valuable best-practice lessons, and other insights, that would make this worthwhile. That said, I expect we won’t prioritise this until we have looked into more impact-focused evaluators.