The public effective giving ecosystem now comprises over 40 organisations and projects: initiatives that identify publicly accessible philanthropic funding opportunities using an effective-altruism-inspired methodology (evaluators), fundraise for funding opportunities that have already been identified (fundraisers), or do both.
Over 25 of these organisations and projects are purely fundraisers, with no research capacity of their own; they have to rely on evaluators for their giving recommendations, and in practice currently draw mainly on three: GiveWell, Animal Charity Evaluators, and Founders Pledge.
At the moment, fundraisers and individual donors have little basis for selecting which evaluators to rely on or for curating the specific recommendations and donations they make. These decisions appear to be driven by evaluators' public reputations, personal impressions and trust, and perhaps in some cases by a lack of information about existing alternatives or by simple historical accident. Furthermore, many fundraisers currently maintain separate relationships both with the evaluators whose recommendations they use and with the charities they end up recommending, creating extra overhead for all parties involved.
Given this situation, and based on conversations with a subset of fundraising organisations, there appears to be a pressing need for (1) a quality check on new and existing evaluators (“evaluating the evaluators”) and (2) an accessible overview of all recommendations made by evaluators whose methodology meets a certain quality standard. This need is growing more pressing as the ecosystem expands on both the supply (evaluator) and demand (fundraiser) sides.
The new GWWC research team is looking to start filling this gap: to help connect evaluators and donors/fundraisers in the effective giving ecosystem in a more effective (higher-quality recommendations) and efficient (lower transaction costs) way.
Starting in 2023, the GWWC research team plans to evaluate funding opportunity evaluators on their methodology, to share our findings with other effective giving organisations and projects, and to promote the recommendations of those evaluators we find to meet a certain quality standard. In all of this, we aim to take an inclusive approach in terms of worldviews and values: we are open to evaluating any evaluator that could be seen to maximise positive impact according to some reasonably common worldview or value system, even though we appreciate the challenge here and admit we can never be perfectly “neutral”.
We also appreciate that this is an ambitious project for a small team (currently only 2!) to take on, and expect it to take time to build our capacity to evaluate all suitable evaluators at the level of rigour we'd like. Especially in this first year, we may be limited in how many evaluators we can assess and in how much time we can spend on each, and we may not yet be able to provide the full "quality check" we ultimately aim to offer. We'll try to prioritise our time to address the most pressing needs first, and aim to communicate transparently about the confidence of our conclusions, the limitations of our processes, and the mistakes we are inevitably going to make.
We very much welcome any questions or feedback on our plans, and look forward to working with others on further improving the state of the effective giving ecosystem, getting more money to where it is needed most, and ultimately on making giving effectively and significantly a cultural norm.