Should we make a grant to a meta-charity?
By Daniel May. We're centralising all discussion on the Effective Altruism forum, so please comment here.
I introduce the concept of meta-charity, discuss some considerations for OxPrio, and look into how meta-charities evaluate their impact and how reliable these figures are for our purposes (finding the most cost-effective organisation to donate £10,000 to today). I then look into the room for more funding at a few meta-charities, and finally conclude that they are worth pursuing seriously.
What is Meta? Why is it important for the Oxford Prioritisation Project?
There are two types of meta, of which we will focus on the first in this post:
- Promoting effective altruism, for example by raising awareness of the benefits of effective giving, researching impactful career choices and encouraging people to move into them, or providing a community that keeps people interested in effective altruism motivated. This includes organisations such as Giving What We Can, 80,000 Hours, Effective Altruism Outreach (each part of the Centre for Effective Altruism), Charity Science: Outreach, and Raising for Effective Giving.
- Cause prioritisation, i.e. comparing the importance of causes in a cause-neutral way, such as the value of work on AI safety versus promoting the development of artificial meat. Organisations such as the Global Priorities Project, the Open Philanthropy Project, and the Future of Humanity Institute work in this area. Organisations might also do prioritisation work within a cause area, such as GiveWell specialising in global poverty charities and interventions, or Animal Charity Evaluators.
This is in contrast to the object level, which might include direct work for an organisation within a cause, such as carrying out the AI safety research, or earning-to-give to donate to effective charities.
Meta organisations might be more impactful to donate to if they cause even more net resources to be moved into object-level causes; instead of working as an AI safety researcher yourself, you might earn-to-give and donate to an organisation like 80,000 Hours if, roughly speaking, it caused more than one person of similar skill to move into the area.
Initially, I looked into relevant posts written by the effective altruism community, coming across a few related posts by Peter Hurford, Robin Shah, and Ben Todd, which we discussed in our meetings. The general sense from these was that our greatest uncertainty (and the most important for deciding where OxPrio should donate) was in how meta organisations assess their impact. The counterfactuals seemed tricky to estimate, and the question of who should be credited when somebody changes career or donates more money as a result of interacting with the effective altruism community seemed prone to double-counting, and sometimes overdetermined, since many different factors might play into the decision. Relatedly, another concern was: if these organisations did not exist, would the community simply fill the gap (for example, by providing even more of its own research into effective careers, offering additional Skype calls to help people figure out where they can have the greatest impact, or by hosting an alternative platform to Giving What We Can to publicize donations and encourage donations of 10%)?
My sense is that the other traps are less concerning for OxPrio's goal, since (as Ben Todd points out) these are about being aware of the optimal balance between meta and object-level work, and the balance does not yet seem to be pushed too far in favour of meta (it still seems to have some room to scale). These would become more concerning if, while researching the counterfactual question, we found evidence that moved the ratio of money given to meta-charities to money raised for effective object-level charities closer to one; for now, it seems better for us to simply ask which is better on the margin. The other considerations include:
- Object-level work may in fact be the best thing for movement growth, for example by proving to outsiders that we can achieve concrete things. Some evidence for this is that GiveWell historically did not put many resources into outreach, focusing instead on producing high-quality research.
- There may be more learning value in object-level work; working directly in an area provides a more expert inside view, which can in turn help with prioritisation.
- Using meta as an excuse for indecisiveness. The OxPrio group is wary of being biased in ways that favour making others do the work for us.
In the next sections, we will look deeper into the concerns OxPrio had, including how meta organisations measure their impact, and whether they have room for more funding and capacity to scale.
This section will explore how meta-charities are currently measuring their impact, and note any concerns we might have with these estimates.
What impact are we looking for?
It is worth saying firstly that even a meta-charity with a multiplier greater than one, whose model and estimate OxPrio fully agreed with, would not necessarily be the most cost-effective organisation for us to donate to: for example, Giving What We Can members might donate to a variety of top charities of differing cost-effectiveness, and 80,000 Hours promotes careers in a variety of areas, depending on which seems the best fit for a particular person. Instead, we should compare our most cost-effective object-level charity with the best meta-charity, where the model for the latter takes this into account (e.g. weighs the estimate by the proportion given to each top charity and our estimate of their cost-effectiveness, in the case of Giving What We Can).
Secondly, the numbers below are guides to the average impact of these charities, whereas we are only interested in the marginal impact (though this may not in fact be the case - see the note after this), that is, how much additional value would be generated if OxPrio were to donate to them. It is possible that the charities will have less capacity to scale with additional funding, so that returns tail off, although the opposite is also possible. This means it will be worth investigating later on how these charities plan to scale. Also, using ratios can be misleading, in a similar way to focusing on overhead: what matters is the absolute impact (or, for us, the marginal impact), and a meta-charity which raised £500k for a cost of £100k (5:1) seems better than one which raises £50k for £5k (10:1) but has no room to scale.
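To make the ratio-versus-absolute-impact point concrete, here is a toy calculation using the hypothetical figures above (these are illustrative numbers, not any real charity's accounts):

```python
# Toy comparison of two hypothetical meta-charities, illustrating why a
# fundraising ratio alone can mislead: absolute (and ultimately marginal)
# money moved is what matters to a donor.

charities = {
    "A": {"raised": 500_000, "cost": 100_000},  # 5:1 ratio
    "B": {"raised": 50_000, "cost": 5_000},     # 10:1 ratio
}

for name, c in charities.items():
    ratio = c["raised"] / c["cost"]
    net = c["raised"] - c["cost"]
    print(f"Charity {name}: {ratio:.0f}:1 ratio, net £{net:,} moved")

# B has the better ratio, but if it cannot scale, A moves far more money
# to top charities in absolute terms.
```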
Since writing this post, I have come across this post on the 80,000 Hours blog, which seems convincing and very important to keep in mind, arguing that we should not necessarily be focused on the marginal impact at this stage in the development of these charities, and should instead focus on growth, i.e. asking “does the project have a high growth rate, large total market (address an important problem), good product (have an effective solution to this problem) and great team?” The post gives a few reasons for this in terms of problems focusing on marginal impact might be causing, including:
It can make growth itself a disincentive: if e.g. Giving What We Can is overly worried about its funding ratio (the ratio of money moved to top charities to staff costs) because donors rely on it too heavily, it may be reluctant to do things which would have a larger absolute impact in the long run.
“Starting an overly narrow range of projects.” If funding relies on being able to prove that projects have strong marginal impact, this disincentivises projects that might take longer to produce provable results, but could have much greater returns.
Therefore, if the meta-charities we consider have concrete plans in their fundraisers to take these kinds of risks, it still seems worth strongly considering them despite the lack of figures as evidence of their impact.
How is impact calculated (and how should it be)?
The following is a quick summary of how meta-organisations have estimated their effectiveness in the past, and what those estimates have been.
https://www.givingwhatwecan.org/impact/; https://80000hours.org/2016/12/metrics-report-2016/; https://reg-charity.org/reg-annual-transparency-report-2016/
Giving What We Can
Giving What We Can’s main metrics are the number of people who have signed up to the pledge, the total estimated amount pledged, and the amount they have directed to charities so far.
The LB (6:1) estimate uses the ratio of costs to donations to top charities only, counting donations that had already been made up to the end of 2014. The donations are multiplied by a counterfactual donation rate of 0.51, determined by asking members, when they sign the pledge, how much they would have donated anyway if they hadn't joined Giving What We Can (expressed as a percentage, so that especially large donors do not skew the results).
The RB (104:1) estimate is more complicated (diagram here, copyable spreadsheet which calculates the estimate here), and uses the ratio of cost to pledged donations to top charities, accounting for a number of factors:
- Membership attrition rate (leavers, silent members, those not giving)
- Discount rate for future donations (e.g. due to economic growth causing diminishing marginal returns)
- Difference between members' pledged percentage and the actual percentage given so far
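As a rough illustration, the shape of such a multiplier estimate can be sketched as below. Only the 0.51 counterfactual rate comes from the figures above (where it appears in the LB estimate); every other parameter value is a placeholder of my own, and the real spreadsheet models considerably more detail:

```python
# Heavily simplified sketch of a GWWC-style leverage estimate.
# Only the 0.51 counterfactual rate is from the post above; every other
# default value is a placeholder, not Giving What We Can's actual figure.

def leverage_ratio(pledged, costs,
                   counterfactual_rate=0.51,  # from member surveys (see above)
                   attrition=0.30,            # placeholder: pledges lost to leavers/silent members
                   discount=0.20,             # placeholder: discounting of future donations
                   follow_through=0.90):      # placeholder: actual vs pledged percentage given
    """Ratio of counterfactually adjusted donations moved to costs."""
    adjusted = (pledged
                * counterfactual_rate
                * (1 - attrition)
                * (1 - discount)
                * follow_through)
    return adjusted / costs

# e.g. £10m pledged against £50k of costs:
print(f"{leverage_ratio(10_000_000, 50_000):.1f}:1")
```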
From the Giving Review 2015, the top five charities donated to by percentage were the Against Malaria Foundation (22.3%), GiveWell (21.3%), the Schistosomiasis Control Initiative (13.8%), Project Healthy Children (9.7%), and the Deworm the World Initiative (9.3%), accounting for 76.4% of the total. These percentages change year by year, so it is worth finding more up-to-date figures if we take this into account.
It seems very plausible that after multiplying these proportions through (e.g. approximately, by looking at GiveWell's estimates for QALYs/$) and comparing the result to our most cost-effective object-level charity, Giving What We Can would come out on top, even using more pessimistic figures than those in the RB's model (for example, an effectiveness ratio of 0.5 seems likely too high to me). However, we would also need to make adjustments for whether Giving What We Can could continue to scale at these ratios (what would they do with additional funding?), since the marginal impact may be lower.
Another useful input for determining the marginal impact would be knowing how much of the growth in (and retention of) pledges is due to outreach efforts by Giving What We Can itself (the counterfactual question asks only how much one would have donated without taking the pledge), rather than things like word-of-mouth, since this would indicate how much value it gets from hiring additional staff, as opposed to fixed costs such as developing and hosting a website which people can be referred to. It seems possible that much of Giving What We Can's impact comes from staff dealing administratively with pledges, or answering questions and offering video calls, although the number of staff required may grow significantly in future. Some figures here suggest “CEA activities were at least partially responsible for something like 70% of new pledges in September.” However, it also seems possible that, since Giving What We Can's staff costs are currently quite low, the impact ratio would increase greatly with the funding required to hire additional staff focusing on marketing and outreach, and to allow staff to specialise further.
Ben Todd argues against diminishing marginal returns being in play here. “Some have suggested GWWC may already be at the point of strongly diminishing returns. The version of this argument I hear most often is that all the pledges come from the initial impact of setting up GWWC (i.e. making the website, defining the pledge, getting press coverage) and that activity created a stream of benefits that's being collected today. [...] If that were true, however, we'd expect the growth rate to be decreasing. Instead, the growth has increased – over 2013 it was 42%, while over 2014 it was 117%.” However, this does not seem right to me: as Paul Graham writes, my (very limited) understanding of startups is that even the best products launch very small, require some sort of marketing push (including word-of-mouth), and grow over time as users get excited about, and share, the product; i.e. I do not think large numbers of keen potential pledgers would have found Giving What We Can and signed up extremely early on. It is very possible that I have misunderstood Ben's line of argument here.
I find the RCT under the marginal impact calculation section on page 29 of Giving What We Can’s 2015 fundraising prospectus to be more promising, though it is not clear how interested in effective altruism the prospective members were already, and I would be interested in any more studies that have been conducted along these lines. “Of course, all of these estimates are just indications, but they build a reasonable case for the marginal cost of signing up an extra member falling between $300 and $1000. If the cost of attracting a new member is $1,000 and they give an additional $90,000 to top charities as a result, this implies a fundraising ratio of 90, while if the cost of an extra member is $300, the multiplier would be as high as 300.”
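The arithmetic behind the quoted range is simple; as a quick sanity check of the prospectus figures:

```python
# Sanity check of the prospectus figures: the fundraising ratio implied
# by the marginal cost of attracting a member and the $90,000 of
# additional giving to top charities attributed to each member.

lifetime_giving = 90_000  # additional $ to top charities per member

for cost_per_member in (1_000, 300):
    ratio = lifetime_giving / cost_per_member
    print(f"cost per member ${cost_per_member}: multiplier {ratio:.0f}")
```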
80,000 Hours

80,000 Hours focuses on the number of impact-adjusted significant plan changes (IASPCs), but it also keeps track of the number of unique visitors to the site and newsletter signups, though these are not directly factored into the cost-effectiveness estimate.
Significant plan changes (SPCs) are defined in detail here. A person's plan change is counted “if they say they have changed their credence in pursuing a certain mission, cause or next step by 20% or more; they attribute this change to 80,000 Hours, and there's a plausible story about how engaging with 80,000 Hours caused this change”, although “in practice, if someone told us they changed their best guess option, then we counted that as a shift of greater than 20%.”
An impact-adjusted significant plan change (IASPC), defined here, is an SPC scored “with a value of 0.1, 1 or 10. The score is meant to represent how much extra counterfactual impact will result from a plan change.”
“A typical plan change that scored 1 is someone who has taken the Giving What We Can pledge or decided to earn to give in a medium income career. We also award 1s to people who want to work on the most pressing problems and who switch to build better career capital in order to do this, for instance doing quantitative grad studies or pursuing consulting; people who have become much more involved in the effective altruism community in a way that has changed their career, and people who switch into policy or research in pressing problem areas.”
From 2011 up to the end of 2016, 80,000 Hours had tracked 1504.8 IASPCs, with over 910.9 of these in 2016 alone, at a cost of around £250,000 (or £350,000 – £500,000 with opportunity costs, if staff were instead earning-to-give). Additionally, IASPCs have more than tripled each year, and about ⅓ of these plan changes have come from workshops (and ½ from the website).
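These figures imply a rough cost per IASPC, assuming the ~£250,000 refers to 2016 costs (the attribution of the cost figure is ambiguous above, so treat these as order-of-magnitude numbers only):

```python
# Rough cost per impact-adjusted significant plan change (IASPC) in 2016.
# Assumes the ~£250,000 cost figure refers to 2016; order-of-magnitude only.

iaspcs_2016 = 910.9
cost_2016 = 250_000

print(f"~£{cost_2016 / iaspcs_2016:.0f} per IASPC")

# Including opportunity costs if staff were instead earning-to-give:
for cost in (350_000, 500_000):
    print(f"~£{cost / iaspcs_2016:.0f} per IASPC incl. opportunity cost")
```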
One of my concerns with 80,000 Hours was that most of the gains might come from simply having a website and carrying out some initial research (which seems likely to have diminishing returns - I find it unlikely that they will continue finding new career paths as impactful as e.g. AI safety research or earning-to-give at a similar rate), so this last fact seems especially promising, as it suggests one way in which they could scale: providing more workshops.
For the counterfactual issue (would these people have pursued similar careers anyway? Would they have come across similar research from the effective altruism community or elsewhere if 80,000 Hours did not exist?), one idea OxPrio had was for 80,000 Hours to randomly select people whose plans had changed, and publish their thoughts or decisions in more detail so that outsiders could judge how convincing they were. I found that they had already tried this here, although I would be interested to see a more up-to-date version (since it seems like many of the participants here were already into effective altruism, which may not be the case any more since they have grown).
Overall, my sense is that 80,000 Hours could also beat out any object-level charities we find. As an initial step, it seems worth calculating a value for 1504.8 medium-income Giving What We Can pledges and comparing that to their funding, as well as looking into how they will use more money (found in Scale).
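A naive version of that comparison might look like the following; note that both the per-pledge value and the cumulative funding figure below are placeholder assumptions of mine, not published numbers:

```python
# Naive version of the comparison suggested above: value 1504.8 IASPCs
# as if each were a medium-income GWWC pledge. Both figures below are
# placeholder assumptions for illustration, not published numbers.

iaspcs = 1504.8
value_per_pledge = 50_000   # placeholder: counterfactually adjusted lifetime £ per pledge
total_funding = 1_000_000   # placeholder: cumulative 80,000 Hours funding guess

money_moved = iaspcs * value_per_pledge
print(f"£{money_moved:,.0f} equivalent moved; ratio ~{money_moved / total_funding:.0f}:1")
```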
There is a breakdown of what the plan changes actually consisted of here.
Raising for Effective Giving
From the Annual Transparency Report 2016, the top five charities donated to were Against Malaria Foundation (63.1%), Schistosomiasis Control Initiative (15.3%), Machine Intelligence Research Institute (8.3%), GiveDirectly (3.5%), Foundational Research Institute (3.1%), making up 93.3% of the total donations, which was $1,462,450.
Michael Dickens writes more about the impact of REG, and the strength of evidence. He argues that “REG’s case here looks much better than the other EA movement-building charities I’ve considered. REG focuses its outreach on poker players who were previously uninvolved in EA for the most part. Even if they were going to donate substantial sums prior to joining REG, they almost certainly would have given to much less effective charities.” This seems positive for the counterfactual worry.
Room for Growth (Scale, Funding)
Giving What We Can
From the CEA fundraising page:
“The Centre for Effective Altruism was accepted into Y Combinator's nonprofit program. Y Combinator has an extremely impressive track record of helping young organizations successfully scale. We intend to use this opportunity to refine our model for growing and strengthening the effective altruism community.”
On the funding gap for CEA as a whole. “For 2017, the minimum we’re looking to raise is £2.5 million. We believe we could spend much more than that before hitting strongly diminishing returns: we could spend £5.1 million in our growth scenario and £7.3 million in our stretch growth scenario. In both of these latter two scenarios, we would regrant a significant amount of money to smaller projects in the effective altruism community.”
It seems like Giving What We Can has a very large potential to scale, as Ben Todd writes: "I think taking the GWWC pledge is plausibly about as demanding as becoming vegetarian. About 1% of the developed world is vegetarian – about ten million people – whereas only 1,000 people have taken the GWWC pledge. This suggests GWWC has only reached 0.001% of its addressable market. [...] This makes strong concerns about diminishing returns look a bit petty. Indeed, it’s not clear the marginal cost-effectiveness ratio over the next year matters much at all. Rather, what matters is the chance that GWWC can one day figure out a way to reach and convince a much larger fraction of the audience in a cost-effective way.”
80,000 Hours

In December 2016, 80,000 Hours had a funding gap of at least £700,000 out of a total of £1.6m they would like to raise, “which will let us
- Cover our existing commitments to 6 full-time staff over 2017.
- Maintain at least 12 months' reserves over 2017.
- Increase salaries to match comparable organisations and attract better staff.
- Hire two additional entry-level staff members to work on writing career reviews, giving workshops, or design (or a smaller number of senior staff and freelancers).
- Create a £150,000 budget for marketing.”
In general, it seems like 80,000 Hours, like Giving What We Can, has reached only a small fraction of its potential market so far.
Raising for Effective Giving
In 2016, REG were hoping “to add several full-time equivalents to our team in order to expand in poker, start projects in other industries that we believe are particularly promising and explore other areas to find new opportunities”. In particular, they are looking into daily fantasy sports, professional gaming, and finance, but I do not know how much progress has been made in these areas, though they have websites here, here, and here.
Beyond this, I have not found much information on room for more funding besides Michael Dickens' post on the Effective Altruism Forum, saying REG “plans on spending $120K in 2017, and expects to raise about $40-80K”. Like Michael, I am concerned about the counterfactual (how many REG-attributed donations would have happened anyway?), which also applies to Giving What We Can, as well as the possibility (raised in the first of the two links) that a donation to REG is in effect a donation to the Effective Altruism Foundation (EAF). I also have, and have heard from others, concerns that they might be reaching diminishing returns in the poker community, having reached out to their most promising connections first, such that it will become harder and harder to get new people on board.
Update: Since writing this, REG has published a year in review post. Some snippets:
On whether to continue expanding into other areas. “Based on our experiences in 2016, we’ve concluded that we should focus more heavily on the poker community, rather than actively trying to expand to other areas. We believe that we succeeded in poker due to our strong initial connections. Such connections are hard to replicate in other fields. We will continue pursuing highly promising leads in other industries that might facilitate a broader entry. However, we will commit less overall resources to such activities.”
On talent constraints. “We’ve had a difficult time hiring people who are outgoing, persuasive, and still deeply familiar with the literature on effective giving. [...] For all our projects in the last year, being talent-constrained was a significant issue, and we believe that many projects would have gone better if we had more qualified people working for us.”
My recommendation is to keep meta-charities under consideration, since I think it is likely that they have room for more funding (although this is less clear for Raising for Effective Giving) and move many times their cost to top-rated charities (or the equivalent in plan changes, for 80,000 Hours), though I am still very uncertain about the numbers due to the counterfactual issues. I also find Ben Todd's post on focusing on growth, rather than upfront provable marginal impact, promising and convincing, since if meta-charities were given more funding to take risks, they might find even more cost-effective ways to spread their ideas and research.
I plan to do some naive calculations over the coming days so that we have rough figures to compare with our current object-level charity best guess, but I especially welcome ideas for more complex models for thinking about the counterfactuals and taking them into account.