
Post co-written by Joey Savoie and Vaidehi Agarwalla 

CE has historically researched cause areas focused on a specific set of interventions directly affecting beneficiaries (such as farmed animal welfare or mental health). However, we have always been highly involved in the Effective Altruism meta (EA meta) space and believe it is an impactful area to make charity recommendations in. 

What is Effective Altruism meta?

EA meta is typically defined as charities that are a step removed from direct impact. Meta charities include those focused on a single cause area, such as Animal Advocacy Careers (AAC), which helps animal advocates have higher-impact careers, and cross-cutting charities such as Charity Entrepreneurship, which incubates charities across a number of causes. EA meta also includes organizations focused directly on improving the EA community, like the Centre for Effective Altruism.

We believe meta EA charities are uniquely positioned for both direct impact and improving the EA movement. There seem to be many untapped opportunities, and 2021 is a good time to found new meta charities for various reasons, including CE's own track record. While we have some concerns regarding impact measurement and evaluation, we are optimistic that the charities we incubate will have a strong commitment to careful measurement and evaluation.

This post goes over why we think Effective Altruism meta could be highly impactful, why CE is well-positioned to incubate these charities, why 2021 is a good time, differences in handling EA meta compared to other causes, and potential concerns. We finish by introducing our three top recommendations for new charities in the space: exploratory altruism, earning to give +, and EA training.

1. Why EA meta is an important cause area

The main reason we consider any area is that we think it is highly impactful. The same is true of EA meta.

Direct impact

Because they are broader and cross-cutting, EA meta charities cover a large range of direct-impact causes.

We think that this broadness is a key advantage of meta charities since it allows for more possible pathways to impact. Some areas such as policy, fundraising and career advice seem particularly conducive to this. For example, a policy charity could work on tractable interventions in both the animal and poverty space; a fundraising charity could advocate for many different cause areas that have room for more funding. 

One of our main uncertainties is that measurability of impact varies greatly across meta charities. On one end of the spectrum, charities that fundraise for GiveWell top charities can fairly easily measure the money they moved compared to the money they spent (1). For other areas, such as careers, this measurement can be more challenging, but we still think it is possible, as we have observed with AAC. Where we have seen careful impact measurements of meta charities, several have stood out as highly effective – GiveWell, for example, is on par with the best direct charities we have seen.
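As a minimal sketch of what this kind of calculation looks like (the figures below are made up for illustration, not taken from any real charity):

```python
# Illustrative leverage calculation for a hypothetical fundraising charity.
# All figures are made up for the example.

counterfactual_donations_moved = 500_000  # donations to top charities that would not have happened otherwise
operating_costs = 100_000                 # the fundraising charity's own spending

leverage = counterfactual_donations_moved / operating_costs
print(f"Each $1 spent moved roughly ${leverage:.1f} to top charities")  # ~$5.0
```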

We are happy with the progress of meta charities we have incubated and advised. We are also pleased with CE’s process (we are a meta charity as well), and will publish our annual review in the coming months. 

In general, we are optimistic that well-founded and well-run meta organizations can create metrics that approximate impact closely enough to get a sense of their progress (even if we lack some precision). 

Impact on the EA movement

We believe that the EA movement is one of the most promising social movements and has a high ability to impact the world. EA has already identified several neglected and promising cause areas, and has succeeded in counterfactually directing millions of dollars to these causes (2). It has also created a community of thousands of engaged members with strong norms of truth-seeking, cause impartiality, and a focus on impact.

However, we think there is still room for the EA movement to have more impact, both in its research and in promoting effective actions for individuals to take.

We believe that meta organizations (even those not directly improving the EA community) are uniquely positioned to have a positive impact on the movement. For example, new charities (such as the ones CE incubates) can create opportunities for EAs to get involved in highly impactful career paths. These paths have historically been limited, causing a lot of frustration (3).

2. Why is CE well-positioned to have an impact?

Although EA meta has always been on our radar, we have not worked in the area before. What changed this year that made us more confident in the space?

Proof of concept

Perhaps the biggest factor was that we wanted to run CE in a few areas where progress is easier to evaluate before starting more speculative charities. As we have founded more charities and seen our older charities progress, we have become more confident in the CE model and thus more willing to take on harder-to-measure areas.

We have recommended a few charity ideas that sit in both the animal and meta space (4). The oldest of these that we have incubated is AAC. We have been pleased with their overall progress and commitment to measurement. When implementing their career advice program, for example, AAC pre-registered a study to evaluate the effects of career advising calls on animals because of a lack of systematic evidence for this intervention (5). This study would not only help AAC evaluate their impact, but also provide valuable data to other EA meta charities. We also incubated the Happier Lives Institute, which conducts research in the mental health and subjective well-being space, and have been pleased with their progress. We believe there are more opportunities that could be executed in this way.

In general, we think meta charities are often not held to a high enough standard, but we have gained confidence through experience that our entrepreneurs will hold themselves to high standards of impact.

CE’s experience & knowledge

Our team has had a hand in directly founding or advising half a dozen EA meta charities in the past, several of which are currently seen as highly successful. Meta EA is also one of the cause areas in which our team has the strongest level of baseline knowledge and experience. Most of our team has been involved in the EA movement for a number of years, including founding EA chapters and working for other EA organizations (6). We feel fairly confident in our ability to incubate new charities in this space and mitigate risks that may arise. 

Momentum & support

The EA movement is dynamic and changes every year, but we generally believe that the timing in 2021 is good for new organizations. 

Risk of stagnation: The movement seems fairly stable, with established meta organizations in a number of areas, but may be at some risk of stagnating (7).

New spaces opening up: Promising new spaces have opened up as some of the larger meta EA organisations have clarified what ground they intend to cover in the near future (8). These spaces are low-hanging fruit, as a growing number of EA community members are not being served by the existing organisations.

Stable funding & support: CE has a strong connection to a large community of EAs who are highly supportive of new EA meta charities being founded – even our location in London was partly chosen for the benefits of proximity to a strong EA community. As a result, we are also confident about the funding landscape for EA meta.

3. Differences between EA meta and other cause areas

EA meta is an unusual cause area: it is broader and has less preexisting research than other cause areas we have investigated. For this reason, our research process differs from previous cause areas, and the content we publish publicly will also be different.

Research process

We use the same systems and techniques of broad research, iterative depth, and systematic consideration. We started by speaking to a number of experts (over 40) and pulling out key flaws within the EA movement and possible charity ideas. From there, we are writing shallow reports for ~10 of the ideas and deeper reports on the top recommended ideas. You can see a summary of our research process here.

Published research

We have published a summary of the survey, but we only plan on publishing deeper reports on the top ideas we recommend. Due to the lack of preexisting research, we would be less confident in the robustness of the shallow reports. We think it’s quite likely we will find exciting-sounding but ultimately less impactful ideas, and do not want people to found charities based on our reports on non-recommended areas. 

4. Concerns with EA meta

We do have some major worries about working in the EA meta space, mostly around measurement and evaluation of impact.

Risk of a meta loop 

There is some risk that EA meta charities would fall into a loop where they encourage more meta EA activities, rather than direct impact activities. Thus, they would fail to achieve direct impact (9). This is especially true for interventions to improve the EA community. 

Charities would need to have a very clear theory of change, demonstrate the direct impact of their work through measurement, and try to minimize the number of steps between their actions and their intended impact.

Issues with metrics

We have some major concerns about quantifying the impact of certain meta EA charities whose impact is harder to measure. Charities with the following attributes are of greater concern: programs that are multiple steps removed from direct impact, long feedback timelines, and complex interventions where many other variables are present. Some examples include policy and career advising charities. 

Many meta charities may need to use proxy metrics to estimate their impact, which puts them at risk of using vanity metrics. Vanity metrics are poorly chosen proxies that do not measure what actually matters, and they can lead to an overestimation of expected impact. For example, if you were advising someone, you might ask them whether they liked the advising session; however, this does not track whether they changed their actions as a result of your advice (10).
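As a minimal sketch of the difference (the survey fields and numbers below are hypothetical, not taken from any real charity's evaluation), a proxy tied to behaviour change is more informative than a satisfaction score:

```python
# Hypothetical follow-up survey data from an advising program.
advisees = [
    {"satisfaction": 9,  "changed_plans_at_6_months": False},
    {"satisfaction": 8,  "changed_plans_at_6_months": True},
    {"satisfaction": 10, "changed_plans_at_6_months": False},
]

# Vanity metric: people liked the call, but this says little about impact.
avg_satisfaction = sum(a["satisfaction"] for a in advisees) / len(advisees)

# Better proxy: did the advice actually change what people plan to do?
plan_change_rate = sum(a["changed_plans_at_6_months"] for a in advisees) / len(advisees)

print(f"Average satisfaction: {avg_satisfaction:.1f}/10")          # 9.0/10
print(f"Reported a concrete plan change: {plan_change_rate:.0%}")  # 33%
```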

Long feedback timelines also increase the uncertainty about the impact (or attributable impact) of an intervention. Overall, it seems important that meta charities provide thoughtful estimates of impact in their early stages while they are still establishing a track record. 

5. Top EA meta charity ideas

For 2021 we currently recommend three meta ideas: exploratory altruism, earning to give +, and EA training (11). This section intends to give a sense of what the final ideas will look like for applicants to the Incubation Program while we finalize the deeper reports for publication. 

We may accept people into the program with their own EA meta idea. However, we expect EA meta charities launched in 2021 to implement ideas from among these recommendations.

Exploratory altruism

Description: Effective altruism is the question of how to do the most good. As the movement is relatively young, many areas and causes still have not been examined in depth. The idea of a cause X is that highly impactful areas could still be undiscovered, so finding a new cause X would be highly effective. The EA movement currently has no organization dedicated full time to exploring and making a strong case for new cause areas. Given the amount of unexplored ground, it’s likely that multiple highly promising causes are not yet on the EA radar, or that more work is needed to systematically evaluate them against other options. This organization would focus long term on making the case for new areas (both known unknowns and unknown unknowns) rather than choosing one cause and focusing exclusively on that.

Personal fit: An ideal co-founder for a project like this would be highly informed about the EA movement and its current causes. They would be excited about multiple cause areas within the movement and open to the possibility that more areas with equal or greater impact could be discovered. The team would include at least one good writer and at least one good researcher. Both founders would be good thinkers with a strong background in epistemology and good judgment when it comes to cause comparison.

Earning to give +

Description: Earning to give (E2G) has long been considered by effective altruists. Earning to give + follows the same idea but adds more elements: in particular, bringing lessons from the E2G field into EA (e.g. management practices, communications strategies, and decision-making methodologies) and bringing EA insights into the E2G workplace (e.g. fundraisers, donation matching, and EA movement building).

E2G has historically lacked an organization focused specifically on providing support, community, and advice to those going down this path. The additions encapsulated in E2G+ make the career path even more impactful and more connected to the EA movement. A new organization would also address two of the largest concerns with the EA movement: the small number of impactful opportunities available, and EA's insularity, which often leads to reinventing the wheel. E2G+ can be a highly impactful career path able to absorb a large number of impact-focused individuals, and it could strengthen the EA movement both financially and by introducing best practices and ideas from outside.

Personal fit: Ideal co-founders would have experience with earning to give as well as a high interest in helping those later in their career path. They would have experience in community building or event running (such as being part of an EA chapter), and be generally comfortable in helping teach EAs how to talk to coworkers about EA concepts. Communication skills – particularly those applicable outside of the EA movement – would be particularly important for founders of this charity.

EA training

Description: The most important talent gaps in EA often change more quickly than the time it takes for many people to skill up in an area or for mass outreach to successfully target groups with that skill set. This organization would identify talent gaps (e.g. through surveys) and then address them through roughly quarterly training and mentorship programs. For example, one round could focus on operations skills if that were determined to be a major bottleneck; the next could focus on communications skills or burnout prevention, etc. The organization would be built flexibly, adapting to the areas of highest need and quickly upskilling people in them. Our best guess based on our research is that the organization would run a training program 2-4 times a year, focusing on a different topic in each program, and conduct a survey once or twice a year. We expect a lot of the organization's activities to involve gathering resources and mentors (similar to WANBAM). Training would be relatively short (e.g. a few weeks), and roughly half of the content would consist of preexisting materials rather than those created internally.

Personal fit: A large part of this organization would entail synthesizing useful content, acquiring mentors, and quickly learning and passing on knowledge about new skills. Experience in more generalist roles (such as early stage organizations) or in generally teaching and organizing content would make someone a strong fit for an organization like this. The ability to quickly sort and prioritize many different books or courses on a given topic would also be highly important.


If you are keen on any of these ideas you should apply to the CE Incubation Program!

We are happy to talk about these ideas in depth with applicants for the CE Incubation Program who reach the second round interview. For more about CE's research into EA meta ideas (including detailed methodology and other ideas considered), see our recent EA forum post.
 

Footnotes

1. A number of meta EA charities, such as One for the World, Giving What We Can, and Founders Pledge, are focused on fundraising and have been very successful in getting thousands of people to donate significant portions of their income to charity. You can see some write-ups of their evaluations in numerous EA Infrastructure Fund payout reports. (Note: Charity Entrepreneurship has also received funding from the Infrastructure Fund.)
2. Impact
3. There have been several discussions on the EA Forum about the challenges individual members face in pursuing high-impact career paths, and this is a widely accepted issue. Examples include in-depth experiences that sparked community-wide discussions, such as a post in early 2019 by an anonymous community member and, more recently, My mistakes on the path to impact by Denise Melchin. A number of constraints have also been identified, including specific skills, vetting, and network constraints, that contribute to this sense of frustration.
4. Animal Research Report
5. Pre-registration: The Effects of Career Advising Calls on Expected Impact for Animals
6. Archived CE Team Page from December 31st 2020.
7. An EA Forum post outlines the case for intellectual stagnation in EA and points to a lack of cause prioritization research (research to identify new promising cause areas). Many EA organizations that started from a more general position have narrowed their focus, and other organisations did not continue that original research. In the 2019 EA Survey, stagnation was the fifth most mentioned reason for decreased interest in the movement.
8. What will 80,000 Hours provide (and not provide) within the effective altruism community?
9. Peter Hurford provides an illustrative example of this.
10. 80,000 Hours changed their entire metric of evaluation in 2019 as a result of updated feedback from career advisees from previous years.
11. Please keep in mind that these are the names and descriptions of the broad areas, not of the charities that will be founded in these areas.

Comments

Here is a past list of EA charity ideas/cause candidates I collected from a previous project, organized by whether they are meta or partially meta (i.e., "a step removed from direct impact"), and then by my subjective promisingness. You can see more information by clicking on the "expand" button near each cause area. They come from a recent project some other people have linked to.

Some of the ones which I think you might think are competitive with your top three ideas are:

Additionally, some of the ones which I think might be competitive with your top three ideas (but about which you might disagree) are:

You can see more in the link above. I also have "Discovering Previously Unknown Existential Risks", which is pretty related to your "Exploratory altruism" cause, and "Effective Animal Advocacy Movement Building" (in which cause CE has incubated Animal Advocacy careers).

Joey

Thanks for this – I checked out the full list when the post went up. 

We will be researching increasing development aid and possibly researching getting money out of politics and into charity as our focus moves to more policy-focused research for our 2022 recommendations. 

We also might research epistemic progress in the future, but likely from a meta science-focused perspective.

We definitely considered non-Western EA when thinking through EA meta options, but ended up with a different idea for how to best make progress on it (see here).

I see for-profit companies serving emerging markets as a very interesting space, but it is a whole different research year from EA meta, and maybe even outside of CE's scope indefinitely.

I do not expect us to research Patient Philanthropy, Institutions for Future Generations, Counter-Cyclical Donation Timing or Effective Informational Lobbying in the near future. 

In general, I do not expect our research on EA meta to be exhaustive given the scope. I would be excited to see more ideas for EA meta projects, particularly ones with quick and clear feedback loops.

Thanks for sharing this! All three of these seem valuable.

A couple questions about the EA training one:

  1. You give the examples of operations skills, communication skills, and burnout prevention. These all seem valuable but not differentially valuable to EA. Are you thinking that this would be training for EA-specific things like cause prioritization or that they would do non-EA-specific things but in an EA way? If the latter, could you elaborate why an EA-specific training organization like this would be better than people just going to Toastmasters or one of the other million existing professional development firms?
  2. Sometimes when people say that they wish there were more EAs with a certain skill, I think they actually mean that they wish there were more EAs who had credibly demonstrated that skill. When I think of EA-specific training (e.g. cause prioritization), I have a hard time imagining a 3-week course[1] which substantially improves someone's skills, but it seems a little more plausible to me that people could work on a month-long "capstone project" which is evaluated by some person whose endorsement of their work would be meaningful. (And so the benefit someone would get from attending is a certification to put on their resume, rather than some new skill they have learned.) Have you considered "EA certification" as opposed to training?

  1. I think there are weeks-long courses like "learn how to comply with this regulation" which are helpful, but those already exist outside EA.

  1. Your intuitions are right here that these skills are not unique to EA, and I am generally thinking of skills that are not exclusive to EA. I would expect this training organization not to create a ton of original content so much as to compile, organize, and prioritize existing content. For example, the org might speak to ten people in EA operations roles and, based on that information, find the best existing book and online course that, if absorbed, would set someone up for that role. So I see the advantage as being more time to select and combine existing resources than an individual would have. I also think that pretty small barriers (e.g. the price of a professional course, not having peers who share the same goals, lack of confidence that this content is useful for the specific jobs they are aiming for) currently stop people from doing professional training, and that many common paths to professional training (e.g. PhD programs) are too slow to readily adapt to the needs of EA. I would generally expect the gaps in EA to move around quite a bit year to year.
  2. I think certification or proof of ability is a non-trivial part. The second half of our Incubation Program puts the earlier training into action through working on projects that are publicly shareable and immediately useful for the charity. I would guess that a training focused organization would also have a component like a capstone project at the end of each course. 

I would also note that I think just giving EAs the ability to coordinate and connect with each other while learning seems pretty valuable. A lot of EAs are currently ruled out of top jobs in the space due to not being “trusted” or known by others in the EA movement. I think providing more ins for people to get connected seems quite valuable and would not happen with e.g. a local Toastmasters.

I love all three ideas and I hope to see them come to life in the coming years :)

Regarding Exploratory altruism, I want to make explicit one (perhaps obvious) failure case - the explored ideas might not be adequately taken up by the community. 

There seem to have been many proposals made by people in the community, but a lack of follow-up with deeper research and action in these fields. Further down the line, Improving Institutional Decision Making has existed as a promising cause area for many years, and there are various organizations working within that cause, but an effort to improve coordination and develop a high-level research agenda has only recently begun.

Both of these seem potentially good - it might make sense that most early ideas are discarded quickly, and it might make sense that a field needs a decade to find a common roof. However, these cases point to more opportunities for meta-work, which might be better than focusing on generating a new cause X, and they might suggest that a lot of value for such an organization could come from better ways of engaging with the community.

A different concern, related to "The Folly of EA Should", is that there could be too much filtering out of cause areas. A setup like CE's funnel from 300 ideas down to the few that are most promising might discourage the community from (supporting people who are) working in weeded-out causes, which might be a problem if we want to allow and support a highly diverse set of worldviews and normative views.

(I'm sure that these kinds of concerns would arise (or perhaps already have) while developing the report further and when potential future founders get to work on fleshing out their theory of change more explicitly, but I think it might be valuable to voice these concerns publicly [and these kinds of ideas are important for me to understand more clearly, so I want to write them up and see the community's reaction])

Indeed, these sorts of issues will be covered in the deeper reports, but it's still valuable to raise them!

A really short answer to an important question: I would expect the research to be quite a bit deeper than the typical proposal – more along the lines of what Brian Tomasik did for wild animal suffering or Michael Plant did for happiness. But not to the point where the researchers found an organization themselves (as with the Happier Lives Institute or Wild Animal Initiative) – e.g. spending ~4 full-time researcher months on a given cause area.

I agree that a big risk would be that this org closes off areas or gives people the idea that "EA has already looked into that and it should no longer be considered". In many ways, this would be the opposite of the goal of the org, so I think it would be important to consider when it's being structured. I am not inherently opposed to researching and then ruling out ideas or cause areas, but I do think the EA movement currently tends to rule out areas quickly without thorough research, and I would not want to increase that trend. I would want an org in this space to be really clear about what ground they have covered versus not. For example, I like how GiveWell lays out and describes their priority intervention reports.

We have published a summary of the survey, but we only plan on publishing deeper reports on the top ideas we recommend. Due to the lack of preexisting research, we would be less confident in the robustness of the shallow reports. We think it’s quite likely we will find exciting-sounding but ultimately less impactful ideas, and do not want people to found charities based on our reports on non-recommended areas. 

I think it'd be reasonable to not publish the shallow reports because of associated time costs, but I'm not sure I really see the logic for not posting due to downside risk. 

  • I think you could just have strong caveats, clear epistemic statuses, reasoning transparency, etc. 
  • And then to a decent extent, you can trust readers to update their beliefs and behaviours appropriately. 
    • If they update wildly inappropriately, this may also suggest what they would've done otherwise isn't great either (maybe they're just not great at choosing careers), so the "lost impact" is lower (though actual downsides caused by what they do could still be important).
  • I don't think people will found charities at the drop of a hat - that's a big commitment, and essentially requires funding or large savings, so they'll presumably want and have to talk to people for advice, funding, hires, etc. 
  • Your shallow reports might also suggest why an idea doesn't seem promising, and there's at least some chance someone else would've spent time thinking about the idea themselves later.

Again, I'm not necessarily advocating you try to publish those shallow reports, nor even necessarily saying I'd read them. I'm just not sure I understand/buy the argument that publication would be net negative due to the chance that (a) the idea is bad, (b) you fail to realise or convey this, and (c) someone actually takes the big step of founding a charity due to this, and this is notably worse than what they would've done otherwise.

Thanks, I found this post interesting.

When implementing their career advice program, for example, AAC pre-registered a study to evaluate the effects of career advising calls on animals because of a lack of systematic evidence for this intervention (5). This study would not only help AAC evaluate their impact, but provide valuable data to other EA meta charities.

Yeah, this sounds useful. Currently my independent impression is that it's weird that EA orgs don't do (lower-effort) versions of that sort of thing more often - e.g., for a subset of applicants to receive advice or go through a program or whatever, randomise who gets it, then send out surveys afterwards to both groups. (I think this has been discussed - maybe including by me? - in various places before, but I can't remember where.)

But maybe there are good reasons I'm not aware of. And I know EA orgs do often at least do things like sending out surveys, just not necessarily hearing from people they didn't end up giving the "treatment" to (for comparison), nor randomising who gets the "treatment".

I like these charity ideas, especially the 'Exploratory altruism' one.

I don't know if you've seen this but Nuno Sempere and Ozzie Gooen mention here that they may look to evaluate existing cause area ideas to determine if they should be researched further. Seems closely linked to the 'Exploratory altruism' idea.

Indeed I have seen that post. I would be keen for more than one group to research this sort of area. I can also imagine different groups coming at it from different epistemic and value perspectives. I expect this research could be more than a full-time job for 5+ people.

Thanks for the link, it's definitely relevant. We'll be compiling a comprehensive list of resources that the org could use to start their research, and posts like this will be part of that resource list.

OK, that's great, thanks.

Just to clarify, I agree that the post as a whole is definitely relevant, but I also think this part is too:

We —Ozzie Gooen of the Quantified Uncertainty Research Institute and I— might later be interested in expanding this work and eventually using it for forecasting —e.g., predicting whether each candidate would still seem promising after much more rigorous research. At the same time, we feel like this list itself can be useful already. 

See the comments for more of an explanation as to what this might entail (I asked).

This seems like the sort of analysis that an 'Exploratory Altruism' charity would do, so it may be worth contacting Ozzie/Nuno to discuss avoiding potential duplication of effort, or to enquire about collaborating with them. It's possible the latter approach is preferable as they certainly have some highly interesting methodological ideas about how to assess the cause areas (for example see here and here).

Thanks for that extra context and further resources! We will be doing research on counterfactual replaceability. 

Well, it's not like we're each adversarially trying to maximize our own counterfactual impact, rather than our impact as a community :P

Hi Nuno, if that sounded adversarial, it was not my intention.

What I meant to say is that we will be looking into who the other people and organisations working in this space are, and seeing if there's a good chance that another actor would start a similar charity or do similar work. If so, starting an additional charity in this space may be less impactful for the movement as a whole than using those resources elsewhere.

What is this supposed to link to?

Whoops I have fixed it now

I think all of the meta charity ideas are great, especially the first one. Exploratory altruism would address a problem that I have: 80,000 Hours now lists 17 problems that could be as valuable to work on as the top problems that they've fully evaluated. I share much of their worldview, so I am very confused about which of these would be the best for me to work on. It would be very helpful if I had a way to compare these problems, especially tentative answers to these questions:

  1. How do these problems compare to the current top problems in terms of scale, neglectedness, and solvability?
  2. How do these problems relate to each other?

"The EA movement currently has no organization dedicated full time to exploring and making a strong case for new cause areas. " 

Isn't this what Open Phil does?

Open Phil did some work on researching potential cause areas in their early years, but their primary focus is grant making.
