
Summary

  • One of CEA’s goals is for people who are highly engaged with effective altruism to stay highly engaged.
  • In order for us to pursue this goal, we need some way of measuring the retention rate. In this document, I calculate retention rates using engagement with CEA's projects as a proxy.
  • I find that 50-70% of people who engaged with CEA's projects in 2020 also engaged with one of our projects so far in 2021, using a naïve method of matching people (mostly looking at email addresses).
  • I further manually classify all EAGxVirtual attendees who self-reported being employed by an EA organization. I find that 95.3% of them were retained according to at least one of several proxies, which is almost identical to the 95.6% retention estimate given by Ben Todd in his analysis last year.
  • Note: I expect this post is only interesting to a small number of people who are highly engaged with EA, so I haven't spent a lot of time cleaning it up. Please feel free to comment or reach out to me with any questions you might have.

Data sources

I considered the following data sets:

  1. Everyone who read a post on the EA Forum in 2020 or 2021. Note that this requires the user to have been logged in when they read the post.
  2. Everyone who donated on EA Funds in 2020 or 2021. I did not filter by which organization they donated to.
  3. Events: attendees of EAGxVirtual, EAG Reconnect, or the EA Picnic. I further filtered by:
    1. Whether they self-reported as having taken "significant action" in their EAGxVirtual application (this includes having taken the Giving What We Can pledge, currently working at an EA organization, having previously worked at an EA organization, having spent 100 hours on an EA project, or having changed their career due to EA considerations). Note that this is self-reported information, and does not perfectly correlate with whether an expert judge might evaluate them as having taken significant action.
    2. Whether they self-reported working for an EA organization in their EAGxVirtual application. The list of organizations I used can be found in an appendix. Note that this excludes people who work at non-EA organizations for EA reasons (and may include people who work at EA organizations for non-EA reasons).
  4. EA Survey: everyone who responded to the EA Survey in 2020 and consented to sharing information with CEA.
  5. Everyone who attended one of CEA’s virtual programs (VP).

Previous Work

  • Previous retention rate estimates have ranged from 85% to 99.6% annual retention. These have generally required manually evaluating whether or not individuals in some population have stayed engaged.
  • Peter Wildeford has done the largest non-manual retention analysis I know, which looked at the percentage of people who answered the EA survey using the same email in multiple years. He found retention rates of around 27%, but cautioned that this was inaccurate due to people using different email addresses each year.
  • Over the past six months, CEA has moved to unify our login systems. As of this writing, event applications, the EA Forum, and EA Funds/GWWC all use the same login system. This means that we are less likely to have issues with people using different emails.

Matching algorithm

  • All data sources provided (encrypted) email addresses, which is what I primarily used for matching (a short sketch follows this list).
  • I additionally used name and LinkedIn information to match events and survey data.
  • Note on privacy: set intersections are performed using encrypted information, where relevant. This lets us e.g. calculate the percentage of Forum users who donated on Funds, while not actually knowing the email addresses of any Funds users.
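
To make the matching concrete, here is a minimal sketch in Python using made-up email addresses and illustrative names; it is not CEA's actual pipeline (that is the SQL query linked in the appendix), and it omits the name/LinkedIn fallback matching.

```python
# Minimal sketch of hashed-email matching across data sources, with made-up addresses.
import hashlib

def hashed(emails):
    """Hash normalized email addresses so sets can be intersected
    without handling the raw addresses."""
    return {hashlib.sha256(e.strip().lower().encode()).hexdigest() for e in emails}

# Hypothetical inputs: one list of email addresses per data source.
forum_2020  = hashed(["alice@example.com", "bob@example.com", "carol@example.com"])
events_2021 = hashed(["alice@example.com"])
forum_2021  = hashed(["bob@example.com"])
survey_2020 = hashed([])
funds_2021  = hashed([])
vp_2021     = hashed([])

# "Retained" = matched in any of the proxies, mirroring the union columns
# ("Event, Forum, Survey, Funds or VP") in the results table below.
retained = events_2021 | forum_2021 | survey_2020 | funds_2021 | vp_2021
rate = len(forum_2020 & retained) / len(forum_2020)
print(f"Retention for the 2020 Forum cohort: {rate:.0%}")
```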

Results

| Population | Population Size | Attended event in 2021 | Read a post on the Forum in 2021 | Answered the 2020 EA Survey | Donated on EA Funds in 2021 | Attended VP in 2021 | Event or Forum | Event, Forum or Survey | Event, Forum, Survey or Funds | Event, Forum, Survey, Funds or VP |
|---|---|---|---|---|---|---|---|---|---|---|
| All EAGxVirtual Attendees | 1091 | 40% | 26% | 22% | 5% | 5% | 50% | 54% | 56% | 56% |
| EAGxVirtual Attendees who took significant action | 568 | 40% | 27% | 22% | 5% | 6% | 52% | 56% | 57% | 58% |
| EAGxVirtual Attendees who worked for an EA organization | 129 | 57% | 40% | 29% | 1% | 1% | 68% | 70% | 70% | 70% |
| Read Forum post in 2020 | 2347 | 21% | 63% | 21% | 6% | 4% | 66% | 67% | 68% | 68% |

The final column is the most relevant one. This indicates that, depending on the population, 50-70% of the individuals who engaged in 2020 also engaged in some way in 2021.

This is substantially higher than the 27% rate found by Peter using EA Survey data, but is still substantially lower than what I expect the true rate to be. 

Manual classification

Since automated matching left a large fraction of the population unclassified, I manually classified the remaining attendees who worked for an EA organization. For each of them, I used LinkedIn and the organization's website to check whether they were still listed as staff. These were the results:

| Classification | Number of people |
|---|---|
| Kept job listed in EAGx application | 16 |
| Personally known by me to still be involved | 8 |
| Seems to have genuinely left their employer and not started a new EA position | 6 |
| Got a new job judged by me to be EA | 5 |
| Weren't actually originally employed by an EA organization (e.g. were just a volunteer) | 3 |
| Couldn't find any information | 1 |

In summary, approximately six of the 129 EAGxVirtual attendees who worked for an EA organization (= 4.7%[1]) seem to have genuinely left working for an EA employer, and did not otherwise engage with any of CEA's projects.

Ben Todd estimated a five-year dropout rate of 20% for people engaged at the level of working at an EA organization, which implies a 95.6% annual retention rate. This is almost identical to the 95.3% retention rate found here.
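
For reference, here is the conversion behind that implied annual rate: a short sketch assuming the 20% five-year dropout compounds evenly across years.

```python
# Converting a 20% five-year dropout rate into an implied annual retention rate,
# assuming dropout compounds evenly across the five years.
five_year_dropout = 0.20
annual_retention = (1 - five_year_dropout) ** (1 / 5)
print(f"{annual_retention:.1%}")  # ~95.6%
```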

Power Analysis

It would be nice if we could regularly track retention rates and notice if things are changing. Based on these results, I believe it would require a fairly large data set and substantial manual effort to do this.

For example, to detect a change in the retention rate from 95% to 90%, we would need a sample of 185 individuals.[2] That would be a doubling of the dropout rate, yet detecting it still requires a larger sample than the one I evaluated here.
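
As a rough check on that number, here is a sketch of the calculation, assuming the calculator cited in footnote 2 performs a one-sample test of proportions with a normal approximation:

```python
# Sample size needed to detect a drop in retention from 95% to 90%, assuming a
# one-sample proportion test with a normal approximation (alpha = 5%, power = 80%).
from math import ceil, sqrt
from scipy.stats import norm

p0, p1 = 0.95, 0.90            # current retention vs. the change we want to detect
alpha, power = 0.05, 0.80      # type I error 5%, type II error 20%
z_a = norm.ppf(1 - alpha / 2)  # two-sided critical value
z_b = norm.ppf(power)

n = ((z_a * sqrt(p0 * (1 - p0)) + z_b * sqrt(p1 * (1 - p1))) / (p1 - p0)) ** 2
print(ceil(n))  # 185
```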

Given this, CEA is evaluating alternative metrics. Our current top choice is to focus on people who use our products, instead of those who are "engaged" with EA in a more subjective sense. This allows us to analyze larger populations, improving the power of our tests.

Appendix – EA Organizations

This list was created by looking at the employers reported by EAGxVirtual attendees and filtering for ones which seemed EA-related in my subjective opinion. It is definitely the case that some employees of these organizations do not qualify as "highly engaged EAs", and that many highly engaged EAs work for none of these organizations.

The SQL query I used to classify people can be found here.

Footnotes

  1. Arguably the volunteers should be removed from the denominator, meaning the dropout rate is 6/126 ≈ 4.8%.
  2. Using the standard type I error rate of 5% and type II error rate of 20%, and this calculator.

Comments

Peter Wildeford has done the largest non-manual retention analysis I know, which looked at the percentage of people who answered the EA survey using the same email in multiple years. He found retention rates of around 27%, but cautioned that this was inaccurate due to people using different email addresses each year.


Thanks for citing me, and I'm excited for the new data sources you are looking at.

One thing you might want to add is that I looked at two different approaches. You quote the first approach, but the second approach - which I think is more accurate, and is based on comparing the year people say they joined EA versus the survey take rate for that year - shows that roughly 60% of EAs still stay around after 4-5 years.
