I am the Principal Research Director at Rethink Priorities. I lead our Surveys and Data Analysis department and our Worldview Investigation Team.
The Worldview Investigation Team previously completed the Moral Weight Project and CURVE Sequence / Cross-Cause Model. We're currently working on tools to help EAs decide how they should allocate resources within portfolios of different causes, and how to use a moral parliament approach to allocate resources given metanormative uncertainty.
The Surveys and Data Analysis Team primarily works on private commissions for core EA movement and longtermist orgs, where we provide survey methodology and data analysis.
Formerly, I also managed our Wild Animal Welfare department, and I've previously worked for Charity Science and been a trustee at Charity Entrepreneurship and EA London.
My academic interests are in moral psychology and methodology at the intersection of psychology and philosophy.
Thanks Arden!
I also agree that prima facie this strategic shift might seem worrying given that 80K has been the powerhouse of EA movement growth for many years.
That said, I share your view that growth via 80K might reduce less than one would naively expect. In addition to the reasons you give above, another consideration is our finding that a large percentage of people get into EA via 'passive' outreach (e.g. someone googles "ethical career" and finds the 80K website; for 80K specifically, about 50% of recruitment was 'passive'), rather than active outreach, and it seems plausible that much of that could continue even after 80K's strategic shift.
Our framings will probably change. It's possible that the framings we use more going forward will emphasise EA style thinking a little less than our current ones, though this is something we're actively unsure of.
As noted elsewhere, we plan to research this empirically. Fwiw, my guess is that broader EA messaging would be better (on average and when comparing the best messaging from each) at recruiting people to high levels of engagement in EA (this might differ when looking to recruit people directly into AI related roles), though with a lot of variance within both classes of message.
A broader coalition of actors will be motivated to pursue extinction prevention than longtermist trajectory changes... For instance, see Scott Alexander on the benefits of extinction risk as a popular meme compared to longtermism.
This might vary between:
Though our initial work does not suggest this.
I agree this is a potential concern.
As it happens, since 2020, the community has continued to age. As of the end of last year, the median age was 31 (mean 22.4), and we can see that the community has steadily aged across years.
It's clear that a key contributor to our age distribution is the age at which people first get involved with EA (median 24, mean 26.7), but the age at which people first get involved has also increased over time.
I think people sometimes point to our outreach focusing on things like university groups to explain this pattern. But I think this is likely overstated, as such outreach accounts for only a small minority of our recruiting, and most of the ways people first hear about EA seem to be more passive mechanisms, not tied to direct outreach, which would be accessible to people at older ages (we'll discuss this in more detail in the 2024 iteration of this post).
That said, different age ranges do appear to have different levels of awareness of EA, with awareness seeming highest in the 25-34 and 35-44 age ranges. (Though our sample size is large, the number of people we count as aware of EA is very low, so these estimates are quite uncertain. Our confidence in them will increase as we run more surveys.) This suggests that awareness of EA may be reaching different groups unevenly, which could partly contribute to lower engagement from older age groups. But this need not be the result of differences in our outreach. It could result from different levels of interest from the different groups.
Matthew Yglesias has written more critically about this tendency (which he thinks is widely followed in activist circles, but is often detrimental). For example, here he describes what he refers to as "activist chum", which is good for motivating supporters and fundraising (very important for the self-interest of (those leading) the movement), but can lead to focusing on "wins" that aren't meaningful and may be unhelpful.
The chum comes from the following political organizing playbook that is widely followed in progressive circles:
- Always be asking for something.
- Ask for something of someone empowered to give it to you.
- Ask for something from someone who cares what you think.
That's interesting, but it seems to be addressing a somewhat separate claim from mine.
My claim was that broad heuristics are more often necessary and appropriate when engaged in abstract evaluation of broad cause areas, where you can't directly assess how promising concrete opportunities/interventions are, and less so when you can directly assess concrete interventions.
If I understand your claims correctly, they are that:
I generally agree that applying broad heuristics to broad cause areas is more likely to be misleading than when you can assess specific opportunities directly. Implicit in my claim is that where you don't have to rely on broad heuristics, but can assess specific opportunities directly, then this is preferable. I agree that considering whether a specific intervention has been tried before is useful and relevant information, but don't consider that an application of the Neglectedness/Crowdedness heuristic.
I think this depends crucially on how, and to what object, you are applying the ITN framework:
On the whole, it seems to me that the further you move away from abstract evaluations of broad cause areas, and towards concrete interventions, the less it's necessary or appropriate to depend on broad heuristics and the more you can simply attempt to estimate expected impact directly.
Thanks for the comment!
I agree these would be interesting things to include.
We can assess this empirically, to some extent, by looking at changes in respondents whom we can track over time (i.e. those who logged in to their EA account to pull their previous responses or provided their email). This allows us to compare both how many individuals with different political views in 2022 changed to different views in 2024 and how many people with different views in 2022 may have dropped out (because we can't track them in 2024[1]).
TLDR:
You can see the total flows between categories on this Sankey diagram.[2]
Due to the low numbers, it can be hard to compare the results for different categories though.
First, we can compare how many people switched to NA (i.e. respondents tracked in 2022 who were not tracked in 2024).
We can see that there's very little difference between the Left (77%), Center left (73%) and Center (74%). We do, however, see that Center right (85%) and Right (83%) respondents seem more likely to not appear in the 2024 dataset, though these are small groups due to the very low number of right-of-center respondents.[3]
**Switch to NA**

| 2022 view | 2024 | n | % |
|---|---|---|---|
| Left | NA | 785 | 77.49% |
| Center left | NA | 788 | 72.76% |
| Center | NA | 155 | 73.81% |
| Center right | NA | 52 | 85.25% |
| Right | NA | 15 | 83.33% |
| Libertarian | NA | 109 | 71.71% |
| Other | NA | 144 | 75.39% |
| Prefer not to answer | NA | 165 | 83.76% |
| Did not answer | NA | 74 | 85.06% |
| Did not view | NA | 182 | 97.33% |
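For readers who want to replicate this kind of calculation, the per-group NA rate is just the share of each 2022 group with no tracked 2024 response. A minimal stdlib-Python sketch, using a tiny made-up sample (not the survey data) and hypothetical variable names:

```python
from collections import Counter

# Hypothetical paired responses: (view in 2022, view in 2024, or None if untracked).
pairs = [
    ("Left", None), ("Left", "Left"), ("Left", None), ("Left", "Center left"),
    ("Center", "Center"), ("Center", None),
]

# Count each 2022 group, and how many of each group could not be tracked in 2024.
totals = Counter(v22 for v22, _ in pairs)
untracked = Counter(v22 for v22, v24 in pairs if v24 is None)

# Share of each 2022 group that switched to NA.
na_rate = {group: untracked[group] / totals[group] for group in totals}
```

With real data, `pairs` would come from the merged 2022/2024 survey records linked by login or email.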
Then, we can compare changes in political views, setting aside the large number of NA-in-2024 responses, to see the changes more clearly. We can see quite a strong shift (27%) among Left (in 2022) respondents to the Center left (for comparison, the total drop in Left respondents was 28.8%). Among the Center left, we see a smaller but still noticeable switch to the Center, and among the Center, most respondents stayed the same.[4]
**Non-NA switches**

| 2022 view | 2024 view | n | % |
|---|---|---|---|
| Left | Left | 154 | 67.54% |
| Left | Center left | 61 | 26.75% |
| Left | Center | 4 | 1.75% |
| Left | Center right | 1 | 0.44% |
| Left | Right | 0 | 0.00% |
| Left | Libertarian | 1 | 0.44% |
| Left | Other | 7 | 3.07% |
| **Left total** | | 228 | |
| Center left | Left | 7 | 2.40% |
| Center left | Center left | 246 | 84.25% |
| Center left | Center | 32 | 10.96% |
| Center left | Center right | 2 | 0.68% |
| Center left | Right | 0 | 0.00% |
| Center left | Libertarian | 3 | 1.03% |
| Center left | Other | 2 | 0.68% |
| **Center left total** | | 292 | |
| Center | Left | 1 | 1.85% |
| Center | Center left | 1 | 1.85% |
| Center | Center | 46 | 85.19% |
| Center | Center right | 3 | 5.56% |
| Center | Right | 0 | 0.00% |
| Center | Libertarian | 3 | 5.56% |
| Center | Other | 0 | 0.00% |
| **Center total** | | 54 | |
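The percentages here are row-normalised transition shares: for each 2022 view, the fraction ending up in each 2024 view among tracked respondents. A minimal stdlib-Python sketch of that computation, again with a tiny made-up sample and hypothetical names rather than the survey data:

```python
from collections import Counter, defaultdict

# Hypothetical tracked pairs (2022 view, 2024 view); NA respondents already excluded.
pairs = [
    ("Left", "Left"), ("Left", "Left"), ("Left", "Center left"),
    ("Center left", "Center left"), ("Center left", "Center"),
]

# Tally 2024 destinations within each 2022 group.
by_row = defaultdict(Counter)
for v22, v24 in pairs:
    by_row[v22][v24] += 1

# Row-normalised shares, as in the table above.
transitions = {
    v22: {v24: n / sum(row.values()) for v24, n in row.items()}
    for v22, row in by_row.items()
}
```

The same result could be had in one line with `pandas.crosstab(..., normalize="index")` if pandas is available.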
These results aren't suggestive of leftists being particularly likely to drop out. But there is some evidence of Left respondents in 2022 switching in quite large numbers (similar to the total change in % Left) to the Center left in 2024.
Caveat: a respondent might not appear in 2024 because they dropped out of EA, because they didn't take the survey, or because they took the survey but didn't log in or provide the same email address as last time. Differential disappearance across groups might still be suggestive, however, since there seem to be few innocent explanations for why leftists/rightists would systematically become less likely to take the survey or provide their email address over time.
It's also important to bear in mind that the sub-sample of respondents who logged in or provided their email address might differ from the total sample (e.g. they might be more engaged or more satisfied). But just over 90% of respondents in both 2022 and 2024 fit into this category, so it does not seem a very selective sub-sample.
NAs in 2022 include only people who did not reach the politics question of the survey, because this sub-sample includes only people who were tracked in the 2022 survey.
It makes sense that the various other categories for not answering the question would also show high rates of not answering the email question or not logging in.
Center right and Right are not shown because, excluding NAs in 2024, these were only 9 and 3 respondents respectively (though most stayed in their prior category).
I definitely agree that would eventually become the case (eventually all the older non-AI articles will become out of date). I'm less sure it will be a big factor 2 years from now (though it depends on exactly how articles are arranged on the website and so how salient it is that the non-AI articles are old).
I also think this is true in general (I don't have a strong view about the net balance in the case of 80K's outreach specifically).
Previous analyses we conducted suggested that over half of Longtermists (~60%) previously prioritised a different cause and that this is consistent across time.
You can see the overall self-reported flows (in 2019) here.