Summary: Data about where people joining the EA Facebook group first heard of EA.


EA Facebook group moderators Claire Zabel and I, with some help from Julia Wise, have been sending a welcome message to every new member when we add them. In doing so we aim partly to make a good first impression, help people feel welcome, and provide a point of contact for questions.

We also ask them to tell us where they first heard about EA. This gives us data about where new EAs come from, which we hope will be useful for future marketing efforts. Since joining the group requires moderator approval, we reached essentially every member who joined over the past 31 days. Obviously we're only sampling a subset of EAs - those who join the Facebook group - but we are sending the message to ~100% of that subset, and no one else. By contrast, the big EA census sampled from a much wider group, but it was less clear how representative that sample was.

 

The data

Between 2015/06/22 and 2015/07/22, 375 people joined the group, bringing us to 6478 members.

Of the ~371 people we messaged*, 216 responded - a 58% response rate. We then tried to fit each response into a broad category like 'Facebook' or 'LessWrong'. Here is the data in table form:

28%    Friend
16%    Facebook
14%    Other
12%    Peter Singer
12%    LessWrong/CFAR/Eliezer/HPMOR/SSC/MIRI
10%    Media Article
4%    80k
4%    GiveWell
3%    Colleague
3%    Philosophy
3%    EAG
3%    Animal Rights
2%    Local Group
2%    NA
2%    Will
1%    Akilnathan Logeswaran
1%    Family
1%    Christianity
1%    GWWC
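For anyone curious how figures like these come together: percentages of this kind can be produced by bucketing the free-text replies into broad categories and dividing by the number of respondents. The sketch below is illustrative only - the example replies, the keyword map, and the `categorise` helper are hypothetical, not the actual coding process we used.

```python
from collections import Counter

# Hypothetical free-text replies (illustrative only; not the real survey data).
replies = [
    "A friend told me",
    "Peter Singer's TED talk",
    "LessWrong",
    "Facebook recommended the group",
    "a friend",
    "HPMOR",
]

# Map keywords in a reply to one of the broad categories from the table above.
KEYWORDS = {
    "friend": "Friend",
    "singer": "Peter Singer",
    "lesswrong": "LessWrong/CFAR/Eliezer/HPMOR/SSC/MIRI",
    "hpmor": "LessWrong/CFAR/Eliezer/HPMOR/SSC/MIRI",
    "facebook": "Facebook",
}

def categorise(reply):
    """Return the first matching broad category, or 'Other' if none match."""
    text = reply.lower()
    for keyword, category in KEYWORDS.items():
        if keyword in text:
            return category
    return "Other"

counts = Counter(categorise(r) for r in replies)
total = len(replies)
for category, n in counts.most_common():
    print(f"{n / total:.0%}\t{category}")
```

Note that because real replies often named more than one source, the actual categories are not mutually exclusive and the percentages sum to more than 100%.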

 

Some notes on the data

  • The categories are not mutually exclusive.
  • 'Friend' sometimes referred to people the new member knew in person, and sometimes to an online friend - often it was unclear. Sometimes they just gave a name, and if we didn't recognise the name, I often assumed the person was a friend.
  • 'Facebook' tends to refer to the 'recommended groups' feature on Facebook.
  • 'Other' is very broad.
  • Peter Singer does well, mainly from his TED talk and his book.
  • I grouped together LessWrong, CFAR, Eliezer, HPMOR, SlateStarCodex and MIRI.
  • I think Media Articles were often discussing the EA Global Event, but people were sometimes ambiguous, or gave a result that would have required too much investigation - e.g. 'NYT article', 'magazine article'.
  • 80k includes blog posts and talks.
  • GiveWell includes Holden.
  • EAG includes Tyler Altman.
  • Local Groups includes EA groups, LW groups, Philosophy groups etc.

We also have a constant problem with spam in the group. A large fraction of the accounts that attempt to join are fake (Jacy once estimated 80%; I think it's more like 30% of new joiners). In total, 403 Facebook accounts have been blocked (since the group began many years ago, including before the current moderators), virtually all for spam. This means we must be sceptical of the 6478-member figure, as many of these are probably fake accounts using the EA group to make themselves appear more credible. The new members appear to be more legitimate, but we tend to approve new members when uncertain, so there are probably fake accounts among the 375. Presumably these did not respond to our greeting.

 

How do these results compare to the EA census?


Some discrepancies are to be expected, as an artifact of the data collection technique. For example, we had many more people naming ‘Facebook’. The census had LessWrong as the number one source, which probably reflected the prominent link to the survey on LessWrong. However, LessWrong is still a major source in our data, suggesting that the strong result in the census was not just an artifact of disproportionate sampling.

Notably, ‘Peter Singer’ was a pretty major source in our data - much higher than in the census data. Conversely, ‘GWWC’ was a much more major source in the census than in our data. Perhaps indicating the recent bout of attention around EA global, ‘Media Article’ does well in our data, but does not appear in the survey data.

Friendship proves its worth in both data sets.

 

Should we change what we're doing?


  • Sending these messages and compiling the answers is somewhat time-consuming for Claire and me.
  • Is this data worth gathering? Sending the messages also has other benefits, as we answer people’s questions and make them welcome. But we could save time recording the data.
  • Would it be better to send a link to an online survey, with standardised response options?
  • What other data would be most valuable? Bear in mind that we don’t want to overwhelm people!
  • What else can we do better?





----

Thanks to Claire for reading a draft of this post. Any errors are, of course, my own.

---

* A very small number of profiles do not allow messages from non-friends.

Comments



Thanks for writing this up! It's very useful to be able to compare this to census data. Did you use the same/similar message for everyone? If so, I'd be interested to see what it was. This sort of thing would also be useful to a/b test to refine it. There is also the option to add people manually, bypassing the need for admin approval; did you contact these people too?

We used:

"Hey, welcome to the Effective Altruism facebook group! If you have a moment, would you mind telling us where you first heard about EA?

Thanks!

Claire

(moderator)"

We are considering a/b testing some new questions, and would love suggestions on different phrasing.

And when someone in the group adds a new member, we still have to approve them. We messaged them as well.

We are considering a/b testing some new questions, and would love suggestions on different phrasing.

How about using different questions from the EA Census? There was already a big community discussion on what suggestions to include on that, on this forum I think.

Interesting post. This data seems helpful, but it's probably not worth gathering constantly - maybe on an annual basis or something. Of course, then there are representativeness issues. I would think an online survey would be less effective, but there might be a way to automate this using some software. I don't think Hootsuite can do it but some other app for automating Facebook posts and messages could help.

Thank you for sharing! I think a/b testing this seems like a really good idea - even just testing the way you phrase the question, as opposed to testing other questions. A static online survey would definitely cut down on the time investment, since it would collect all the data for you; however, it would also cut into your response rate (more clicks = more work).

It seems like continuing to gather this information over the course of all the EA Global meetings and the launch of Will's book would be valuable, given the likelihood of continued rapid growth. Past that, it would be more useful to focus on using the data as opposed to collecting it.

Right now, across both surveys, it looks like LW and word of mouth are the best recruiting tools. Continuing to enact marketing strategies across those two platforms seems like the best course of action, meaning we should probably encourage new members to tell their friends and invite them to meetings. It also seems like a great idea to keep messaging some people, to reinforce the welcoming feeling, and because people referred by word of mouth will likely appreciate and need one-on-one interaction to stay interested or motivated.

A static online survey would definitely cut down on the time investment since it will collect all the data for you, however it will definitely cut into your response rate (more clicks = more work).

It'd be interesting to test that - one factor which will cut the other way is that some people are more comfortable answering an online survey (often selecting preset answer options) rather than getting into a discussion with another human being.

Tying this into the EA census sounds like a good idea as it'd provide a helpful additional subsample, at least for some subset of questions.

Thanks for posting this. I also compared this to the data we got from a random sample of the EA Facebook group. I don't know whether your comparisons above are to our overall results or just to the random Facebook sample we did?

First points re. comparison:

  • Our data is slightly different from each other's because you allowed people to select more than one place as "where you first heard about EA", whereas we allowed only one. (We allowed more than one option for our "which helped get you more involved in EA" question.) This might skew things a bit.
  • It also meant that I had to adjust your numbers a bit to be able to compare %s.
  • Comparing our numbers is also quite difficult because our categories don't line up. For example, nearly 40% of your responses were "Other", which didn't come up at all in ours (mostly options with quite small numbers too).
  • To make our data a bit more easily comparable I shoved a few categories together (for example, I included "colleague" and "family" in "friend").
  • Our results agree on ACE/AR being very low numbers.

Comments on the actual data: I basically agree that there were some significant differences but overall not enormous divergence.

  • LessWrong, as you note, is not the biggest point of divergence (~10% vs ~20%).
  • GWWC is probably the biggest difference: 14% for us and basically a complete absence for you (<1%). I don't think that can be explained by any simple sampling bias (of the kind people posited for LW).
  • Relatedly, we had double the 80K responses you did.
  • You had twice as many Singer responses; but our category was (TED) Singer, so I think some of our general Singer responses may have ended up in TLYCS (or Other, maybe), so our Singer+TLYCS scores are pretty similar: 8% vs 9%.

What to make of the GWWC/CEA wipeout? (For us, CEA and LW had basically equal influence; for you, the ratio was 2:1 in favour of LW.) I would guess the most likely explanation is timing. There was only a year between our surveys' results being published, but we were randomly sampling members of the FB group, whereas you were only asking new members. So we'd have sampled a lot of people who are members of the group and have been EAs for a few years, whereas you are sampling solely new people. People involved in EA from close to the beginning will plausibly be much more likely to have heard of it from GWWC. New people, it seems, much less so. (Even if you include EAG and local groups and Will and Tyler personally all in CEA - which would be unreasonable anyway - the numbers don't jump that much.) So it seems plausible that CEA is much less of an influence, as a proportion, than it was in the early years. It will be interesting to see if this trend continues and is reflected in our new survey.
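As a rough check on whether a gap like the GWWC one could plausibly be sampling noise, a two-proportion z-test is one standard tool. The sketch below uses illustrative counts (112 of 800 in one sample vs 2 of 216 in the other), not the exact survey figures:

```python
from math import sqrt

def two_proportion_z(k1, n1, k2, n2):
    """Two-proportion z statistic for comparing shares across two samples."""
    p1, p2 = k1 / n1, k2 / n2
    p = (k1 + k2) / (n1 + n2)                    # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))   # pooled standard error
    return (p1 - p2) / se

# Illustrative: ~14% GWWC in a census-sized sample vs ~1% of 216 responses.
z = two_proportion_z(112, 800, 2, 216)
print(round(z, 2))  # |z| well above 2 suggests more than sampling noise
```

A difference of this size comes out far beyond what sampling variation alone would produce, which is consistent with a real shift (such as the timing explanation above) rather than noise.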

Let me know if you're expecting a surge of Facebook joins (as a result of the Doing Good Better book launch and EA Global) and want help messaging people.
