I think this post mostly stands up and seems to have been used a fair amount.
Understanding roughly how large the EA community is seems moderately valuable, so I think this analysis falls into the category of 'relatively simple things that are useful to the EA community but which were nevertheless neglected for a long while'.
One thing that I would do differently if I were writing this post again, is that I think I was under-confident about the plausible sampling rates, based on the benchmarks that we took from the community. I think I was understandably un... (read more)
It seems plausible that we should assign weight to what past generations valued (though one would likely not use survey methodology to do this), as well as what future generations will value, insofar as that is knowable.
Summary: I think the post mostly holds up. The post provided a number of significant, actionable findings, which have since been replicated in the most recent EA Survey and in OpenPhil’s report. We’ve also been able to extend the findings in a variety of ways since then. There was also one part of the post that I don’t think holds up, which I’ll discuss in more detail.
The post highlighted (among other things):
Fwiw, I think that both moral uncertainty and non-moral epistemic uncertainty (if you'll allow the distinction) suggest we should assign some weight to what people say is valuable.
Thanks for collating these different ideas! Fwiw, I think that it might be better if you were to simply drop the "Strength of effect: %" column from your sheet and not rank interventions according to this.
As an earlier commenter pointed out this is comparing "% reductions" in very different things (e.g. percentage reduction in cortisol vs percentage change in stress scores). But it also seems like this is going to be misleading in a variety of other ways. As far as I can tell, it's not only comparing different metrics for different interventions... (read more)
There's been a fair amount of discussion of this in the academic literature e.g. https://www.diva-portal.org/smash/get/diva2:1194016/FULLTEXT01.pdf and https://www.frontiersin.org/articles/10.3389/fpsyg.2018.00699/full
We have a sense of this from questions we asked before (though only as recently as 2019, so they don't tell us whether there's been a change since then).
At that point 36.6% of respondents included EA non-profit work (i.e. working for an EA org) in their career plans. It was multiple select, so their plans could include multiple things, but it seems plausible that often EA org work is people's most preferred career and other things are backups.
At that time 32% of respondents cited too few job opportunities as a barrier to their involvement in EA. This... (read more)
Not that many people respond to surveys, so the total EA population is probably higher than 2k, but it's difficult to say how much higher.
We give an estimate of the total population engaged at levels 3-5/5 here, which suggests ~2700 (2300 to 3100) at the highest levels of engagement (5000-10000 overall).
We then estimate that the numbers of the most engaged have increased by ~15% between 2019 and 2020 (see the thread with Ben Todd here and the discussion in his EA Global talk).
This suggests to me there are likely 3000 or more highly engaged EAs at pres... (read more)
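As a rough sketch, the arithmetic behind that "3000 or more" figure (using the point estimates quoted above, which are of course themselves uncertain) looks like this:

```python
# Back-of-envelope sketch using the point estimates quoted above.
highly_engaged_2019 = 2700   # midpoint of the 2300-3100 estimate for levels 5/5
growth_rate = 0.15           # estimated growth in the most engaged, 2019 -> 2020

highly_engaged_2020 = highly_engaged_2019 * (1 + growth_rate)
print(round(highly_engaged_2020))  # → 3105, consistent with "3000 or more"
```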
can I just ask what WAW stands for? Google is only showing me writing about writing, which doesn't seem likely to be it...
"WAW" = Wild Animal Welfare (previously often referred to as "WAS" for Wild Animal Suffering).
And how often does RP decide to go ahead with publishing in academia?
I'd say a small minority of our projects (<10%).
Thanks for asking. We've run around 30 survey projects since we were founded. When I calculated this in June, we'd run a distinct survey project (each containing 1-7 surveys), on average, every 6 weeks.
Most of the projects aren't exactly top secret, but I err on the side of not mentioning the details or who we've worked with unless I'm certain the orgs in question are OK with it. Some of the projects, though, have been mentioned publicly, but not published: for example, CEA mentioned in their Q1 update that we ran some surveys for them t... (read more)
One major factor that makes some research questions more suited to academia is requiring technical or logistical resources that would be hard to access or deploy in a generalist EA org like RP (some specialist expertise also sometimes falls into this category). Much WAW research is like this, in that I don't think it makes sense for RP to be trying to run large-scale ecological field studies.
Another major factor is if you want to promote wider field-building or you want the research to be persuasive as advocacy to certain audiences in the way that sometime... (read more)
I'm particularly glad you note this since the survey team's research in particular is almost exclusively non-public research (basically the EA Survey and EA Groups Survey are the only projects we publish on the Forum), so people understandably get a very skewed impression of what we do.
Regarding magnesium, the specific supplement you happened to link to was magnesium oxide. There's some evidence that magnesium oxide is less bioavailable than other forms of magnesium (1, 2, 3). It's true that magnesium oxide is cheaper, but magnesium citrate is still exceptionally cheap (a few pence per dose). So, even if you are uncertain about the benefits of other forms over oxide, I think it's still probably reasonable to err in favour of these other forms.
I think it's also worth thinking about glycine. There are a few papers suggesting that glycine impr... (read more)
Thanks! It's cool they have done a study on the 'full-room' approach.
I think full-room approaches are worth people looking into, but it's worth noting that they are usually less bright than using SAD lamps (and this goes for the setup described in the pre-print too). As noted in the pre-print, the bulbs put out more light, but because you are usually much further away from the lightbulbs distributed around the room than you would be from a light box on your desk, the mean illuminance at eye level was 1433-1829 lux. By comparison, I have three of the light boxe... (read more)
The simplest method is to purchase a SAD lamp that emits 10,000 lux and place this on your desk, maximising exposure while working. However, the light levels received from a SAD lamp can decrease significantly if placed too far from the face, while the lamp’s light offers minimal benefit when doing non-desk based activities.
This really bears emphasising, since most SAD lamps (accurately) marketed as "10,000 lux" are 10,000 lux only at distances much shorter than most people might expect or might be able to achieve with their desk setup (see Sco... (read more)
A useful comparison point might be how many EAs are members of a local group. A priori, one might think that being a member of an in-person group is a higher bar/more demanding than being a member of the EA Forum, but historically that has not been the case.
One other thing that may be of interest is that we don't see much of a difference between the increase in EA group membership and the increase in EA Forum membership between EAS 2019 and EAS 2020. But, apparently, there's been a lot more support for groups too, so perhaps that's not surprising. (One other possible thing of note (not sh... (read more)
Among respondents to the EA Survey, in 2020, 38% of respondents were EA Forum members. In 2019 it was 30%. In 2018 it was 20%.
Those numbers are doubtless inflated though, because EA Forum members (a very disproportionately highly engaged group: >80% are levels 4-5 out of 5 in self-reported engagement) are more likely to take the survey. The question is how many less engaged EAs (who are less likely to be on the Forum) there are, which is less easy to estimate, although there is a model in this post.
Past surveys (e.g. Open Phil’s survey) suggest that connections between individuals are the key source of impact from our events. So we focus on the number of new connections we make at our events.
I'd be curious which survey result you're thinking of here. Aside from a couple of qualitative responses, I don't remember a question in the OP survey that I would think addresses this.
To my recollection (which may be mistaken) the OP survey didn't include the question which more explicitly addresses this, which the EA Survey did.
See the questi... (read more)
I think it depends a lot on the specifics of your survey design. The most commonly discussed tradeoff in the literature is probably that having more questions per page, as opposed to more pages with fewer questions, leads to higher non-response and lower self-reported satisfaction, but people answer the former more quickly. But how to navigate this tradeoff is very context-dependent.
All in all, the optimal number of items per screen requires a trade-off: More items per screen shorten survey time but reduce data quality (item nonresponse) and respondent
I think all of the following (and more) are possible risks:
- People are tired/bored and so answer less effortfully/more quickly
- People are annoyed and so answer in a qualitatively different way
- People are tired/bored/annoyed and so skip more questions
- People are tired/bored/annoyed and drop out entirely
Note that people skipping questions/dropping out is not merely a matter of quantity (reduced numbers of responses), because the dropout/skipping is likely to be differential. The effect of the questions will be to lead to precisely those respondents ... (read more)
Thanks for the post. I think most of this is useful advice.
"Walkthroughs" are a good way to improve the questions
In the academic literature, these are also referred to as "cognitive interviews" (not to be confused with this use) and I generally recommend them when developing novel survey instruments. Readers could find out more about them here.
Testers are good at identifying flaws, but bad at proposing improvements... I'm told that this mirrors common wisdom in UI/UX design: that beta testers are good at spotting areas for improvement, but bad (or ov
Would it be helpful to put some or all of the survey data on data visualisation software like Google Data Studio or similar? This would allow regional leaders to quickly understand their country/city data and track trends. It might also save time by reducing the need to do so many summary posts every year, and provide new graphs on request.
We are thinking about putting a lot more analyses on the public bookdown next year, rather than in the summaries, which might serve some of this function. As you'll be aware, it's not that difficult to generate th... (read more)
~15 months isn't necessarily a target for the future. I think we could actually increase the gap to ~1.5 years going forward. But yes, the reasons for that would be to get the best balance between getting more repeated measurements (which increases the power, loosely speaking, of our estimates), being able to capture meaningful trends (looking at cross-year data, most things don't seem to change dramatically in the course of only 12 months), and reducing survey fatigue. That said, whatever the average frequency of the survey going forward, I expect there to ... (read more)
Thanks for the question. We're planning to release the next EA Survey sometime in the middle of 2022. Historically, the average length of time between EA Surveys has been ~15 months, rather than every 12 months, and last year's survey was run right at the end of the year, so there won't be a survey within 2021 (the last time this happened was 2016).
That makes sense. Reference numbers even for things like race are surprisingly tricky. We've previously considered comparing the percentages for race within the EA Survey to baseline percentages. But although this works passably well for the US (EAS respondents are more white) and the UK (EAS respondents are less white), without taking into account the fact that EAS respondents are disproportionately rich, highly educated and young and therefore should not be expected to represent the composition of the general population, for many other major countries t... (read more)
Here are the countries with the highest number of EAs per capita. Note that Iceland, Luxembourg and Cyprus nevertheless have very low numbers of EA respondents (<5). This graph doesn't leave out any countries with particularly high numbers of EAs in absolute terms, though Poland and China are missing despite having >10.
We have reported this previously in both EAS 2018 and EAS 2019. We didn't report it this year because the per capita numbers are pretty noisy (at least among the locations with the highest EAs per capita, which tend to be low population countries). But it would be pretty easy to reproduce this analysis using this year's data.
To get another reference point I coded the "High Standards" comments and found that 75% did not seem to be about "perceived attitudes towards others." Many comments explicitly disavowed the idea that they think EAs look down on others, for example, but still reported that they feel bad because of demandingness considerations or because 'everyone in the community is so talented' etc.
Not sure about the jump from 2014 to 2015, I'd expect some combination of broader outreach of GWWC, maybe some technical issues with the survey data (?) and more awareness of there being an EA Survey in the first place?
I think the total number of participants for the first EA Survey (EAS 2014) are basically not comparable to the later EA Surveys. It could be that higher awareness in 2015 than 2014 drives part of this, but there was definitely less distribution for EAS2014 (it wasn't shared at all by some major orgs). Whenever I am comparing num... (read more)
I do think there are some similarities between all these points that I'd maybe categorise under "elitist" (although I don't want to because I think that term has different connotations for people). But perhaps something like "EAs are perceived as being better than non-EAs", and this is expressed in the items I mentioned.
I think there's something of a family resemblance, but that it still wouldn't be possible to categorise them all as one thing. For example, I don't think disliking "high standards", necessarily entails disliking a "perceived... (read more)
it seems like this could be read as a negative (e.g. people don't feel welcome by the existing community), while the latter sounds quite positive - people are happy with the way the community influences them and want more of it?
A lot of the ratings/comments were ambivalent in this way. This was in response to the question "Why did you give the two ratings [1-10] above?" rather than something like "Why did you give a positive/negative rating?" A lot of comments were of the form "The community is great, but it should do more..."
Mean satisfaction... (read more)
Thanks! This is interesting to see.
A few caveats/comments:
For "new to EA" and "peripheral": people would often say things like "I'm new to EA, so I don't really know" or "I'm only peripherally engaged with the community, so I don't really know" to explain their ratings.
"More community/influence" captured comments saying they wanted the EA community to do more, particularly involving becoming more of a community or a larger community or influencing people more.
Thanks for asking.
"Politics" included responses saying that EA was too woke/left and too capitalist (roughly twice as many in the former camp as the latter, but these are very small numbers so that ratio is inexact), and a very small number of mentions of there being too much politics or too little politics.
For this question, people could mention any number of things in principle, i.e. they could write literally anything they wanted, but each response was only coded as representing a single category that was thought to best reflect that comment.
... we seem to have surveyed a lot of people who were meaningfully affected by influences before mid-2017; on average, the people we surveyed say they first heard about EA/EA-adjacent ideas in 2015.
So I think there’s often a delay of 2-4 years between when a survey respondent first hears about EA/EA-adjacent ideas and when they start engaging in the kind of way that could lead our advisors to recommend them.
I think this is what one would predict given what we've reported previously about the relationship between time since joining EA and engagement (... (read more)
Thanks for the suggestion(s)! I was waiting for us to have more of the posts in the series out. But, now that there are only a couple left, I agree it's time.
Our respondents also look about 2x more likely than EA Survey respondents to have been introduced via a class at school, though I’m not sure how much of this is noise. (4% of our respondents gave this answer, vs. 2% for the EA Survey.)
For reference, I think the 95% CI for your figures would be about 2.1-7.7%, and 1.3-2.5% for the EAS.
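For anyone wanting to sanity-check intervals like these, here's a sketch using the Wilson score interval (the exact method behind the figures quoted above may differ, but it lands in the same ballpark for 4% of ~217 respondents):

```python
import math

def wilson_ci(p_hat, n, z=1.96):
    """Wilson score interval for a sample proportion (95% by default)."""
    denom = 1 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# 4% of ~217 Open Phil respondents; the EAS n is much larger, so its
# interval around 2% is far narrower.
lo, hi = wilson_ci(0.04, 217)
print(f"{lo:.1%} to {hi:.1%}")  # → 2.1% to 7.5%
```

The width of that interval is the point: with only ~217 respondents, a 4% vs 2% difference could easily be noise.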
Doing a quick eyeball of the two [OP and EAS] charts, they look pretty similar insofar as they’re comparable. “Peter Singer’s work” doesn’t appear, but it’s because they didn’t have that category — that would fall under
We don’t do significance testing or much other statistical analysis in this analysis... Our sense was that this approach was the best one for our dataset, where the n on any question was at maximum 217 (the number of respondents) and often lower, though we’re open to suggestions about ways to apply statistical analysis that might help us learn more.
Because you have so much within-subjects data (i.e. multiple datapoints from the same respondent), you will actually be much better powered than you might expect with ~200 respondents. For exam... (read more)
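One way to see why within-subjects data helps (a sketch with hypothetical numbers, not the post's actual data): when the same respondents rate multiple items, the standard error of a comparison between two items shrinks by a factor of sqrt(1 - rho), where rho is the correlation between a person's ratings:

```python
import math

# Illustrative values only (assumed, not taken from the post).
n = 200        # respondents
sigma = 1.0    # SD of a single rating
rho = 0.5      # assumed within-person correlation between two ratings

# Comparing two items rated by two separate groups vs the same people:
se_between = math.sqrt(2 * sigma**2 / n)             # independent groups
se_within = math.sqrt(2 * sigma**2 * (1 - rho) / n)  # same respondents

print(round(se_between, 3), round(se_within, 3))  # → 0.1 0.071
```

With rho = 0.5 the paired comparison needs half the sample to match the precision of an unpaired one, which is why ~200 respondents can go further than it first appears.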
Thanks for the post. It looks like useful data!
Definition of a highly engaged EA
CEA defines an engaged EA as someone who takes significant actions motivated by EA principles (we sometimes also use the term “impartially altruistic, truth-seeking principles”). In practice, this can look like selecting a job or degree program, donating a substantial portion of one’s income, working on EA-related projects, and so on. [italics added]
Just to clarify my understanding, are you defining/taking yourself to be looking at "highly engaged" EAs or just "engaged" EAs? For... (read more)
Do you have more information about how personal/family finance as a bottleneck for impact is to be understood?
Unfortunately, the majority of people's open comments didn't provide more detail beyond something like "financial constraints" or "low income." Among the minority of comments which did offer more detail, the specific thing most often mentioned was simply that people could donate more if they had more money. Freedom to explore different options, switch career, spend more time on high-impact work, and stress related to money were all mentioned by ~ a couple of people.
Yet, if Dominic Cummings’ word is anything to go by, the UK government still has a long way to go in terms of long-term policymaking.
Apologies if you already linked to this and I missed it, but Dominic Cummings is also writing a series about Singapore right now: https://dominiccummings.substack.com/p/high-performance-startup-government
I think the figures for highly engaged EAs working in Mental Health, drawn from EA Survey data, will be somewhat inflated by people who are working in mental health, but not in an EA-relevant sense e.g. as a psychologist. This is less of a concern for more distinctively EA cause areas of course.
Among people who, in EAS 2019, said they were currently working for an EA org, the normalised figures were only ~5% for Mental Health and ~2% for Climate Change (which, interestingly, is a bit closer to Ben's overall estimates for the resources going to those areas)... (read more)
If it seems worth it (i.e., more people than me care!), you could potentially add a closed ended 'other potential cause areas' item. These options could be generated from the most popular options in the prior year's open ended responses. E.g., you could have IIDM and S-risk as close ended 'other options' for that question next year
Yeh that seems like it could be useful. It's useful to know what kinds of things people find valuable, because space in the survey is always very tight.
I agree it's quite possible that part of this observed positive association between engagement and longtermism (and meta) and negative association with neartermism is driven by people who are less sympathetic to longtermism leaving the community. There is some evidence that this is a factor in general. In our 2019 Community Information post, we reported that differing cause preferences were the second most commonly cited reason for respondents' level of interest in EA decreasing over the last 12 months. This was also among the most commonly cited factors i... (read more)
I can't speak for others, but I don't think there's any specific theoretical conception of the categories beyond the formal specification of the categories (EA movement building, and Meta (other than EA movement building)). Other people might have different substantive views about what does or does not count as EA movement building, specifically. I think the pattern of results this year, when we split out these options, suggests that most respondents understood our historical "Meta" category to primarily refer to EA movement building. As noted, EA movement bu... (read more)
In future, I'd like to see changes in the 'other causes' over time and across engagement level, if possible. For instance, it would be interesting to see if causes such as IIDM or S-risk are becoming more or less popular over time, or are mainly being suggested by new or experienced EAs.
Yeh, I agree that would be interesting. Unfortunately, if we were basing it on open comment "Other" responses, it would be extremely noisy due to low n, as well as some subjectivity in identifying categories. (Fwiw, it seemed like people mentioning S-risk were a... (read more)
The context here was that we've always asked about "Meta" since the first surveys, but this year an org was extremely keen that we ask explicitly about "EA movement building" and separate out Meta which was not movement building.
In future years, we could well move back to just asking about Meta, or just ask about movement building, given that Meta (other than EA movement building) both received relatively low support and was fairly well correlated with movement building.
I think there's definitely something to this.
As is suggested by this report, even donors who are very proactive are often barely reflecting about where they should give at all. They are also often thinking about the charity sector in terms of very coarse-grained categories (e.g. my country/international charities, people/animal charities). On the other hand, they often are making sense of their donations in terms of causes and an implicit hierarchy of causes (including particular, personal commitments, such as to heart disease because a family mem... (read more)
Thanks! Incidentally, your comment just now prompted me to look at the cross-year cross-cohort data for this. Here we can see that in EAS 2019, there was a peak in podcast recruitment closer to 2016 (based on when people in EAS 2019 reported getting involved in EA). Comparing EAS 2019 to EAS 2020 data, we can see signs of dropoff among podcast recruits among those who joined ~2014-2017 (and we can also see the big spike in 2020).
These are most instructive when compared to the figures for other recruiters (since the percentage of a cohort recruite... (read more)
This is also reflected very clearly in EA Survey data.
Here's the breakdown of which specific podcasts people cited in EAS 2020, for where they first heard about EA.
You can also get a sense of the magnitude of Sam Harris' podcast compared to other things like Doing Good Better from looking at the total number of mentions across response categories. (Respondents were asked to first indicate where they first heard about EA from a list of broad categories like 'Book', 'Podcast', and then asked to provide further details (e.g. what book or podcast) in an ... (read more)