The short answer is simply that the vast majority of projects requested of us are highly time-sensitive (i.e. orgs want them completed on a very fast timeline), so we need to have the staff already in place if we're to take them on: it's not possible to hire staff in time to complete them, even when orgs are offering more than enough funding (e.g. 6 or 7 figures) to make it happen.
This is particularly unfortunate, since we want to grow our team to take on more of these projects, and have repeatedly turned down many highly skilled applicants who could do valuable work, exclusively due to lack of funding.
Still, I would definitely encourage people to reach out to us to see whether we have capacity for projects.
Thanks! It's a pity, because I'm a big fan of house plants, and the heavy blackout blinds I use prevent getting fresh air via windows at night, so this would have been convenient if true.
This seems very plausible to me. Personal connections repeatedly appear to be among the most important factors for promoting people's continued involvement in and increased engagement with EA (e.g. 2019, 2020).
That said, very few EAs appear to have any significant number of fellow EAs who they would "feel comfortable reaching out to to ask for a favor" (an imperfect proxy for "friend" of course).
Anecdotally, EAs I speak to are usually surprised by how low these numbers are. (These are usually highly engaged EAs with lots of connections, who therefore likely have a...
I think (though, again, I only read it quickly) the paper includes estimates for both carbon sequestered and biomass growth for the plants.
I believe the plant which they used as the reference for that rough figure above increased in biomass by 132.5g but sequestered 56.4g carbon over several weeks, and the 0.8g carbon fixed per day comes from that latter figure.
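As a quick sanity check on my reading of those figures (which may not match the paper exactly), the 0.8g/day rate is consistent with the 56.4g total if the experiment ran for roughly ten weeks:

```python
# Rough consistency check on the sequestration figures quoted above.
# Both numbers are from my reading of the paper and may be slightly off.
total_carbon_g = 56.4   # carbon sequestered over the whole experiment
daily_carbon_g = 0.8    # quoted rate of carbon fixed per day

implied_days = total_carbon_g / daily_carbon_g
print(f"Implied duration: {implied_days:.1f} days (~{implied_days / 7:.0f} weeks)")
```

That's "several weeks" on a generous reading, so the two figures at least hang together.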
one study found several common houseplants reduced CO2 concentration in a room by 15-20%. It seems reasonable to assume that placing several houseplants in a room would significantly increase this effect, though likely with diminishing returns.
Unfortunately, I am quite a bit less optimistic about this. (Caveat: I only looked into this very briefly)
From quickly looking at the conference paper you cite, it seemed that the plants were in 1 cubic meter chambers and reduced CO2 by ~50-100ppm for the most part (~5-12% reduction) based on table ...
I believe it's to mean "vegetarian or vegan", rather than to censor "vegan."
we found a relatively weak correlation between what we call "expansive altruism" (willingness to give resources to others, including distant others) and "effectiveness-focus" (willingness to choose the most effective ways of helping others)
I don’t think we can infer too much from this result about this question.
The first thing to note, as observed here, is that taken at face value, a correlation of around 0.243 is decently large, both relative to other effect sizes in personality psychology and in absolute terms.
However, more broadly, measures ...
Thanks for the comment!
Is this survey going to be run again? It seems there wasn't a 2021 survey? (Or at least the results aren't published yet.)
Yeah, as I noted in my reply to your earlier comment, there wasn't an EA Survey run in 2021, but we are planning to run one this year. (That is assuming that you are referring to the EA Survey, not the Groups Survey).
Thanks for your suggested questions as well. Unfortunately, space is very limited in the EA Survey, and there are a lot of requests from other orgs, so it may not be possible to add any new questions...
Many thanks! This will make the Forum a lot more usable for me.
The biggest disaster of the last 30 years has been the adoption of horizontalist dogma. The notion that you should not have leaders, hierarchies or clear structures.
I think this is a fairly common/prominent concern in left circles e.g. The Tyranny of Structurelessness.
Thanks for your comment! (Just to clarify, this is a post about our separate EA Groups survey, but I assume you're asking about the EA Survey).
The EA Survey is distributed through a variety of different channels or 'referrers' (including e-mails and social media from the main EA orgs, the EA Forum, e-mailing past survey takers, and local groups). The vast majority of responses come from a relatively small number of those referrers though (80,000 Hours, EA Forum, Local Groups, e-mail to past respondents and the EA newsletter being the main ones). You can se...
Thanks for your question. We've addressed this in a number of different ways over the years.
In 2020 we asked respondents how far they agreed with an explicit statement of longtermism. Agreement was very high (68.5% agreement vs 17.7% disagreement).
Responses to this (like our measures of longtermism and neartermism reported above and here) vary across engagement levels (see below). Since we sample a larger proportion of highly engaged people than less engaged people, the true number of people not agreeing with longtermism in the broader community is l...
Thanks for the reply!
We didn't dwell on the minimum plausible number (as noted above, the main thrust of the post is that estimates should be lower than previous estimates, and I think a variety of values below around 2% are plausible).
That said, 0.1% strikes me as too low, since it implies a very low ratio between the number of people who've heard of EA and the number of moderately engaged EAs. i.e. this seems to suggest that for every ~50 people who've heard of EA (and basically understand the definition) there's 1 person who's moderately engaged w...
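To make that implied ratio concrete, here's a rough back-of-the-envelope sketch. The population and engaged-EA figures below are illustrative assumptions of mine, not numbers from the post:

```python
# Illustrative back-of-the-envelope: what a 0.1% "heard of EA" rate implies.
# Both inputs are rough assumptions for illustration only.
adult_population = 258_000_000      # assumed US adult population
heard_of_ea_rate = 0.001            # the 0.1% figure under discussion
moderately_engaged_eas = 5_000      # assumed number of moderately engaged EAs

heard_of_ea = adult_population * heard_of_ea_rate
ratio = heard_of_ea / moderately_engaged_eas
print(f"~{heard_of_ea:,.0f} people who've heard of EA, "
      f"i.e. ~{ratio:.0f} per moderately engaged EA")
```

Under these assumptions the ratio comes out at roughly 50:1, which is the figure under discussion.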
Thanks for spotting! Edited.
I find the proportion of people who have heard of EA even after adjusting for controls to be extremely high. I imagine some combination of response bias and just looking up the term is causing overestimation of EA knowledge.
Just so I can better understand where and the extent to which we might disagree, what kind of numbers do you think are more realistic? We make the case ourselves in the write-up that, due to over-claiming, we would generally expect these estimates to err on the side of over-estimating those who have heard of and have a rough f...
I think something like 0.1% of the population is a more accurate figure for how you coded the most strict category. 0.3% for the amount I would consider to have actually heard of the movement. These are the figures I would have given before seeing the study, anyway.
It's hard for me to point to specific numbers that have shaped my thinking, but I'll lay out a bit of my thought process. Of the people I know in person through non-EA means, I'm pretty sure not more than a low-single-digit percent know about EA, and this is a demographic that is way more likely...
Peter Singer seems to be higher profile than the other EAs on your list. How much of this do you think is from popular media, like The Good Place, versus from just being around for longer?
Interesting question. It does seem clear that Peter Singer is known more broadly (including among those who haven’t heard of EA, and for some reasons unrelated to EA). It also seems clear that he was a widely known public figure well before ‘The Good Place’ (it looks like he was described as “almost certainly the best-known and most widely read of all contempo...
For considering "recruitment, retention, and diversity goals" I think it may also be of interest to look at cause preferences across length of time in EA, across years. Unlike in the case of engagement, we have length of time in EA data across every year of the EA Survey, rather than just two years.
Although EAS 2017 messes up what is otherwise a beautifully clear pattern*, we can still quite clearly see that:
Fwiw, my intuition is that EA hasn't been selecting against, e.g. good epistemic traits historically, since I think that the current community has quite good epistemics by the standards of the world at large (including the demographics EA draws on).
I think it could be the case that EA itself selects strongly for good epistemics (people who are going to be interested in effective altruism have much higher epistemic standards than the world at large, even matched for demographics), and that this explains most of the gap you observe, but also that some action...
Thanks for the nice comment!
Do you have data on the trends over time? I’m interested to know if the three attributes are getting closer together or further apart at both ends of the engagement spectrum.
We only have a little data on the interaction between engagement and cause preference over time, because we only had those engagement measures in the EA Survey in 2019 and 2020. We were also asked to change some of the cause categories in 2020 (see Appendix 1), so comparisons across the years are not exact.
Still, just looking at differences between those two...
EA is less focused on longtermism than people might think based on elite messaging. IIRC this is affirmed by past community surveys
This is somewhat less true when one looks at the results across engagement levels. Among the less engaged ~50% of EAs (levels 1-3), neartermist causes are much more popular than longtermism. For level 4/5 engagement EAs, the average ratings of neartermist, longtermist and meta causes are roughly similar, though with neartermism a bit lower. And among the most highly engaged EAs, longtermist and meta causes are dramaticall...
This is a really helpful chart and has updated my model of the community more than any of the written comments.
For a community of data nerds, it’s surprising that we don’t use data visualisations in our Forum comments more regularly.
My hypothesis is that the attributes will be getting closer together at low levels of engagement and getting further apart at the higher levels.
Back when LEAN was a thing we had a model of the value of local groups based on the estimated # of counterfactual actively engaged EAs, GWWC pledges and career changes, taking their value from 80,000 Hours' $ valuations of career changes of different levels.
The numbers would all be very out of date now though, and the EA Groups Surveys post 2017 didn't gather the data that would allow this to be estimated.
I also agree this would be extremely valuable.
I think we would have had the capacity to do difference-in-difference analyses (or even simpler analyses of pre-post differences in groups with or without community building grants, full-timer organisers etc.) if the outcome measures tracked in the EA Groups Survey were not changed across iterations and, especially, if we had run the EA Groups Survey more frequently (data has only been collected 3 times since 2017 and was not collected before we ran the first such survey in that year).
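For illustration, the basic difference-in-differences calculation referred to above is just a double subtraction: the change in grantee groups minus the change in non-grantee groups. A minimal sketch with entirely hypothetical group-survey numbers:

```python
# Minimal difference-in-differences sketch with hypothetical outcome data
# (e.g. mean counterfactual engaged EAs per group, before/after a grant).
# All four figures below are made up for illustration.
grant_pre, grant_post = 4.0, 7.0        # groups that received grants
control_pre, control_post = 3.5, 4.5    # groups that did not

did_estimate = (grant_post - grant_pre) - (control_post - control_pre)
print(f"Difference-in-differences estimate: {did_estimate:+.1f}")
```

The point of the double subtraction is to net out the background trend that affects all groups, isolating the change plausibly attributable to the grant, which is why consistent pre/post outcome measures across survey iterations matter so much.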
One other thing I'd flag is that, although I think it's very plausible that there is a cross-over interaction effect (such that people who are predisposed to be positively inclined to EA prefer the "Effective Altruism" name and people who are not so predisposed prefer the "Positive Impact" name), it doesn't sound like the data you mention necessarily suggests that.
i.e. (although I may be mistaken) it broadly sounds like you asked people beforehand (many of whom liked PISE) and you later asked a different set of people who alrea...
Taking the question literally, searching the term ‘social justice’ in EA forum reveals only 12 mentions, six within blog posts, and six comments...I worry EA is another exclusive, powerful, elite community, which has somehow neglected diversity.
I think it's worth distinguishing discussions of "social justice" from discussions of "diversity." Diversity in EA has been much discussed, and there is also a whole facebook group dedicated to it. There has been less discussion of "social justice" in those terms, partly, I suspect, because it's not natural fo...
Yes, it would be easy and natural to include measures of EA inclination when "examin[ing] the effects of different characteristics of the students", which I mentioned.
Thanks for running this experiment!
It seems like this would be relatively easy to test with an online experiment using a student-only sample.
This would have the advantage that we could test the effect of the different names without experimenting on an actual EA group by changing its name. On the other hand, this might miss any factors particular to that specific group of students (if there are any such factors), though the larger sample size this would allow would make it possible to examine the effects of different characteristics of the students or the university they attend. It would also allow us to test multiple additional names at the same time.
There are two broad reasons why I would prefer the ACSI items (considered individually) over the NPS (style) item:
This depends on what you are trying to measure, so I’ll start with the context in the EAS, where (as I understand it) we are trying to measure general satisfaction with or evaluation of the EA community.
Here, I think the ACSI items we used (“How well does the EA community compare to your ...
Cool! Glad to see this, I've been harping on about the NPS for some time (1, 2, 3, 4).
We usually do this because we don’t want to take people’s time up by asking three questions. I haven’t done a very rigorous analysis of the trade-offs here though, and it could be that we are making a mistake and should use ACSI instead.
As you may have considered, you could ask just one of the ACSI items, rather than asking the one NPS item. This would have lower reliability than asking all three ACSI items, but I suspect that one ACSI item would have higher validity than...
I think this post mostly stands up and seems to have been used a fair amount.
Understanding roughly how large the EA community is seems moderately important, so I think this analysis falls into the category of 'relatively simple things that are useful to the EA community but which were nevertheless neglected for a long while'.
One thing that I would do differently if I were writing this post again is that I think I was under-confident about the plausible sampling rates, based on the benchmarks that we took from the community. I think I was understandably un...
It seems plausible that we should assign weight to what past generations valued (though one would likely not use survey methodology to do this), as well as what future generations will value, insofar as that is knowable.
Summary: I think the post mostly holds up. The post provided a number of significant, actionable findings, which have since been replicated in the most recent EA Survey and in OpenPhil’s report. We’ve also been able to extend the findings in a variety of ways since then. There was also one part of the post that I don’t think holds up, which I’ll discuss in more detail.
The post highlighted (among other things):
Fwiw, I think that both moral uncertainty and non-moral epistemic uncertainty (if you'll allow the distinction) suggest we should assign some weight to what people say is valuable.
Thanks for collating these different ideas! Fwiw, I think that it might be better if you were to simply drop the "Strength of effect: %" column from your sheet and not rank interventions according to this.
As an earlier commenter pointed out, this is comparing "% reductions" in very different things (e.g. percentage reduction in cortisol vs percentage change in stress scores). But it also seems like this is going to be misleading in a variety of other ways. As far as I can tell, it's not only comparing different metrics for different interventions...
There's been a fair amount of discussion of this in the academic literature e.g. https://www.diva-portal.org/smash/get/diva2:1194016/FULLTEXT01.pdf and https://www.frontiersin.org/articles/10.3389/fpsyg.2018.00699/full
We have a sense of this from questions we asked before (though most recently in 2019, so they don't tell us whether there's been a change since then).
At that point 36.6% of respondents included EA non-profit work (i.e. working for an EA org) in their career plans. It was multiple select, so their plans could include multiple things, but it seems plausible that often EA org work is people's most preferred career and other things are backups.
At that time 32% of respondents cited too few job opportunities as a barrier to their involvement in EA. This...
Not that many people respond to surveys, so the total EA population is probably higher than 2k, but it's difficult to say how much higher.
We give an estimate of the total population engaged at levels 3-5/5 here, which suggests ~2700 (2300 to 3100) at the highest levels of engagement (5000-10000 overall).
We then estimate that the numbers of the most engaged have increased by ~15% between 2019 and 2020 (see the thread with Ben Todd here and the discussion in his EA Global talk).
This suggests to me there are likely 3000 or more highly engaged EAs at pres...
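The arithmetic behind that figure is just applying the ~15% growth rate to the 2019 point estimate and its bounds (a rough extrapolation, not a precise model):

```python
# Rough extrapolation: apply the estimated ~15% growth between 2019 and 2020
# to the 2019 point estimate and its rough interval, as quoted above.
estimate_2019 = 2_700           # highly engaged EAs, point estimate
bounds_2019 = (2_300, 3_100)    # rough interval from the model

growth = 1.15                   # ~15% growth from 2019 to 2020
point_2020 = estimate_2019 * growth
low_2020, high_2020 = (b * growth for b in bounds_2019)
print(f"2020 point estimate: ~{point_2020:,.0f} "
      f"(range ~{low_2020:,.0f} to ~{high_2020:,.0f})")
```

The point estimate lands just above 3,100, which is where the "3000 or more" figure comes from.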
Can I just ask what WAW stands for? Google is only showing me writing about writing, which doesn't seem likely to be it...
"WAW" = Wild Animal Welfare (previously often referred to as "WAS" for Wild Animal Suffering).
And how often does RP decide to go ahead with publishing in academia?
I'd say a small minority of our projects (<10%).
Thanks for asking. We've run around 30 survey projects since we were founded. When I calculated this in June, we'd run a distinct survey project (each containing between 1 and 7 surveys), on average, every 6 weeks.
Most of the projects aren't exactly top secret, but I err on the side of not mentioning the details or who we've worked with unless I'm certain the orgs in question are OK with it. Some of the projects, though, have been mentioned publicly, but not published: for example, CEA mentioned in their Q1 update that we ran some surveys for them t...
One major factor that makes some research questions more suited to academia is requiring technical or logistical resources that would be hard to access or deploy in a generalist EA org like RP (some specialist expertise also sometimes falls into this category). Much WAW research is like this, in that I don't think it makes sense for RP to be trying to run large-scale ecological field studies.
Another major factor is if you want to promote wider field-building or you want the research to be persuasive as advocacy to certain audiences in the way that sometime...
I'm particularly glad you note this since the survey team's research in particular is almost exclusively non-public research (basically the EA Survey and EA Groups Survey are the only projects we publish on the Forum), so people understandably get a very skewed impression of what we do.
Regarding magnesium, the specific supplement you happened to link to was magnesium oxide. There's some evidence that magnesium oxide is less bioavailable than other forms of magnesium (1, 2, 3). It's true that magnesium oxide is cheaper, but magnesium citrate is still exceptionally cheap (a few pence per dose). So, even if you are uncertain about the benefits of other forms over oxide, I think it's still probably reasonable to err in favour of these other forms.
I think it's also worth thinking about glycine. There are a few papers suggesting that glycine impr...
Thanks! It's cool they have done a study on the 'full-room' approach.
I think full-room approaches are worth people looking into, but it's worth noting that they are usually less bright than using SAD lamps (and this goes for the setup described in the pre-print too). As noted in the pre-print, the bulbs put out more light, but because you are usually much further away from the lightbulbs distributed around the room than you would be from a light box on your desk, the mean illuminance at eye level was 1433-1829 lux. By comparison, I have three of the light boxe...
The simplest method is to purchase a SAD lamp that emits 10,000 lux and place this on your desk, maximising exposure while working. However, the light levels received from a SAD lamp can decrease significantly if placed too far from the face, while the lamp’s light offers minimal benefit when doing non-desk based activities.
This really bears emphasising, since most SAD lamps (accurately) marketed as "10,000 lux" are 10,000 lux only at distances much shorter than most people might expect or might be able to achieve with their desk setup (see Sco...
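A rough way to see why distance matters so much: treating the lamp as approximating a point source, illuminance falls off with the square of distance. This is a simplification (real SAD panels are extended sources, so the falloff is somewhat gentler at close range), and the rating distance below is an assumption; check your lamp's manual for the actual figure:

```python
# Inverse-square sketch of lux falloff with distance. Simplified: treats
# the lamp as a point source. rated_distance_m is an assumption; many
# lamps are rated at roughly 20-30cm.
rated_lux = 10_000
rated_distance_m = 0.25

def approx_lux(distance_m: float) -> float:
    """Approximate illuminance at a given distance via the inverse-square law."""
    return rated_lux * (rated_distance_m / distance_m) ** 2

for d in (0.25, 0.5, 0.75, 1.0):
    print(f"{d:.2f} m: ~{approx_lux(d):,.0f} lux")
```

Under these assumptions, doubling the distance to 0.5m already cuts the illuminance to roughly a quarter of the rated figure, which is consistent with room-scale setups measuring well under 2,000 lux at eye level.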
A useful comparison point might be how many EAs are members of a local group. A priori, one might think that being a member of an in-person group is a higher bar/more demanding than being a member of the EA Forum, but historically that has not been the case.
One other thing that may be of interest is that we don't see much of a difference in the increase in EA group and EA Forum membership between EAS 2019 and EAS 2020. But, apparently, there's been a lot more support for groups too, so perhaps that's not surprising. (One other possible thing of note (not sh...
Among respondents to the EA Survey, in 2020, 38% of respondents were EA Forum members. In 2019 it was 30%. In 2018 it was 20%.
Those numbers are doubtless inflated though, because EA Forum members (a very disproportionately highly engaged group: >80% are levels 4-5 out of 5 in self-reported engagement) are more likely to take the survey. The question is how many less engaged (who are less likely to be on the Forum) there are, which is less easy to estimate, although there is a model in this post.
Past surveys (e.g. Open Phil’s survey) suggest that connections between individuals are the key source of impact from our events. So we focus on the number of new connections we make at our events.
I'd be curious which survey result you're thinking of here. Aside from a couple of qualitative responses, I don't remember a question in the OP survey that I would think addresses this.
To my recollection (which may be mistaken) the OP survey didn't include the question which more explicitly addresses this, which the EA Survey did.
See the questi...
I think it depends a lot on the specifics of your survey design. The most commonly discussed tradeoff in the literature is probably that having more questions per page, as opposed to more pages with fewer questions, leads to higher non-response and lower self-reported satisfaction, but people answer the former more quickly. But how to navigate this tradeoff is very context-dependent.
All in all, the optimal number of items per screen requires a trade-off: More items per screen shorten survey time but reduce data quality (item nonresponse) and respondent
I think all of the following (and more) are possible risks:
- People are tired/bored and so answer less effortfully/more quickly
- People are annoyed and so answer in a qualitatively different way
- People are tired/bored/annoyed and so skip more questions
- People are tired/bored/annoyed and dropout entirely
Note that people skipping questions/dropping out is not merely a matter of quantity (reduced numbers of responses), because the dropout/skipping is likely to be differential. The effect of the questions will be to lead to precisely those respondents ...
Thanks for the post. I think most of this is useful advice.
"Walkthroughs" are a good way to improve the questions
In the academic literature, these are also referred to as "cognitive interviews" (not to be confused with this use) and I generally recommend them when developing novel survey instruments. Readers could find out more about them here.
Testers are good at identifying flaws, but bad at proposing improvements... I'm told that this mirrors common wisdom in UI/UX design: that beta testers are good at spotting areas for improvement, but bad (or ov
Would it be helpful to put some or all of the survey data on a data visualisation tool like Google Data Studio or similar? This would allow regional leaders to quickly understand their country/city data and track trends. It might also save time by reducing the need to do so many summary posts every year, and allow new graphs to be provided on request.
We are thinking about putting a lot more analyses on the public bookdown next year, rather than in the summaries, which might serve some of this function. As you'll be aware, it's not that difficult to generate th...