The 2022 EA Survey is now live at the following link: https://rethinkpriorities.qualtrics.com/jfe/form/SV_1NfgYhwzvlNGUom?source=eaforum
We appreciate it when EAs share the survey with others. If you would like to do so, please use this link (https://rethinkpriorities.qualtrics.com/jfe/form/SV_1NfgYhwzvlNGUom?source=shared) so that we can track where our sample is recruited from.
We originally planned to leave the survey open until December 1st, though we noted we might extend the window, as we did last year. Update: the deadline for the EA Survey has now been extended until December 31st, 2022.
What’s new this year?
- The EA Survey is substantially shorter. Our testers completed the survey in 10 minutes or less.
- We worked with CEA to make it possible for some of your answers to be pre-filled with your previous responses, to save you even more time. At present, this is only possible if you took the 2020 EA Survey and shared your data with CEA, because your responses are identified using your EffectiveAltruism.org log-in. In future years, we may be able to email you a custom link that lets you pre-fill, or simply skip, questions you have answered before, whether or not you share your data with CEA; you can opt in to this in this year’s survey.
Why take the EA Survey?
The EA Survey provides valuable information about the EA community and how it is changing over time. Every year, the survey is used to inform the decisions of a number of different EA orgs. And, despite the survey being much shorter, this year we have included requests from a wider variety of decision-makers than ever before.
Prize
This year the Centre for Effective Altruism has, again, generously donated a prize of $1000 USD that will be awarded to a randomly selected respondent to the EA Survey, for them to donate to any of the organizations listed on EA Funds. Please note that to be eligible, you need to provide a valid e-mail address so that we can contact you.
Thanks! We think about this a lot. We have previously discussed this and conducted some sensitivity testing in this dynamic document.
The difficulty here is that it doesn't seem to be possible to truly randomly sample from the EA population. At best, we could randomly sample from some narrower frame (e.g. people on the main newsletter mailing list, or EA Forum users), but these groups are unlikely to be representative of the broader community. In the earliest surveys, we also reported results from a random sample drawn from the main EA Facebook group. However, these days the membership of the EA Facebook group seems quite clearly unrepresentative of the broader community, so the value of replicating this seems lower.
The more general challenge is that no one knows what a representative sample of the EA community should look like (i.e. what the true composition of the EA population is). This is in contrast to general population samples, where we can weight results relative to the composition found in the US census. I think the EA Survey itself is the closest thing we have to such a source of information.
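To make the weighting point concrete: if we did know the true group composition, we could post-stratify, i.e. reweight each respondent so that group shares in the sample match the known population shares. This is a minimal illustrative sketch (all group labels and numbers are made up, not real survey data):

```python
# Hypothetical sketch of post-stratification weighting: reweight a sample so
# that group shares match known population shares (e.g. census proportions).

def poststratification_weights(sample_groups, population_shares):
    """Return a per-respondent weight for each group label."""
    n = len(sample_groups)
    sample_shares = {g: sample_groups.count(g) / n for g in set(sample_groups)}
    return {g: population_shares[g] / sample_shares[g] for g in sample_shares}

def weighted_mean(values, groups, weights):
    total_w = sum(weights[g] for g in groups)
    return sum(v * weights[g] for v, g in zip(values, groups)) / total_w

# Toy example: two age groups, with younger respondents oversampled.
groups = ["18-34"] * 3 + ["35+"] * 1        # sample is 75% / 25%
answers = [7, 8, 9, 4]                      # some 1-10 rating
pop = {"18-34": 0.5, "35+": 0.5}            # assumed "true" population shares

w = poststratification_weights(groups, pop)
print(round(weighted_mean(answers, groups, w), 2))  # 6.0, vs unweighted 7.0
```

The catch, as above, is that for the EA population there is no census-like source that tells us what `pop` should be.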
That said, I don't think we are completely in the dark when it comes to assessing representativeness. We can test particular concerns about potential sources of unrepresentativeness in the sample (and have done so since the first EA Survey). For example, if one is concerned that the survey draws a disproportionate number of respondents from particular sources (e.g. LessWrong), we can assess how the respondents drawn from those sources differ, and how the results change when they are excluded. Last year, for example, we examined how the reported importance of 80,000 Hours changed if we excluded all respondents referred to the survey from 80,000 Hours, and still found it to be very high. We can run more sophisticated robustness and sensitivity checks on request; I'd encourage people to reach out if they are interested in particular results, so we can see what is possible on a case-by-case basis.
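The exclusion check described above amounts to computing a statistic twice: once on the full sample, and once with respondents referred from the source in question removed. A rough sketch (field names and the toy data are hypothetical, not our actual survey schema):

```python
# Hypothetical sketch of a source-exclusion robustness check: compare a
# statistic on the full sample against the sample with respondents referred
# from a given source removed. If the two are close, the result is robust
# to over-recruitment from that source.

def mean_excluding_source(respondents, field, excluded_source=None):
    vals = [r[field] for r in respondents
            if excluded_source is None or r["referral_source"] != excluded_source]
    return sum(vals) / len(vals)

# Toy data: a 1-5 importance rating plus where each respondent was recruited.
respondents = [
    {"referral_source": "80000hours", "importance_80k": 5},
    {"referral_source": "80000hours", "importance_80k": 4},
    {"referral_source": "eaforum",    "importance_80k": 4},
    {"referral_source": "newsletter", "importance_80k": 5},
]

full = mean_excluding_source(respondents, "importance_80k")
robust = mean_excluding_source(respondents, "importance_80k", "80000hours")
print(full, robust)  # 4.5 4.5 on this toy data: the rating survives exclusion
```

In practice the same pattern extends to any subgroup one is worried about, not just referral source.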