The 2022 EA Survey is now live at the following link: https://rethinkpriorities.qualtrics.com/jfe/form/SV_1NfgYhwzvlNGUom?source=eaforum
We appreciate it when EAs share the survey with others. If you would like to do so, please use this link (https://rethinkpriorities.qualtrics.com/jfe/form/SV_1NfgYhwzvlNGUom?source=shared) so that we can track where our sample is recruited from.
We originally planned to leave the survey open until December 1st, while noting that we might extend the window, as we did last year. Update: the deadline for the EA Survey has now been extended to December 31st, 2022.
What’s new this year?
- The EA Survey is substantially shorter. Our testers completed the survey in 10 minutes or less.
- We worked with CEA to make it possible for some of your answers to be pre-filled with your previous responses, to save you even more time. At present, this is only possible if you took the 2020 EA Survey and shared your data with CEA, because your responses are identified using your EffectiveAltruism.org log-in. In future years, we may be able to email you a custom link that would allow you to pre-fill, or simply skip, questions you have answered before, whether or not you share your data with CEA; this year's survey includes an option to opt in to this.
Why take the EA Survey?
The EA Survey provides valuable information about the EA community and how it is changing over time. Every year the survey is used to inform the decisions of a number of different EA orgs. And although the survey is much shorter this year, it includes requests from a wider variety of decision-makers than ever before.
Prize
This year the Centre for Effective Altruism has again generously donated a $1,000 (USD) prize, which will be awarded to a randomly selected respondent to the EA Survey, for them to donate to any of the organizations listed on EA Funds. Please note that to be eligible, you need to provide a valid e-mail address so that we can contact you.
Thanks for your comment. A lot of the questions are verbatim requests from other orgs, so I can't speak to the exact reasons behind each design choice. Another commenter is also correct to mention the rationale of keeping questions the same across years (some of these date back to 2014), even if the phrasing isn't what we would use now. There are also other practical considerations, such as wanting to compare results to surveys that other orgs have already run themselves.
That said, I'm happy to defend the claim that allowing respondents to select only a single option is often better than allowing them to select any number of boxes. People (i.e. research users) are often primarily interested in the _most_ important or _primary_ factors for respondents on a given question, rather than in all factors, and with a 'select all' format one loses the information about which factors matter most. Ideally, one could use a format that captures the relative importance of each selected factor, as well as which factors are selected. For example, in previous surveys we've asked respondents to rate the degree of importance of each factor, as well as to indicate which factors they did not have a significant interaction with. But the costs here are very high: answering one of these questions takes longer and is more cognitively demanding than answering multiple simpler questions. So, given significant practical constraints (keeping the survey short while including as many requests as possible), we often have to use simpler, quicker question formats.
Regarding politics specifically, I would note that asking about politics on a single scale is exceptionally common (I'd also add that a single-select format for religion is very standard, e.g. in the CES). I don't see this as implying that individuals hold a single "simple political identity or political theory." The one wrinkle in our measure is that 'libertarian' is also included as a distinct category (this dates back to requests in 2014-2015 and, as mentioned above, the considerations in favour of keeping questions consistent across years are quite strong). Ideally, we would split this out so that we have (at least) one scale, plus a distinct question capturing libertarian alignment or more fine-grained positions. But there are innumerable other questions we'd prioritise over getting more nuanced political alignment data.