A few months ago, before the results of the 2018 EA Survey came out, I asked whether people would be interested in making predictions about the answers, and 35 people completed a survey of the survey. I think the majority of respondents came from the group organisers' group on Facebook, so they probably have a better idea than average of what the EA community looks like. Hopefully, by testing our predictions we can improve our calibration.
For each question, respondents were asked to give lower and upper bounds such that they believed there was an 80% chance the true answer fell between them. All questions except the first had answers expressed as a percentage between 0 and 100.
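As an illustration, a prediction counts as correct when the reported survey result falls inside the respondent's interval. Here is a minimal sketch of that check; the data format and example numbers are illustrative, not the actual survey export:

```python
# Minimal sketch of checking interval predictions against survey results.
# The data format and numbers here are made up for illustration; this is
# not the actual GuidedTrack export.

def coverage(predictions, results):
    """Fraction of a respondent's intervals containing the true answer.

    predictions: question id -> (lower, upper) bounds, in percent
    results:     question id -> reported survey answer, in percent
    """
    hits = sum(1 for q, (lo, hi) in predictions.items()
               if lo <= results[q] <= hi)
    return hits / len(predictions)

# A well-calibrated respondent should hit roughly 80% over many questions.
predictions = {"male": (60, 75), "global_poverty": (35, 55)}
results = {"male": 67, "global_poverty": 66}
print(coverage(predictions, results))  # 0.5: one of the two intervals hit
```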
Here is the program, run on GuidedTrack. If you want to see the raw results data for the survey of the survey, you can email me at david@ealondon.com.
A few things that might be interesting:
- Answers were generally overconfident: on average, people got 60% of answers inside their intervals while aiming for 80%, and only 2 of the 35 respondents were underconfident
- The question with the most correct responses was the proportion of respondents identifying as male; 91% got this one right
- Most respondents predicted there would be many more people describing themselves as politically centrist, right, or far right than the results suggested
- It looks like most people thought the individual cause areas would be less popular, or expected more exclusivity between the choices, rather than respondents selecting multiple causes as a top or near-top priority. For example, the median prediction for global poverty was 45%, whereas in the 2018 EA Survey 66% said it was a top or near-top priority; the other selected causes also had predictions well below the actual results
- Unsurprisingly, people with wider intervals generally got more answers correct. I haven't made a score combining interval width and accuracy, but that would probably give a better idea of how well individuals are calibrated; one option is sketched after this list
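One standard way to combine width and accuracy into a single number is the interval score (sometimes called the Winkler score), which adds a penalty of 2/α times the distance by which the true value misses the interval, so narrow intervals only pay off when they actually contain the answer. A sketch of that rule; the function and variable names are mine:

```python
# Sketch of the interval score for a central (1 - alpha) interval: the
# interval's width plus (2 / alpha) times any amount by which the true
# value falls outside it. Lower scores are better.

def interval_score(lo, hi, truth, alpha=0.2):
    """alpha = 0.2 corresponds to the 80% intervals used in this survey."""
    score = hi - lo                        # narrow intervals score better...
    if truth < lo:
        score += (2 / alpha) * (lo - truth)    # ...unless they miss low
    elif truth > hi:
        score += (2 / alpha) * (truth - hi)    # ...or miss high
    return score

# A narrow miss scores far worse than a wide hit:
print(interval_score(40, 50, 66))  # 10 + 10 * 16 = 170
print(interval_score(30, 70, 66))  # 40 (true value inside the interval)
```

Averaging this over all of a respondent's questions would reward being both accurate and precise, rather than rewarding wide intervals alone.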
The chart below shows the percentage of correct answers and the average interval width for each of the 35 participants.
Here is a table showing, for each question, what percentage of the 35 respondents gave a correct answer.
Here is a table comparing the median midpoint prediction for each question with the correct answer. A positive difference suggests people thought the answer would be higher than it was, and a negative difference suggests they thought it would be lower (on average).
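For concreteness, here is how the differences in that table could be computed. The predictions below are made up; only the 66% global poverty result comes from the survey:

```python
# Sketch of the midpoint comparison: median of respondents' interval
# midpoints for a question, minus the actual survey result. Positive
# means predictions ran high; negative means they ran low.
from statistics import median

def median_midpoint(intervals):
    return median((lo + hi) / 2 for lo, hi in intervals)

# Hypothetical global poverty intervals from three respondents:
preds = [(30, 60), (40, 50), (35, 55)]
print(median_midpoint(preds) - 66)  # -21.0: predictions ran low vs 66%
```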
Do your "correct answer" numbers account for the people who put something like "no answer" or "prefer not to answer"?
I'd guess that most survey respondents were actually guessing something like "percentage of people who give an answer, and for whom the answer is X", even if they were supposed to be guessing "percentage of all people who answer X".
"Correct answer" is maybe not the best wording, it means the answer that Rethink Charity used to describe their results, and is also consistent with how they reported the results in previous years. I should have pointed this out at the beginning of the survey.
On the political questions, that difference is also shifted by the large number of respondents supporting libertarian or other political views.