Last year, our team at Czech Priorities organized a large forecasting tournament, OPTIONS, focused on policy-relevant predictions in the Czech Republic. The project was funded by the Technology Agency of the Czech Republic and was run with generous support from Cultivate Labs on their platform. As a result, we built a 60-person “expert forecasting team”, started a wider local forecasting community, and put together a methodology for public administration on how to organize forecasting tournaments.
The main goal of this post is to introduce the English version of our “Methodology for using judgment forecasting in public decision-making”, which contains practical step-by-step advice on how to organize forecasting tournaments with examples from our pilot implementation. The main mission of Czech Priorities - an NGO that we spun off from the local Effective Altruist group in 2018 - is to advance the principles of using high-quality evidence in policymaking, and we hope that this methodological manual can serve as a useful resource for other similar efforts internationally.
The project was initially inspired by the lack of reliable forecasts regarding the COVID-19 pandemic. Because of this scarcity, we focused on creating predictions that could be useful in crisis situations requiring a rapid policy response (public health crises, cyberattacks, international conflicts, etc.).
The tournament ran in 2021 and consisted of 4 rounds, each lasting 10 days. Out of 848 people registered, 534 passed a mandatory 1.5h online calibration session. Of these, 238 participants then completed all 4 rounds of the tournament. 60% of participants were under 35 years old, and 67.2% held an MA or higher degree, from diverse educational backgrounds. Participants predicted and commented pseudonymously, using nicknames (names of world cities).
Our forecasters answered 24 short-term (resolution in <4 months) questions and several bonus questions. To make the predictions maximally useful, we defined the questions with the assistance of the National Cyber and Information Security Agency, the National Institute of Mental Health, the National Monitoring Centre for Drugs and Addiction, and other public institutions.
The financial incentives were set to motivate early predicting, frequent updating, and sharing of information. The top 30 forecasters earned $100 to $1,500 each based on the best overall Brier scores, and the forecasters who left the 20 best comments (those adding the most value, as selected by an expert board) were awarded $100 each. Altogether, more than 10,000 forecasts were made and 946 person-days were spent predicting. On average, forecasters spent 1 hour on each question, which we consider quite remarkable.
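For readers unfamiliar with the scoring rule mentioned above, here is a minimal sketch of the Brier score for binary questions: the mean squared difference between a forecaster's probabilities and the resolved outcomes (lower is better). The probabilities and outcomes below are hypothetical, purely for illustration; the tournament's actual scoring setup on the Cultivate Labs platform may differ in detail.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities (0..1)
    and binary outcomes (0 or 1). Ranges from 0 (perfect) to 1 (worst)."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical example: three binary questions, outcome 1 = event occurred.
probs = [0.9, 0.2, 0.6]   # a forecaster's final probabilities
resolved = [1, 0, 0]      # how the questions resolved
print(round(brier_score(probs, resolved), 3))  # 0.137
```

A confident, well-calibrated forecaster (high probability on events that happen, low on those that don't) drives this score toward zero, which is why it rewards both accuracy and frequent updating as new information arrives.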
Current state and forthcoming projects
We compiled the methodology manual (download HERE, top of the page) so that it can be understood by analysts in public administration without any prior knowledge of forecasting. With this in mind, we would be grateful for any suggestions for improving its clarity or content.
In 2022 and 2023, we will be working in cooperation with the forecasting platform Metaculus on scaling up our activities to work even more closely with policymaking institutions, leading them through the whole process of applying forecasting in their decision-making, and then aggregating and generalizing the insights from this groundwork. Having already identified a group of outstanding forecasters, we are now able to focus on longer-term, strategic questions as well.
We are currently in the process of designing these new tournaments, as well as consolidating and enlarging our forecasting community and preparing to establish new partnerships with governmental institutions. If you think you could point us to any useful evidence, share your experience, or give any kind of advice, please let us know. We would be happy to consult or cooperate. You can also meet our team this weekend at the EAGx Prague conference, where we will present the main takeaways from our research and have a discussion.