This tournament is co-sponsored by Cambridge EA (coordinated by Hannah Erlebach) and UChicago EA (coordinated by Henry Josephson). Thanks to Henry Tolchard for clarifying details about the tournament, and to Will Aldred for feedback on this post.
INFER, a forecasting program funded by a grant from Open Philanthropy to generate valuable signals for U.S. Government policymakers, is hosting an intercollegiate forecasting tournament for EA groups. Any EA group affiliated with a university is welcome to participate in the tournament. The top three individual forecasters will be offered paid positions on the Pro Forecaster Program.
The tournament is scheduled to run April 1 - July 31, 2022.
For full details, see INFER’s tournament page.
How to sign up
You can also find the full instructions here.
Getting started should take no more than a couple of minutes; it requires individuals to create an account, and a designated team host to create a team by submitting the usernames of team members.
(For all team members) How to sign up
- Sign up at https://infer-pub.com. You will be asked to choose a username.
- Let your team host know your username.
- Start practising on any of the forecasting questions currently on the site; during the tournament, it will be specified which of the questions qualify for tournament scoring.
- You will be submitting forecasts individually for these questions, and the forecasts made by your team members will then be aggregated into a team forecast for scoring.
(For the team host) How to create your team
- Click the Create a Team link under My Team.
- Enter a team name (which clearly indicates which university group you’re from) and the usernames of your team members. You can always add more teammates later.
- Submit your request, and an Admin will accept it within 24 hrs.
(For all team members) How to interact with your team
You will be able to interact with members of your team in different areas of the site, for example:
- Go to My Team on the top navigation to see other users on your team, their activity, and start general discussions.
- Within any question’s page (below the forecasting interface), go to the My Team tab where you can start Team Discussions which only members of your team can see, and view Current Forecasts and Forecast History.
- From the Leaderboards in the top navigation bar, you can see rankings of all the teams on INFER (including those not part of the EA tournament), which are updated anytime a question is scored. Note that because INFER’s team leaderboard ranking takes into account all scored questions – not just the questions scored for the EA tournament – INFER will be calculating EA team rankings separately.
Why forecasting?
Forecasting is currently in 80,000 Hours’ list of ten top-recommended career paths (as of the date of publishing). You can read their review of forecasting and related research and implementation here.
Quantitative forecasting can substantially improve our ability to predict the future, as compared with subjective judgement calls. Better predictions about important future events can lead to better decisions, especially in the complex, high-stakes situations in which governments and other institutions frequently find themselves. Better foresight might also improve our ability to solve some of the world’s most pressing problems.
What makes forecasting particularly accessible is that:
- Being an exceptional forecaster (‘superforecaster’) does not require exceptional prerequisites, and
- There are reliable ways to improve forecasting accuracy, such as through calibration training and regular practice.
The best forecasters come from a wide range of backgrounds, often from fields which aren’t related to predicting the future. Indeed, the bar for being a top forecaster at present may be surprisingly low; while some intelligence, numerical skill and general knowledge is helpful, it is often open-mindedness, carefulness and an ability to update one’s beliefs which appear more critical.
The power of forecasting platforms (the likes of INFER, Metaculus, Good Judgment Open, Hypermind and Metaforecast) lies in crowd forecasts: aggregating the predictions of many forecasters, typically with more weight on the forecasts of those individuals with the best track records. Indeed, according to one study described in Superforecasting (Tetlock & Gardner, 2019), aggregating the forecasts of several hundred ‘ordinary people’ on geopolitical events led to better predictions than those of professional intelligence analysts who had access to classified information. For a more nuanced discussion of this study and others, see Comparing top forecasters and domain experts.
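To make the aggregation idea concrete, here is a minimal sketch of a weighted crowd forecast. This is purely illustrative and not INFER’s actual aggregation algorithm; the weights here stand in for a hypothetical track-record score.

```python
# Illustrative sketch (not INFER's actual algorithm): combine individual
# probability forecasts into a single crowd forecast, giving more weight
# to forecasters with better (hypothetical) track records.

def crowd_forecast(probs, weights=None):
    """Weighted mean of individual probabilities (all values in [0, 1])."""
    if weights is None:
        weights = [1.0] * len(probs)
    total = sum(weights)
    return sum(p * w for p, w in zip(probs, weights)) / total

# Three forecasters; the third has the best track record, so counts double.
print(crowd_forecast([0.60, 0.70, 0.80], weights=[1, 1, 2]))  # 0.725
```

Real platforms use more sophisticated schemes (e.g. extremized or median-based aggregation), but the core idea is the same: pool many independent judgements, weighted by demonstrated accuracy.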
An important part of developing forecasting skills is calibration training: being well calibrated as a forecaster means that your predictions match the rates of outcomes actually occurring, e.g., the events to which you assign an 80% probability actually resolve positively around 80% of the time. Good calibration doesn’t tend to come naturally, but can be substantially improved through practice.
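The calibration check described above can be sketched in a few lines: bucket your past forecasts by stated probability and compare each bucket’s average prediction with the fraction of events that actually occurred. This is an illustrative sketch, not the method used by any particular training tool.

```python
# Hypothetical calibration check: group (probability, outcome) pairs into
# bins and compare mean predicted probability with observed frequency.
from collections import defaultdict

def calibration_table(forecasts, n_bins=5):
    """forecasts: list of (probability, outcome) pairs, outcome 0 or 1.
    Returns {bin_index: (mean_predicted, observed_frequency, count)}."""
    bins = defaultdict(list)
    for p, outcome in forecasts:
        idx = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into last bin
        bins[idx].append((p, outcome))
    table = {}
    for idx, pairs in sorted(bins.items()):
        preds = [p for p, _ in pairs]
        outcomes = [o for _, o in pairs]
        table[idx] = (sum(preds) / len(preds),
                      sum(outcomes) / len(outcomes),
                      len(pairs))
    return table

# A well-calibrated forecaster's ~80% predictions resolve true ~80% of the time:
history = [(0.8, 1), (0.8, 1), (0.8, 1), (0.8, 1), (0.8, 0)]
print(calibration_table(history))
```

If a bucket’s observed frequency is consistently below its mean prediction, you are overconfident in that range and should shade your forecasts down.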
Some recommended tools for beginners to get started with calibration training include:
- The Metaculus tutorials, which introduce you to the basics of forecasting, followed by some calibration exercises
- Calibrate Your Judgement, a web app by Open Philanthropy which has thousands of questions and will chart your improvement over time
If you want to learn more about forecasting, you might like to look at:
- Evidence on good forecasting practices from the Good Judgment Project: an accompanying blog post
- The first four chapters of Superforecasting by Tetlock and Gardner (alternatively, this book review)
- This four-part Intro to Forecasting series on YouTube
- The Ten Commandments for Aspiring Superforecasters
- NunoSempere's forecasting newsletter
When is the tournament taking place?
The tournament is taking place April 1 - July 31, 2022.
Teams can still sign up after April 1, but should be aware that they are required to have 5+ active forecasters every month (April, May, June, July) in order to be eligible for a prize.
Who can take part?
The tournament is between teams of 5 or more people affiliated with an EA university group. (There can be multiple teams per university, as long as each team has at least 5 members.) Note that you don’t need to be a student in order to participate, as long as you have some affiliation with the EA uni group whose team you join.
What will I do as a participant?
INFER regularly releases new questions on the platform. Forecasters are also invited to submit their own, and can vote on others’, in the Question Lab.
Questions are currently divided into topics including:
- Microelectronic technologies
- Global competitiveness in AI
- Russia-Ukraine conflict
- China politics, relations and technology
- Science and technology
A subset of the questions on the platform will be scored as part of the tournament, typically those that can be resolved over the course of the tournament. These will be indicated on the platform. (The questions used for the tournament will all be posted publicly on the site, so non-tournament participants can also forecast on them.)
New tournament questions will be released fairly evenly throughout the tournament, and around 15-20 questions in total are expected to be scored for the tournament. (There will also be non-tournament questions, which you can still forecast on but which won’t be scored.)
Your task as a participant is to submit your forecasts individually. The individual forecasts made by your team members will then be aggregated into a team forecast for scoring. You can update your forecasts at any time by entering new ones; the best forecasters tend to update gradually as new information becomes available.
To make a forecast on INFER, select a question, read through the background information, and simply enter the probability (0-100%) you think an event will occur. You’ll also be asked to enter a rationale to explain why you entered that particular probability before you submit your forecast. This gives you a chance to explain the reasoning behind your forecast and/or provide any links that may support your position, which provides useful data for policymakers. Here is a helpful article with tips on how to approach your first forecast.
To give a flavour for the kinds of questions you’ll be forecasting on, here are some already on INFER which may be included in the tournament:
- When will the end of day closing value for the Russian Ruble against the US Dollar drop below 75 Rubles to 1 USD?
- Will the United States have the world's fastest supercomputer in June 2022?
- On 30 June 2022, how much funding will Crunchbase’s “Hub for Artificial Intelligence Companies Founded in the Last Year” report that those companies have raised?
- Will the World Health Organization declare a new Public Health Emergency of International Concern between August 1, 2021 and July 31, 2022?
Scoring and prize rules
Scoring will be completed through INFER and use a relative Brier scoring system. The three most accurate teams (those with the lowest relative Brier scores) will be rewarded with prize money.
This prize money can be awarded as sponsorship to the university group or to a cause of your choice (in accordance with INFER’s charitable guidelines).
In order to be eligible for a prize, teams need to have 5+ active forecasters per month.
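The relative Brier idea behind the scoring can be sketched as follows. This is a simplified illustration; the exact formula INFER applies (e.g. how it handles time-weighting or the crowd baseline) may differ.

```python
# Illustrative sketch of Brier and relative Brier scoring (simplified;
# INFER's exact formula may differ). Lower is better; a negative relative
# score means you beat the crowd baseline.

def brier(prob, outcome):
    """Squared error between forecast probability and the 0/1 outcome."""
    return (prob - outcome) ** 2

def relative_brier(prob, crowd_prob, outcome):
    """Your Brier score minus the crowd baseline's Brier score."""
    return brier(prob, outcome) - brier(crowd_prob, outcome)

# Event resolves 'yes'; you said 90%, the crowd baseline was 70%.
print(round(relative_brier(0.9, 0.7, 1), 3))  # -0.08
```

Because the score is relative, being confidently right on questions where the crowd was unsure is what moves a team up the leaderboard, rather than raw accuracy alone.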
In addition, the more active forecasters there are each month, the higher the prize pot for the winning teams:
- The reward is scaled based on the total number of active forecasters participating across every month (April, May, June, and July) of the tournament.
- 50 active forecasters per month → Prizes total $2,500, for example:
- First place: $1,000
- Second place: $800
- Third place: $700
- 100 active forecasters per month → Prizes total $5,000
- 200+ active forecasters per month → Prizes total $10,000
Benefits to EA uni groups
We think that entering a team for your EA university group can be beneficial in many ways beyond the prizes themselves, not least in creating a community around forecasting. These benefits include:
- Accessible way to test and build forecasting skills for members of your group through an existing, established platform;
- Chance to win prize money which can be awarded as sponsorship to your university group or to a cause of your choice;
- Creating a culture of forecasting at your university group; and
- Becoming known as a forecasting hub if you are a winning team, attracting more attention and resources for forecasting for your university group.
Benefits to individual participants
The top three individual forecasters from the INFER Tournament will be invited to join the Pro Forecasters Program for the remainder of the 2022 season (i.e., through December). As a Pro Forecaster, you will be paid for dedicating a certain amount of time every month to forecasting. Some highlights of what the program will entail this season include:
- Earned compensation of $200 per month for dedicating 8-10 hours per month making forecasts, with new opportunities for additional Pro rewards throughout the year;
- Exclusive training opportunities to improve forecasting and rational thinking skills;
- Direct access to the INFER program team to provide feedback and beta test new technologies; and
- Invitations to special events curated only for Pros, such as ‘Ask Me Anything’ sessions with current and former high-level U.S. Government officials.
In addition, all forecasters on INFER are eligible to complete challenges to earn badges and win rewards; there are currently four active reward challenges, with prizes totalling over $12K for accuracy, monthly participation, research efforts and more.
Other benefits to participants in the INFER Tournament include:
- Building rational thinking skills through practice and feedback;
- Trying out forecasting in an environment which welcomes those with less experience;
- Improving forecasting accuracy and calibration;
- Joining a team of forecasters and interacting with other EA groups from around the world;
- Invitations to INFER events with current and former US Government officials and researchers; and
- Having a direct influence on US Government policymaking.
More about INFER
INFER, short for INtegrated Forecasting and Estimates of Risk, is a forecasting program designed to generate valuable signals and early warning about the future of critical science and technology trends and events for U.S. Government policymakers. INFER empowers scientists, researchers, analysts, and hobbyists from inside and outside the U.S. Government to have a direct impact on policy and decision-making.
INFER is run by the Applied Research Laboratory for Intelligence and Security (ARLIS) at the University of Maryland and Cultivate Labs. Funding for this program has been provided by a grant from Open Philanthropy.
INFER works by operating as a continuous, 4-step lifecycle between policymakers and those who are best positioned to make relevant forecasts about the future.
- As initial input, policymakers identify priority areas (e.g., AI competitiveness) and strategic questions within those priority areas (e.g., “How will AI impact the strategic balance between the U.S. and its competitors?”) where guidance, regulation, or clarification is needed to strengthen our competitive position or operate more effectively.
- We then define what aspects of future ground truth will need to be understood to make optimal judgments on a path forward.
- Using those factors, we publish falsifiable forecast questions across multiple online crowdsourced forecasting platforms, some in the public domain, some only accessible by government employees. The platform this EA tournament will be run on is already live at infer-pub.com.
- Finally, INFER generates consensus probabilistic forecasts and accompanying qualitative data, both to report on the individual forecast questions and to indicate which scenario is most likely to play out in response to policymakers’ strategic questions. Policymakers can then use this input to optimize decisions about new regulations and resource allocations and, as Wayne Gretzky famously said, “skate to where the puck will be.”
If you have any more questions, email email@example.com.