This is an idle thought: maybe there's value in competitions that incentivise people to submit good instructions for making forecasts, instead of making forecasts directly.

Consider a forecasting competition with a collection of questions that are clustered, where only some common features of each cluster are made public (e.g. the question type "who will win {election of some kind} in {unknown country}?", which covers 3 questions). Instead of submitting forecasts, forecasting participants submit methods for making forecasts on each question. Each method is implemented by a number of randomly selected teams of implementers once the full question details are revealed. Implementers are scored by how well they agree with other implementers applying the same method, while forecasters are ranked by the success of their methods.
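To make the mechanics concrete, here is a minimal Python sketch of one way the scoring might work. The specific choices (measuring implementer agreement as mean absolute distance from the other implementers' consensus, and ranking a method by the Brier score of its consensus forecast) are assumptions for illustration, not something the proposal pins down.

```python
import numpy as np

def implementer_agreement(forecasts):
    """forecasts: (n_implementers, n_questions) probabilities produced by
    different teams applying the *same* submitted method.
    Returns one disagreement score per implementer (lower = closer to the
    other implementers of that method)."""
    forecasts = np.asarray(forecasts, dtype=float)
    scores = []
    for i in range(len(forecasts)):
        others = np.delete(forecasts, i, axis=0)
        # mean absolute distance from the other implementers' average forecast
        scores.append(np.mean(np.abs(forecasts[i] - others.mean(axis=0))))
    return np.array(scores)

def method_brier_score(forecasts, outcomes):
    """Average the implementers' forecasts for one method, then score that
    consensus against the resolved 0/1 outcomes with the Brier score
    (lower is better)."""
    consensus = np.asarray(forecasts, dtype=float).mean(axis=0)
    return float(np.mean((consensus - np.asarray(outcomes, dtype=float)) ** 2))

# Example: three implementer teams apply one submitted method to three questions.
forecasts = [[0.70, 0.20, 0.60],
             [0.65, 0.25, 0.55],
             [0.90, 0.10, 0.40]]
outcomes = [1, 0, 1]
print(implementer_agreement(forecasts))   # the third team agrees least with the others
print(method_brier_score(forecasts, outcomes))
```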

Why might this be interesting? Currently, we can use forecasting competitions to answer particular questions, but the questions actually asked may turn out to be less relevant to decision making than hoped because they were too specific, and it may not be practical to anticipate which questions will need answers far enough in advance to add them to the competition. In these situations, robust forecast procedures could be more helpful than robust forecasts.

Comments

This sounds interesting. Alternatively, you could have the procedure-makers not know which questions will be forecast, and have their procedures given to people or teams with some stake in getting the forecast right (perhaps paid in proportion to their log-odds calibration score).

After doing enough trials, we should get some idea about what kinds of advice result in better forecasts.
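For concreteness, one possible reading of the log-based payment mentioned above is the logarithmic scoring rule; this is an assumption, since the comment does not pin down the exact score. A minimal sketch:

```python
import math

def log_score(probability, outcome):
    """Logarithmic score for a single binary question.
    probability: the forecast probability of the event; outcome: 1 if it
    happened, 0 otherwise. Higher (less negative) is better, and confident
    wrong forecasts are penalised heavily."""
    p = probability if outcome == 1 else 1.0 - probability
    return math.log(p)

# A 0.8 forecast scores about -0.22 if the event happens and about -1.61 if not;
# paying implementers in proportion to this score rewards calibrated forecasts.
print(log_score(0.8, 1), log_score(0.8, 0))
```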
