After reading this post, I found myself puzzling over the following question: why is Tetlock-style judgmental forecasting so popular within EA, but not that popular outside of it?
From what I know about forecasting (admittedly not much), these techniques should be applicable to a wide range of cases, and so they should be valuable to many actors. Financial institutions, the corporate sector in general, media outlets, governments, think tanks, and non-EA philanthropists all seem to face a large number of questions whose answers this type of forecasting could improve. In practice, however, forecasting is not that popular among these actors, and I couldn't think of, or find, a good reason why.[1]
The most relevant piece on the topic that I could find was Prediction Markets in The Corporate Setting. As the title suggests, it focuses on prediction markets (whose lack of success is similarly intriguing), but it also discusses forecasting tournaments to some extent. Although it does a great job at highlighting some important applicability issues for judgmental forecasting and prediction markets, it doesn't explain why these tools would be particularly useful for EAs. None of the reasons there would explain the fact that financial institutions don't seem to be making widespread use of forecasting to get better answers to particularly decision-relevant questions, either internally or through consultancies like Good Judgment.
Answers to this question could be that this type of forecasting is:
- Useful for EA, but not (much) for other actors. This answer gains some support if we think that EAs and non-EAs are both efficiently pursuing their goals: on that assumption, the lack of adoption elsewhere is evidence that forecasting isn't useful elsewhere. If this is true, then it suggests that EAs should continue supporting research on forecasting and development of forecasting platforms, but should perhaps focus less on getting other actors to use it.[2] My best guess is that this is not true in general, though it is more likely to be true for some domains, such as long-run forecasting.
- Useful for EA and other actors. I think that this is the most likely answer to my question. However, as mentioned above, I don't have a good explanation for the situation that we observe in the world right now. Such an explanation could point us to the key bottlenecks for widespread adoption. Trying to overcome those bottlenecks might be a great opportunity for EA, as it might (among other benefits) substantially increase forecasting R&D.
- Not useful. This is the least likely answer, but it is still worth considering. Assessing the counterfactual value of forecasting for EA decisionmaking seems hard, and it could be the case that the decisions we would make without using this type of forecasting would be as good as (or maybe even better than) those we currently make.
It could be that I'm missing something obvious here, and if so, please let me know! Otherwise, I don't know if anyone has a good answer to this question, but I'd also really appreciate pieces of evidence that support/oppose any of the potential answers outlined above. For example, I would expect that by this point we have a number of relatively convincing examples where forecasting has led to decisionmaking that's considerably better than the counterfactual.
[1] This is not to say that forecasting isn't used at all. For example, it is used in the UK government's Cosmic Bazaar, The Economist runs a tournament at GJO, and Good Judgment has worked for a number of important clients. However, given how popular Superforecasting was, I would expect these techniques to be much more widely used now if they are as useful as they appear to be.
[2] Open Philanthropy has funded the Forecasting Research Institute (research), Metaculus (forecasting platform) and INFER (a program to support the use of forecasting by US policymakers).
The replies so far seem to suggest that groups outside of EA (journalists, governments, etc) are doing a smaller quantity of forecasting (broadly defined) than EAs tend to.
This is likely correct, but it is also the case that groups outside of EA (journalists, governments, etc) are doing different types of forecasting than EAs tend to: less "Tetlock-style judgmental" forecasting and more use of other tools such as horizon scanning, scenario planning, and trend mapping.
(E.g. see the UK government Futures Toolkit, although note the UK government also has a more Tetlock-style Cosmic Bazaar)
So it also seems relevant to ask: why does EA focus so heavily on "Tetlock-style judgmental forecasting", rather than other forecasting techniques, relative to other groups?
I would be curious to hear people's answers to this half of the question too. Will put my best guess below.
– –
My sense is that (relative to other futures tools) EA overrates "Tetlock-style judgmental" forecasting a lot and that the world underrates it a bit.
I think "Tetlock-style" forecasting is the futures technique that is most evidence-based and easiest to test and measure the value of. This appeals to EAs, who want everything to be measurable. But it also leads to the technique being somewhat undervalued by non-EAs, who tend to undervalue measurability.
I think the other techniques have been slowly refined over decades to be useful to decision makers. This appeals to decision makers, who value tools that help them make good decisions. But it also leads to those techniques being significantly undervalued by EA folk, who tend to have less experience with institutional decision making and a "reinvent the wheel" approach to it, to the extent that they often don't even notice that these other styles of forecasting and futures work exist!