The forecasting community is failing because it is not focused on how the world actually works.
Who am I to make such outlandish claims?
- I worked at HM Treasury as a senior policy advisor across multiple policy areas, including international development and international relations, and I led the team for HMT's economic and financial response to the war in Ukraine.
- I have an MSc in Cognitive and Decision Sciences from UCL, where I led experimental research into improving the forecasting ability of policy makers and analysts in central government.
- I started a consultancy with one of the world's leading researchers in computational psychology and causal modelling to improve predictive reasoning.
- I am now the Executive Director of the Swift Centre for Applied Forecasting, where I have led numerous forecasting workshops and small projects with government teams, including the Cabinet Office, HM Treasury, and the Department for Science, Innovation and Technology. I have also led projects and workshops with organisations working on AI capabilities, risks, and policy, including GovAI and frontier labs.
Through all of this I have met numerous organisations - policy, strategy, and risk teams in some of the world's largest and most powerful organisations, non-profits, tech firms, banks, and financial institutions - to discuss the use of forecasting.
1. We’re Obsessing Over Question Wording But Not The Question
The forecasting community has a fetish for resolution criteria. We spend weeks debating the exact definitions of words but spend far less time understanding what exact issues organisations need to grapple with.
When I speak to senior government officials, they often don't even know which risks they should be looking at. They are operating in a fog of war where the primary challenge isn't predicting the outcome of a well-defined event - it's identifying which events even matter and deserve their focus.
We are providing high-precision answers to questions that decision-makers haven't asked and often don't care about. So much time has been spent forecasting headline geopolitical events or AI capabilities and risks - all interesting and academically engaging, but far removed from the actual questions and issues decision makers are trying to weigh up.
To emphasise this, I have spoken to people (even in government) who are building AI forecasting tools. What they say after they've built a semi-reliable tool is always the same: "we've found people don't know what issues they should be focusing on, and rather than a probability estimate, they want help to identify the most prescient questions".
2. Transparency is the Real Value, Yet Everyone Focuses on the Accuracy Status Game
To a forecaster, their probability is everything - as it should be. It's how you prove your worth, it's how you become a "Superforecaster" or get a job at a hedge fund.
But decision-makers do not care if you are 2% more accurate than the next guy. When it comes to actual decision making, the value of a probability is its ability to force transparency and to expose differences. Sure, it can’t be wildly wrong, but no one is fighting over single digit percentages.
In a standard policy meeting, people hide behind imprecise words like "perhaps", "likely", or "could". Numbers strip that away, and once you build comfort with using them, real value can be unlocked in the decision-making process: better reasoning transparency, greater efficiency, and more effective options for achieving your objective.
The real value of forecasting is in the moment you realise two people in the same room have forecasts 40% apart. That is where the benefit occurs. But the community is so obsessed with maximising Brier scores that it ignores the fact that the quest for the most accurate predictions often saps time and effort away from the most valuable element of forecasting: transparency.
3. The Clearance Filter
There is a naive, almost arrogant assumption that if we just give a Minister an accurate percentage, they will make a better decision.
I have worked with Ministers who couldn't read a graph properly. If you put a raw percentage into a submission for a Secretary of State, it will likely be intercepted by their Private Office or a senior official during clearance and sent back for being too technical. If it does make it to their desk, they likely won't know what to do with it or how it benefits them. A lot of political and organisational decision making is not based on how accurately you've predicted the world. Sad, but true.
Pure forecasting has a place, but it is a niche compared to what has been pushed and funded. The real win is a better-reasoned policy memo. If the final advice looks the same but the process of getting there involved structured reasoning and the exposure of hidden risks, that is a victory. The community’s refusal to understand the existing bureaucratic workflow is why it hasn’t been adopted.
4. Misallocation of Resources: Researching the Problem to Death
This part may come across as jaded or resentful. I don't think that's completely unfounded, but it comes from a place of truly caring about improving institutional decision making. I've personally spent thousands and made many risky career moves to work on it. I think that without considerably better institutional decision making we will never navigate the risks of AI or avoid catastrophic events. So given that, and my experience as an HMT spending policy lead, I am disappointed when I see the misallocation of scarce resources.
I've watched funders pour tens of millions of dollars into forecasting platforms and large-scale research reports that practically no decision maker reads (at least not enough of them to justify the cost).
I have spoken to dozens of policy officials about these reports. Most give me a laugh and say they don’t have time. Others ask me what those platforms even are. Even after the UK closed its internal forecasting market and the US intelligence agency ended theirs, funders doubled down.
Meanwhile, those working on actual implementation - within the very organisations and institutions whose decision making we claim to want to improve with forecasting - struggled.
A crude example: a couple of years ago I had the interest of the UK's Policy Profession training team (covering 50,000 officials) and the Bank of England. We couldn't secure funding to provide the workshops they wanted, or even to cover the six months of runway we'd need to get through their procurement process. A year later, I ended up working at the Swift Centre to help them deliver research funding they had received to investigate the blockers to forecasting - blockers that they, and I, already knew about (and that I had already partly overcome with the Policy Profession, as above). But the default comfort was to fund further research rather than actual delivery. We did that work, delivered record-breaking engagement, and still had to fight for a continuation while tens of millions were funnelled into more research and platforms.
5. Even the Believers Never Really Jumped Aboard
If you look at organisations across the Effective Altruism movement (the very people who champion forecasting and the core premise of reasoned decision making), you’ll see they struggle to use it in their own decision-making.
I’ve seen organisations in this space ignore the fundamentals of reasoning transparency and structured forecasting when it comes to their own organisational decisions and grantmaking.
Many in the community like to read the forecasts, or take part in tournaments, but how many actually make tangible changes to their decisions based on them?
Until we stop treating forecasting as an intellectual status symbol and start treating it as a messy, difficult integration problem for the world's most powerful (and busy) people, we are just talking to ourselves.
