Substack: nwprtnarrative.substack.com
Executive Director of the Swift Centre for Applied Forecasting (led projects with the U.K. Government and Google DeepMind on AI security and capability risks).
Co-founder of ‘Looking for Growth’ - a political movement for growth in the U.K.
CTO of Praxis - an AI-led assessment platform for schools
Former Head of Policy at ControlAI (co-authored ‘A Narrow Path’)
Former Director of Impactful Government Careers
Former Head of Development Policy at HM Treasury
Former Head of Strategy at the Centre for Data Ethics and Innovation
Former Senior Policy Advisor at HM Treasury, leading on the economic and financial response to the war in Ukraine, and the modelling and allocation of the UK's 'Official Development Assistance' budget.
MSc in Cognitive and Decision Sciences from UCL; my dissertation was an experimental study using Bayesian reasoning to improve predictive reasoning and forecasting among U.K. public policy officials and analysts.
I am looking for individuals and groups that are interested in improving institutional decision making, whether that's within the typical high-power institutions such as governments/civil services, multilateral bodies, large multinational corporations, or smaller EA organisations that are delivering high-impact work.
I have a broad range of experience, but can probably be of best help on the topics of:
This obviously assumes Marcus has a sufficient level of experience to justify the claims. Which I think, given other comments, can be adequately challenged.
It would be good to know what metric/threshold/examples would be taken as forecasting delivering adequate impact to justify funding. From examples in this thread alone, we can see senior government decision makers in both the U.K. (including Ministerial teams and critical committees) and US, frontier lab safety teams, and philanthropic funds (moving tens of millions of dollars a year) have utilised forecasting (either the process or the outputs) to inform their decisions.
The argument of it only shifting a decision 1-2% is totally fair. But to keep consistent I’d expect the same people who make that argument to also be highly sceptical of the vast majority of research funding.
(Caveat - I read the premises and skimmed the rest)
Yes - AI research is useful and does help highlight specific advancements or potential risks. However, I fear it is being focused on by many because of personal interest in the topic, rather than the best route to reduce catastrophic and existential risks.
For better or worse, advocacy, policy, and communications are the most likely routes to reduce p(doom) - unless you believe alignment is a plausible and concrete thing.
We could “forecast” the likelihood of that haha.
I can’t get into specifics. But if you believe activities like evaluations of models to test for dangerous behaviour are net negative, then that may give credence to your assumption. As an extra data point on whether we’d do work we thought was net negative: I was Head of Policy at ControlAI and co-authored narrowpath.co, and our forecasters have done numerous AI safety focused projects (both within and outside the Swift Centre, including AI 2027).
Sort of, but that also doesn’t capture the significant accuracy and efficiency benefits of the structured reasoning and communication process that forecasting enables. There are substantial risks and issues with “just looking into an issue yourself” - especially when you are more confident in your judgement (because that’s a clear risk of confirmation bias/overconfidence).
The main use of forecasting is in applying the core scientific benefits described above to help real-world decision makers. But fundamentally, that hasn’t been funded - instead we’ve funded tournaments and research.
I don’t disagree with some of the fundamentals of this post. Before diving into that, I want to correct a factual error:
“the Swift Centre have received millions of dollars for doing research and studies on forecasting and teaching others about forecasting”
The Swift Centre for Applied Forecasting has not received millions in funding. The majority of our earnings have been through direct projects with organisations who want to use forecasting to inform their decisions.
On your wider argument: I think forecasting has probably received too much funding, and the vast majority of that has been misallocated on platforms and research. I believe some funding (hundreds of thousands) is justified to maintain core platforms like Metaculus as a public good of information. Though services like Polymarket can probably fill most of this need in the future (but many useful, informative markets would never reach the necessary volume to be reliable).
Where I think we disagree most is in the application of forecasting and some of the achievements. We’ve worked with frontier AI labs to inform their decisions, are currently advising a U.K. Minister’s team on a central piece of their policy, and are about to start a secondment where I will be advising one of the most influential decision making committees in the country to help improve their scenario analysis and forecasting. Forecasting, and specifically, the science of decision making that it is built on, has the ability to structurally improve decisions in institutions. Significantly better than asking two or three of your smartest friends. That was just never funded, so instead we conclude forecasting is not useful.
I think this is a clear sign the community hasn’t been able to communicate its use case well at all. This is one reason I often use “predictive reasoning” as a more general concept when talking to people, interestingly especially if they are already aware of forecasting (as they’ve been conditioned to think it means prediction markets, tournaments, and resolution criteria).
Take your example of animal welfare. I don’t know the exact use case best aligned to you, but fundamentally I’m confident 95%+ of the decisions an animal nonprofit will make are based on two predictions:
1) What will the world be like in the future (insert timeline)?
2) What interventions will most impact/change that future closer to what you’d like it to be?
Forecasting, or more specifically, the processes that underpin the science of forecasting, can be used to increase the accuracy and efficiency of those two predictions.
Once you do that, you get a better estimate of the future world plus a better estimate of the efficacy of your actions.
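As a toy sketch of combining those two predictions into a prioritisation decision (all numbers and intervention names are invented for illustration, not from any real analysis):

```python
# Toy prioritisation sketch: combine a forecast of the status-quo future
# (prediction 1) with forecasted shifts from each intervention (prediction 2).
# All values below are made-up placeholders.

# Prediction 1: probability the bad outcome occurs on the current trajectory
p_bad_outcome = 0.6

# Prediction 2: forecasted reduction in that probability per intervention
interventions = {
    "corporate campaign": 0.10,  # assumed probability shift
    "policy advocacy": 0.15,     # assumed probability shift
}

# Rank interventions by how much of the forecasted future they change
for name, shift in sorted(interventions.items(), key=lambda kv: -kv[1]):
    print(f"{name}: shifts p(bad outcome) from {p_bad_outcome:.2f} "
          f"to {p_bad_outcome - shift:.2f}")

best = max(interventions, key=interventions.get)
```

The point is not the arithmetic itself but that both inputs are explicit, quantified forecasts that can be scored and improved over time.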
Some excellent reflections here. Across advocacy, aid, animal welfare, and especially AI, I often see errors in understanding the actual incentives and interests of those with power in Government.
A lot of time is also spent advocating to people who have little to no tangible influence. This leads to organisations claiming they’ve delivered a lot of “direct advocacy” when actually they’ve just spoken to a lot of people, very few of whom have the autonomy or power to enact any change.
As someone who worked at the very centre of the U.K. government for c.5 years on international development and finance policy, and time on AI policy (including roles as Head of Development Policy and Senior Policy Advisor for ODA Strategy and Spending at HM Treasury, and Head of Strategy at the CDEI), I’m always happy to share my thoughts directly if ever helpful.
Great post. This is why I’ve mentioned before that there should be a dedicated therapy or counselling support organisation/network funded for those working in AI x-risk.
Considering many large organisations in the space have a very generous “wellbeing” budget for each role, this feels quite easily fundable. Right now it doesn’t seem an issue of money but of directing it in a more efficient and effective way.
I agree with the premise, but we shouldn’t be using philanthropic funds to try to patch over what is a market problem.
The route here should be projects that enable less friction for trade and investment, rather than creating a company that tries to bypass the fundamental issues. Philanthropic funding here should focus on systemic change to have compounded impact.