Bio

I have particular expertise in:
- Developing and implementing policy
- Improving decision making within organisations, mainly focused on improving the reasoning processes (e.g. predictive/forecasting) that underpin how people make and communicate judgments.
- AI Safety policy

This has been achieved through being:

1) Director of Daymark Decision Insights, a company that provides high-impact organisations with consultancy and tailor-made workshops on improving decision-making and reasoning processes (https://www.daymark-di.com/). More recently I’ve provided specific consultancy on policy development and advocacy to a large-scale AI safety organisation.

2) Director of Impactful Government Careers - an organisation focused on helping individuals find, secure, and excel in high-impact civil service roles.

3) A civil servant for five years at the heart of the UK Government, with four of those years at HM Treasury. My roles included:

- Head of Development Policy, HM Treasury
- Head of Strategy, Centre for Data Ethics and Innovation
- Senior Policy Advisor, Strategy and Spending for Official Development Assistance, HM Treasury

These roles involved: advising UK Ministers on policy, spending, and strategy issues relating to international development; assessing the value for money of proposed high-value development projects; and developing the 2021 CDEI Strategy and leading the related organisational change.

4) I’ve completed an MSc in Cognitive and Decision Sciences at UCL, where I focused my research on probabilistic reasoning and improving individual and group decision-making processes. My final research project was an experimental study into whether a short (2-hour) course on Bayesian reasoning could improve individuals' single-shot accuracy when forecasting geopolitical events.

How others can help me

I am looking for individuals and groups that are interested in improving institutional decision making, whether that's within the typical high-power institutions such as governments/civil services, multilateral bodies, large multinational corporations, or smaller EA organisations that are delivering high-impact work.

How I can help others

I have a broad range of experience, but can probably be of best help on the topics of:

- Improving institutional decision making, particularly through embedding decision science methods to deliver more accurate, efficient, and inclusive reasoning processes.
- Reasoning under uncertainty, particularly how to improve predictive reasoning (both causal inference, i.e. 'will x action lead to y outcome', and forecasting, i.e. 'will x event happen').
- Working in the UK Civil Service, particularly central Government, and opportunities for maximising impact in such roles.
- Getting things done in Government: how to best utilise your role, skills, and those of your teams/stakeholders to support the decision-making process and deliver high-impact results.
- Changing career (as someone who has made two large career changes).

On the side, a colleague and I run a small project on improving predictive reasoning, which can be found here: https://www.daymark-di.com/. If you are interested in finding out more, feel free to drop me a message.

Comments

I tend to agree with you, though I would rather people erred more towards the “close early” side of the coin than the “hold out” side, simply because the sunk-cost fallacy and confirmation bias in your own idea are incredibly strong, and I see no compelling reason to think current funders in the EA space help counteract them (beyond maybe being more aware of them than the average funder).

In an ideal system the funders would drive most of these decisions by requiring clear milestones and evaluation processes for those they fund. If funders did this they would be able to identify predictive signals of success and help avoid premature or overdue closures (e.g. “policy advocacy groups that have been successful have, on average, met fewer/more of these comparable milestones, so we recommend continuing/stopping funding”). This still allows the organisation to pitch for why they are an exception to the average, but the funder should be in the best position to know what signals success and what doesn’t.

Unfortunately I don’t see such a system, and I fear the incentives in the EA ecosystem aren’t aligned to create it. The organisations getting funded enjoy the looser, less funder-involved setup. And funders limit their reputational risk by not properly evaluating what is working and why, and can continue funding projects they are personally interested in but which have questionable causal impact chains. (Noting that I think EA GHD has much less of this issue, mainly because funders anchor on GiveWell assessments, which to a large degree deliver the mechanism I outline above.)

The science underpinning this study is unfortunately incredibly limited. For instance, not even basic significance testing is provided. Furthermore, the use of historical events to check forecasting accuracy is very poor, as is the limited reporting of measures taken to prevent the model using knowledge unavailable to the human forecasters (there is only a brief mention that the researchers provided pre-approved search links).

I’m all for AI tools improving decision making and have undertaken several side-projects on this myself. But studies like this should be called out for their lack of scientific standards, and we should be sceptical about how much we let them update our judgments of how good AI currently is at forecasting (which to me is quite low, given models still struggle to reason causally).
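
For illustration, here is a minimal sketch in Python (using simulated data and made-up numbers, and assuming numpy/scipy are available) of the kind of basic check I have in mind: a paired significance test on per-question Brier scores between AI and human forecasts.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_questions = 60

# Simulated resolutions (0/1) and forecasts; in a real study these would be
# the per-question outcomes and the AI/human probability forecasts.
outcomes = rng.integers(0, 2, size=n_questions)
ai_probs = np.clip(outcomes * 0.6 + rng.normal(0.2, 0.2, n_questions), 0, 1)
human_probs = np.clip(outcomes * 0.5 + rng.normal(0.25, 0.2, n_questions), 0, 1)

# Brier score per question: squared error of the probability forecast.
ai_brier = (ai_probs - outcomes) ** 2
human_brier = (human_probs - outcomes) ** 2

# Paired test on per-question score differences; Wilcoxon avoids assuming
# the differences are normally distributed.
stat, p_value = stats.wilcoxon(ai_brier, human_brier)
print(f"Mean Brier (AI):    {ai_brier.mean():.3f}")
print(f"Mean Brier (human): {human_brier.mean():.3f}")
print(f"Wilcoxon p-value:   {p_value:.3f}")
```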

What are some of the best (relatively) small podcasts on AI and/or policy that people would recommend? I know of all the big ones, but keen to see if there are any nascent ones that are worth sharing with others.

Thanks for posting this. Sounds like an exciting project.

There have been a number of experimental studies on improving individual and group forecasting, and a couple of organisations currently provide training in a classroom/workshop setting. To my knowledge the framework being proposed is quite different from these (albeit I appreciate you can only explain it at a high level here). What is the main evidence base guiding this approach, and what increase in accuracy can attendees expect from the course?

I think this is a really important area, and it is great someone is thinking about whether EAIF could expand more into it.

To offer some thoughts of my own, based on having explored working with a few EA entities through our consultancy (https://www.daymark-di.com/), which helps organisations improve their decision-making processes, and on discussions I've had with others on similar endeavours:

  1. Funding is a key bottleneck, which isn't surprising. I think there is a natural aversion to consultancy-type support in EA organisations, mostly driven by a lack of funds to pay for it, and partly by a concern about how it'll look in progress reports if they spend money on consultants.
    1. EAIF funding could make this easier, as it would remove all (or a large part) of that cost.
  2. There appears to be a fairly common assumption that EA organisations suffer less from poor epistemics and decision-making practices, which my experience suggests is somewhat true but unfortunately not entirely. I want to repeat what Jona from cFactual commented below: there are lots of actions EA organisations take that are very positive, such as BOTECs, likely failure models, and decision matrices. This should be praised, as many organisations don't do even these. However, the mere existence of these is too often assumed to mean good analysis and judgment will naturally follow, and the systems/processes needed to make them useful are often lacking. To be concrete: few BOTECs incorporate second-order probability/confidence correctly (or it is conflated with first-order probability), so they fail to properly account for the uncertainty of the calculation and limit the accurate comparisons that can be made between options (see the sketch below).

    It has been surprising to observe the difference between some EA organisations and non-EA institutions when it comes to interest in improving their epistemics/decision making: large institutions (including governmental ones) have been more receptive and proactive in trying to improve, and are often constrained mainly by slow procurement processes rather than appetite.
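
To make the second-order uncertainty point concrete, here is a minimal sketch in Python of propagating uncertainty through a BOTEC rather than multiplying point estimates. All distributions and numbers are invented for illustration, and numpy is assumed available.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # Monte Carlo samples

# Hypothetical BOTEC: impact per dollar = people reached * effect per person / cost.
# Instead of multiplying point estimates, sample from distributions that
# encode our uncertainty about each input.
people_reached = rng.lognormal(mean=np.log(10_000), sigma=0.5, size=N)
effect_per_person = np.clip(rng.normal(0.02, 0.01, size=N), 0, None)
cost = rng.lognormal(mean=np.log(50_000), sigma=0.3, size=N)

impact_per_dollar = people_reached * effect_per_person / cost

point_estimate = 10_000 * 0.02 / 50_000
print(f"Point estimate:   {point_estimate:.6f}")
print(f"Monte Carlo mean: {impact_per_dollar.mean():.6f}")
print(f"90% interval:     [{np.percentile(impact_per_dollar, 5):.6f}, "
      f"{np.percentile(impact_per_dollar, 95):.6f}]")
# Two options with similar point estimates can have very different intervals,
# and it's the intervals that determine whether they are really distinguishable.
```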

When it comes to future projects, my recommendations for the highest-value additions would be:

  1. Projects that help organisations improve their accountability, incentive, and prioritisation mechanisms; in particular, helping to identify and implement internal processes that properly link workstreams/actions/decisions/judgments to the organisation's goals and each role's objectives. This is most useful to larger organisations (especially those that make funding/granting decisions or recommendations), but smaller organisations would also benefit.
  2. Projects that help organisations reason under uncertainty more accurately and efficiently: assisting with defining the prediction/diagnostic they are trying to make a judgment on, identifying the causal chain and predictors (and their relative importance) that underpin that judgment, and providing a framework to communicate their uncertainty and reasoning in a transparent and consistent way.
  3. Projects that provide external scrutiny and advanced decision-modelling capabilities. At a basic level there are some relatively easy wins from having an entity/service that red-teams and externally assesses the big decisions organisations are considering (e.g. requests for new funding). At a more advanced level there should be more sophisticated modelling (including tools such as Bayesian models) that can provide transparent, updatable analysis and expose conditional relationships we can't properly account for in our heads (see the sketch after this list).
    1. I think funding entities like EA Funds could utilise such an entity/entities to inform their grant making decisions (e.g. if such analysis was instrumental in the decision EA Funds made on a grant request, they'd pay half of the cost of the analysis).
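
As a minimal sketch of what I mean by exposing conditional relationships, here is a toy Bayesian model in plain Python, with inference by enumeration; the variables, structure, and probabilities are all invented for illustration:

```python
import itertools

# A toy, hand-specified Bayesian model for a grant decision. The variables,
# structure, and probabilities are all invented for illustration.
# TeamStrength and FieldTractability independently influence ProjectSuccess.
p_team = {True: 0.6, False: 0.4}    # P(strong team)
p_tract = {True: 0.3, False: 0.7}   # P(tractable field)
p_success = {                       # P(success | team, tractability)
    (True, True): 0.8, (True, False): 0.35,
    (False, True): 0.45, (False, False): 0.1,
}

def joint(team, tract, success):
    ps = p_success[(team, tract)]
    return p_team[team] * p_tract[tract] * (ps if success else 1 - ps)

def p_team_given(success, tract=None):
    """P(strong team | evidence), by enumerating the unobserved variables."""
    num = den = 0.0
    for team, tr in itertools.product([True, False], repeat=2):
        if tract is not None and tr != tract:
            continue
        w = joint(team, tr, success)
        den += w
        if team:
            num += w
    return num / den

# Observing success raises our belief in a strong team...
print(f"P(strong team | success)                  = {p_team_given(True):.2f}")
# ...but learning the field was tractable 'explains away' part of that update.
print(f"P(strong team | success, tractable field) = {p_team_given(True, tract=True):.2f}")
```

Observing success raises the probability of a strong team, but learning the field was tractable partially "explains away" that update (0.78 vs 0.73 with these made-up numbers); it is exactly this kind of conditional effect that informal reasoning tends to miss.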

I'm considering my future donation options, either directly to charities or through a fund. I know EA Funds is still somewhat cash-constrained, but I'm also a little concerned by the natural variance in grant quality.

I'd be interested in hearing why others have or have not chosen to donate to EA Funds, and whether those who have would do so again in the future.

I respect people may prefer to answer this by DM, so please do feel free to drop me a message there if posting here feels uncomfortable.

There seem to be quite a few people keen to take up the bet against extinction in 2027... are there many who would be willing to take the opposite bet (on equal terms, i.e. bet + inflation, as opposed to the 200% loss that Greg is on the hook for)?

Do people also mind where their bet goes? In this case the money went to PauseAI; would people be willing to make the bet if the money went to the individual to spend as they wished? I could see someone who believed p(doom) by 2027 was 90%+ wanting the extra money to go on holiday before the end, if they doubt any intervention will succeed. This is obviously hypothetical for interest's sake, as a real trade would need some sort of guarantee the money would be repaid, etc.

My prior on this is that the opportunity cost of what the money could otherwise be spent on is excessively high, and there are much better uses it could go towards.

Obviously the more novel and actionable the information, the lower that trade-off becomes. However, I expect that most of the time the information would be overvalued out of personal intrigue rather than because it meaningfully moves the dial. Equally, if the information were especially ground-breaking, I'd hope the person under the NDA would sacrifice personal wealth to expose it, and retrospectively they might get support with any legal costs etc. A reactive system, as opposed to a proactive one, would also help prevent the weird incentive for people to hold out on blowing the whistle until they had funds confirmed.

I’m a strong believer that AI can be of massive assistance in this area, especially where the science is fairly well understood/evidenced, such as improving forecasting ability (i.e. improving reasoning processes to improve forecasting and prediction).

My point of caution would be that exploration here, if not done with sufficient scientific rigour, can result in superficially useful tools that add to AI misuse risks and/or lead to worse decision making. For more info, see this research paper: https://arxiv.org/abs/2402.01743

The bottleneck on funding is the biggest issue in unlocking the potential in this space imo, and in improving decision making more broadly.

I find the gap between the importance placed on improving reasoning and decision making by cause-prioritisation researchers such as 80k and the EA community as a whole, and funders' appetite to invest in it, quite huge. A colleague and I have struggled to secure even relatively small amounts of funding, despite having existing users/clients from which funding would allow us to scale.

That’s not a complaint - funders are free to determine what they want to fund. But it seems a consistent challenge people who want to improve IIDM face.

It strengthens my view that such endeavours, though incredibly important, should focus on earning a profit, as the ability to scale as a non-profit will be limited.
