Director @ Impactful Government Careers
78 karma · Joined Jan 2018 · Working (6-15 years) · London, UK



I am currently the Director of Impactful Government Careers - an organisation focused on helping individuals find, secure, and excel in high-impact civil service careers. My main interest is improving institutional decision making, as I believe even small changes could have substantial benefits for humanity.

I've spent the last 5 years working in the heart of the UK Government, with 4 of those at HM Treasury. My roles have included:

  • Head of Development Policy, HM Treasury
  • Head of Strategy, Centre for Data Ethics and Innovation
  • Senior Policy Advisor, Strategy and Spending for Official Development Assistance, HM Treasury

These roles have involved: advising UK Ministers on policy, spending, and strategy issues relating to international development; assessing the value for money of proposed high-value development projects; and developing the 2021 CDEI Strategy and leading the associated organisational change.

I have recently completed an MSc in Cognitive and Decision Sciences at UCL, where I focused my research on probabilistic reasoning and improving individual and group decision-making processes. My final research project was an experimental study into whether a short course (2 hours) on Bayesian reasoning could improve individuals' single-shot accuracy when forecasting geopolitical events. On the side, a colleague and I run a small project helping to improve predictive reasoning: https://www.daymark-di.com/

How others can help me

I am looking for individuals and groups that are interested in improving institutional decision making, whether that's within the typical high-power institutions such as governments/civil services, multilateral bodies, large multinational corporations, or smaller EA organisations that are delivering high-impact work.

How I can help others

I have a broad range of experience, but can probably be of best help on the topics of:

  • Improving institutional decision making, particularly through embedding decision science methods to deliver more accurate, efficient, and inclusive reasoning processes.
  • Reasoning under uncertainty, particularly how to improve predictive reasoning (both causal inference, i.e. 'will x action lead to y outcome', and forecasting, i.e. 'will x event happen').
  • Working in the UK Civil Service, particularly central Government and opportunities for maximising impact in such roles.
  • Getting things done in Government - how to best utilise your role, skills, and those of your teams/stakeholders, to support the decision-making process and deliver high-impact results.
  • Changing career (as someone who has made two large career changes).

On the side, a colleague and I run a small project helping to improve predictive reasoning, which can be found here: https://www.daymark-di.com/. If you are interested in finding out more, feel free to drop me a message.




I agree - though not so much on the "everything gets funded anyway" point.

I think there is also a wider meta question: what is the best use of EA's marginal time/energy/money? My (highly unjustified) judgement would be that people donating to such causes aren't motivated by effectiveness, or at least are motivated much more by emotion. So changing their donations with an argument around effectiveness may be quite hard.

I'm also not sure about the scale of difference between the worst and best charities for such causes (i.e. is the best cancer charity 100x better than the worst?). It'd be great to know, but assuming not, this would also reduce the benefit of any success.

A more effective solution to achieve the same goal by proxy would seem to be influencing the existing major funds or initiatives to focus more on the marginal impact of every £ they receive.

Thanks for posting this!

I want to highlight up front that I am a big supporter of any work that aims to improve institutional decision making. I believe it's a highly impactful area with unparalleled potential, given the decision power (in terms of both spending and benefit potential) of large institutions is immense. I personally feel there's a strong moral and EA argument for supporting solutions that could practically deliver benefits (even small returns, given the scale).

Some cleaner questions upfront, which get to the heart of my uncertainties:

  1. How much will the combined elements improve the quality of decisions that are made? Tied to this: which elements could be cut for time, if needed, without undermining the benefits you'd expect?
  2. Are there examples of previous decisions that have been run through this process, to show what different outcomes would have been generated?
  3. Given the time investment needed to implement this process, why is it advantageous over existing solutions that have been shown to provide substantial improvements in decision making quality (under experimentation) but often face complaints over needing significant time and expertise investment (e.g. training on and aggregating Bayesian models)? 

Further reflections if interested/useful
Having read your paper, I have some concerns over how the solution can be implemented at a beneficial scale. I raise this particularly because a number of the problems you've mentioned in the White Paper (e.g. unstructured/limited consultations with experts, or limited analysis of the problem space) are driven more by time constraints than by the absence of a clear framework for how to do it. This is an important consideration because planning for catastrophic risks is only half of the problem - we can't consistently (or at all) predict black swan events, and thus decision making at speed in crises is just as (if not more) important, as Covid showed us.

Given decision science research, I query the heavy reliance on expert judgment as a key node for improving predictive accuracy, as there's a healthy body of evidence suggesting that quality of reasoning, rather than domain expertise, is a better predictor of such accuracy. Your White Paper actually seems to account for this by proxy when it highlights specific reasoning methods to drive improved accuracy (e.g. the IDEA framework).

In addition, I'm less sure how beneficial the democratic/deliberation process with citizens is for the risks you are targeting. The examples you note (such as abortion and LGBTQ+ issues) are primarily social issues that lend themselves well to citizens' assemblies, given they are moral in nature. On the other hand, planning policy is quite heavily democratised in the UK and has arguably led to very bad outcomes, given the wider economic or societal benefits of construction are less tangible than personal concerns about changes to the local area. These externalities aren't always accurately priced into people's incentives, and thus their judgements aren't necessarily what's best for society. Do you see a similar issue for catastrophic risks, and if so, how will you mitigate it?

With a number of charity evaluators' recommendations coming out over the last few days/weeks, has there been any further development on AI safety/GCR evaluator(s)? This need was raised in the post below. (I don't know whether best EA Forum practice is to resurrect an old thread, so apologies if it's better that I just comment there.)


A unique and interesting concept.

I can see the logic, but I'd suspect the payments for meetings would need to be quite low in most instances. I wonder whether you create weird incentives where time-strapped, in-demand people slap high fees on meetings, and people with limited funds face yet another barrier to meeting them. It could also make some very morally conscious people maximise the charge they put on the meeting slot, given the time:cost trade-off is now not just their own time but also the opportunity cost of what someone else was willing to pay.

How do you see mitigating the risk that it’ll no longer just be your idea that gets you through the door, but the depth of your pockets?

Separately, am I correct that there are no funds to create the MVP? It feels like something a reputable programmer could build for a few thousand £/$. Would that be useful and possible, to get the concept off the ground and able to be tested?

Very fair identification of some sloppy wording by me there with "increasing". Apologies - my main focus was on the relatively high risk. Though as you've noted, the WSJ survey had a median probability of 50%, and Forbes (link below) notes the NY Fed recession probability indicator - an unusually accurate one - is at 56% (albeit down from its previous 66%). These are assessments that should be making central banks and economic ministries a little nervous.

Even with some forecasters reducing their probabilities, the relative risk and high level of uncertainties underpinning them would suggest to me it'd be a good time to review plans to weather any downturn.


Do EA funding organisations have contingency plans (I use the term loosely) to manage downside risks to cash flow in the event of a recession? With the possibility of a recession in the coming 6-18 months seemingly increasing, it would be prudent to model the scenarios and think through how to smooth the funding profile if needed. Given cash is already constrained following the collapse of "that which shall not be named", it feels like a recession (and thus its implications for donations, the value of investments, etc.) could compound existing funding issues. I don't expect a financial plan from any of the big funders, but I thought it worth raising the question in the low-probability scenario that it isn't already part of their modelling.

Interesting post, thank you!

Your analysis of Congress/national-level legislation is particularly telling, as it seems to show that if a policy survives the next political cycle/election, the probability of the legislation remaining in place becomes almost static at 40-60% (apart from a drop at the 20-35 year mark).

This fits anecdotally with my experience and aligns with the reality that actually changing legislation can be incredibly tricky, especially for policies that are controversial within a political party. See the UK's 0.7% international aid legislation as an example.

Does your research look at any potential practical predictors (beyond neglectedness) of a policy sticking for longer (such as complexity of language, cross-party support, or integration into a larger bill)? I'm out so haven't read the full paper, but do just point me to that if it's in there and I'll look later!

I was intrigued by this when you told me about it at EAG, Nick, so it's great to see you've written it up in a post here, as it absolutely merits further investigation.

As you say, finding out why this may be true is highly important. Given the potential scale of impact here and the complexity of the interactions that may be creating the effect, this feels like a great candidate for some sort of causal/Bayesian model to be created as part of a research project. If done correctly, this could then be used to inform several RCTs that try to find the true effect of potential predictors (for which we currently seem to have only very data-poor priors).

Nice post! 

I do agree there is a potential gap for more impact evaluation in the EA space. It is commonplace for many non-EA NGOs/organisations to be required to set aside a certain percentage of their programme budget for monitoring & evaluation purposes... so it feels like something similar could be easily achieved for EA organisations.

A potential option - though it would need far more exploration - is a central EA organisation funded by 5% of all OP/GW/EA Funds grants. So if Open Phil gives a $1m grant, $50k is allocated to the central EA impact evaluation organisation, which then adds the recipient org to its list of organisations to work with and independently evaluate at some agreed point (depending on grant objectives etc.).

One thing I would stress links particularly to your point about the difficulty of doing M&E on several of the largest EA cause areas (especially in the GCR space) that have very long (or potentially non-existent) feedback loops and unclear metrics to track. Rather than just accepting it's too difficult to do impact evaluation, the focus should be on the process of decision making and reasoning in those organisations, which can act as a "best alternative" proxy. This can then be evaluated through independent assessment of items such as theories of change and the use of decision methods like Bayesian belief networks mapping the route to impact/change.

Yes we are, though I'm always ambitious about how we can expand! Unfortunately, the US picture was solely a choice to avoid any risk of copyright, and it looked slightly more impressive than the UK-based images...

However, I'll edit if it creates too much confusion.
