I’ve seen this posted to some EA Facebook groups, but not here on the forum. Yesterday Dominic Cummings, Chief Special Adviser to the UK Prime Minister, published a blog post about restructuring the British civil service and invited applications for various potentially impactful policy roles.

At the top of the blog post he included a quote by ‘Eliezer Yudkowsky, AI Expert, Less Wrong etc’. Cummings has posted on Less Wrong in the past, is plausibly aware of EA, and is likely to be receptive to at least some EA ideas, such as AI safety and prediction markets.

If you’re based in the UK, are interested in policy careers, are gifted in data science, maths, economics, project management, etc., or are a ‘super talented weirdo’ (his words, not mine), and wouldn’t mind spending a couple of years working alongside Dominic Cummings, this could be a great opportunity to influence some big policy changes in the UK.

Comments

Regarding AI alignment and existential risk in general, Cummings has already written a blog post that mentions these topics: https://dominiccummings.com/2019/03/01/on-the-referendum-31-project-maven-procurement-lollapalooza-results-nuclear-agi-safety/

So he is clearly aware of and receptive to these ideas; it would be great to have an EA-minded person on his new team to emphasise them.


Exactly, he has written posts about those topics, and about 'effective action', predictions, and so on. There is also an article from 2016 which claims 'he is an advocate of effective altruism', although it then says 'his argument is mothball the department (DFID)', which I'm fairly sure most EAs would disagree with.

But as he's also written about a huge number of other things, as day-to-day distractions are apparently the rule rather than the exception in policy roles, and as value drift is always possible, it would be good to have someone on his team, or with good communication channels to it, who can re-emphasise these issues (without publicly associating EA with Cummings or any other political figure or party).

Although the blog post is seeking applications for various roles, the email address to send applications to is ‘ideas for number 10 at gmail dot com’.

If someone took that address literally and sent an email outlining some relatively non-controversial EA-aligned ideas (e.g. collaboration with other governments on near-term AI-induced cyber security threats, marginal reduction of risks from AI arms races, pandemics, and nuclear weapons, enhanced post-Brexit animal welfare laws, and maintenance of the UK’s foreign aid commitment and/or increased effectiveness of foreign aid spending), would the expected value of that email be positive (a higher chance of the above policies being adopted), negative (a lower chance of their being adopted), or basically neutral (highly likely to be ignored or unread, or irrelevant even if the policies are adopted, given uncertainty over long-term impact)?

I’m inclined to have a go unless the consensus is that it would be negative in expectation.

I don't think cold emailing is usually a good idea. I've sent you a private message with some more thoughts.

Thanks Khorton for the feedback and additional thoughts.

I think the impact of cold emails is normally neutral; it would have to be a really poorly written or antagonising email to make the reader actively go and do the opposite of what it suggests! I guess neutral also qualifies as 'not good'.

But it seems like people with better avenues of contact to DC have been considering contacting him anyway, through cold means or otherwise, so that’s great.
