Jamie Bernardi

Co-Founder, AI Safety @ BlueDot Impact
844 karma · Joined Nov 2018 · Working (0-5 years) · London, UK
jamiebernardi.com

Bio

Co-founding BlueDot Impact, focusing on AI safety talent pipeline strategy.

My background includes a brief research stint on pessimistic agents (reinforcement learning), ML engineering and product ownership, and physics.

Comments (24)

I revisit this post from time to time, and had a new thought!

Did you consider talent needs in the civil service and US Congress at the time? If so, would you consider these differently now?

This might just be the same as "doing policy implementation", and would therefore be quite similar to Angelina's comment. My question is inspired by the rapid growth in interest in AI regulation in the UK & US governments since this post, which led me to consider potential talent needs on those teams.

Yes - the best thing to do is to sign up and work through the curriculum in your own time!

https://course.aisafetyfundamentals.com/governance

Thanks for the post!

There was consensus that it would be good if CEA replaced one of its (currently) three annual conferences with a conference that's explicitly framed as being x-risk or AI-risk focused.

In response to a corresponding prompt (“ … at least one of the EAGs should get replaced by an x-risk or AI-risk focused conference …”)

I'm curious whether you felt the thrust was that the group thought CEA in particular should replace its third EAG with an AI safety conference, or simply that there should be an AI safety conference?

In general, when we talk about 'cause-area-specific field building', the purpose that makes most sense to me is to build a community around those cause areas, which people who don't buy the whole EA philosophy can join if they spot a legible cause they think is worth working on.

I'm a little hesitant to default to repurposing existing EA institutions, communities and events to house the proposed cause-area-specific field building. It seems to me that the main benefit of cause-area-specific field building is the potential to build something new, fresh and separate from the other cultural norms and beliefs that the EA community brings with it.

Perhaps the crux for me is "is this a conference for EAs interested in AI safety, or is it a conference for anyone interested in AI safety?" If the latter, this points away from an EA-affiliated conference (though I appreciate there are pragmatic questions around "who else would do it"). A fresh feel and new audience might still be achievable if CEA runs the conference ops, but I imagine it would be important to bear this in mind in CEA's branding, outreach and execution choices for such a conference.

We'll aim to release a short post about this by the end of the week!

I also sometimes use naturalreaders. Unfortunately I find it a bit... unnatural at times.

I've been really enjoying Type III Audio's reader on this forum, though!

I totally agree there's a gap here. At BlueDot Impact (/AGI Safety Fundamentals), we're currently working on understanding the pipeline for ourselves.

We'll be launching another governance course in the next week, and in the longer term we will publish more info on governance careers on our website, as and when we establish the information for ourselves.

In the meantime, there's great advice on this account, mostly targeted at people in the US, but there might be some transferable lessons:

https://forum.effectivealtruism.org/users/us-policy-careers

Thanks for highlighting that there were two other announcements that I didn't focus on in this post.

Whilst the funding announcement may be positive, I didn't expect that it would have strong implications for alignment research - so I chose to ignore it in this post. I didn't spend more than a minute checking my assumption there, though.

Re the announcement of further OMB policies: I totally agree that it sounds like it could be important for alignment / risk reduction. I omitted that announcement mostly because I didn't have much context on what those policies would entail, given the announcement was quite light on details at this point. Thanks for shedding some light on what it could mean!

FWIW, I think this post makes progress and could work in the contexts of some groups. As a concrete example, it would probably work for me as an organiser of one-off courses, and probably for organisers of one-off retreats or internships.

I appreciate the thrust of comments pointing out imperfections in e.g. local group settings, but I just want to be careful that we don't throw out the proposal just because it doesn't work for everyone in all contexts. I think it's better to start with an imperfect starting point and iterate on it where it doesn't work in specific contexts, rather than to try to come up with the perfect policy in theory and get paralysed when we can't achieve that.

Thanks for highlighting this!
