
Christopher Clay

Non-Trivial Fellow @ Non Trivial
319 karma · Pursuing an undergraduate degree · United Kingdom

Bio


Maths student at Bristol. Interested in AI Safety pipelines.

Things I've done:

  • Non Trivial Fellowship. I produced an explainer of the risks posed by improved precision in nuclear warfare.
  • AI Safety Fundamentals. I produced this explainer of superposition: https://chrisclay.substack.com/p/what-is-superposition-in-neural-networks 

How others can help me

I'm looking for opportunities to gain career capital this summer, particularly in EA-related orgs. I'm open to many things, so if you think I might be a good fit, feel free to reach out!

How I can help others

If you'd like advice on Non-Trivial or are interested in talking about cause prioritisation, send me a message!

Comments
13

Unfortunately, as I produced those stats quite quickly over the summer, I didn't have a formal definition and was eyeballing it a bit! It's something I might look back into more rigorously in the future.

What I would point out is that MATS had a very similar proportion remaining in AI Safety according to their analysis - https://www.lesswrong.com/users/ryankidd44?from=search_autocomplete - although I agree it would be surprising if the retention rate of all these fellowships were as high as MATS'.

Absolutely! My estimate in the summer was 4.5% (around 30 fellows), but this excluded people at frontier labs who were explicitly on the safety teams. If specifics are important, I'd be more than happy to revisit!

Really? I suspect only a small proportion of EAs are pro-Reform UK. 

[This comment is no longer endorsed by its author]

Yes, exactly - that's what I've heard: orgs are reluctant to accept inexperienced ops people. I'd love to see a way round it!

This discussion comes at a really good time for me! I'm actually working on a project in this area.


For the past month(ish) I've been researching and writing a post (or series of posts) about the AI Safety pipeline - i.e. how we most effectively direct talent into and between organisations. My main goal is to get more people talking about this in public - a goal that many people I've talked to seem to share!

In particular:
1. How can we improve pipelines and encourage demand for non-technical support roles (e.g. People Management and Operations) in AI Safety?
2. How can we improve resources for the 'other 95%' who struggle to get roles within AI orgs but still want to make an impact?


I'm still deciding which of these areas to prioritise for the final post. I've had more conversations about #1, but I'm personally more excited by the potential impact of #2. I'd be very grateful to hear which of these you think is more pressing.

I'm writing to see if anyone else is thinking along these lines. If you'd like to have a conversation about the pipeline, or are even working on the same thing and would like to collaborate, please reach out to me! My bar for a chat is low, and I've spoken to people at all levels of the ecosystem.

I'm looking to publish towards mid-August.


Quick question: are you saying that this is more cost-effective than going vegan? Or just that, if you are going to eat meat, it's better to switch your consumption to beef?

I see the argument from the US government's statistical value of a life used a lot, and I'm not sure I agree. I don't think it echoes public sentiment so much as a government's desire to absolve itself of blame. Note how much more is spent per life saved on, say, air transport than on disease prevention.

Interesting argument - I don't know much about this area, but my view is that there's not much value in thinking in terms of conditional value. If AI Safety is doomed to fail, there's little point focusing on good outcomes that won't happen when there are great global health interventions available today. Arguably, these global health interventions could also help at least some parts of humanity have a positive future.
