Former AI safety research engineer, now PhD student in philosophy of ML at Cambridge. I'm originally from New Zealand but have lived in the UK for 6 years, where I did my undergrad and master's degrees (in Computer Science, Philosophy, and Machine Learning). Blog:


Wiki Contributions


AGI Safety Fundamentals curriculum and application

Yeah, I also feel confused about why I didn't have this thought when talking to you about RAISE.

Most proximately, AGI safety fundamentals uses existing materials because its format is based on the other EA university programs; and also because I didn't have time to write (many) new materials for it.

I think the important underlying dynamic here is starting with a specific group of people with a problem, and then making the minimum viable product that solves their problem. In this case, I was explicitly thinking about what would have helped my past self the most.

Perhaps I personally didn't have this thought back in 2019 because I was still in "figure out what's up with AI safety" mode, and so wasn't in a headspace where it was natural to try to convey things to other people.

AGI Safety Fundamentals curriculum and application

This post (plus the linked curriculum) is the most up-to-date resource.

There's also this website, but it's basically just a (less-up-to-date) version of the curriculum.

Ngo's view on alignment difficulty

Seems like it was just repeated; fixed now.

AGI Safety Fundamentals curriculum and application

Update: see here:

We need alternatives to Intro EA Fellowships

I’ve advised one person to skip the fellowship and do the readings at an accelerated pace on their own and talk to other organizers about it.

This seems like good advice. In general I think fellowship curricula are pretty great resources regardless of whether you're actually doing the fellowship, so one low-effort change could just be to tell people "you can do this fellowship, or if you're really excited about spending much more time on this, you can just speedrun all the readings".

In fact, maybe the best option is for those people to do both. E.g. do all the readings up front, but still have ongoing fellowship sessions over the next 8 weeks to enable higher-fidelity communication, make sure they've interpreted the readings correctly, and answer relevant questions.

(epistemic status: not strong opinions, since I don't have much context on student EA groups right now)

AGI Safety Fundamentals curriculum and application

Not finalised, but here's a rough reading list which would replace weeks 5-7 for the governance track.

AGI Safety Fundamentals curriculum and application

Actually, Joe Carlsmith does it better in Is power-seeking AI an existential risk? So I've swapped that in instead.

AGI Safety Fundamentals curriculum and application

This is a great point, and I do think it's an important question for participants to consider; I should switch the last reading for something covering this. The bottleneck is just finding a satisfactory reading - I'm not totally happy with any of the posts covering this, but maybe AGI safety from first principles is the closest to what I want.

richard_ngo's Shortform

Disproportionately many of the most agentic and entrepreneurial young EAs I know are community-builders. I think this is because a) EA community-building currently seems neglected compared to other cause areas, but b) there's currently no standard community-building career pathway, so to work on it they had to invent their own jobs.

Hopefully the people I'm talking about will change the latter, which will lead to the resolution of the former.
