I imported them into RemNote where you can read all the cards. You can also quiz yourself on the questions using the queue functionality at the top. Or here's a Google Doc.
If anyone is interested in adding more facts to the deck, there are plenty in these notes from The Precipice. (It's fairly easy to export from RemNote to Anki and vice versa, though the formatting sometimes breaks a little.)
Awesome, glad to hear that! Thanks, JP!
Is there a way to show my appreciation for an edit?
Often I see excellent edits to the Wiki show up in my Forum homepage, and I would like to be able to show my appreciation to someone. Ideally with low effort and without otherwise adding any value.
Is there a like/upvote button for Wiki edits I'm missing?
For example, check out how much information this article on iterated embryo selection is collating and condensing. It was written a few months ago, and is now Google's featured snippet for iterated embryo selection (a sign that Google 'thinks' it's the best succinct summary of the term).
To be honest, this is so frequently Pablo or the EA Wiki Assistant (I think also Pablo?) that I should probably just send a DM.
This is such a useful public good. Thank you!!
Thanks for writing this!
Just wanted to let everyone know that at 80,000 Hours we’ve started headhunting for EA orgs and I’m working full-time leading that project. We’re advised by a headhunter from another industry, and as suggested, are attempting to implement executive search best practices.
I've reached out to the emails listed above - looking forward to speaking.
Great article, thanks Carrick!
If you're an EA who wants to work on AI policy/strategy (including in support roles), you should absolutely get in touch with 80,000 Hours about coaching. We've often been able to help people interested in the area clarify how they can contribute, make introductions, etc.
Apply for coaching here.
We agree these are technical problems, but for most people, all else being equal, it seems more useful to learn ML rather than cog sci/psych.
Thanks for writing this. Since you mention some 80,000 Hours content, I thought I’d respond briefly with our perspective.
We had intended the career review and AI safety syllabus to be about what you’d need to do from a technical AI research perspective. I’ve added a note to clarify this.
We agree that there are a lot of approaches you could take to tackling AI risk, but we currently expect that technical AI research is where a large amount of the effort will be required. However, we've also advised many people on non-technical routes to impacting AI safety, so we don't think it's the only valid path by any means.
We’re planning on releasing other guides and paths for non-technical approaches, such as the AI safety policy career guide, which also recommends studying political science and public policy, law, and ethics, among others.
Thanks for writing this up! It's very useful to be able to compare this to census data. Did you use the same or a similar message for everyone? If so, I'd be interested to see what it was. It would also be useful to A/B test this sort of message to refine it. There is also the option to add people manually, bypassing the need for admin approval; did you contact those people too?
Hi Eric, thanks for writing these and pointing us to them. I think this is a great idea. I just posted these on our business society and law society Facebook page to test the waters and see what response we'd get from a similar input. Out of interest, what has the response been that you've gotten so far?