I'm interested in what people think are the best overviews of AI risk for various types of people. Below I've listed as many good overviews as I could find (excluding some drafts), split into "good for a popular audience" and "good for AI researchers." I'd also like to hear whether people think some of these intros are better than others (i.e., how to prioritize between intros). I'd be interested to hear about podcasts and videos as well.

I am maintaining a list at this Google doc to incorporate people's suggestions.

Popular audience: 

AI researchers:

Alignment landscape:

Podcasts and videos:





I had some success with the AI sections in The Precipice and What We Owe the Future. Both are short (roughly 10-15 small book pages) and were clearly reviewed and polished by multiple people, not just one person setting aside a few hours to write something that looks like it works.

A book can't be emailed to people, but screens aren't necessarily the best place for most people to do careful critical thinking. Plus, loaning someone a book and personally recommending the best part is a nice gesture. (Do NOT buy them a copy to display wealth or dedication; if necessary, buy yourself a second copy and loan out the first. AI has strong absurdity-heuristic effects, so you never want to come off as obsessed.)

Yep, books are good too. Maybe MacAskill and Ord would agree to release PDFs of the AI sections to the public?

The top 10% of submissions to the AI safety arguments competition were a crowdsourced attempt to produce really good one-liners and short paragraphs that argue for and articulate specific AI concepts. I tested that document on someone, though, and by itself I don't think it worked very well. So it might be one of those things that looks good to someone already familiar with the concepts but doesn't work well in the field.

I'd be excited to see a person or team go through the spreadsheet and make an opinionated list of what they think are the top ~10 points. Not to share with people as an introduction, but to use in future intro articles and talks. Although maybe the list is short enough already.

List of Lethalities is for people already familiar with the space; I don't think it's a very good introduction. #2 on the list dives straight into a detailed description of how AI kills everyone with rocket nanobots. (That passage is there to prove a specific point about the disparity between human intelligence and optimal AGI intelligence and shouldn't be taken out of context, but it still is what it is.)

List of Lethalities fit neatly into the state of AI safety discourse at the time (June 2022), and using it to introduce AI concepts to newcomers is far outside what it was optimized for.

Eliezer Yudkowsky's 2008 paper "Artificial Intelligence as a Positive and Negative Factor in Global Risk" is an Oxford publication with more than 600 citations. Even though it's a bit long and several of its sections are severely out of date, it is still a really good introduction, and it's very impressive that someone was able to write something that predictive of the future in 2008.

Mostly agree on List of Lethalities, but I do think it's an excellent intro for particular types of people. I included it after hearing Shoshannah Tekofsky say it was her first serious encounter with AI risk arguments, on this podcast with Akash Wasil.

The Yudkowsky-Christiano debate might be a good resource for AI scientists, since it lets them dive straight into the topic, and because it's a real debate it's arguably balanced. The problem with AI researchers is intense and unfounded skepticism by default, so honest, balanced debate is the way to go, even if the Yudkowsky-Christiano debate isn't the right thing for that.

Interesting idea! I wonder if Scott Alexander's review is a better version. And there's also the Yudkowsky-Ngo discussion with Scott Alexander's summary.