ADDED 8 Jan: get your copy for free from Impact Books here.
I think as many people as possible should read Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World by @Darren McKee, especially people who are new to AI and AI risk. To this end, I'm happy to send copies out for free. Feel free to give yours to a friend or family member, and it would be great if copies could be passed on after reading.
DM me your address and I'll order a copy on Amazon to be sent to you (unfortunately it has to be a paper copy unless you are in the UK; gifting Kindle editions internationally is difficult). I've now partnered with Impact Books - order here for free.
Also open to providing funding for local groups (or individuals) to distribute copies.
This is my review:
Best introduction to AI risk for 2023/4
I highly recommend this as a great, up-to-date introduction to AI risk for a lay audience. It is surprisingly easy and quick to read; one nice touch is the absence of footnote numbers (references are collected at the end), so you aren't continuously flipping back and forth as with other popular science books. The key point summaries at the end of each chapter are a nice addition.
I think I'll be buying it for a bunch of people.
I did feel that the ending was a bit disappointing, but I think that's really the nature of the situation we are in with AI extinction risk more than anything else! Darren McKee's "reasons to be hopeful" are: 1) uncertainty; and 2) despair is counterproductive to reducing risk(!)
Darren goes through a bunch of policy options, but stops short of saying we need to shut it all down. Short of fundamental breakthroughs in alignment (which we are fast running out of time for) that render the problem fully solvable, rather than merely asymptoting toward, but never quite reaching, existential safety, I feel the best bet is still to push for a global AGI moratorium. That said, some of the recommended policies, if fully implemented (e.g. liability/recalls, evals), could lead to a de facto moratorium, which is good.
See also the FLI podcast episode with Darren, which covers a decent amount of the book's material.