What should I read next? Is there any AGI-safety-related material that you can recommend? I've read the following books related (broadly) to AI:
- Human Compatible: Artificial Intelligence and the Problem of Control, by Stuart Russell
- AI Superpowers: China, Silicon Valley, and the New World Order, by Kai-Fu Lee
- Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, by Cathy O'Neil
- Superintelligence: Paths, Dangers, Strategies, by Nick Bostrom
- The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World, by Pedro Domingos
- The Alignment Problem: Machine Learning and Human Values, by Brian Christian
- Algorithms to Live By: The Computer Science of Human Decisions, by Brian Christian and Tom Griffiths
I find that much (maybe 50%) of what I've read in the above books simply reviews/rehashes the same handful of concepts (a brief history of AI, what a neural network is, how big-data approaches require a lot of data, what "garbage in, garbage out" means, why AlexNet was impressive, how impactful AI is and can be, etc.). Several years ago I did some reading/learning about machine learning[1], and I find that I generally don't learn much from reading about AI.[2]
[1] I spent a few months learning Python, read various blog posts, did a tiny tutorial to build a very simple toy project with scikit-learn, and generally developed a decent layperson's understanding of machine learning. I have a vague familiarity with multiple regression, k-nearest neighbors, and dimensionality reduction, but I don't have enough of an understanding to describe any of them for more than a sentence or two, and I definitely don't have enough to describe them in a detailed, technical sense.
[2] The analogy I'm thinking of is that I have learned the equivalent of a freshman 100-level course on AI for non-technical people, and all the books I'm reading are also at the 100-level. Are there any non-technical books at the 200-level, or would I have to do a few years of programming in order to understand the 200-level content?
If you enjoyed some of the issues raised in Weapons of Math Destruction (which I really enjoyed, as it's an AI book written by an actual developer that focuses on the social issues), you may enjoy going down the regulation/policy rabbit hole. None of these are EA books, but I think that's important, and in some ways it makes them better because they bring a wider viewpoint.
- Algorithmic Regulation by Karen Yeung and Martin Lodge
This is a great, user-friendly introduction to algorithmic regulation, especially because it also explores the how and, more importantly, the why of regulation efforts. It is made up of essays from a variety of experts in different areas.
- Robot Rules by Jacob Turner
This is a really good, highly detailed yet accessible introduction to many of the legal and regulatory challenges of AI, written by a very knowledgeable UK-based lawyer. It has quite a broad scope but provides a good foundation of knowledge in this area.
- Advanced Introduction to Law and Artificial Intelligence
This one is a bit more 'lawyery' but is still easy for a general reader to understand. It proceeds theme by theme, which is useful (e.g. liability, legal personhood, weaponry), and covers multiple Western and some Eastern jurisdictions, with examples of how various countries have approached these issues.
I also chose these as good examples because they don't fall into the 'America is the World' trap that a lot of books do: they focus on global policy and how it interlinks, rather than just talking about US policy and assuming it's global.
Lovely! Thank you so much for the recommendations. All three of these are books I've never heard of before. Much appreciated.