
I’m interested in learning more about a wide variety of topics relevant to "longtermism-motivated AI governance/strategy/policy research, practice, advocacy, and talent-building", or basically "anything relevant to understanding and mitigating AI x-risk other than technical AI safety stuff". 

I don't just mean things that are explicitly from a longtermist perspective or even explicitly about AI. Many topics and fields are relevant to, larger than, and older than AI governance, so I expect many of the best resources for my purposes will be things like great books (including biographies), textbooks, or lecture series on topics like information security, the tech industry, tech policy in various jurisdictions, tech diplomacy, tech development and forecasting, regulation, espionage, great power relations, the semiconductor industry, monitoring and enforcement of treaties, ...

As such, I’d be interested in: 

  • People’s thoughts on which books, textbooks, lecture series, or courses might be worth consuming for these purposes
    • (Bonus points if it's an audiobook and you include an Audible link, but I'd welcome other suggestions as well)
  • People's thoughts on which books/whatever someone like me might end up reading but should actually skip
  • Links to good summaries/reviews/notes about relevant books/whatever.

I imagine such a collection could be useful for other people too. I’ll also share the relevant books and links that I know about already. And I've started making a Collection of AI governance reading lists, syllabi, etc., but those lists don't include many books, lecture series, or similar.

The cluster of topics I’m pointing to is intentionally broad. If you’re not sure whether a book/whatever is relevant enough, please mention it anyway, and just say something about what the book/whatever seems relevant to. 

See also: 

3 Answers

Thanks! In addition to the books in your authoritarianism reading list, I'd suggest two from this (partly long-term-oriented) course on AI governance:

  • The Brussels Effect: How the European Union Rules the World, Anu Bradford (2020)
    • On how EU regulations sometimes become international standards (which makes them relevant to US/China-based AI developers)
  • Army of None: Autonomous Weapons and the Future of War, Paul Scharre (2018)
    • On what factors influence whether/how new weapons are deployed in war (which seems relevant to how/when/which military decisions will be delegated to AI systems; such delegations may pose significant long-term risks)
    • The course recommended the introduction, Part VI, and chapters 3, 16, and 18

Also:

  • These histories of institutional disasters and near-disasters
    • Edit: see footnote 1

In case anyone was wondering, Army of None seems to be available on US Audible and on Audiobooks.co.uk.

Thanks Mauricio!

(Btw, if anyone else is interested in "These histories of institutional disasters and near-disasters", you can find them in footnote 1 of the linked post.)

Here are some relevant books from my ranked list of all EA-relevant (audio)books I've read, along with a little bit of commentary on them.

  • The Precipice, by Ord, 2020
    • See here for a list of things I've written that summarise, comment on, or take inspiration from parts of The Precipice.
    • I recommend reading the ebook or physical book rather than the audiobook, because the footnotes contain a lot of good content that isn't included in the audiobook.
    • Superintelligence may have influenced me more, but that's probably just because I read it very soon after getting into EA, whereas I read The Precipice after already learning a lot. I’d now recommend The Precipice first.
  • Superintelligence, by Bostrom, 2014
  • The Alignment Problem, by Christian, 2020
    • This might be better than Superintelligence and Human Compatible as an introduction to the topic of AI risk. It also seemed to me to be a surprisingly good introduction to the history of AI, how AI works, etc.
    • But I'm not sure this'll be very useful for people who've already read/listened to a decent amount (e.g., the equivalent of 4 books) about those topics.
    • This is more relevant to technical AI safety than to AI governance (though obviously the former is relevant to the latter anyway).
  • Human Compatible, by Russell, 2019
  • The Strategy of Conflict, by Schelling, 1960
    • See here for my notes on this book, and here for some more thoughts on this and other nuclear-risk-related books.
    • This is available as an audiobook, but a few Audible reviewers suggest using the physical book due to the book's use of equations and graphs. So I downloaded this free PDF into my iPad's Kindle app.
  • Destined for War, by Allison, 2017
    • See here for some thoughts on this and other nuclear-risk-related books, and here for some thoughts on this and other China-related books.
  • The Better Angels of Our Nature, by Pinker, 2011
    • See here for some thoughts on this and other nuclear-risk-related books.
  • Rationality: From AI to Zombies, by Yudkowsky, 2006-2009
    • I.e., “the sequences”
  • Age of Ambition, by Osnos, 2014
    • See here for some thoughts on this and other China-related books.

I've also now listened to Victor's Understanding the US Government (2020) due to my interest in AI governance, and made some quick notes here.

I'm also going to listen to Tegmark's Life 3.0, but haven't done so yet.

The New Fire by Andrew Imbrie & Ben Buchanan

  • This will soon be available as an audiobook
    • I haven't read it yet but plan to as soon as it comes out as an audiobook
  • The authors have worked at CSET and have written other things I found useful
  • Someone whose AI-gov-related judgement & knowledge I respect told me this is probably an especially good book to read for people who want an intro to the intersection of AI and national security / geopolitics (although it's not written from an EA/longtermist/x-risk perspective)
  • The description from audiobooks.com is:
    • "Artificial intelligence is revolutionizing the modern world. It is ubiquitous: in our homes and offices, in the present and most certainly in the future. Today, we encounter AI as our distant ancestors once encountered fire. If we manage AI well, it will become a force for good, lighting the way to many transformative inventions. If we deploy it thoughtlessly, it will advance beyond our control. If we wield it for destruction, it will fan the flames of a new kind of war, one that holds democracy in the balance. As AI policy experts Ben Buchanan and Andrew Imbrie show in The New Fire, few choices are more urgent, or more fascinating, than how we harness this technology and for what purpose.

      The new fire has three sparks: data, algorithms, and computing power. These components fuel viral disinformation campaigns, new hacking tools, and military weapons that once seemed like science fiction. To autocrats, AI offers the prospect of centralized control at home and asymmetric advantages in combat. It is easy to assume that democracies, bound by ethical constraints and disjointed in their approach, will be unable to keep up. But such a dystopia is hardly preordained. Combining an incisive understanding of technology with shrewd geopolitical analysis, Buchanan and Imbrie show how AI can work for democracy. With the right approach, technology need not favor tyranny."
1 comment

Thanks a lot for compiling this, I'm thinking about switching my career into AI governance and the lists in your Google Doc seem super useful!
