I’m trying to collect a relatively comprehensive list of AI safety newsletters (and some other ways to keep up with AI developments).[1]
- If you know of some that I’ve missed, please comment![2]
- I’m also quite interested in hearing about your experiences with or reflections on the resources listed. Please feel free to comment about that, too.
The list includes newsletters and resources that I haven’t spent time reading. I’m not commenting much on my experience with the resources in the post (and the main ways I keep up with AI are via news sources, posts on different forums, an assortment of Slack channels, my RSS feed, and Twitter), but I’ve particularly appreciated:
- The few episodes of the AI X-risk Research Podcast with Daniel Filan (AXRP) that I’ve listened to.
- The AI Safety and ML Safety newsletters, and Import AI.
(Note that I’ve just not engaged much with many of the other resources!)
Before I continue with the list, I just want to express a quick note of thanks to everyone who puts together resources like the ones I'm collecting here. I'm impressed with a lot of this work, and grateful that it's being done.
Broad collections and reading lists
- Getting involved
- AI Safety Training - A database of training programs, conferences, and other events for AI existential safety, collected by AI Safety Support
- Opportunities in AGI safety - Opportunities board (and newsletter) for advancing your career in AGI safety, collected by BlueDot Impact (see also the EA Opportunity Board and the 80,000 Hours job board).
- Recurring courses & programs
- AGI Safety Fundamentals (AGISF) - courses by BlueDot Impact on AI alignment (101 and 201) and AI governance
- SERI MATS - Stanford Existential Risks Initiative ML Alignment Theory Scholars
- Intro to ML Safety by the Center for AI Safety (CAIS)
- I’m not sure how recurring or standardized these are:
- MLAB: Upskill in machine learning (advanced)
- ML Safety Scholars: Upskill in machine learning (beginners) (not running this year)
- Philosophy Fellowship: For grad students and PhDs in philosophy
- PIBBSS: For social scientists and natural scientists
- Lists/collections (see also reading lists from the above)
- Lots of Links by AI Safety Support
- A collection of AI Governance-related Podcasts, Newsletters, Blogs, and more (Alex Lintz, 2 Oct 2021)
- Resources that (I think) new alignment researchers should know about (LessWrong post by Akash, 29 Oct 2022)
- Resources I send to AI researchers about AI safety (LessWrong post by Vael Gates, 14 Jun 2022)
- List of AGI safety talks gathered by BlueDot Impact
- Forums
- AI Alignment Forum - quite technical, restricted posting
- LessWrong - lots of AI content, but also focuses on other topics
- Effective Altruism Forum - this platform
- A few highlighted blogs
- Cold Takes by Holden Karnofsky
- Planned Obsolescence by Ajeya Cotra and Kelsey Piper
- AI Impacts
Podcasts
- AI X-risk Research Podcast with Daniel Filan (AXRP)
- The AI Safety Podcast
- The Nonlinear Library - EA Forum, LessWrong, and Alignment Forum posts in the form of machine-read podcasts
- They have additional podcast feeds for top posts of all time (or something like that) from the Alignment Forum, from the EA Forum (though it's just a single Nuno post right now...), and from LessWrong, as well as a feed called “the alignment section,” which appears to be curated by them.
- Future of Life Institute Podcast
(The 80,000 Hours podcast also often has episodes related to AI safety.)
Newsletters
Many of the descriptions are taken near-verbatim from the sites that host the newsletters.
Safety-oriented — general audience
Note that the EA Newsletter, which I currently run, also often covers relevant updates in AI safety.
AI Safety Newsletter (Center for AI Safety — new)
- Stay up-to-date with the latest advancements in the world of AI and AI safety with our newsletter, crafted by the experts at the Center for AI Safety. No technical background required.
AI Safety Support (AI Safety Support)
- This monthly newsletter summarizes current AI Safety events, opportunities, and resources.
Opportunities in AGI safety (BlueDot Impact)
- A newsletter that periodically shares opportunities for aspiring AI safety researchers and practitioners.
Safety-oriented — more in-the-weeds
ML Safety Newsletter (Center for AI Safety)
- A safety newsletter that is designed to cover empirical safety research and be palatable to the broader machine learning research community. You can subscribe here or follow the newsletter on Twitter.
Alignment Newsletter (Rohin Shah — on hiatus(?))
- Covers recent work on AI alignment, with original analysis and criticism.
- Written by a team of researchers and programmers.
GovAI newsletter (Centre for the Governance of AI)
- Includes research, annual reports, and rare updates about programmes and opportunities. They also have a blog.
ChinAI (Jeffrey Ding)
- This weekly newsletter, sent out by Jeff Ding, a researcher at the Future of Humanity Institute, covers the Chinese AI landscape and includes translations from Chinese government agencies, newspapers, corporations, and other sources.
The EU AI Act Newsletter (Future of Life Institute (FLI))
- A bi-weekly newsletter covering the latest developments in and analyses of the proposed EU AI law.
Other AI newsletters (not necessarily safety-oriented)
EuropeanAI newsletter (Charlotte Stix)
- This bi-monthly newsletter covers the state of European AI and the most recent developments in AI governance within the EU Member States.
Import AI (Jack Clark)
- This is a weekly newsletter about artificial intelligence, covering everything from technical advances to policy debates, as well as a weekly short story.
Policy.ai (Center for Security and Emerging Technology (CSET))
- A biweekly newsletter on artificial intelligence, emerging technology, and security policy.
TLDR AI
- Daily email about new AI tech.
Related
Note that I haven’t really checked these, but they were recommended.
RAND newsletters (and research you can get on RSS feeds)
- E.g. Policy Currents
GCR Policy Newsletter
- A twice-monthly newsletter that highlights the latest research and news on global catastrophic risk.
This week in security (@zackwhittaker)
- A weekly tl;dr cybersecurity newsletter of all the major stuff you missed, but really need to know. It includes news, the happy corner, a featured cyber cat (or friend), and more. It's sent every Sunday, and it's completely free.
Crypto-Gram (Schneier on Security)
- Crypto-Gram is a free monthly e-mail digest of posts from Bruce Schneier’s Schneier on Security blog.
Oxford Internet Institute
- This newsletter, which is distributed eight times a year, provides information about the Oxford Internet Institute, a multidisciplinary research and teaching department of the University of Oxford dedicated to the social science of the Internet.
Closing notes
- Please suggest additions by commenting!
- Please post reflections and thoughts on the different resources (or your personal highlights).
- Thanks again to everyone.
- ^
The closest thing to this that I’m aware of is Lots of Links by AI Safety Support, which is great, but you can’t comment on it to add more and share reflections, which I think is a bummer. There’s probably more. (Relevant xkcd.)
- ^
Thanks to folks who directed me to some of the resources listed here!
Note also that in some cases, I'm quoting near-verbatim from assorted places that directed me to these or from the descriptions of the resources listed on their websites.
Zvi does a fairly in-depth AI news round-up every week or two, plus some individual posts on AI topics. It's not exclusively about safety, but he often gives his own safety-informed perspective on capabilities news, etc. https://thezvi.substack.com/
The Navigating AI Risks newsletter could be relevant as well: "Welcome to Navigating AI Risks, where we explore how to govern the risks posed by transformative artificial intelligence." https://navigatingairisks.substack.com/