
I’m trying to collect a relatively comprehensive list of AI safety newsletters (and some other ways to keep up with AI developments).[1] 

  • If you know of some that I’ve missed, please comment![2]
  • I’m also quite interested in hearing about your experiences with or reflections on the resources listed. Please feel free to comment about that, too.

The list includes newsletters and resources that I haven’t spent time reading. I’m not commenting much on my own experience with them (the main ways I keep up with AI are news sources, posts on different forums, an assortment of Slack channels, my RSS feed, and Twitter), but I’ve particularly appreciated: 

(Note that I’ve just not engaged much with many of the other resources!)

Before I continue with the list, I just want to express a quick note of thanks to everyone who puts together resources like the ones I'm collecting here. I'm impressed with a lot of this work, and grateful that it's being done.

Created with DALL-E

Podcast & video channels

(The 80,000 Hours podcast also often has episodes related to AI safety.)

Newsletters

Many of the descriptions are taken near-verbatim from the sites that host the newsletters. 

Safety-oriented newsletters on AI and AI governance — general audience

Note that the EA Newsletter, which I currently run, also often covers relevant updates in AI safety. 

AI Safety Newsletter (Center for AI Safety)

  • Stay up-to-date with the latest advancements in the world of AI and AI safety with our newsletter, crafted by the experts at the Center for AI Safety. No technical background required. 

Opportunities in AGI safety (BlueDot Impact)

  • A newsletter that periodically shares opportunities for aspiring AI safety researchers and practitioners.

3-Shot Learning (Center for AI Policy)

  • Each week, this newsletter provides summaries of three important developments that AI policy professionals should know about, especially folks working on US AI policy. Visit the archive to read a sample issue.

Safety-oriented — more in-the-weeds

ML Safety Newsletter (Center for AI Safety)

  • A safety newsletter that is designed to cover empirical safety research and be palatable to the broader machine learning research community. You can subscribe here or follow the newsletter on Twitter.

Alignment Newsletter (Rohin Shah — on hiatus(?))

  • Covers recent work on AI alignment, with original analysis and criticism.
  • Written by a team of researchers and programmers.

GovAI newsletter (Centre for the Governance of AI)

  • Includes research, annual reports, and rare updates about programmes and opportunities. They also have a blog.  

ChinAI (Jeffrey Ding)

  • This weekly newsletter, sent out by Jeff Ding, a researcher at the Future of Humanity Institute, covers the Chinese AI landscape and includes translations from Chinese government agencies, newspapers, corporations, and other sources.

The EU AI Act Newsletter (Future of Life Institute (FLI))

  • A biweekly newsletter covering the latest developments in, and analyses of, the proposed EU AI law.

Other AI newsletters (not necessarily safety-oriented)

EuropeanAI newsletter (Charlotte Stix)

  • This bi-monthly newsletter covers the state of European AI and the most recent developments in AI governance within the EU Member States.

Import AI (Jack Clark)

  • This is a weekly newsletter about artificial intelligence, covering everything from technical advances to policy debates, as well as a weekly short story.

Policy.ai (Center for Security and Emerging Technology (CSET))

  • A biweekly newsletter on artificial intelligence, emerging technology and security policy.

TLDR AI

  • Daily email about new AI tech.

Note that I haven’t really checked these, but they were recommended.

RAND newsletters (and research you can get on RSS feeds)

GCR Policy Newsletter

  • A twice-monthly newsletter that highlights the latest research and news on global catastrophic risk.

This week in security (@zackwhittaker)

  • A weekly tl;dr cybersecurity newsletter of all the major stuff you missed, but really need to know. It includes news, the happy corner, a featured cyber cat (or friend), and more. It's sent every Sunday, and it's completely free.

Crypto-Gram (Schneier on Security)

  • Crypto-Gram is a free monthly e-mail digest of posts from Bruce Schneier’s Schneier on Security blog.

Oxford Internet Institute 

  • This newsletter, which is distributed eight times a year, provides information about the Oxford Internet Institute, a multidisciplinary research and teaching department of the University of Oxford dedicated to the social science of the Internet.

Other resources: collections, programs, reading lists, etc.

Closing notes

  • Please suggest additions by commenting! 
  • Please post reflections and thoughts on the different resources (or your personal highlights). 
  • Newsletters that are no longer active can be found in this footnote.[3]
  • Thanks again to everyone. 
  1. ^

    The closest thing to this that I’m aware of is Lots of Links by AI Safety Support, which is great, but you can’t comment on it to add more and share reflections, which I think is a bummer. There’s probably more. (Relevant xkcd.)

  2. ^

    Thanks to folks who directed me to some of the resources listed here! 

    Note also that in some cases, I'm quoting near-verbatim from assorted places that directed me to these or from the descriptions of the resources listed on their websites.

  3. ^

     AI Safety Support (AI Safety Support)

Comments

Zvi does a pretty in-depth AI news round up every week or two now, plus some individual posts on AI topics. Not exclusively about safety, but often gives his own safety-informed perspective on capabilities news, etc. https://thezvi.substack.com/

On AI x-risk communication efforts: https://xriskobservatory.substack.com/

The Navigating AI Risks newsletter could be relevant as well: "Welcome to Navigating AI Risks, where we explore how to govern the risks posed by transformative artificial intelligence." https://navigatingairisks.substack.com/ 

Probably worth adding a section of similar collections / related lists. For instance, see Séb Krier's post and https://aisafety.video/.

Apart Research has a newsletter that might be on hiatus.