
This is a list of AI safety newsletters (and some other ways to keep up with AI developments).[1] 

  • If you know of any that I’ve missed, please comment![2]
  • And I'd love to hear about your experiences with or reflections on the resources listed.

Thanks to everyone who puts together resources like the ones I'm collecting here!

Created with DALL-E

Just to be clear: the list includes newsletters and resources that I haven’t engaged with much.

Podcast & video channels

Newsletters

General-audience, safety-oriented newsletters on AI and AI governance

Note that the EA Newsletter, which I currently run, also often covers relevant updates in AI safety. 

AI Safety Newsletter (Center for AI Safety)

  • Stay up-to-date with the latest advancements in the world of AI and AI safety with our newsletter, crafted by the experts at the Center for AI Safety. No technical background required. 

Opportunities in AGI safety (BlueDot Impact)

  • Newsletters that periodically share opportunities for aspiring AI safety researchers and practitioners.

Transformer (Shakeel Hashim)

  • A weekly briefing of AI and AI policy updates and media/popular coverage.

3-Shot Learning (Center for AI Policy)

  • Each week, this newsletter provides summaries of three important developments that AI policy professionals should know about, especially folks working on US AI policy. Visit the archive to read a sample issue.

More in-the-weeds safety-oriented newsletters

ML Safety Newsletter (Center for AI Safety)

  • A safety newsletter that is designed to cover empirical safety research and be palatable to the broader machine learning research community. You can subscribe here or follow the newsletter on Twitter.

Alignment Newsletter (Rohin Shah — on hiatus(?))

  • Covers recent work on AI alignment, with original analysis and criticism.
  • Written by a team of researchers and programmers.

GovAI newsletter (Centre for the Governance of AI)

  • Includes research, annual reports, and rare updates about programmes and opportunities. They also have a blog.  

ChinAI (Jeffrey Ding)

  • This weekly newsletter, sent out by Jeff Ding, a researcher at the Future of Humanity Institute, covers the Chinese AI landscape and includes translations from Chinese government agencies, newspapers, corporations, and other sources.

The EU AI Act Newsletter (Future of Life Institute (FLI))

  • A bi-weekly newsletter covering developments in, and analyses of, the proposed EU AI law.

The Autonomous Weapons Newsletter (Future of Life Institute (FLI))

  • Monthly updates on the technology and policy of autonomous weapons.

Other AI newsletters (not necessarily safety-oriented)

EuropeanAI newsletter (Charlotte Stix)

  • This bi-monthly newsletter covers the state of European AI and the most recent developments in AI governance within the EU Member States.

Import AI (Jack Clark)

  • This is a weekly newsletter about artificial intelligence, covering everything from technical advances to policy debates, as well as a weekly short story.

Policy.ai (Center for Security and Emerging Technology (CSET))

  • A biweekly newsletter on artificial intelligence, emerging technology, and security policy.

TLDR AI

  • Daily email about new AI tech.

Note that I haven’t really checked these, but they were recommended.

RAND newsletters (and research you can get on RSS feeds)

GCR Policy Newsletter

  • A twice-monthly newsletter that highlights the latest research and news on global catastrophic risk.

This week in security (@zackwhittaker)

  • A weekly tl;dr cybersecurity newsletter of all the major stuff you missed, but really need to know. It includes news, the happy corner, a featured cyber cat (or friend), and more. It's sent every Sunday, and it's completely free.

Crypto-Gram (Schneier on Security)

  • Crypto-Gram is a free monthly e-mail digest of posts from Bruce Schneier’s Schneier on Security blog.

Oxford Internet Institute 

  • This newsletter, which is distributed eight times a year, provides information about the Oxford Internet Institute, a multidisciplinary research and teaching department of the University of Oxford dedicated to the social science of the Internet.

Other resources: collections, programs, reading lists, etc.

Closing notes

  • Please suggest additions by commenting! 
  • Please post reflections and thoughts on the different resources (or your personal highlights). 
  • Newsletters that are no longer active can be found in this footnote.[3]
  • Thanks again to everyone. 
  1. ^

    The closest thing to this that I’m aware of is Lots of Links by AI Safety Support, which is great, but you can’t comment on it to add more and share reflections, which I think is a bummer. There’s probably more. (Relevant xkcd.)

  2. ^

    Thanks to folks who directed me to some of the resources listed here! 

    Note also that in some cases, I'm quoting near-verbatim from assorted places that directed me to these or from the descriptions of the resources listed on their websites.

  3. ^

     AI Safety Support (AI Safety Support)


Comments

Zvi does a pretty in-depth AI news round up every week or two now, plus some individual posts on AI topics. Not exclusively about safety, but often gives his own safety-informed perspective on capabilities news, etc. https://thezvi.substack.com/

On AI x-risk communication efforts: https://xriskobservatory.substack.com/

The navigating AI risks newsletter could be relevant as well: "Welcome to Navigating AI Risks, where we explore how to govern the risks posed by transformative artificial intelligence. "  https://navigatingairisks.substack.com/ 

This qualitative survey of AI safety experts I just finished might be a useful resource for people just starting their career in AI safety! https://www.lesswrong.com/s/xCmj2w2ZrcwxdH9z3 

Probably worth adding a section of similar collections / related lists. For instance, see Séb Krier's post and https://aisafety.video/.

Apart Research has a newsletter that might be on hiatus.
