
Podcasts are a great way to learn about EA. Here’s a list of the EA-related podcasts I’ve come across over the last few years of my podcast obsession.

I’ve split them up into two categories:

  1. Strongly EA-related podcasts: Podcasts run by EA organisations or otherwise explicitly EA-related.
  2. Podcasts featuring EA-related episodes: Podcasts which are usually not EA-related but have some episodes which are about an EA idea or interviewing an EA-aligned guest.

Please add to the comments any podcasts that I have missed. I am always excited to find out about more interesting podcasts!

Strongly EA-related podcasts

  • Doing Good Better Podcast - Five short episodes about EA concepts. Produced by the Centre for Effective Altruism. No new content since 2017.
  • The Life You Can Save Podcast - Episodes from Peter Singer’s organisation that focus on alleviating global poverty. The latest episodes are interviews with EA organisation staff.
  • The Turing Test - The newly restarted EA podcast from the Harvard University EA group. Interviews with EA thinkers including Brian Tomasik on ethics, animal welfare, and a focus on suffering, and Scott Weathers on Charity Science Health.
  • 80,000 Hours Podcast - Robert Wiblin leads long-form interviews (up to four hours) with people in high-impact careers. This podcast really gets into the weeds of the most important cause areas.
  • Global Optimum - An informal podcast by professional psychology researcher Daniel Gambacorta, discussing psychology findings that can help you become a more effective altruist. There is usually no extra padding; the episodes get straight to the point.
  • Future Perfect Podcast - The podcast part of Vox Media’s Future Perfect project. Dylan Matthews leads scripted discussions about interesting and hopefully effective ways to improve the world.
  • Morality is Hard - Michael Dello-Iacovo interviews guests about topics related to effective animal advocacy.
  • Future of Life Podcast - Interviews with researchers and thought leaders who the Future of Life Institute believes are helping to “safeguard life and build optimistic visions of the future”. Includes a series on AI alignment and a recent series on climate change.
  • Wildness - A new podcast from the Wild Animal Initiative. Narrative episodes built around themes relevant to wild animal welfare research, typically including multiple interviews with animal welfare researchers.
  • EARadio - Hundreds of audio recordings of EA Global talks. Some episodes are hard to follow because the slides and other visuals from the presentations are missing.
  • Sentience Institute Podcast - New podcast on effective animal advocacy.

Podcasts featuring EA-related episodes

*Edited to include hyperlinks.

Comments (53)



Two more podcasts:

Increments by Ben Chugg and Vaden Masrani

Vaden Masrani, a PhD student in machine learning at UBC and Ben Chugg, a research fellow at Stanford Law School, get into trouble arguing about everything except machine learning and law. Coherence is somewhere on the horizon. Love, bribes, suggestions, and hate-mail all welcome at incrementspodcast@gmail.com.

Clearer Thinking by Spencer Greenberg of Spark Wave.

Clearer Thinking is the brand-new podcast about ideas that truly matter. If you enjoy learning about powerful, practical concepts and frameworks, or wish you had more deep, intellectual conversations in your life, then we think you'll love this podcast!

Thanks for compiling this.

I've created a ListenNotes list with all the "Strongly EA-related podcasts" and a few others here. It displays the most recent episode from each of those podcasts and lets you import them all easily to your favorite podcast app.

The Lunar Society

(I haven't listened yet and am not yet able to recommend it, but it seems EA-relevant)

Having listened to several episodes, I can strongly recommend this podcast. One of the very best.

Radio Bostrom - "Audio narrations of academic papers by Nick Bostrom."

New podcast on AI X-risk research: AXRP.

Although this was announced in a separate EA Forum post, I'm adding a comment here so that all EA-related podcasts can be found in the same thread.

Note that I continue to keep this list updated.

Luke Muehlhauser's Conversations from the Pale Blue Dot had an episode interviewing Toby Ord back in January 2011. This is from before the term "effective altruism" was being used to describe the movement. I think it may be the first podcast episode to really discuss what would eventually be called EA, with the second oldest podcast episode being Massimo Pigliucci's interview with Holden Karnofsky on Rationally Speaking in July 2011.

(There was plenty of discussion online about these issues in years prior to this, but as far as I can tell, discussion didn't appear in podcast form until 2011.)

Fin Moorhouse and a friend started the podcast Hear This Idea. Fin writes about the podcast here, and says:

A few months ago, I started a podcast with a friend at uni interviewing (mostly) academics in the social sciences and philosophy. Since we're both involved with EA, about half of the episodes ended up addressing topics either relevant to or directly concerning effective altruism.

And a few of the interviewees are EAs whose names I recognise. (Some others may be EAs who I just happen not to know of.)

Future Matters

A machine-read audio version of the Future Matters newsletter, which is:

a newsletter about longtermism written by Matthew van der Merwe & Pablo Stafforini. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent longtermist.

Also the Future Matters Reader:

Future Matters Reader uses text-to-speech software to convert into audio the writings summarized in the Future Matters newsletter.

It seems like a lot of those writings weren't on Nonlinear's podcast feeds, either because they're not on the EA Forum / LessWrong / Alignment Forum or for some other reason.

The Inside View also focuses on AI alignment. There's a YouTube channel with videos of the interviews. Sometimes there are interview highlights on LessWrong.

It would be really great if these were hyperlinks...

Would take some time, but might be useful for people gathering EA resources?

Done. Thanks for the nudge to put a little more time into it.

Nice! Thanks

Yes, agree that it would have been natural to include hyperlinks in this otherwise very helpful post.

Pablo's list does include links.

Cold Takes Audio is Holden Karnofsky reading posts from his new-ish blog site. I'd highly recommend it. 

Nonlinear Library has machine-read (but still pretty good) versions of a large and increasing number of posts from the EA Forum, LessWrong, and the Alignment Forum. See https://forum.effectivealtruism.org/posts/JTZTBienqWEAjGDRv/listen-to-more-ea-content-with-the-nonlinear-library. This is probably the podcast I've listened to most often since it came out, and it will probably remain the podcast I listen to most often for the indefinite future.

Nonlinear Fund also now have additional podcast feeds: 

I imagine they might release more feeds in future, so it may be worth occasionally searching for podcasts by "The Nonlinear Fund" in podcast apps to see if others come up.

I've been hitting these feeds pretty hard and really valuing them. Examples of cool things this has allowed me to do: easily fit into my schedule Richard Ngo's sequence on AGI Safety from First Principles, some old Luke Muehlhauser posts on science/rationality-based self-help approaches, and all the core readings in the AGI Safety Fundamentals governance course.

Two book-length series of rationality-related posts by Eliezer Yudkowsky have been made into podcast versions:

(Not sure if those are the most useful links. Personally I just found the podcasts via searching the Apple Podcasts app.)

I found Rationality: From AI to Zombies very useful and quite interesting, and HPMOR fairly useful and very surprisingly engaging. I've ranked them as the 4th and 30th (respectively) most useful EA-related books I've read so far.

Founders Pledge now has a podcast, featuring interviews with their members and researchers: https://founderspledge.com/stories/category/podcasts

Un equilibrio inadecuado (Spotify - Apple Podcasts - Google Podcasts)

Interviews in Spanish on EA topics. I particularly enjoyed the episode with Andrés Gómez Emilsson from Qualia Research Institute. Sadly, no new content since October 2021.

Thank you for sharing this.

Something I'm surprised neither I nor anyone else has mentioned yet: the Slate Star Codex Podcast. This consists almost entirely of audio versions of SSC articles, along with a handful of recordings of SSC meetups (presentations + Q&As). 

(I think this is my second favourite EA-related podcast, with the 80k podcast being first.)

  • Conversations with Tyler
  • The Portal with Eric Weinstein

Alex Lintz made a collection of AI Governance-related Podcasts, Newsletters, Blogs, and more, through which I've found some podcasts or individual podcast episodes that I've found helpful.

There are some biorisk and biosecurity podcasts or podcast episodes collected in the "Talks, Podcasts, and Videos" section of A Biosecurity and Biorisk Reading+ List.

Sorry for the late comment. I've recently been listening to, and enjoying, The End of the World with Josh Clark. It seems like a really solid and approachable introduction to existential risks. It starts by covering why x-risks might be things that we should be concerned about, and then talks about AI, biosecurity and other possible threats. Includes interviews with Nick Bostrom, Toby Ord, Anders Sandberg, Robin Hanson and others :)

I also recommend Joe Carlsmith Audio.

Audio versions of essays by Joe Carlsmith. Philosophy, futurism, and other topics. Text versions at joecarlsmith.com.

Joe reads the essays himself.



There is a German EA podcast that Lia Rodehorst and I created, called "Gutes Einfach Tun".
Here is the link.

Also, Sarah Emminghaus recently launched a German EA podcast called "WirklichGut" (link here).

Recent addition: Founders Pledge have started a podcast called How I Give.

There's also the podcast NonProphets:

NonProphets is a podcast about forecasting by three superforecasters. We earned the right to be called superforecasters by being among the top forecasters in the Good Judgment Project, an experimental forecasting tournament sponsored by the Intelligence Advanced Research Projects Activity. We now forecast professionally for Good Judgment, Inc.

I came across it because one of the three hosts is Robert de Neufville, who works with the Global Catastrophic Risk Institute.

I've only listened to one episode so far, but several seem fairly EA-related (e.g., one with Shahar Avin from CSER talking about AI), as one might expect given de Neufville's involvement.

Ace! This is the first time I've heard of that podcast. Thanks for sharing.

A new podcast version of Nate Soares' Replacing Guilt series: https://anchor.fm/guilt

Sentience Institute released a new podcast on effective animal advocacy just today!

ChinaTalk

I'd recommend this for people interested in AI governance or otherwise interested in things like Chinese policymaking. 

National Security Commission on AI

A podcast related to the Final Report of the National Security Commission on Artificial Intelligence (NSCAI, 2021).

I'd recommend people focused on AI governance listen to at least some episodes. But unfortunately I often felt that the interviews (a) didn't grab my attention and (b) weren't really saying much of substance in a crisp and clear way (I had a feeling sort of as if they were talking in platitudes or talking vaguely around some topics - but this may have been just because I already knew a decent amount, or because I was spacing out).

The Asianometry Podcast

I've found this podcast really useful for getting up to speed on some topics relevant to compute governance (which I'm interested in due to my and my colleague's AI governance work). So, to be clear, this is an "EA-relevant" podcast that I'd recommend for people with an interest in that area, but it's not an "EA podcast" (I don't think the host is involved in the EA community and I'm not sure if he's even aware of it). 

There's also the podcast Utilitarian. Only two episodes so far, but one is with Anders Sandberg, and the podcast's creator posted about that episode on the Forum, so this can probably be considered an EA-related podcast.

(I'm currently halfway through the Anders episode, and enjoying it so far.)

Very Bad Wizards: The One with Peter Singer (released in April, 2020)

I like that podcast a lot! I suggest skipping directly to 31:20, the second part, where Singer comes in, unless you are interested in half an hour of discussion about typography :)

These are some podcast episodes aimed at children that touch on EA topics:

  • Meet Vaidehi Agarwalla - An Effective Altruism Community Builder: https://www.buzzsprout.com/1018843/episodes/9252374
  • Meet Wanyi Zeng - The Executive Director of Effective Altruism Singapore: https://www.buzzsprout.com/1018843/episodes/8940898
  • Meet Elissa Lane - A Farm Animal Welfare Expert: https://www.buzzsprout.com/1018843/episodes/3754505
  • Meet Abhay Rangan - A man dedicated to making plant-based milk affordable and accessible: https://www.buzzsprout.com/1018843/episodes/4040834
  • Meet Varun Deshpande - A man on a mission for smart food systems: https://www.buzzsprout.com/1018843/episodes/3950246
  • Meet Manjunath - A free-range egg farmer: https://www.buzzsprout.com/1018843/episodes/3858503

General links to the podcast:

  • Apple Podcasts: https://podcasts.apple.com/us/podcast/curious-vedanth/id1508532011
  • Spotify: https://open.spotify.com/show/7ekQ9OoUrEVuavsudEO3Md

Updating with another recent podcast: GiveDirectly's Michael Faye and Caroline Teti on Important, Not Important. A really interesting and easy-to-listen-to interview on the value of unconditional cash transfers and their underlying philosophy.

Thanks for this. I've integrated this list as well as @pablo's and a couple I added ('Not Overthinking' and 'Great.com Talks With') into an Airtable.

View only

or you can collaborate on this base HERE

Marco