Today we're launching a new podcast feed that might be useful to you or someone you know.
It's called Effective Altruism: An Introduction, and it's a carefully chosen selection of ten episodes of The 80,000 Hours Podcast, with various new intros and outros to guide folks through them.
We think it fills a gap in the introductory resources about effective altruism that are already out there. It's a particularly good fit for people who:
- prefer listening over reading, or conversations over essays
- have read about the big central ideas, but want to see how we actually think and talk
- want to get a more nuanced understanding of how the community applies EA principles in real life — as an art rather than a science.
The reason we put this together now is that, as the number of episodes of The 80,000 Hours Podcast has grown, it has become less and less practical to suggest that new subscribers just 'go back and listen through most of our archives.'
We hope EA: An Introduction will guide new subscribers to the best things to listen to first in order to quickly make sense of effective altruist thinking.
Across the ten episodes, we discuss:
- What effective altruism at its core really is
- The strategies for improving the world that are most popular within the effective altruism community, and why they’re popular
- The key disagreements between researchers in the field
- How to ‘think like an effective altruist’
- How you might figure out how to make your biggest contribution to solving the world’s most pressing problems
At the end of each episode we suggest the interviews people should go to next if they want to learn more about each area.
If someone you know wants to understand what 80,000 Hours or effective altruism is all about, and audio content fits into their life better than long essays, hopefully this will prove a great resource to point them to.
It might also be a good fit for local groups, some of which we've learned are already using episodes of the show to structure their discussions.
Like 80,000 Hours itself, the selection leans towards a focus on longtermism, though other perspectives are covered as well.
The most common objection to our selection is that we didn’t include dedicated episodes on animal welfare or global development. (ADDED: See more discussion of how we plan to deal with this issue here.)
We did seriously consider including episodes with Lewis Bollard and Rachel Glennerster, but i) we decided to focus on our overall worldview and way of thinking rather than on specific cause areas (we also didn’t include a dedicated episode on biosecurity, one of our 'top problems'), and ii) both causes are covered in the first episode with Holden Karnofsky, and we prominently refer people to the Bollard and Glennerster interviews in our 'episode 0', as well as in the outro to Holden's episode.
If things go well with this one, we may put together multiple curated feeds, likely differentiated by difficulty level or cause area.
Folks can find it by searching for 'effective altruism' in their podcasting app.
We’re very open to feedback – comment here, or email us at podcast@80000hours.org.
— Rob and Keiran
It's frustrating that I need to explain the difference between the “argument that would cause us to donate to a charity for guide dogs” and the arguments being made for why introductory EA materials should include content on Global Health and Animal Welfare, but here goes…
People who argue for giving to guide dogs aren’t doing so because they’ve assessed their options logically and believe guide dogs offer the best combination of evidence and impact per dollar. They’re essentially arguing for prioritizing things other than maximizing utility (like helping our local communities, honoring a family member’s memory, etc.). And the people making these arguments are not connected to the EA community (they'd probably find it off-putting).
In contrast, the people objecting to non-representative content branded as an “intro to EA” (like this playlist or the EA Handbook 2.0) are people who agree with the EA premise of trying to use reason to do the most good. We’re using frameworks like ITN; we’re just plugging in different assumptions and therefore getting different answers out. We’ve heard the longtermist arguments for why their assumptions are right. Many of us find those longtermist arguments convincing and/or identify as longtermists, just not to such an extreme degree that we want to exclude content like Global Health and Animal Welfare from intro materials (especially since part of the popularity of those causes is due to their perceived long-term benefits). We run EA groups and organizations, attend and present at EA Global, are active on the Forum, etc. The vast majority of the EA experts and leaders we know (and we know many) would look at you like you’re crazy if you told them intro to EA content shouldn’t include global health or animal welfare, so asking us to defer to expertise doesn’t really change anything.
Regarding the narrow issue of “Crucial Considerations” being removed from effectivealtruism.org: this change was made because it makes no sense to have an extremely technical piece as the second recommended reading for people new to EA. If you want to argue that point, go ahead, but I don’t think you’re being fair by portraying it as some sort of descent into populism.