Today we're launching a new podcast feed that might be useful to you or someone you know.
It's called Effective Altruism: An Introduction, and it's a carefully chosen selection of ten episodes of The 80,000 Hours Podcast, with various new intros and outros to guide folks through them.
We think it fills a gap in the introductory resources about effective altruism that are already out there. It's a particularly good fit for people who:
- prefer listening over reading, or conversations over essays
- have read about the big central ideas, but want to see how we actually think and talk
- want a more nuanced understanding of how the community applies EA principles in real life, as an art rather than a science.
The reason we put this together now is that as the number of episodes of The 80,000 Hours Podcast has grown, it has become less and less practical to suggest that new subscribers just 'go back and listen through most of our archives.'
We hope EA: An Introduction will guide new subscribers to the best things to listen to first in order to quickly make sense of effective altruist thinking.
Across the ten episodes, we discuss:
- What effective altruism at its core really is
- The strategies for improving the world that are most popular within the effective altruism community, and why they’re popular
- The key disagreements between researchers in the field
- How to ‘think like an effective altruist’
- How you might figure out how to make your biggest contribution to solving the world’s most pressing problems
At the end of each episode we suggest the interviews people should go to next if they want to learn more about each area.
If someone you know wants to get an understanding of what 80,000 Hours or effective altruism are all about, and audio content fits into their life better than long essays, hopefully this will prove a great resource to point them to.
It might also be a great fit for local groups, some of which, we've learned, are already using episodes of the show for discussion.
Like 80,000 Hours itself, the selection leans towards a focus on longtermism, though other perspectives are covered as well.
The most common objection to our selection is that we didn’t include dedicated episodes on animal welfare or global development. (ADDED: See more discussion of how we plan to deal with this issue here.)
We did seriously consider including episodes with Lewis Bollard and Rachel Glennerster, but decided against it for two reasons: i) we chose to focus on our overall worldview and way of thinking rather than on specific cause areas (we also didn't include a dedicated episode on biosecurity, one of our 'top problems'); and ii) both causes are covered in the first episode with Holden Karnofsky, and we prominently refer people to the Bollard and Glennerster interviews in our 'episode 0', as well as in the outro to Holden's episode.
If things go well with this one, we may put together multiple curated feeds, likely differentiated by difficulty level or cause area.
Folks can find it by searching for 'effective altruism' in their podcasting app.
We’re very open to feedback – comment here, or you can email us at podcast@80000hours.org.
— Rob and Keiran
Seems like a sad development if this is being done for symbolic or coalitional reasons, rather than for the sake of optimizing the specific topics covered in the episodes and the quality of the coverage.
An example of the former would be something along the lines of 'if we don't include words like "Animal" and "Poverty" in big enough print on this webpage, that will send the wrong message about how EAs in general feel about those causes'.
An example of the latter would be 'if we don't include argument X about animal welfare in one of the first five episodes somewhere, a lot of EA newbies will probably make worse decisions because they'll be missing that specific key consideration'; or 'the arguments in the first forty-five minutes of episode n are terrible because X and Y, so that episode should be cut or a rebuttal should be added'.
I like arguments like this: (I) "I think longtermism is false, in ways that make a big difference for EAs' career selection. Here's a set of compelling arguments against longtermism; until the 80K Podcast either refutes them to my satisfaction, or adds prominent discussion of them to this podcast episode list, I'll continue to think this is a bad intro resource, and I'll tell newbies to check out [X] instead."
I think it's fine if 80K disagrees, and I endorse them producing content that reflects their perspective (including the data they get from observing that other smart people disagree), rather than a political compromise between their perspective and others' perspectives. But equally, I think it's fine for people who disagree with 80K to try to convince 80K that they're wrong about stuff like longtermism. If the debate looks broadly like that, then that seems good.
I don't like arguments like this: (II) "Regardless of how likely you or I think it is that longtermism is false (either before or after updating on others' beliefs), you should give lots of time to short-termism since a lot of EAs are short-termist."
There's a mix of both (I) and (II) in this comment section, so I want to praise the first thing at the same time that I anti-praise the second thing. +1 to 'your podcast is bad because it says false things X and Y and Z and doesn't discuss these counter-arguments to X and Y', -1 to 'your podcast is bad because it's unrepresentative of coalitions A and B and C'.