In the most recent episode of the 80,000 Hours podcast, Rob Wiblin and Ajeya Cotra from Open Phil discuss "the challenge Open Phil faces striking a balance between taking big ideas seriously, and not going all in on philosophical arguments that may turn out to be barking up the wrong tree entirely."
They also discuss:
- Which worldviews Open Phil finds most plausible, and how it balances them
- Which worldviews Ajeya doesn’t embrace but almost does
- How hard it is to get to other solar systems
- The famous ‘simulation argument’
- When transformative AI might actually arrive
- The biggest challenges involved in working on big research reports
- What it’s like working at Open Phil
- And much more
I'm creating this thread so that anyone who wants to share their thoughts on any of the topics covered in this episode can do so. This is in the spirit of MichaelA's suggestion of posting all EA-relevant content here.
I haven't finished listening to the podcast episode yet, but I picked up on a few of these inaccuracies and was disappointed to hear them. As you say, I would be surprised if Ajeya isn't aware of these things. Anyone who has read Greaves and MacAskill's paper *The Case for Strong Longtermism* should know that longtermism doesn't necessarily mean a focus on reducing x-risk, and that it is at least plausible that longtermism is not conditional on a total utilitarian population axiology*.
However, given that many people listening to the show might not have read that paper, I feel these inaccuracies matter and might mislead people. If longtermism is robust to a range of ethical views (or at least plausibly so), then it is very important for EAs to be aware of this. More generally, EAs should be aware of anything that could bear on deciding between cause areas, given the potentially vast differences in value between them.
*Even the importance of reducing extinction risk isn't conditional on total utilitarianism. For example, it could be vastly important under average utilitarianism if we expect average well-being to be high, conditional on humans not going extinct. That said, I'm not sure how many people take average utilitarianism seriously.