In the most recent episode of the 80,000 Hours podcast, Rob Wiblin and Ajeya Cotra from Open Phil discuss "the challenge Open Phil faces striking a balance between taking big ideas seriously, and not going all in on philosophical arguments that may turn out to be barking up the wrong tree entirely.
They also discuss:
- Which worldviews Open Phil finds most plausible, and how it balances them
- Which worldviews Ajeya doesn’t embrace but almost does
- How hard it is to get to other solar systems
- The famous ‘simulation argument’
- When transformative AI might actually arrive
- The biggest challenges involved in working on big research reports
- What it’s like working at Open Phil
- And much more"
I'm creating this thread so that anyone who wants to share their thoughts on any of the topics covered in this episode can do so. This is in the spirit of MichaelA's suggestion of posting all EA-relevant content here.
The doomsday argument, the self-sampling assumption (SSA), and the self-indication assumption (SIA)
The interview contained an interesting discussion of those ideas. I was surprised to find that, during that discussion, I felt like I actually understood what SSA and SIA are, and why that matters. (In contrast, the few previous times I tried to learn about these ideas, I ended up mostly still feeling confused. That said, it's very possible I currently just have an illusion of understanding.)
While listening, I felt like maybe that section of the interview could be summarised as follows (though note that I may be misunderstanding things, such that this summary might be misleading):
"We seem to exist 'early' in the sequence of possible humans. We're more likely to observe that if the sequence of possible humans will actually be cut off relatively early than if more of the sequence will occur. This should update us towards thinking the sequence will be cut off relatively early - i.e., towards thinking there will be relatively few future generations. This is how the SSA leads to the doomsday argument.
But, we also just seem to exist at all. And we're more likely to observe that (rather than observing nothing at all) the more people will exist in total - i.e., the more of the sequence of possible humans will occur. This should update us towards thinking the sequence won't be cut off relatively early. This is how the SIA pushes against the doomsday argument.
Those two updates might roughly cancel out [I'm not actually sure if they're meant to exactly, roughly, or only very roughly cancel out]. Thus, these very abstract considerations have relatively little bearing on how large we should estimate the future will be."
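The cancellation described in that summary can be made concrete with a toy Bayesian calculation. This is my own illustration, not something from the episode, and all the numbers are made up: it just compares a "doom soon" world with few total humans against a "doom late" world with many, given that we observe an early birth rank.

```python
# Toy model of the SSA "doomsday" update and the SIA update.
# Hypotheses: N = total number of humans who will ever exist.
# All numbers are invented for illustration.

prior = {2_000: 0.5, 200_000: 0.5}  # "doom soon" vs "doom late", equally likely a priori
k = 1_000  # our observed birth rank ("early"); all that matters below is k <= N

def normalise(d):
    z = sum(d.values())
    return {n: v / z for n, v in d.items()}

# SSA alone: given N total people, the chance of finding yourself at any
# particular rank k <= N is 1/N, so small-N worlds are favoured.
ssa = normalise({N: p * (1 / N) for N, p in prior.items()})

# SIA additionally weights each hypothesis by how many observers it contains
# (proportional to N), which exactly offsets the 1/N factor in this setup.
sia = normalise({N: p * N * (1 / N) for N, p in prior.items()})

print(ssa)  # doomsday: roughly 0.99 on the small-N ("doom soon") world
print(sia)  # back to the prior: the two updates cancel exactly here
```

In this simple setup the cancellation is exact, since the SIA weight N and the SSA likelihood 1/N multiply to a constant; whether it cancels so cleanly in more realistic models is a further question, which is part of what I'm unsure about above.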
(I'd be interested in people's thoughts on whether my attempted summary seems accurate, as well as on whether it seems relatively clear and easy to follow.)
One other thing on this section of the interview: Ajeya and Rob both say that the way the SSA leads to the doomsday argument seems sort-of "suspicious". Ajeya then says that, on the other hand, the way the SIA causes an opposing update also seems suspicious.
But I think all of her illustrations of how updates based on the SIA can seem suspicious involved infinities. And we already know that loads of things involving infinities can seem counterintuitive or suspicious. So it seems to me like this isn't much reason to be suspicious of the SIA in particular.