Next week for the 80,000 Hours Podcast I'll be interviewing Carl Shulman, an advisor to Open Philanthropy and a generally super-informed person on history, technology, possible futures, and a shocking number of other topics.
He has previously appeared on our show and the Dwarkesh Podcast:
- Carl Shulman (Pt 1) - Intelligence Explosion, Primate Evolution, Robot Doublings, & Alignment
- Carl Shulman (Pt 2) - AI Takeover, Bio & Cyber Attacks, Detecting Deception, & Humanity's Far Future
- Carl Shulman on the common-sense case for existential risk work and its practical implications
He has also written a number of pieces on this forum.
What should I ask him?
Could you ask him whether it's rational to treat significant existential risk from AGI as a default position, or whether one first has to make a positive technical case that it carries x-risk?