Next week for the 80,000 Hours Podcast I'll be interviewing Carl Shulman, advisor to Open Philanthropy and a generally super-informed person about history, technology, possible futures, and a shocking number of other topics.
He has previously appeared on our show and the Dwarkesh Podcast:
- Carl Shulman (Pt 1) - Intelligence Explosion, Primate Evolution, Robot Doublings, & Alignment
- Carl Shulman (Pt 2) - AI Takeover, Bio & Cyber Attacks, Detecting Deception, & Humanity's Far Future
- Carl Shulman on the common-sense case for existential risk work and its practical implications
He has also written a number of pieces on this forum.
What should I ask him?
Maybe if/how his thinking about AI governance has changed over the last year?
A bit, but more on the willingness of AI experts and some companies to sign the CAIS letter and lend their voices to the view 'we should go forward very fast with AI, but keep an eye out for better evidence of danger and have the ability to control things later.'
My model has always been that the public is technophobic, but that 'this will be constrained like peaceful nuclear power or GMO crops' isn't enough to prevent a technology that enables DSA and OOMs (and nuclear power and GMO crops exist; if AGI exists somewhere, that place outgrows the rest of the world…)