Next week for the 80,000 Hours Podcast I'll be interviewing Carl Shulman, advisor to Open Philanthropy and a generally super-informed person on history, technology, possible futures, and a shocking number of other topics.
He has previously appeared on our show and the Dwarkesh Podcast:
- Carl Shulman (Pt 1) - Intelligence Explosion, Primate Evolution, Robot Doublings, & Alignment
- Carl Shulman (Pt 2) - AI Takeover, Bio & Cyber Attacks, Detecting Deception, & Humanity's Far Future
- Carl Shulman on the common-sense case for existential risk work and its practical implications
He has also written a number of pieces on this forum.
What should I ask him?
How did he deal with two-envelope considerations in his calculation of moral weights for OpenPhil?
I have never calculated moral weights for Open Philanthropy, and as far as I know no one has claimed that I have. The comment you are presumably responding to began by saying I couldn't speak for Open Philanthropy on that topic, and I wasn't doing so.