What directions do you feel have been most successful with regard to AI safety progress over the past several years, and why?
What AI capability developments are the most alarming to you, and what can we do to address them?
What's the single biggest mistake people excited about working in AI safety can make?
What's one specific thing someone interested in working in AI safety can do in the near term?
What's changed since you published Life 3.0? What did your book get right, and what did it get wrong?