Here's Teddy Tantum Collins' LinkedIn, a recent interview and short bio.
Main topic is AI but we could also talk about other things.
What should I ask?
I'd be fascinated to hear a White House insider's take on the likelihood that AI safety becomes politicized along party lines in the US. Specifically, are Democrats or Republicans more likely to adopt anti-AI policies such as a 'pause/stop AI' moratorium, advocate stronger government regulation, or morally stigmatize AI research as evil and reckless?
Personally, I think the chances that AI safety remains a bipartisan issue are pretty close to zero, but I'm not sure which party is more likely to advocate stronger constraints on the AI industry.
How unusual does he think the current policy interest in AI safety is? Is this a temporary window, or the start of ever-increasing interest?
Best policy idea for AI safety? Best one I won't have heard of? Best ten? (Any policy ideas floating around in AI safety that are bad or doomed?) If we live in a world where people can accidentally kill everyone by building powerful AI, what policy levers should we pull?
Takes on the plan to track hardware, require licensing for large training runs, monitor those runs with capability evals, red-teaming, and audits, and pause any runs with concerning eval results? Takes on other plans, like a training compute cap that gradually grows over time, or the underspecified-but-evocative 'IAEA for AI'?