Next week for The 80,000 Hours Podcast I'll be interviewing Nova Das Sarma.
She works to improve computer and information security at Anthropic, a recently founded AI safety and research company.
She's also helping to find ways to provide more compute for AI alignment work in general.
Here's her (outdated) LinkedIn profile and in-progress personal website, as well as an older EA Forum post from Claire Zabel and Luke Muehlhauser on the potential EA relevance of information security.
What should I ask her?
Some estimates suggest there are only around 100 researchers and engineers focused on AI alignment. That seems quite small given the scale of the problem. What are the bottlenecks to scaling up, and what is being done to alleviate them?