I've heard that there could be a trade-off between robust info security measures and hiring top talent for AI research. (I think the reasoning was something like: if state-of-the-art AI research is a seller's market and improving info security is inconvenient, some employees may be unwilling to comply with these measures and simply take their talent elsewhere.) How accurate is this in your experience?
I'm wondering whether it would be useful to track data on national legislatures (or maybe just heads of state) worldwide? This could include:
I'm not sure how feasible this is, but I imagine it could help EAs think more concretely about where they're likely to find support for different advocacy efforts.
I'd love an episode on s-risks (although I'm not sure who would be best to invite on).