In the spirit of Career Conversations Week, throwing out some quick questions that I hope are also relevant for others in a similar position!
I'm an early-career person aiming to have a positive impact on AI safety. For a couple of years, I've been building toward a career in technical AI safety research by:
- Publishing ML safety research projects
- Doing a Master's degree in machine learning at a top ML school
- Generally focusing on building technical ML research skills and experience at the expense of other forms of career capital.
However, I'm now seriously considering paths to impact that route through AI governance, including AI policy, rather than pure technical alignment research. Since I still feel pretty junior, I think I have room to explore a bit. That said, I'm not so junior that I have a fresh degree ahead of me (e.g., the option to study public policy instead), and I feel I have a strong fit for technical ML skills and knowledge, including explaining technical concepts to non-technical audiences, that I want to leverage.
What are some of the best ways for people like me to transition from technical AI safety research roles into more explicit AI governance and policy? So far, I'm only really aware of:
- Policy fellowships that might take technical researchers without policy experience, like the Horizon, STPI, PMF, or STPF fellowships
- Policy positions at top AI labs, which are themselves important for AI governance and could serve as stepping stones into other AI governance careers
- Policy research positions that require significant technical knowledge at organizations like GovAI
- Some vague notion of "being a trusted scientific advisor to key decision-makers in DC or London," though I'm not sure what this looks like in practice or how to get there.
Any other ideas? Or for those who have been in a similar situation, how have you thought about this?