📣 Final weekend - applications close Monday 12 Jan at noon (GMT)
Apply your expertise to advanced AI governance.
Are you a mid-career or senior professional considering a transition into AI governance focused on advanced-AI risk?
Applications are closing imminently for the AI Governance Taskforce Research Associate role in our 12-week Winter 2026 cohort, which starts at the end of January.
This role is designed for experienced professionals who want to apply existing domain expertise to frontier AI governance. You will work in small, high-calibre, multidisciplinary research teams producing policy-relevant research, led by Research Team Leaders and guided by expert practitioners based at respected organisations.
Projects:
🔹 Phase identification of AI incident emergence
Sean McGregor (Founder, AI Incident Database)
🔹 Leveraging open-source intelligence (OSINT) for loss-of-control AI risk
Tommy Shaffer Shane (Interim Director of AI Policy, Centre for Long-Term Resilience)
🔹 Stress-testing a loss-of-control safety case
Henry Papadatos (Executive Director, SaferAI)
🔹 Advancing dangerous capability red lines through the AI Safety Institute network
Su Cizem (Visiting Analyst, The Future Society)
🔹 Developing international AI incident tracking and response infrastructure: the severity threshold as a technical trigger for “unacceptable risk”
Caio Vieira Machado (Senior Associate, The Future Society)
What this involves:
👥 Applied research in small teams
Policy-relevant outputs (papers, briefs, articles)
Fully remote, global, part-time (8 hrs/week), 12 weeks
💼 Selective, volunteer-based research fellowship
📅 Deadline: Mon 12 January 2026, 12 noon GMT
More info and apply: https://www.arcadiaimpact.org/ai-governance-taskforce
For queries, please email:
Ben R Smith, Taskforce Lead
ben [at] arcadiaimpact [dot] org
