Josephine Schwab is a Research Associate with the Arcadia Impact AI Safety Governance Taskforce; a former Senior Research Fellow (European Security) with the European Institute of Policy Research and Human Rights; a former Veritas Forecasting Research Fellow with The Midas Project; a UK parliamentary reporter and international justice news writer; and the author of “Diplomacy in the Age of AGI”, featured by Futures4Europe, the European Commission’s foresight community platform.
I would value connections with those working at the intersection of existential risk, diplomacy, and international governance — especially around Europe’s role in shaping AI policy, security cooperation, and systemic resilience. Mentorship or collaboration opportunities with researchers and practitioners in EA-affiliated institutes (AI safety, global catastrophic risk, future governance) would be particularly helpful as I refine my research agenda and explore career transitions into policy-focused or research management roles.
I can share expertise in cross-border political coordination, as well as insights from NGO governance and European security studies. I enjoy helping others frame projects for policy audiences and connect local resilience practices with governance debates. I can also bring experience in writing and publishing (academic and public-facing) on peace and democracy.
Hi. Apologies for the late response; I have not been well.
I agree that the article is more diagnostic. One reason for this is that it was published in two places: it found a home in the AI safety ecosystem, but also in the climate governance ecosystem. There are different lessons for each side, and if I had attempted to make recommendations for both, the article would have been too long. However, I have considered publishing a "part 2" for each community.
That said, there are three lessons I think the AI safety community can viably act on:
Establish capability thresholds as deployment gates. The Montréal Protocol worked because scientists gave policymakers specific, measurable indicators to act on. METR's pre-deployment evaluations are a foundation, so the actionable step could be making specific capability benchmarks legally binding. That said, setting such benchmarks has proved difficult because frontier systems are developing so rapidly.
Design any international framework with exit costs. Paris failed partly because exit from the Agreement was costless, whereas the Montréal Protocol included trade measures against non-signatories. AI governance treaty design should try to build in analogous mechanisms – market-access conditions or liability linkages that make opting out costly. It will only achieve this through serious issue linkage and a general semantic shift from "impact" to "safety".
Route campaigns through existing political infrastructure. Trust and parliamentary access are scarce resources that take time to build; climate communities (and others) have spent decades accumulating both. Routing AI governance proposals through existing channels will be faster than starting from scratch, but our current coalition-building is left wanting.
These are a few actions I believe we can viably consider for both strategising and framework design. Thanks!