
MMMaas

Senior Research Fellow @ Institute for Law & AI
673 karma · Joined Jan 2016 · Working (6-15 years)

Bio

Matthijs Maas

Senior Research Fellow (Law & Artificial Intelligence), Legal Priorities Project

Research Affiliate, Centre for the Study of Existential Risk.

https://www.matthijsmaas.com/  | https://linktr.ee/matthijsmaas 

Sequences (1)

Strategic Perspectives on Long-term AI Governance

Comments (23)

This is very interesting, thanks for putting this together! FWIW, you might also be interested in some of the other terms I've been collecting for a related draft report (see shared draft at https://docs.google.com/document/d/1RNlHOt32nBn3KLRtevqWcU-1ikdRQoz-CUYAK0tZVz4/edit, pg. 18 onwards)

Thanks for the overview! You might also be interested in this (forthcoming) report and lit review: https://docs.google.com/document/d/12AoyaISpmhCbHOc2f9ytSfl4RnDe5uUEgXwzNJhF-fA/edit?usp=drivesdk

I previously drew on Adler's work to derive lessons for (military) AI governance, in: https://www.tandfonline.com/doi/abs/10.1080/13523260.2019.1576464

I will! Though likely in the form of a long-form report that's still in draft; I'm planning to write it out in the coming months. Can share a (very rough) working draft if you PM me.

Thanks for collating this, Zach! Just to note, my 'TAI Governance: a Literature Review' is publicly shareable -- but since we'll be cleaning up the main doc as a report over the coming week, could you update the link to this copy? https://docs.google.com/document/d/1CDj_sdTzZGP9Tpppy7PdaPs_4acueuNxTjMnAiCJJKs/edit#heading=h.5romymfdade3

Thanks for collating these comments -- it's useful to get that overview.

FWIW, some people at CSER have done good work on this broad topic, working with researchers at Chinese institutions -- e.g. https://link.springer.com/article/10.1007/s13347-020-00402-x

This is awful -- Nathan was such an engaging and bright scholar, generous with his comments and insights. I had been hoping to see much more of his work in this field. Thank you for sharing this.

Answer by MMMaas, Mar 02, 2023

I've got a number of literature reviews and overview reports on this coming out soon; I can share a draft if of interest. See also the primer / overview at https://forum.effectivealtruism.org/posts/isTXkKprgHh5j8WQr/strategic-perspectives-on-transformative-ai-governance

+1 to this proposal and focus.

On 'technical levers to make AI coordination/regulation enforceable', there is a fair amount of work suggesting that arms control agreements have often depended on, or been enabled by, new technological avenues for unilateral monitoring (or for cooperative but non-intrusive monitoring -- e.g. sensors on missile factories as part of the US-USSR INF Treaty) (see Coe and Vaynmann 2020).

That doesn't mean such monitoring technology is always an unalloyed good: there are cases where new capabilities introduce new security or escalation risks (e.g. Vaynmann 2021), and they can also perversely hold up negotiations. For instance, Richard Burns (link, introduction) discusses a case where the involvement of engineers in designing a monitoring system for the Comprehensive Test Ban Treaty actually delayed negotiation of the regime, essentially because the engineers focused excessively on technical perfection of the monitoring system [beyond the level of assurance the contracting parties would have strictly required politically], which enabled opponents of the treaty to paint it as not giving sufficiently good guarantees.

Still, beyond improving enforcement, there's interesting work on ways that AI technology could speed up and support the negotiation of treaty regimes (Deeks 2020, 2020b; Maas 2021), both for AI governance specifically and for international cooperation more broadly.

That's a great suggestion, I will aim to add that for each!
