Senior Research Fellow (Law & Artificial Intelligence), Legal Priorities Project
Research Affiliate, Centre for the Study of Existential Risk.
https://www.matthijsmaas.com/ | https://linktr.ee/matthijsmaas
A few additional papers on this topic that might be of interest:
And (more narrowly focused on NAT in LAWS) https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3161446
Thanks for this post, I found it very interesting.
There's more I'd like to write after reflection, but briefly -- on further possible scenario variables, on either the technical or governance side, I'm working out a number of these here https://docs.google.com/document/d/1Mlt3rHcxJCBCGjSqrNJool0xB33GwmyH0bHjcveI7oc/edit# , and would be interested to discuss.
Thanks for these points! I like the rephrasing as 'levers' or pathways; those are also good.
A downside of the term 'strategic perspective' is certainly that it implies that you need to 'pick one', that a categorical choice needs to be made amongst them. However:
-it is clearly possible to combine and work across a number of these perspectives simultaneously, so they're not mutually exclusive in terms of interventions;
-in fact, under existing uncertainty over TAI timelines and governance conditions (i.e. parameters), it is probably preferable to pursue such a portfolio approach, rather than adopt any one perspective as the 'consensus one'.
I do agree that the 'Perspectives' framing may be too suggestive of an exclusive, coherent position that people in this space must take, when what I mean is more a loosely coherent cluster of views.
@tamgent "it seems hard to span more than two beliefs next to each other on any axis as an individual to me" could you clarify what you meant by this?
Thanks for the catch on the table, I've corrected it!
And yeah, there are a lot of drawbacks to the table format -- a scatterplot would be much better (though unfortunately I'm not so good with editing tools, and would appreciate recommendations for any). In the meantime, I'll add in your disclaimer for the table.
I'm aiming to restart posting on the sequence later this month, would appreciate feedback and comments.
To some extent, I'd prefer not yet to anchor people too much before finishing the entire sequence. I'll aim to circle back later and reflect more deeply on my own commitments. In fact, one reason I'm doing this project is that I notice I have rather large uncertainties over these different theories myself, and want to think through their assumptions and tradeoffs.
Still, while going into more detail on it later, I think it's fair that I provide some disclaimers about my own preferences, for those who wish to know them before going in:
[preferences below break]
TLDR: my current (weakly held) perspective is something like: '(a) as a default, pursue a portfolio approach consisting of interventions from the Exploratory, Prosaic Engineering, Path-setting, Adaptation-enabling, Network-building, and Environment-shaping perspectives; (b) under extremely short timelines and reasonably good alignment chances, switch to Anticipatory and Pivotal Engineering; (c) under extremely low alignment success probability, switch to Containing.'
This seems grounded in a set of predispositions / biases / heuristics that are something like:
Given that I have quite a lot of uncertainty about key (technical and governance) parameters, I'm hesitant to commit to any one perspective and prefer portfolio approaches.
--That means I lean towards strategic perspectives that are more information-providing (Exploratory), more robustly compatible with and supportive of many others (Network-building, Environment-shaping), and/or more option-preserving and flexible (Adaptation-enabling);
--conversely, for these reasons I may have less affinity for perspectives that potentially recommend far-reaching, hard-to-reverse actions under limited information conditions (Pivotal Engineering, Containing, Anticipatory);
My academic and research background (governance; international law) probably gives me a bias towards the more explicitly 'regulatory' perspectives (Anticipatory, Path-setting, Adaptation-enabling), especially in their multilateral versions (Coalitional); and a bias against perspectives that are focused more exclusively on the technical side alone (e.g. both Engineering perspectives), pursue more unilateral actions (Pivotal Engineering, Partisan), or seek to completely break with or go beyond existing systems (System-changing).
There are some perspectives (Adaptation-enabling, Containing) that have remained relatively underexplored within our community. While I personally am not yet convinced that there's enough ground to adopt these as major pillars for direct action, from an Exploratory meta-perspective I am eager to see these options studied in more detail.
I am aware that under very short timelines, many of these perspectives fall away or begin looking less actionable;
[ED: I probably ended up being more explicit here than I intended to; I'd be happy to discuss these predispositions, but would prefer to keep discussion of specific approaches concentrated in the perspective-specific posts (coming soon).]
(apologies for very delayed reply)
Broadly, I'd see this as:
Thanks for this analysis, I found this a very interesting report! As we've discussed, there are a number of convergent lines of analysis, which Di Cooke, Kayla Matteucci and I also arrived at for our research paper 'Military Artificial Intelligence as Contributor to Global Catastrophic Risk' on the EA Forum ( link ; SSRN). By comparison, though, we focused more on the operational and logistical limits to producing and using LAWS swarms en masse, and we sliced the nuclear risk escalation scenarios slightly differently. We also put less focus on the question of 'given this risk portfolio, what governance interventions are more/less useful'.

This is part of ongoing work (including a larger project and article that also examines the military developers/operators angle on AGI alignment/misuse risks, and the 'arsenal overhang (extant military [& nuclear] infrastructures) as a contributor to misalignment risk' argument; for the latter, see also some of Michael Aird's discussion here), though that had to be cut from this chapter for reasons of length and focus.
strong +1 to everything Markus suggests here.
Other journals (depending on the field) could include Journal of Strategic Studies, Contemporary Security Policy, Yale Journal of Law & Technology, Minds & Machines, AI & Ethics, 'Law, Innovation and Technology', Science and Engineering Ethics, Foresight, ...
As Markus mentions, there are also sometimes good disciplinary journals that run special issue collections on technology -- those can be opportunities to get a piece into high-profile journals even if they are usually more averse to tech-focused pieces (e.g. I got a piece into the Melbourne Journal of International Law); though it really depends on what audiences you're trying to reach / position your work for.
Thanks Nuño! I don't think I've got well-thought-out views on the relative importance or rankings of these work streams; I'm mostly focused on understanding scenarios in which my own work might be more or less impactful. (I should also note that if some lines of research mentioned here seem much more impactful, that may be more a result of me being more familiar with them, and so being able to give a more detailed account of what the research is trying to get at / what threat models and policy goals it is connected to.)

On your second question: as with other academic institutes, I believe it's actually both doable and common for donors or funders to support some of CSER's themes or lines of work but not others. Some institutional funders (e.g. for large academic grants) will often focus on particular themes or risks (rather than e.g. 'X-risk' as a general class), and therefore want to ensure their funding goes to just that work. The same has been the case for individual donations to support certain projects we've done, I think.

[ED: -- see link to CSER donation form. Admittedly, this web form doesn't clearly allow you to specify different lines of work to support, but in practice this could be arranged in a bespoke way -- by sending an email to firstname.lastname@example.org indicating what area of work one would want to support.]
The Legal Priorities Project's research agenda also includes consideration of s-risks, alongside x-risks and other types of trajectory change, though I do agree this remains somewhat under-integrated with other parts of the long-termist AI governance landscape (in part, I speculate, because the perspective might face [even] more inferential distance from the concerns of AI policymakers than x-risk-focused work).