Epistemic status: quick personal views, based on my own experience, and non-comprehensive; I estimate this covers less than 50% of CSER’s work over the last few years.

Format: edited transcript-with-slides (in style of other overviews).

*

I’m Matthijs Maas, a researcher at the Centre for the Study of Existential Risk (Cambridge University). In March I gave a 25-minute Hamming Hour talk at EA Bahamas on some of the things we’ve been up to at CSER. I especially focused on CSER’s work on long-termist AI governance, my area of expertise, but also covered some of our other x-risk research lines, as well as policy work. The talk was not recorded, but several people mentioned afterwards that they found it useful and novel, so I decided to turn it into a quick Forum post. Most of the papers I link to below should be open access; let me know if any are not, and I’m happy to share them directly.

This talk aims to give a quick overview of CSER’s work: what we’ve been up to recently, and how we approach the study and mitigation of existential risks.

Now, I know that ‘general institutional sales pitch’ is many people’s least-favourite genre of talk… 

…but hear me out: by giving a primer on some of what CSER has been up to over the past year or two, I will argue that CSER’s work and approach offer...

  1. substantive and decision-relevant insights into a range of existential risks, both in terms of threat models and in terms of mitigation strategies; 
  2. a distinct approach and methodology for studying existential and global catastrophic risks, and a record of working across academic disciplines and institutes to produce peer-reviewed academic work on existential risks, which can help build the credibility of the field with policymakers;
  3. a track record of policy impact at both the national (UK) and international level, which others in EA can draw from, or learn from. 

As such, while CSER’s academic work has occasionally been comparatively less visible in the EA community, I believe that much of CSER’s work is relevant to EA work on existential risks and long-term trajectories. That’s not to say we’ve worked this all out: there are a lot of uncertainties that I and others have, and cruxes still to be worked out. These points reflect some thoughts and insights that I thought would be useful to share, and I would be eager to discuss them more with the community.

In terms of structure: I’ll discuss CSER’s background, go over some of its research, and finally discuss our policy actions and community engagement:

CSER was formally founded in 2012 by Lord Martin Rees, Jaan Tallinn and Prof Huw Price (so it recently celebrated its 10th anniversary). Our first researcher started in late 2015, and since then we’ve grown to 28 people.

Most of our work can be grouped into four major clusters: AI risk (alignment, impact forecasting, and governance), biorisk, environmental collapse (climate change and ecosystem collapse), and 'meta' work on existential risks (including both the methodology of how to study existential risks, as well as the ethics of existential risks).

That’s the background on CSER; now I’ll go through some recent projects under these research themes. This is non-exhaustive (I estimate I cover less than 50% of CSER’s work over the last few years), and I will focus mostly on our AI work, which is my specialty.

Specifically, at CSER I’m part of the AI: Futures and Responsibility (AI-FAR) team. AI-FAR’s work is focused on long-term AI risks, impacts and governance, and covers three main research lines: (1) AI safety, security and risk, (2) futures and foresight (of impacts), and (3) AI governance:

Within the AI safety track, one interesting line of work is by John Burden and José Hernandez-Orallo on mapping the space of AI generality, capability, and control (“Exploring AI Safety in Degrees: Generality, Capability and Control”; see also “General intelligence disentangled via a generality metric for natural and artificial intelligence”), with the aim of empirically analysing these safety challenges in existing systems (“Negative Side Effects and AI Agent Indicators: Experiments in SafeLife”).

Another line of work aims to map links between contemporary AI techniques and distinct types of safety issues, in order to understand how these might scale (‘AI Paradigms and AI Safety: Mapping Artefacts and Techniques to Safety Issues’), and to understand competition and collaboration dynamics in AI research communities by analysing the links between different popular performance benchmarks in AI research (“The Scientometrics of AI Benchmarks: Unveiling the Underlying Mechanics of AI Research”):

Within AI-FAR’s foresight track, research by Carla Zoe Cremer (Oxford) and Jess Whittlestone (previously CSER, but now at CLTR) has focused on identifying early warning signs for imminent AI capability breakthroughs that might herald the development or deployment of transformative AI capabilities (‘Artificial Canaries: Early Warning Signs for Anticipatory and Democratic Governance of AI’): 

This is underpinned by underlying conceptual work by Jess and Ross Gruetzemacher (Wichita State University; previous CSER visiting scholar) to disaggregate and clarify what we mean by ‘transformative AI’ within the community, and what accordingly are relevant historical comparisons for understanding (and communicating) the likely impact of this technology (‘The transformative potential of artificial intelligence’):

Another ‘foresight/risk’ project, by Sam Clarke and Jess Whittlestone, explores different pathways by which mid-term AI systems could produce or contribute to catastrophic impacts or major shifts in our long-term trajectory, by exerting positive or negative effects on (1) scientific progress, (2) global cooperation, (3) dynamics of power concentration and inequality, (4) our epistemic processes, and (5) the prevailing values that will steer our future (this paper is not out yet, so I can’t link to it, but it should be out within a few months). This also builds on previous work by CSER and partners on the ‘malicious use’ of AI systems, as well as their potential impact on ‘epistemic security’ (‘Tackling threats to informed decisionmaking in democratic societies’):

Another output of the AI-FAR foresight team has been the development of different methodologies and scenario approaches to map out deployment scenarios for TAI systems. One result is ‘Intelligence Rising’, a strategic role-play game to explore possible AI futures (website), which allows us to track players’ strategic and research choices as they progress along a tech tree of different intermediate AI capabilities, and their decisions around AGI or CAIS. Through its various iterations, the game has been run with FHI RSPs, at EAG and other conferences, with various EA and AI safety groups, AI labs, government departments, and as part of university courses (LW: “Takeaways from the Intelligence Rising RPG”; “Exploring AI Futures Through Role Play”).

Another AI-FAR ‘risk’ research theme focuses on particular catastrophic risks arising from the military use of AI. Previous work by Shahar Avin has examined risks from integrating machine learning in & around nuclear weapons (‘Autonomy and Machine Learning as Risk Factors at the Interface of Nuclear Weapons, Computers and People’). Likewise, in still-unpublished draft work, Kayla Matteuci, Diane Cooke and I explore (and critically evaluate) four ways in which military AI systems could constitute or contribute to existential risk.

Within the AI-FAR ‘governance’ strand, researchers including Sam Clarke and myself have aimed to contribute to greater strategic clarity or consensus around long-termist AI governance. For instance, Sam has written an overview clarifying different approaches and levers within the longtermist AI governance landscape (‘The longtermist AI governance landscape: a basic overview’):

I am currently working on a project to map ‘Long-Termist AI Governance Theories of Impact’, in order to identify and distinguish: 

  1. their assumptions or cruxes around TAI timelines, trajectories and access thresholds, and around governance; 
  2. their theories of impact or victory (i.e. the ways they link near-term actions to mid-term assets that help us shape TAI governance parameters, which in turn will shape key deployment or use decisions, which affect eventual good long-term outcomes), 
  3. their potential strengths and failure modes (from an outside view). 

(This is currently non-public, but a sequence for the EA Forum is forthcoming; meanwhile, I can share a link to the overview upon request.)

Other research focuses on the robustness and adequacy of different TAI governance levers into the future. For instance, Shin-Shin Hua and Haydn Belfield have been working on the intersection of antitrust/competition law and AI governance. In a recent paper, they analyse 14 proposed forms of inter-lab cooperation (e.g. OpenAI’s assist clause) and show how to structure them in ways that don’t raise antitrust concerns (‘AI & Antitrust: Reconciling Tensions Between Competition Law and Cooperative AI Development’). They also have forthcoming work that provides a more general look at the effective enforceability of EU Competition Law under distinct TAI development scenarios, exploring not just under which TAI trajectories such regimes would be relevant, but also whether, in those cases, we would prefer more or less competition:

In other work, we have explored international institutional designs for the global governance of advanced AI: for instance, Peter Cihon (GitHub), Luke Kemp and I have explored six tradeoffs that affect whether we would expect TAI systems to be more effectively governed by a centralized regime (e.g. a single treaty or convention) or a globally decentralized regime (e.g. a collection of overlapping institutions and initiatives, such as GPAI, the OECD, the G20, or future ‘club governance’ initiatives) (‘Fragmentation and the Future: Investigating Architectures for International AI Governance’; ‘Should AI Governance Be Centralized?’):

This dovetails with work by Martina Kunz and Seán Ó hÉigeartaigh, who have surveyed international law to map which existing international treaties and conventions cover (or are likely to be extended to cover) different uses of, and safety risks from, AI technologies, and how some of these treaties might or might not be extended to future TAI systems (“Artificial Intelligence and Robotization”, in The Oxford Handbook of the International Law of Global Security).

In other research, we explore ways to promote cross-actor collaboration and cooperation on responsible and aligned AI development. This includes work on promoting West-China cooperation on responsible AI governance, co-authored with Chinese AI researchers ('Overcoming Barriers to Cross-cultural Cooperation in AI Ethics and Governance'), a work stream that converges with the EA focus on ways to help promote Chinese attention to, and cooperation on, issues in AI safety and governance:

We’re also keen on finding areas where the ‘near-’ and ‘long-term’ AI ethics communities can work together more productively. As such, we have written a series of papers that nuance perceived divisions between these communities and highlight areas of productive cooperation (‘Bridging the Gap: the case for an Incompletely Theorized Agreement on AI policy’; ‘Beyond Near- and Long-Term: Towards a Clearer Account of Research Priorities in AI Ethics and Society’).

Such work integrates well with recent attempts to identify opportunities for those with long-termist concerns around AI to engage with nearer-term AI challenges (e.g. the FTX Future Fund's emphasis on exploring “approaches that integrate with the existing AI ethics landscape, discussing, for example, fairness and transparency in current ML systems alongside risks from misaligned superintelligence”).

AI-FAR researchers have explored various approaches to improve regulatory levers to ensure they will remain adequate for increasingly advanced AI. 

  • Jess Whittlestone and Jack Clark (OpenAI) have produced high-visibility work proposing information infrastructures that governments could use to monitor the development and deployment of AI, including in ways that can help them anticipate greater future impacts (‘Why and How Governments Should Monitor AI Development’). 
  • I’ve recently developed a framework for AI regulation that aims to be more resilient to, and effective at regulating, increasingly general AI systems or applications, by lifting regulatory responses out of fragmented legal domains (e.g. ‘contract law’, ‘tax law’) and instead focusing regulatory attention on transformative societal impacts across domains (‘Aligning AI Regulation to Sociotechnical Change’, open access).
  • I’ve also undertaken work exploring how AI tools could support or affect global governance institutions, tools and international law, in ways relevant for broader x-risk governance (“AI, Governance Displacement, and the (De)Fragmentation of International Law”).

Now switching tack from AI to biorisk: CSER teams led by Luke Kemp have undertaken several comprehensive horizon scans of emerging issues in biosecurity for researchers and policymakers, to highlight the intersections with global catastrophic risks (WHO: Emerging Technologies and Dual-Use Concerns / Bioengineering horizon scan 2020):

These teams have also done work on prioritising a research agenda to feed into the UK Biosecurity Strategy (‘80 Questions for UK Biological Security’); and Lalitha Sundaram has explored processes of self-regulation and oversight within DIY bio-labs (Biosafety in DIY‐bio laboratories: from hype to policy):

While I have been less directly engaged with CSER’s work on environmental risk, we are doing fundamental research on climate change’s contribution to existential risk (‘Assessing Climate Change’s Contribution to Global Catastrophic Risk’; see preprint). Practically, senior CSER academics led on a Vatican report that helped the Pope frame climate change as a major moral challenge (which in turn contributed to the Paris Agreement), and Ellen Quigley, one of our researchers, advised Cambridge University’s CFO on divesting the largest endowment in Europe:

One of our researchers, Lara Mani, along with visiting researcher Mike Cassidy (Oxford), has been thinking about potential methods for the mitigation and prevention of high-impact volcanic eruptions. They recently posted two blogs on the EA Forum, which explain the risk from volcanic eruptions in depth and suggest lessons we can take from the recent eruption in Tonga, identifying potential steps for mitigating and preventing such an eruption. 

Last year, Lara, Asaf Tzachor and Paul Cole (University of Plymouth) also put out a piece in Nature Communications that explored the intersection between known volcanic centres and vulnerable global critical infrastructure (e.g. submarine cables, global shipping lanes and transportation networks) (‘Global catastrophic risks from lower magnitude volcanic eruptions’). Later this year, Lara, Mike and FHI’s Anders Sandberg will publish a piece exploring the ethics of ‘volcano engineering’, with the aim of beginning to assess the possibilities of volcanic eruption interventions:

More generally, CSER has done work on developing the methodology of studying existential risk, in a way that allows a systematic assessment of extreme risks and other pivotal determinants of long-term trajectories, building on other academic disciplines focused on risk reduction (“Classifying global catastrophic risks”, open access). This lens also puts more focus on cascading risks and system collapses: cases where smaller risks or individual GCRs can interact with one another, or with pre-existing vulnerabilities, to result in outsized (or even terminal) impacts on our long-term trajectory:

On the governance side, Luke Kemp and Catherine Rhodes have undertaken work for the Global Challenges Foundation to map out existing networks of international institutions, treaties, and other global actors that are currently active on different domains of GCR or x-risk. That’s not to say that these are all adequate; rather, such work points out gaps in the existing global governance tapestry, and ways these might be filled (“The Cartography of Global Catastrophic Governance”). 

In this space, CSER affiliate Rumtin Sepasspour has also compiled an ‘existential risk policy database’, collecting policy ideas and recommendations proposed by researchers in the existential risk field (see also link).

As part of CSER’s ‘A Science of Global Risk’ project, researchers Luke Kemp, Clarissa Rios Rojas, Lara Mani and Catherine Richards have been seeking out the best methods and tools for exploring, communicating and developing policy for global catastrophic risks. The project explores questions such as: how can we prompt people to think about the future? How can we talk about extreme risks in a way that doesn’t just invoke fear? And how can such communication result in actual policymaker action towards the mitigation and prevention of risks? 

Finally, beyond research, CSER is also active in shaping policy around existential risks and in supporting the EA and existential risk community:

In the last few years, CSER has developed an extensive track record in providing input and contributions to a wide range of policy processes that touch on various aspects of existential risks. 

Granted, it can be difficult to attribute downstream policy changes to specific interventions (and even where there are direct causal chains, key credit should also be given to work by orgs such as FHI, CLTR, etc.). Still, it is notable that in the UK: (1) existential risk, FHI and CSER have been mentioned in national strategy documents, such as the National Resilience Call for Evidence; (2) existential and global catastrophic risks have begun to be prominently discussed in the House of Lords; (3) risks from non-aligned AGI were referenced in the UK's National AI Strategy; (4) there are now frequent meetings between x-risk researchers and UK policymakers, up to Cabinet level.

Finally, CSER has had a long-standing active role in the EA community: you can find us at most EAGs, we collaborate with researchers across the space, we support initiatives such as the Cambridge Existential Risks Initiative (CERI), and we have worked closely with the Simon Institute for Longterm Governance (see also their recent first-year writeup).

As mentioned, the above is just a part of the work undertaken by CSER, and there are several other major lines of x-risk research and policy work that I’m currently leaving out, might have forgotten, or may not yet know about. At the same time, there are a ton of new topics and research directions that I know CSER researchers are planning to explore, and I encourage you to reach out to me or leave a comment if interested.


In short, this has been a brief personal review of some of the research and activities at CSER that I've gotten to follow over the last few years.


Additional event plug: if this work is of interest, a lot of it will also be covered at CSER’s upcoming Centre conference (next week, 19th-21st April 2022); virtual registration is open now.
 


Comments

I get the impression that some parts of CSER are fairly valuable, whereas others are essentially dead weight. E.g., if I imagine ranking in pairs all the work referenced in your presentation, my impression is that value would range over 2+ orders of magnitude between the most valuable and the least valuable.

Is that also your impression? Even if not, how possible is it to fund some parts of CSER, but not others?

MMMaas

Thanks Nuño! I don't think I have well-thought-out views on the relative importance or rankings of these work streams; I'm mostly focused on understanding scenarios in which my own work might be more or less impactful. (I should also note that if some lines of research mentioned here seem much more impactful, that may be more a result of me being more familiar with them, and being able to give a more detailed account of what the research is trying to get at / what threat models and policy goals it is connected to.)

On your second question: as with other academic institutes, I believe it's actually both doable and common for donors or funders to support some of CSER's themes or lines of work but not others. Some institutional funders (e.g. for large academic grants) will often focus on particular themes or risks (rather than, e.g., 'x-risk' as a general class), and therefore want to ensure their funding is going to just that work. The same has been the case, I think, for individual donations supporting certain projects we've done.

[ED: see link to CSER donation form. Admittedly, this web form doesn't clearly allow you to specify different lines of work to support, but in practice this could be arranged in a bespoke way, by sending an email to director@cser.cam.ac.uk indicating what area of work one would want to support.]

Thanks Matthijs

Thank you, very useful. Happy to see CSER expanding into domains where ALLFED is working, such as food shocks, critical infrastructure, volcano engineering, etc. Looking forward to collaborating more!