The Centre for the Governance of AI is becoming a nonprofit

by Markus Anderljung · 1 min read · 9th Jul 2021 · 4 comments


Org update · AI governance · Centre for the Governance of AI · Future of Humanity Institute

We wanted to update the EA community about some changes to the Centre for the Governance of AI (GovAI). GovAI was founded in 2018, as part of the University of Oxford’s Future of Humanity Institute (FHI)[1]. GovAI is now becoming an independent nonprofit. We are currently in the process of setting up the organisation, forming a board, and fundraising. You will find updates on our placeholder website and through our new mailing list (see the signup form on the homepage).

These changes were prompted by an opportunity that arose for Allan Dafoe – GovAI’s Director – to take on a senior role at DeepMind. The university informed us that it would not be possible for Allan to hold a dual appointment, and that GovAI’s status as an Oxford-affiliated research centre depended on his holding an Oxford appointment. In response to these constraints, we opted to become an independent nonprofit.

Some background for our decision is that we had previously considered the potential benefits of standing up a nonprofit to support the longtermist AI governance community, particularly the much lower administrative overhead and new opportunities to expand our activities. Moreover, we recognised that our community had grown well beyond Oxford: the majority of our members are based at other universities, companies, and think tanks. An independent nonprofit structure therefore seemed consistent with our ambitions and with the established geography of our community.

Allan will continue as a co-leader of the organisation. I will help set up the organisation over the coming months. FHI’ers Toby Ord, Ben Garfinkel, Alexis Carlier, Toby Shevlane, and Anne le Roux are also likely to be prominently involved (pending university approval).

We would love to hear from people with a diverse set of skills – including research, event organizing, grantmaking, operations, and project management – who are motivated to work on AI governance. We are especially interested in growing our expertise in institutional design. For those interested, we outline our theory of impact here and some research questions here. You can fill out an expression of interest form here.


  1. It succeeded the Oxford-based Governance of AI Program (which was founded in 2017) and the Yale-based Global Politics of AI Research Group (which was founded in 2016). ↩︎


4 comments

Will GovAI in its new form continue to deal with the topic of regulation (i.e. regulation of AI companies by states)?

DeepMind is owned by Alphabet (Google). Many interventions related to AI regulation can affect the stock price of Alphabet, which Alphabet is legally obligated to try to maximize (regardless of the good intentions that many senior executives there may have). If GovAI will be co-led by an employee of DeepMind, there is seemingly a severe conflict-of-interest issue around anything that GovAI does (or avoids doing) with respect to the topic of regulating AI companies.

GovAI's research agenda (which is currently linked to from their 'placeholder website') includes the following:

[...] At what point would and should the state be involved? What are the legal and other tools that the state could employ (or are employing) to close and exert control over AI companies? With what probability, and under what circumstances, could AI research and development be securitized--i.e., treated as a matter of national security--at or before the point that transformative capabilities are developed? How might this happen and what would be the strategic implications? How are particular private companies likely to regard the involvement of their host government, and what policy options are available to them to navigate the process of state influence? [...]

How will this part of the research agenda be influenced by GovAI being co-led by a DeepMind employee?

Thanks for the question. I agree that managing these kinds of issues is important and we aim to do so appropriately.

GovAI will continue to do research on regulation. To date, most of our work has been fairly foundational, though the past 1-2 years have seen an increase in research that may provide some fairly concrete advice to policymakers. This is primarily because the field is maturing, policymakers are increasingly seeking to put AI regulation in place, and some folks at GovAI have had an interest in pursuing more policy-relevant work.

My view is that most of our policy work to date has been fairly (small-c) conservative: it has seldom passed judgment on whether there should be more or less regulation, or praised specific actors. You can sample some of that previous work here:

We haven't yet decided how we'll manage potential conflicts of interest. Thoughts on what principles to adopt are welcome. Below is a subset of things that are likely to be put in place:

  • We're aiming for a board that does not have a majority of folks from any one of industry, policy, or academia.
  • Allan will be the co-lead of the organisation. We hope to be able to announce others soon.
  • Whenever someone has a clear conflict of interest regarding a candidate or a piece of research – say we were to publish a ranking of how responsible various AI labs were being – we'll have the person recuse themselves from the decision.
  • For context, I expect most folks who collaborate with GovAI will not be directly paid by GovAI. Most folks will be employed elsewhere and not closely line-managed by the organisation.

FWIW, I agree that for some lines of work you might want to do, managing conflicts of interest is very important, and I'm glad you're thinking about how to do this.

Related to the concern that I raised here: I recommend that interested readers listen to (or read the transcript of) this FLI podcast episode with Mohamed Abdalla about their paper: "The Grey Hoodie Project: Big Tobacco, Big Tech, and the threat on academic integrity".