This is cross-posted from the GCRI website.

GCRI is currently welcoming inquiries from people who are interested in seeking our advice and/or collaborating with us on select AI projects, which are detailed below. We have some funding available for select project work; details are available upon request. We will post a separate call for advisees and collaborators on a broader range of issues later in the year; people with interests outside the AI projects below should hold their inquiries until then.

GCRI is particularly interested in connecting with

• Scholars and professionals whose backgrounds are in any way relevant to the project topics. Participation could include informal conversations to share expertise or more extensive collaboration on a range of project activities.

• Students and early-career professionals from all fields who are interested in any of the project topics. Participation could include conversations about how to pursue careers related to the topics, connecting with senior people working on these topics, and contributing to GCRI’s work on the projects.

Of the projects listed below, we are especially interested in hearing from people with expertise relevant to “Corporate governance of AI” and “Transfer of safety techniques to AI projects”.

Participation does not entail any significant time commitment. It could consist of just a short email exchange or call. People who are a good fit for our projects may be able to become more involved, potentially contributing to ongoing dialog on these topics, collaborating on research and outreach activities, and even co-authoring publications.

We welcome inquiries from people anywhere in the world. For logistical and administrative reasons, we have some preference for people based in the US or elsewhere in the Americas. People from underrepresented demographic groups are especially encouraged to reach out.

Individuals interested in speaking or collaborating with us should email Mr. Robert de Neufville, robert [at] gcrinstitute.org, with a short description of their background and interests, as well as what they hope to get out of their interaction with GCRI. Please also include a resume/CV and/or a link to your professional website, and note the city or area you are based in.

The AI projects

All of the projects are funded by a recent donation from Gordon Irlam as announced here, including continuations of projects funded a year ago as announced here.

The projects are as follows:

AGI R&D survey and interviews

This project extends our 2017 publication A survey of artificial general intelligence projects for ethics, risk, and policy. We plan to update the survey and conduct interviews with people at the AGI projects. Of particular interest for this project are:

• People interested in helping update the survey. This involves detailed research to identify new AGI R&D projects and update the descriptions of projects in the 2017 publication. This activity is especially well-suited for students and early-career professionals.

• People with interview expertise who can advise on or contribute to the interviews. We welcome inquiries from researchers and journalists alike.

Anthropocentrism in AI ethics

This project assesses human-centric bias in AI ethics. It includes descriptive ethics (the prevalence of anthropocentrism observed in existing articulations of AI ethics) and prescriptive ethics (arguments for how AI ethics should handle the distinction between humans and non-humans). Of particular interest for this project are:

• Experts in the ethics of anthropocentrism with backgrounds in environmental ethics and related branches of moral philosophy.

• People knowledgeable about existing articulations of AI ethics.

• People active in initiatives for new articulations of AI ethics.

Collective action on AI

This project seeks to promote constructive cooperation and avoid dangerous competition between AI developers. It surveys existing work on AI collective action and seeks to advise future work, including research and direct efforts to improve collective action. The project applies insights from the broader social science study of collective action, such as the work of Elinor Ostrom and colleagues. Of particular interest for this project are:

• People active in any aspect of AI collective action.

• Social scientists with backgrounds in collective action who wish to apply their expertise to AI.

Corporate governance of AI

This project seeks to improve the ethics and safety practices of for-profit developers of AI, with emphasis on long-term AI. A central challenge this project seeks to address is how to ensure that for-profit companies develop AI in the public interest, even when this goes against their financial interest. Of particular interest for this project are:

• People active in efforts to improve AI corporate governance.

• Experts in corporate governance interested in applying their knowledge to AI. Experts could be from academia, business, governmental regulatory agencies, or other relevant sectors.

Expert judgment on long-term AI

This project seeks to advance the use of expert judgment in research on long-term AI. Expert judgment has been used in long-term AI forecasting (e.g., this by our group, and this, this, this, and this) and risk analysis (e.g., this by our group). We will apply insights from the broader study of the use of expert judgment on technical risks, such as the work of Granger Morgan and colleagues, in order to formulate best methodological practices. Of particular interest for this project are:

• People active in the use of expert judgment on long-term AI, including people eliciting expert judgment or using expert judgment for analysis, decision-making, journalism, or other purposes.

• Experts in the study of expert judgment who wish to apply their expertise to AI.

Global strategy for AGI

This project develops and promotes strategy for maintaining global security as AGI is developed and used. It looks in particular at international rivalry and competition scenarios. The project considers the initial development of AGI (including dangerous AGI races), the period after the first AGI is launched, and the subsequent period when multiple (potentially rivalrous) parties possess AGI. It draws on insights from international security, including from the history of nuclear weapons, as in our recent paper Lessons for artificial intelligence from other global risks. Of particular interest for this project are:

• People active in research or other conversations on any aspect of global strategy for AGI.

• Experts in international security who wish to apply their expertise to AI.

International institutions for AI

This project evaluates and promotes potential international institutions for governing AI, especially long-term AGI. We are especially interested in what advice to provide the international community about AI policy, including about the formation of new institutions such as the OECD AI Policy Observatory and the Global Partnership on AI. Of particular interest for this project are:

• People active in the development of international institutions for AI, or in related research.

• Experts in international institutions who wish to apply their expertise to AI.

National security dimensions of AI

This project assesses the AI national security topic from a global catastrophic risk perspective. It considers ways in which near-term military AI can affect global catastrophic risk (e.g., by changing the risk of nuclear war) and ways in which national security applications can affect the development of long-term AI. Of particular interest for this project are:

• People active in research and policy on near-term military AI who wish to explore this topic in terms of global catastrophic risk.

• Experts in international security who wish to apply their expertise to long-term AI risk.

Transfer of safety techniques to AI projects

This project investigates how to ensure that technical solutions for AI safety are implemented in AI projects, with emphasis on long-term AGI. At present, technical safety solutions are often developed outside of the groups developing the AI itself. For the safety solutions to be incorporated into the AI, they must be transferred from one group to another. Of particular interest for this project are:

• People active in the transfer of safety techniques into AI development projects, or in related research.

• Experts in safety science and related fields (especially anyone with a background in the transfer of safety techniques across organizations) who wish to apply their expertise to AI.
