
https://aipsychphil.github.io/

 

**About**

Whether offering advice from a chatbot, suggesting how to allocate resources, or deciding which content to highlight, AI systems increasingly make value-laden decisions. At the same time, researchers are growing concerned about whether these systems are making the right decisions. These emerging issues in the AI community have long been topics of study in moral philosophy and moral psychology. For decades (if not centuries), philosophers and psychologists have worked on the systematic description and evaluation of human morality, and on the sub-problems that arise when attempting to describe and prescribe answers to moral questions. For instance, they have long debated utility-based versus rule-based theories of morality, the merits and pitfalls of each, and the practical challenges of implementing them in resource-limited systems. They have pondered what to do in cases of moral uncertainty, attempted to enumerate all morally relevant concepts, and argued about what counts as a moral issue at all.

In isolated cases, AI researchers have begun to adopt the theories, concepts, and tools developed by moral philosophers and moral psychologists. For instance, we use the “trolley problem” as a tool, adopt philosophical moral frameworks to tackle contemporary AI problems, and have begun developing benchmarks that draw on psychological experiments probing moral judgment and development.

Despite this, interdisciplinary dialogue remains limited. Each field uses specialized language, making it difficult for AI researchers to adopt the theoretical and methodological frameworks developed by philosophers and psychologists. Moreover, many theories in philosophy and psychology are developed at a high level of abstraction and are not computationally precise. To overcome these barriers, we need interdisciplinary dialogue and collaboration. This workshop will create a venue for these interactions by bringing together psychologists, philosophers, and AI researchers working on morality. We hope the workshop will be a jumping-off point for long-lasting collaborations among the attendees and will break down the barriers that currently divide the disciplines.

The central theme of the workshop will be the application of theories from moral philosophy and moral psychology to AI practice. Our invited speakers are among the leaders in the emerging effort to draw on philosophy and psychology to develop ethical AI systems. Their talks will showcase cutting-edge cross-disciplinary work while also highlighting its shortcomings (and those of the field more broadly). Each talk will receive a 5-minute commentary from a junior scholar in a field different from the speaker's. We hope these talks and commentaries will inspire conversations among the rest of the attendees.

 

**Invited Speakers and Tentative Talk Topics**

Laura Weidinger (Senior Research Scientist, DeepMind, AI + Psychology): Using findings from developmental moral psychology to create benchmarks for an AI system’s moral competence.

Josh Tenenbaum (Professor, MIT, AI + Psychology): Using a recent “contractualist” theory of moral cognition to lay out a roadmap for developing an AI system that makes human-like moral judgments.

Sam Bowman (Associate Professor, NYU & Anthropic, AI): Using insights from cognitive science for language model alignment.

Walter Sinnott-Armstrong (Professor, Duke, AI + Philosophy): Using preference-elicitation techniques to align kidney allocation algorithms with human values.

Regina Rini (Associate Professor, York University, Philosophy): Using John Rawls’ “decision procedure for ethics” as a guiding framework for crowdsourcing ethical judgments to be used as training data for large language models.

Josh Greene (Professor, Harvard, Psychology): An approach to AI safety and ethics inspired by the human brain’s dual-process (“System 1/System 2”) architecture.

Rebecca Saxe (Professor, MIT, Psychology): Using the neuroscience of theory-of-mind to build socially and ethically aware AI systems.

 

**Call for Contributions**

The core of the workshop will be a series of in-person invited talks from leading scholars working at the intersection of AI, psychology, and philosophy on issues related to morality. Each talk will be followed by a 5-minute commentary from a junior scholar whose training is primarily in a field different from the speaker's, a format designed to encourage interdisciplinary exchange. The day will end with a panel discussion among all the speakers. We will also organize two poster sessions of contributed papers to allow for one-on-one interaction between attendees and presenters.


 

**We invite submissions on the following topics**

Ideal submissions will show how a theory from moral philosophy or moral psychology can be applied in the development or analysis of ethical AI systems. For example:

How can moral philosophers and psychologists best contribute to ethically informed AI?

What can theories of developmental moral psychology teach us about making AI?

How do theories of moral philosophy shed light on modern AI practices?

How can AI tools advance the fields of moral philosophy and psychology themselves?

How can findings from moral psychology inform the trustworthiness, transparency, or interpretability of AI decision-makers?

What human values are already embedded in current AI systems?

Are the values embedded in current-day AI systems consistent with those of society at large?

What pluralistic values are missing from current-day AI?

Methodologically, what is the best way to teach an AI system human values? What are the alternatives to reinforcement learning from human feedback (RLHF)?

Concerning AI alignment, which values should we align to? Does the current practice of AI alignment amplify monolithic voices? How can we incorporate diverse voices, views, and values into AI systems?

 

**Submission format**

To apply, submit a short paper (3-8 pages), formatted for blind review. References do not count towards the page limit. Figures and tables are permitted. Note that papers on the shorter end of the range will be given full consideration. The workshop is non-archival, though there will be an option to have the papers posted on the workshop website. Accepted submissions will be presented as posters. 

A small subset of the accepted submissions will be offered the opportunity to present their work as a 5- to 7-minute talk immediately following one of the invited talks. These short talks will be framed as a “discussion” or “commentary” on the main talk: they will address a theme similar to the main talk's, but from a different theoretical or methodological perspective. These talks can (and should) present the author’s original work, while explicitly addressing how that work challenges or supplements the work of the main speaker. On the submission page, you can indicate whether you would like your submission to be considered for a short talk and, if so, which invited speaker you see as most relevant (though this is simply a suggestion to the organizers). Submissions accepted as short talks will not be presented as posters. Preference will be given to junior scholars. In addition, the organizers are committed to many forms of intellectual and sociological diversity; those from under-represented groups are especially encouraged to apply.
 

**Important Dates** 

All deadlines are in the Anywhere on Earth (AoE) time zone.

Submission Deadline: Sep 29, 2023

Accept/Reject Notification: Oct 20, 2023

Camera-ready Final Submission: Nov 10, 2023

Workshop Date: Dec 15, 2023


**Comments**

Executive summary: A workshop at NeurIPS 2023 will bring together AI researchers with moral philosophers and psychologists to facilitate interdisciplinary collaboration on developing ethical AI systems.

Key points:

  1. The fields of AI, moral philosophy, and moral psychology use specialized languages and frameworks, limiting interdisciplinary collaboration.
  2. The workshop will host talks by leading researchers working at the intersections of these fields.
  3. Talks will demonstrate applying theories from philosophy and psychology to ethical AI practices.
  4. Junior scholars will give short commentaries on the talks from cross-disciplinary perspectives.
  5. Poster sessions will enable discussion and exchange among attendees.
  6. The workshop calls for contributions applying moral philosophy and psychology to AI practices.

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
