
Hi everyone!

I'm co-organizing a workshop on a really interesting topic that's very relevant to AI safety. We call it "Rebellion and Disobedience in AI". If you're doing work that could be relevant to us, please submit it! If you have questions or want to discuss the scope of this workshop, feel free to ask on this thread and I'll try to answer.

 

 

Dear colleagues,

This is the second call for papers for the workshop on Rebellion and Disobedience in AI (RaD-AI), which will take place on May 30, 2023, as part of the AAMAS workshop program.

This call contains an extended submission deadline (February 13) and a list of confirmed speakers.

More details can be found on the workshop’s website: https://sites.google.com/view/rad-ai/home
 

RaD-AI agents are artificial agents (virtual or robotic) that reason intelligently about why, when, and how to rebel against and disobey the commands they are given. The need for agents to disobey contrasts with most existing research on collaborative robots and agents, where a "good" agent is defined as one that complies with the commands it is given and that works in a predictable manner with the consent of the human it serves. However, as exemplified by Isaac Asimov's Second Law of Robotics, such compliance is not always desired, for example when it would interfere with a human's safety. While there has been little prior research on RaD-AI, we identify three main related topics, each studied by a thriving subcommunity of AI: Intelligent Social Agents, Human-Agent/Robot Interaction, and Societal Impacts. In each of these areas, there are research questions relevant to RaD-AI.

 

Confirmed Speakers:

Joel Leibo, DeepMind

Matthias Scheutz, Department of Computer Science, Tufts University

Liz Sonenberg, School of Computing and Information Systems, The University of Melbourne

 

We are specifically interested in submissions on the following topics:

Intelligent Social Agents (including but not limited to: Goal Reasoning, Plan Recognition, Value Alignment, and Social Dilemmas)

Human-Agent/Robot Interaction (including but not limited to: Human-Agent Trust, Interruptions, Deception, Command Rejection, and Explainability)

Societal Impacts (including but not limited to: Legal and Ethical Reasoning, Liability, AI Safety, and AI Governance)

 

Submission details:

The submission deadline is February 13, 2023 (extended from January 20, 2023).

Notifications will be sent on March 13, 2023.

The submission website is: https://easychair.org/conferences/?conf=radai23

Accepted submission types:

Regular Research Papers (6 to 8 pages)

Short Research Papers (up to 4 pages)

Position Papers (up to 2 pages)

Tool Talks (up to 2 pages)

 

Organizing Committee:

David Aha, Navy Center for Applied Research in AI; Naval Research Laboratory; Washington, DC; USA

Gordon Briggs, Navy Center for Applied Research in AI; Naval Research Laboratory; Washington, DC; USA

Reuth Mirsky, Department of Computer Science; Bar-Ilan University; Israel (mirskyr@cs.biu.ac.il)

Ram Rachum, Department of Computer Science; Bar-Ilan University; Israel 

Kantwon L. Rogers, Department of Computer Science; Georgia Tech; USA

Peter Stone, The University of Texas at Austin; USA and Sony AI


 
