Most work in AI Safety, and especially in AI Alignment, considers the problem of ensuring that a single AI system acts safely and according to the preferences of a single human. This is a natural place to begin tackling such an important and difficult problem.
In this talk, however, Lewis Hammond will explain why safety in the single-agent setting is insufficient for avoiding risks from AI in our multi-agent world, and will introduce a growing subfield of research known as Cooperative AI that attempts to address this problem. He will provide a number of technical examples of recent work on this topic, and leave plenty of time for both technical and non-technical discussion at the end.
Lewis is a DPhil Affiliate at the Future of Humanity Institute and a DPhil candidate in Computer Science at the University of Oxford. He is interested in how a combination of the logical and statistical paradigms within AI can be used to help create safe, explainable, and provably beneficial technologies. His current research explores game theory, formal methods, and machine learning; the working title of his thesis is “Rational Synthesis in Evolutionary Games”. Before coming to Oxford he completed a Bachelor’s degree in mathematics and philosophy at the University of Warwick, and a Master’s degree in AI at the University of Edinburgh.
Please join us for a buffet social afterwards!