Tan Zhi Xuan: AI alignment, philosophical pluralism, and the relevance of non-Western philosophy

by EA Global · 1 min read · 21st Nov 2020


Video · Moral philosophy · AI alignment

How can we build (super) intelligent machines that are robustly aligned with human values? AI alignment researchers strive to meet this challenge, but currently draw upon a relatively narrow set of philosophical perspectives common in effective altruism and computer science. This could pose risks in a world where human values are complex, plural, and fragile. Tan Zhi Xuan discusses how these risks might be mitigated by greater philosophical pluralism, describing several problems in AI alignment where non-Western philosophies might provide insight.

In the future, we may post a transcript for this talk, but we haven't created one yet. If you'd like to create a transcript for this talk, contact Aaron Gertler — he can help you get started.

1 comment

An extended transcript of the talk is available at https://www.alignmentforum.org/posts/jS2iiDPqMvZ2tnik2/ai-alignment-philosophical-pluralism-and-the-relevance-of. There's also a lot more discussion there.