How can we build (super) intelligent machines that are robustly aligned with human values? AI alignment researchers strive to meet this challenge, but currently draw upon a relatively narrow set of philosophical perspectives common in effective altruism and computer science. This could pose risks in a world where human values are complex, plural, and fragile. Tan Zhi Xuan discusses how these risks might be mitigated by greater philosophical pluralism, describing several problems in AI alignment where non-Western philosophies might provide insight.

In the future, we may post a transcript for this talk, but we haven't created one yet. If you'd like to create a transcript for this talk, contact Aaron Gertler — he can help you get started.

An extended transcript of the talk, along with further discussion, is available at https://www.alignmentforum.org/posts/jS2iiDPqMvZ2tnik2/ai-alignment-philosophical-pluralism-and-the-relevance-of.