This is a linkpost for https://www.youtube.com/watch?v=dbMp4pFVwnU&list=PLwp9xeoX5p8Pq5nu2KkiBFCXmeurxws1u&index=3&t=5s
How can we build (super) intelligent machines that are robustly aligned with human values? AI alignment researchers strive to meet this challenge, but currently draw upon a relatively narrow set of philosophical perspectives common in effective altruism and computer science. This could pose risks in a world where human values are complex, plural, and fragile. Tan Zhi Xuan discusses how these risks might be mitigated by greater philosophical pluralism, describing several problems in AI alignment where non-Western philosophies might provide insight.
We haven't yet created a verbatim transcript of this talk. If you'd like to create one, contact Aaron Gertler, who can help you get started.
In the meantime, an extended transcript written by the speaker, along with further discussion, is available at https://www.alignmentforum.org/posts/jS2iiDPqMvZ2tnik2/ai-alignment-philosophical-pluralism-and-the-relevance-of.