How can we build (super)intelligent machines that are robustly aligned with human values? AI alignment researchers strive to meet this challenge, but currently draw upon a relatively narrow set of philosophical perspectives common in effective altruism and computer science. This narrowness could pose risks in a world where human values are complex, plural, and fragile. Tan Zhi Xuan discusses how greater philosophical pluralism might mitigate these risks, describing several problems in AI alignment where non-Western philosophies could provide insight.
A transcript for this talk has not yet been created, though one may be posted in the future. If you'd like to create a transcript, contact Aaron Gertler — he can help you get started.