Hi jskatt, great question! I’m a research analyst at Concordia, and here’s what I said in my Feb 2022 SERI talk about AI alignment/safety-sympathetic resources and institutions in China:

“Over the past few years, Chinese researchers and policy stakeholders have demonstrated increasing interest in AI safety. 

For instance, last year two AI scientists from China’s AI Strategic Advisory Committee, which advises national policy on AI, wrote an article discussing the risks from AGI and potential countermeasures. The two scientists, Huang Tiejun and Gao Wen, along with their colleagues, presented a summary of possible approaches to alignment based on Nick Bostrom’s book, Superintelligence, and cited other classic works from the Western AI alignment community, such as Concrete Problems in AI Safety and Life 3.0. The article acknowledged the relative lack of attention to AGI safety in China and recommended “examining international discussions…of AGI policies, integrating cutting-edge legal and ethical findings, and exploring the elements of China’s AGI policymaking in a deeper and more timely manner.”

In the same year, Huang Tiejun and Zhang Hongjiang, the chairperson of one of China’s top AI labs, the Beijing Academy of AI, endorsed the Chinese translation of Human Compatible, a book on AI alignment written by Professor Stuart Russell, who’ll be speaking at this conference tomorrow. Zhang also participated in a dialogue with Stuart at one of China’s most prestigious AI conferences, discussing the book and AGI safety.

But despite these cases of high-profile support for AGI safety, many Chinese AI safety researchers focus on areas like robustness and interpretability instead of more alignment-relevant topics like goal specification.”

Separately, the org I work at, Concordia, aims to promote the safe and responsible development of AI, with a particular focus on China (more). For example, we recently wrapped up the first-ever lecture series on AI alignment in China, which included speakers Rohin Shah, Max Tegmark, David Krueger, Paul Christiano, Brian Christian, and Jacob Steinhardt. To promote this lecture series, we also translated a number of AI alignment works from English into Chinese.

I’d be happy to have a chat about this; I’ve just messaged you.