
Everyone knows who to look out for in the creation of AI, but who should we be paying attention to for solving the control problem? I know of Eliezer, Stuart Russell, and the team mentioned above, but is there anyone else you would recommend following?

Malo · 8y
Over the past couple of years I've been excited to see the growth of the community of researchers working on technical problems related to AI alignment. Here's a quick and non-exhaustive list of people (and associated organizations) that I'm following (besides MIRI research staff and associates), in no particular order:

* Stuart Russell and the new Center for Human-Compatible AI.
* FHI's growing technical AI safety team, which includes:
  * Stuart Armstrong, who is also a research associate at MIRI and co-author of Safely Interruptible Agents with Laurent Orseau of DeepMind;
  * Eric Drexler;
  * Owain Evans; and
  * Jan Leike, who recently collaborated with MIRI on the paper A formal solution to the grain of truth problem.
* The authors of the Concrete Problems in AI Safety paper:
  * Dario Amodei, who is now at OpenAI, and Chris Olah and Dan Mané at Google Brain;
  * Jacob Steinhardt at Stanford;
  * Paul Christiano, a long-time MIRI collaborator currently at OpenAI (see also his writing at medium.com/ai-control); and
  * John Schulman, also at OpenAI.
* DeepMind's AI safety team, led by Laurent Orseau.
* Various other individual academics; for a sampling, see the speakers at our Colloquium Series on Robust and Beneficial AI and the grant recipients from the Future of Life Institute.