Hi, all! The Machine Intelligence Research Institute (MIRI) is answering questions here tomorrow, October 12 at 10am PDT. You can post questions below in the interim.
MIRI is a Berkeley-based research nonprofit that does basic research on key technical questions related to smarter-than-human artificial intelligence systems. Our research is largely aimed at developing a deeper and more formal understanding of such systems and their safety requirements, so that the research community is better positioned to design systems that can be aligned with our interests. See here for more background.
Through the end of October, we're running our 2016 fundraiser — our most ambitious funding drive to date. Part of the goal of this AMA is to address questions about our future plans and funding gap, but we're also hoping to get very general questions about AI risk, very specialized questions about our technical work, and everything in between. Some of the biggest news at MIRI since Nate's AMA here last year:
- We developed logical induction, a new framework for thinking about deductively limited reasoning.
- Half of our research team started work on a new machine learning research agenda, distinct from our agent foundations agenda.
- We received a review and a $500k grant from the Open Philanthropy Project.
Likely participants in the AMA include:
- Nate Soares, Executive Director and primary author of the AF research agenda
- Malo Bourgon, Chief Operating Officer
- Rob Bensinger, Research Communications Manager
- Jessica Taylor, Research Fellow and primary author of the ML research agenda
- Tsvi Benson-Tilsen, Research Associate
Nate, Jessica, and Tsvi are also three of the co-authors of the "Logical Induction" paper.
EDIT (10:04am PDT): We're here! Answers on the way!
EDIT (10:55pm PDT): Thanks for all the great questions! That's all for now, though we'll post a few more answers tomorrow to things we didn't get to. If you'd like to support our AI safety work, our fundraiser will be continuing through the end of October.
A lot of the discourse around AI safety uses terms like "human-friendly" or "human interests". Does MIRI's conception of friendly AI take the interests of non-human sentient beings into consideration as well? Especially troubling to me is Yudkowsky's view on animal consciousness, but I'm not sure how representative his views are of MIRI in general.
(I realize that MIRI's research focuses mainly on alignment theory, not target selection, but I am still concerned about this issue.)
I am not a MIRI employee, and this comment should not be interpreted as a response from MIRI, but I wanted to throw my two cents in about this topic.
I think that creating a friendly AI to specifically advance human values would actually turn out okay for animals. Such a human-friendly AI should optimize for everything humans care about, not just the quality of humans' subjective experience. Many humans care a significant amount about the welfare of non-human animals. A human-friendly AI would thus care about animal welfare by proxy through the values of humans.