Hi, all! The Machine Intelligence Research Institute (MIRI) is answering questions here tomorrow, October 12 at 10am PDT. You can post questions below in the interim.
MIRI is a Berkeley-based research nonprofit that does basic research on key technical questions related to smarter-than-human artificial intelligence systems. Our research is largely aimed at developing a deeper and more formal understanding of such systems and their safety requirements, so that the research community is better-positioned to design systems that can be aligned with our interests. See here for more background.
Through the end of October, we're running our 2016 fundraiser — our most ambitious funding drive to date. Part of the goal of this AMA is to address questions about our future plans and funding gap, but we're also hoping to get very general questions about AI risk, very specialized questions about our technical work, and everything in between. Some of the biggest news at MIRI since Nate's AMA here last year:
- We developed a new framework for thinking about deductively limited reasoning: logical induction. (A rough statement of the core criterion follows this list.)
- Half of our research team started work on a new machine learning research agenda, distinct from our agent foundations agenda.
- We received a review and a $500k grant from the Open Philanthropy Project.
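For readers new to the logical induction work: very roughly, the paper's central criterion says that a sequence of belief states over logical sentences counts as a "logical inductor" if no efficiently computable trading strategy can exploit it by betting against its prices. The statement below is a schematic paraphrase, not the paper's exact definitions:

\[
\overline{\mathbb{P}} = (\mathbb{P}_1, \mathbb{P}_2, \dots) \text{ satisfies the logical induction criterion relative to } \overline{D} \;\Longleftrightarrow\; \text{no efficiently computable trader exploits } \overline{\mathbb{P}} \text{ relative to } \overline{D},
\]

where \(\overline{D} = (D_1, D_2, \dots)\) is a deductive process (the sentences proven by stage \(n\)), and a trader "exploits" the market if the value of its holdings over time is bounded below but unbounded above.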
Likely participants in the AMA include:
- Nate Soares, Executive Director and primary author of the agent foundations (AF) research agenda
- Malo Bourgon, Chief Operating Officer
- Rob Bensinger, Research Communications Manager
- Jessica Taylor, Research Fellow and primary author of the ML research agenda
- Tsvi Benson-Tilsen, Research Associate
Nate, Jessica, and Tsvi are also three of the co-authors of the "Logical Induction" paper.
EDIT (10:04am PDT): We're here! Answers on the way!
EDIT (10:55pm PDT): Thanks for all the great questions! That's all for now, though we'll post a few more answers tomorrow to things we didn't get to. If you'd like to support our AI safety work, our fundraiser runs through the end of October.
Quoting Nate's supplement from OpenPhil's review of "Proof-producing reflection for HOL" (PPRHOL):
How far along are you toward narrowing these gaps, now that "Logical Induction" is a thing people can talk about? Are there variants of it that narrow these gaps, or planned follow-ups to PPRHOL that might improve our models? What kinds of experiments seem valuable for this subgoal?
I endorse Tsvi's comment above. I'll add that it's hard to say how close we are to closing basic gaps in our understanding of things like "good reasoning", because mathematical insight is notoriously difficult to predict. All I can say is that logical induction does seem like progress to me, and we're pursuing a variety of approaches to the remaining problems. Also, yeah, one of those avenues is a follow-up to PPRHOL. (One experiment we're running now is an attempt to implement a cellular automaton in HOL that implements a reflective reasoner with access to...