Hi, all! The Machine Intelligence Research Institute (MIRI) is answering questions here tomorrow, October 12 at 10am PDT. You can post questions below in the interim.
MIRI is a Berkeley-based research nonprofit that does basic research on key technical questions related to smarter-than-human artificial intelligence systems. Our research is largely aimed at developing a deeper and more formal understanding of such systems and their safety requirements, so that the research community is better-positioned to design systems that can be aligned with our interests. See here for more background.
Through the end of October, we're running our 2016 fundraiser — our most ambitious funding drive to date. Part of the goal of this AMA is to address questions about our future plans and funding gap, but we're also hoping to get very general questions about AI risk, very specialized questions about our technical work, and everything in between. Some of the biggest news at MIRI since Nate's AMA here last year:
- We developed logical induction, a new framework for thinking about deductively limited reasoning.
- Half of our research team started work on a new machine learning research agenda, distinct from our agent foundations agenda.
- We received a review and a $500k grant from the Open Philanthropy Project.
Likely participants in the AMA include:
- Nate Soares, Executive Director and primary author of the agent foundations (AF) research agenda
- Malo Bourgon, Chief Operating Officer
- Rob Bensinger, Research Communications Manager
- Jessica Taylor, Research Fellow and primary author of the ML research agenda
- Tsvi Benson-Tilsen, Research Associate
Nate, Jessica, and Tsvi are also three of the co-authors of the "Logical Induction" paper.
EDIT (10:04am PDT): We're here! Answers on the way!
EDIT (10:55pm PDT): Thanks for all the great questions! That's all for now, though we'll post a few more answers tomorrow to things we didn't get to. If you'd like to support our AI safety work, our fundraiser will be continuing through the end of October.
I'd mostly put OpenAI in the same category as DeepMind: primarily an AI capabilities organization, but one that's unusually interested in long-term safety issues. OpenAI is young, so it's a bit early to say much about them, but we view them as collaborators and are really happy with "Concrete Problems in AI Safety" (joint work by people at OpenAI, Google Brain, and Stanford). We helped lead a discussion about AI safety at their recent unconference, contributed to some OpenAI Gym environments, and are on good terms with a lot of people there.
Some of the ways OpenAI's existence has adjusted our strategy so far:
1) OpenAI is in a better position than MIRI to spread basic ideas like 'long-run AI risk is a serious issue.' This increases our confidence in our plan to scale back outreach, especially outreach toward more skeptical audiences that OpenAI is probably better placed to communicate with.
2) Increasing the number of leading AI research organizations creates more opportunities for conflicts and arms races, which we consider a serious risk. So more of our outreach time now goes toward encouraging collaboration between the big players.
3) On the other hand, OpenAI is a nonprofit with a strong stated interest in encouraging inter-organization collaboration. This suggests OpenAI might be a useful mediator or staging ground for future coordination between leading research groups.
4) The increased interest in long-run safety issues from ML researchers at OpenAI and Google increases the value of building bridges between the alignment and ML communities. This was one factor going into our "Alignment for Advanced ML Systems" agenda.
5) Another important factor is that more funding flowing into cutting-edge AI research shortens timelines to AGI, so we're putting incrementally more attention into research that's more likely to be useful if AGI is developed soon.