Hi, all! The Machine Intelligence Research Institute (MIRI) is answering questions here tomorrow, October 12 at 10am PDT. You can post questions below in the interim.
MIRI is a Berkeley-based research nonprofit that does basic research on key technical questions related to smarter-than-human artificial intelligence systems. Our research is largely aimed at developing a deeper and more formal understanding of such systems and their safety requirements, so that the research community is better-positioned to design systems that can be aligned with our interests. See here for more background.
Through the end of October, we're running our 2016 fundraiser — our most ambitious funding drive to date. Part of the goal of this AMA is to address questions about our future plans and funding gap, but we're also hoping to get very general questions about AI risk, very specialized questions about our technical work, and everything in between. Some of the biggest news at MIRI since Nate's AMA here last year:
- We developed logical induction, a new framework for thinking about deductively limited reasoning.
- Half of our research team started work on a new machine learning research agenda, distinct from our agent foundations agenda.
- We received a review and a $500k grant from the Open Philanthropy Project.
Likely participants in the AMA include:
- Nate Soares, Executive Director and primary author of the agent foundations research agenda
- Malo Bourgon, Chief Operating Officer
- Rob Bensinger, Research Communications Manager
- Jessica Taylor, Research Fellow and primary author of the ML research agenda
- Tsvi Benson-Tilsen, Research Associate
Nate, Jessica, and Tsvi are also three of the co-authors of the "Logical Induction" paper.
EDIT (10:04am PDT): We're here! Answers on the way!
EDIT (10:55pm PDT): Thanks for all the great questions! That's all for now, though we'll post a few more answers tomorrow to questions we didn't get to. If you'd like to support our AI safety work, our fundraiser will be continuing through the end of October.
The "more research" part has gone well: we added Benya and Nate in 2014, and Patrick, Jessica, Andrew, and Scott in 2015. We’re hoping to double the size of the research team over the next year or two. MIRI’s Research and All Publications pages track a lot of our output since then, and we’ve been pretty excited about recent developmens there.
For “less outreach,” the absolute amount of outreach work we're doing is probably increasing at the moment, though it's shrinking as a proportion of our total activities as the research team grows. (Eyeballing it, right now I think we spend something like 6 hours on research per hour on outreach.)
The character of our outreach is also quite different: more time spent in dialogue with AI groups and laying groundwork for research collaborations, rather than just trying to spread safety-relevant memes to various intellectuals and futurists.
The last two years have seen a big spike of interest in AI risk, and there's a lot more need for academic outreach now that it's easier to get people interested in these problems. On the other hand, there's also a lot more supply: researchers at OpenAI, Google, UC Berkeley, Oxford, and elsewhere who are interested in safety work often have a comparative advantage over us at reaching out to skeptics or to researchers who are new to these topics. So the balance today is probably similar to what Luke and others at MIRI had in mind in 2013 on a several-year timescale, though there was a period in 2014/2015 when we had more uncertainty about whether other groups would pop up to help meet the increased need for outreach.