I don't think I quite follow your criticism of FLOP/s; can you say more about why you think it's not a useful unit? It seems like you're saying that a linear extrapolation of FLOP/s isn't an accurate way to estimate the compute requirements of larger models. (I know there are a variety of criticisms that could be made, but I'm interested in better understanding your point above.)
How'd you decide to focus on going into research, even before you decided that developing technical skills would be helpful for that path?
Thanks for the great post. Ryan, I'm curious how you figured this out at such an early stage:
> I figured that in the longer term, my greatest chance at having a substantial impact lay in my potential as a researcher, but that I would have to improve my maths and programming skills to realize that.
What key metrics do research analysts pay attention to in the course of their work? More broadly, how do employees know that they're doing a good job?
Luke Muehlhauser posted a list of strategic questions here: http://lukemuehlhauser.com/some-studies-which-could-improve-our-strategic-picture-of-superintelligence/ (originally posted in 2014).
By (3), do you mean the publications that are listed under "forecasting" on MIRI's publications page?
I agree that this makes sense in the "ideal" world, where potential donors have better mental models of this sort of research pathway; I've found this sort of thinking useful as a potential donor myself.
From an organizational perspective, I think MIRI should put more effort into producing visible explanations of their work (depending, of course, on their strategy for getting funding). As worries about AI risk become more widely known, there will be a larger pool of potential donations to research in the area. MIRI risks being out-competed by others who are better at explaining how their work decreases risk from advanced AI (I think this concern applies to both talent and money, but here I'm specifically talking about money).
High-touch, extremely large donors will probably get better explanations, progress reports, etc. from organizations, but the pool of potential money from donors who just read what's available online may be very large, and very influenced by clear explanations of the work. This pool of donors is also more subject to network effects, cultural norms, and memes. Given that MIRI runs public fundraisers to close funding gaps, it seems that they do rely on these sorts of donors for essential funding. Ideally, they'd just have enough unrestricted funding to keep them secure indefinitely (including allaying the risk of potential geopolitical crises and macroeconomic downturns).
Do you share Open Phil's view that there is a > 10% chance of transformative AI (defined as in Open Phil's post) in the next 20 years? What signposts would alert you that transformative AI is near?
Relatedly, suppose that transformative AI will happen within about 20 years (not necessarily via a self-improving AGI). Can you explain how MIRI's research would be relevant in such a near-term scenario (e.g. if it happens by scaling up deep learning methods)?
The authors of the "Concrete Problems in AI Safety" paper distinguish between misuse risks and accident risks. Do you think in these terms, and how does your roadmap address misuse risk?