[EDIT: Thanks for the questions everyone! Just noting that I'm mostly done answering questions, and there were a few that came in Tuesday night or later that I probably won't get to.]
Hi everyone! I’m Ajeya, and I’ll be doing an Ask Me Anything here. I plan to start answering questions Monday, Feb 1 at 10 AM Pacific. I’ll be blocking off much of Monday and Tuesday for question-answering, and may continue answering questions through the week if any remain, though I might not get to everything.
About me: I’m a Senior Research Analyst at Open Philanthropy, where I focus on cause prioritization and AI. 80,000 Hours released a podcast episode with me last week discussing some of my work, and last September I put out a draft report on AI timelines which is discussed in the podcast. Currently, I’m trying to think about AI threat models and how much x-risk reduction we could expect the “last long-termist dollar” to buy. I joined Open Phil in the summer of 2016, and before that I was a student at UC Berkeley, where I studied computer science, co-ran the Effective Altruists of Berkeley student group, and taught a student-run course on EA.
I’m most excited about answering questions related to AI timelines, AI risk more broadly, and cause prioritization, but feel free to ask me anything!
An extension of Daniel's bonus question:
If I condition on your report being wrong in an important way (either in its numerical predictions, or via conceptual flaws) and think about how we might figure that out today, it seems like two salient possibilities are inside-view arguments and outside-view arguments.
The former are things like "this explicit assumption in your model is wrong." For example, I'd count my concern about the infeasibility of building AGI using algorithms available in 2020 as an inside-view argument.
The latter are arguments that, given the general difficulty of forecasting the future, there's probably some upcoming paradigm shift or crucial consideration that will have a big effect on your conclusions, even if nobody currently knows what it will be.
Are you more worried about the inside-view arguments of current ML researchers, or about outside-view arguments?