Next week for The 80,000 Hours Podcast I'm interviewing Ajeya Cotra, senior researcher at Open Philanthropy, AI timelines expert, and author of "Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover."
What should I ask her?
My first interview with her is here:
Some of Ajeya's work includes:
Is marginal work on AI forecasting useful? With so much brainpower being spent on moving a single number up or down, I'd expect it to hit diminishing returns pretty fast. To what extent is forecasting a massive brain drain, such that people who are sufficiently convinced should just get to work on the object-level problems? How sensitive are your priorities over object-level projects to AI forecasting estimates (as in, how many more years out would your estimate of X have to be before your prioritization changed)?
Update: I added some arguments against forecasting here, but they are very general, and I suspect they will be overwhelmed by evidence specific to particular cases.