I would be curious to know what your median time from weak AGI to artificial superintelligence (ASI) is on this question from Metaculus
I don't want to take the time to really understand the operationalizations used in that question, sorry—I'm worried that the answer might depend substantially on finicky details. In general I think that the AI Futures Model is pretty similar to my views on how long it will take to go from "weak AGI" to "ASI", for various definitions of those terms.
I think AI takeover in the next ten years is like 35% (😔), and conditional on takeover, I mostly agree with this; AI takeover is the main mechanism that causes me to expect massive human fatalities in the next decade.
I agree the survey underestimates the variance in AI timelines and risk.
The AI Futures Project, which is known for AI 2027, had super broad artificial superintelligence (ASI) timelines as of January 26. The difference between the 90th and 10th percentile was 168 years for Daniel Kokotajlo (2027 to 2195), and 137 years for Eli Lifland (2028 to 2165).
Note that that is a different kind of distribution (one person's beliefs) from the one reported here (many people's medians).
@Ryan Greenblatt and I are going to record another podcast together (see the previous one here). We'd love to hear topics that you'd like us to discuss. (The questions people proposed last time are here, for reference.) We're most likely to discuss issues related to AI, but a broad set of topics other than "preventing AI takeover" is on topic. E.g. last time we talked about the cost to the far future of humans making bad decisions about what to do with AI, and the risk of galactic scale wild animal suffering.
There are so many other risk assessment techniques out there; for reference, ISO 31010 lists 30 of them (see here), and that list is far from exhaustive.
Almost nothing on the list you've linked is an alternative approach to the same problem safety cases try to solve. E.g. "brainstorming" is obviously not a competitor to safety cases. And safety cases are not even an item on that list!
I think EAs put way too much effort into thinking about safety cases compared to thinking about reducing risks on the margin in cases where risk is much higher (and willingness-to-pay for safety is much lower), because it seems unlikely that willingness-to-pay will be high enough that we'll have low risk at the relevant point. See e.g. here.
There's a social and professional community of Bay Area EAs who work on issues related to transformative AI. People in this cluster tend to have median timelines to transformative AI of 5 to 15 years, tend to think that AI takeover is 5-70% likely, and tend to think that we should be fairly cosmopolitan in our altruism.
People in this cluster mostly don't post on the EA Forum for a variety of reasons:
To be clear, I think it's a shame that the EA Forum isn't a better place for people like me to post and comment.
You can check for yourself that the Bay Area EAs don't really want to post here by looking up examples of prominent Bay Area EAs and noting that they commented here much more several years ago than they do today.
Anecdotally, the EA forum skews [...] more Bay Area.
For what it's worth, this is not my impression at all. Bay Area EAs (e.g. me) mostly consider the EA Forum to be very unrepresentative of their perspective, to the extent that it's very rarely worthwhile to post here (which is why they often post on LessWrong instead).
This is not an obscure topic. It's been written about endlessly! I do not want to encourage people to make top-level posts asking questions before Googling or talking to AIs, especially on this topic.
I like Claude's response a lot more than you do. I'm not sure why. I agree that it's a lot less informative than your response.
(The post including "This demographic has historically been disconnected from social impact" made me much less inclined to want this person to stick around.)
There's been some discussion here of the claim that AI capabilities improvements have been a consequence of unsustainable increases in inference compute. Redwood Research Astra fellow Anders Cairns Woodruff has written a great post analyzing the data and disputing this.