Crossposted to LessWrong
While there have been many previous surveys asking about the chance of existential catastrophe from AI and/or about AI timelines, none that I'm aware of have asked how the level of AI risk varies with timelines. Yet this seems like an extremely important parameter for understanding the nature of AI risk and for prioritizing between interventions.
Contribute your forecasts below. I'll write up my forecast rationales in an answer and encourage others to do the same.
I didn't mean to imply that. I think we very likely need to solve alignment at some point to avoid existential catastrophe (since we need aligned, powerful AIs to help us achieve our potential), but I'm not confident that the first misaligned AGI would be enough to cause a catastrophe of that magnitude (especially under relatively weak definitions of "AGI").