I’ve written a draft report evaluating a version of the overall case for existential risk from misaligned AI, and taking an initial stab at quantifying the risk from this version of the threat. I’ve made the draft viewable as a public Google doc here (Edit: arXiv version here, video presentation here, human-narrated audio version here). Feedback would be welcome.
This work is part of Open Philanthropy’s “Worldview Investigations” project. However, the draft reflects my personal (rough, unstable) views, not the “institutional views” of Open Philanthropy.
If you’re still making this claim now, want to bet on it? (We’d first have to operationalize who counts as an “AI safety researcher”.)
I also think it wasn’t true in September 2017, but I’m less confident about that, and it’s not as easy to bet on.