I wrote this several months ago for LessWrong, but it seemed useful to crosspost it here.
It's a writeup of several informal conversations I had with Andrew Critch (of the Berkeley Existential Risk Initiative) about what considerations are important for taking AI risk seriously, based on his understanding of the AI landscape. (The landscape has shifted somewhat in the past year, but I think most of the concerns are still relevant.)