I wrote this several months ago for LessWrong, but it seemed useful to crosspost it here.
It's a writeup of several informal conversations I had with Andrew Critch (of the Berkeley Existential Risk Initiative) about what considerations are important for taking AI Risk seriously, based on his understanding of the AI landscape. (The landscape has changed slightly in the past year, but I think most of the concerns are still relevant.)
FYI, there are a lot of good comments on the original LW post.