While I haven't read the book, Slate Star Codex has a great review of Human Compatible. Scott says it discusses AI safety, including the long-term future, in a professional-sounding and not-weird way. So I suggest reading that book, or at least that review.
You could also list several smaller-scale AI-misalignment problems, such as those surrounding Zuckerberg and Facebook. You could say something like "You know how Facebook's AI is programmed to keep you on the platform as long as possible, so it often shows you controversial content to rile you up and get everyone yelling at everyone else, so nobody ever leaves? Yeah, I make sure that won't happen with smarter, more influential AIs." If all you're going for is an elevator speech, or explaining to family what it is you do, I'd stop here.

Otherwise, follow the first part with something like "By my estimation, this seems fairly important: incentives push companies and countries to use the best AI available, and better AI means more influential AI, so a really good but slightly sociopathic AI is likely to get used anyway. And if, in a few decades, we reach the point where we have a smarter-than-human but still sociopathic AI, it's possible we've just made an immortal Hitler-Einstein combination. Which, needless to say, would be very bad, possibly even extinction-level bad. So if the job is very hard, and the outcome if the job doesn't get done is very bad, then the job is very very important (that's very²)."
I've never tried using these statements, but they seem like they'd work.
Was going to recommend this as well (and I have read the book).