Background: I'm an undergraduate CS major. I recently mentioned to my mom that I've been getting involved in the "effective altruism" community and that I'm increasingly interested in getting a PhD. The other day, she asked me why exactly I wanted a PhD.
Me: Well, I want to help others as much as possible.
Mom: Okay, how are you going to help people with a PhD?
Me: Well, I don't know... maybe try to reduce existential risks...
Mom: Whoa, existential risks?
Me: Uh, I don't know, I mean, maybe it wouldn't be that bad, but it seems likely that AI will be very important in the future. And if AIs have good goals that match up with the goals of humans, they could solve lots of the world's problems, so I really want to increase the odds of that happening.
Mom: So what's going to happen if AIs don't have good goals?
Me: Well, I guess... they could kill off humanity?
Mom: Whoa!
Fortunately, we moved on in the conversation at this point, but I don't think I gave her the best first impression of these ideas. Does anyone know of good articles or videos for a popular audience that present the AI alignment problem in moderate depth, without too much sensationalism? I'm sure there are people who would do a much better job than I did at explaining these concepts to my mom. Similarly, content on EA concepts in general would be helpful.
It's most important to me to convince my mom that what I'm doing is worthwhile, but I also want to be able to talk about my career plans with non-EAs without them thinking I've joined a Doomsday cult. For people working in existential risk and other "weird" areas - how do you usually talk about your work when it comes up in conversation?
Brian Christian is incredibly good at tying the short-term concerns everyone already knows about to the long-term concerns. He's done tons of talks and podcasts - not sure which is best, but if three hours of heavy content isn't a problem, his 80,000 Hours podcast episode is good.
There's already a completely mainstream x-risk: nuclear weapons (and, popularly, climate change). It could help to compare AI risk to these already-accepted reference points. The second species argument can also be made pretty intuitive.
Bonus: here's what I told my mum.