On August 1, I'll be moderating a panel at EA Global on the relationship between effective altruism, astronomical stakes, and artificial intelligence. The panelists will be Stuart Russell (UC Berkeley), Nick Bostrom (Future of Humanity Institute), Nate Soares (Machine Intelligence Research Institute), and Elon Musk (SpaceX, Tesla). I'm very excited to have this conversation with some of the leading figures in AI safety!
As part of the panel, I'd love to ask our panelists some questions from the broader EA community. To that end, please submit below any questions you'd like considered for the event. I'll select a set of these questions and work them into our discussion. I can't guarantee that every question will fit into the time allotted, but I'm confident you can come up with some great questions that spark high-quality discussion among our panelists.
Thanks in advance for your questions, and I'm looking forward to seeing some of you at the event!
It seems to me that even the most optimistic visions of friendly superintelligent AI are at odds with the values currently held across society. Why isn't there more discussion of how AI development itself could be regulated, delayed, or stopped? What's going on in this space? What might work?