On August 1, I'll be moderating a panel at EA Global on the relationship between effective altruism, astronomical stakes, and artificial intelligence. The panelists will be Stuart Russell (UC Berkeley), Nick Bostrom (Future of Humanity Institute), Nate Soares (Machine Intelligence Research Institute), and Elon Musk (SpaceX, Tesla). I'm very excited to have this conversation with some of the leading figures in AI safety!
As part of the panel, I'd like to ask the panelists some questions from the broader EA community. To that end, please submit below any questions you'd like to have considered for the event. I'll select a set of these questions and work them into our discussion. I can't guarantee that every question will fit into the time allotted, but I'm confident you can come up with some great questions to spark a high-quality discussion among our panelists.
Thanks in advance for your questions, and I look forward to seeing some of you at the event!
Would it be valuable to develop a university-level course on AI safety engineering that could be adopted by the hundreds of universities worldwide that use Russell's textbook, in order to attract more talented minds to the field? What steps would it take to make this happen?