TL;DR: If you want to publicly debate AI risk with us, send us an email at firstname.lastname@example.org with information about yourself, suggested topics, and a suggested platform.
Public debates strengthen society and public discourse. They spread truth by testing ideas and filtering out weaker arguments. Moreover, debating ideas publicly forces people to remain consistent over time, or to adjust their beliefs when faced with new evidence.
This is why we need more public debates on AI development, as AI will fundamentally transform our world, for better or worse.
Most of us at Conjecture expect advanced AI to be catastrophic by default, and believe that the only path to a good future runs through solving some very hard technical and social challenges.
However, many others inside and outside the AI field have very different expectations! Some think very powerful AI systems are coming soon, but that they will be easy to control. Others think very powerful AI systems are still very far away, and that there is no reason to worry yet.
Open debate about AI should start now, to discuss these and many more issues.
At Conjecture, we have a standing offer to publicly debate AI risk and progress in good faith.
If you want to publicly debate AI risk with us, send us an email at email@example.com with information about yourself, suggested topics, and a suggested platform. By default, we prefer the debate to be a live discussion streamed on YouTube or Twitch. Given our limited time, we won't be able to accept every request, but we will explain our reasoning when we decline. As a rule of thumb, we will prioritize people with greater reach and/or prominence.
Relevant topics include:
- What are the reasons for and against expecting that the default outcome of developing powerful AI systems is human extinction?
- Is open source development of powerful AI systems a good or bad idea?
- How far are we from existentially dangerous AI systems?
- Should we stop development of more powerful AI, or continue development towards powerful general AI and superintelligence?
- Is a global moratorium on development of superintelligence feasible?
- How easy or hard is it going to be to control powerful AI systems?
Here are two recent debates: one between Connor Leahy (Conjecture CEO) and Joseph Jacks (open source software investor) on whether AGI is an existential risk, and another between Robin Hanson (Professor of Economics at GMU) and Jaan Tallinn (Skype co-founder and AI investor) on whether we should pause AI research.
We ran a debate initiative in the past, but it focused on quite technical discussions with people already deep in the field of AI alignment. As AI risk enters the mainstream, the conversation should become much broader.
Two discussions we published from that initiative:
- Christiano (ARC) and GA (Conjecture) Discuss Alignment Cruxes
- Shah (DeepMind) and Leahy (Conjecture) Discuss Alignment Cruxes
If a linked page doesn't load in your browser, try Cmd + Shift + R on Mac or Ctrl + F5 on Windows to hard-reload the page.