Impact Academy is a non-profit organization that enables people to become world-class leaders, thinkers, and doers who are using their careers and character to solve our most pressing problems and create the best possible future.
I also work as an impact-driven and truth-seeking coach for people who are trying to do the most good.
I'm also a medical doctor, author, and former visiting researcher (biosecurity) at Stanford.
I appreciate feedback - especially about how I'm falling short. Please do me a favor and leave anonymous feedback at https://www.admonymous.co/sebastian_schmidt
Thanks so much for this blog post. As you know, I've been attempting to understand Cooperative AI a bit better over the past weeks.
More concretely, I found it helpful as a conceptual exploration of what Cooperative AI is, including how it relates to other adjacent (sub)fields - the diagram helped here too! I also appreciated you flagging the potential dual-use aspects of cooperative intelligence - especially given that you're working in this field and therefore might be prone to wishful thinking.
That said, I would've appreciated it if you had covered a bit more about:
- Why Cooperative AI is important. I personally think Cooperative AI (at least as I currently understand it) is undervalued on the margin and that we need more focus on potential multi-agent scenarios and complex human interactions.
- What people in the field of Cooperative AI are actually doing - including how they navigate the dual-use considerations.
Thanks for sharing. This is a very insightful piece. I'm surprised that folks were more concerned about larger-scale abstract risks than about more well-defined and smaller-scale risks (like bias). I'm also surprised that they are this pro-regulation (including a six-month pause). Given this, I feel a bit confused that they mostly support the development of AI, and I wonder what has most shaped their views.
Overall, I mildly worry that the survey led people to express more concern than they actually feel, because the results seem surprisingly close to my perception of the views of many existential risk "experts". What do you think?
I would love to see this for other countries too. How feasible do you think that would be?
Thanks for this, Ulrik. It's a great initiative. +1 to Henri's comment (I also signed up a while back).