Hello Matt, thanks for the kind words and glad you liked it. With regard to length, it's something I'm grappling with. The main points can definitely be made more concisely (I'm guessing most EAs could skip the entire first part), but I've also been told by readers less familiar with topics like AGI to beef it up and add more examples and descriptions to help them understand. So I may end up making the book longer (to make it accessible to a greater number of people) and then creating a shorter summary version for communities like EA.
Also, points very well taken re the book's limitations as a work of political philosophy. This is where my lack of academic credentials (no formal training in philosophy/ethics/politics, just a bachelor's in engineering) lets me down. I'm not sure what the best solution to this is, although I might try to team up with/get help from people with PhDs.
Cheers, I appreciate the open feedback.
The thinking behind the prioritisation is that the transition to the digital age represents a major shift akin to the hunter-gatherer/agricultural or agricultural/industrial ones. Hence I consider it 'more impactful' on our long-term future than even large recurrent events like (non-X-risk) terrorism.
As for making life a priority over even things like extreme suffering, I did not come to that conclusion lightly - and I would recommend reading the chapters on 'Our physical landscape' and 'First you must exist' to better understand the rationale.
All that said, thanks again for sharing your reaction! I will definitely keep that in mind as I continue to improve the book.
Yes we've heard this concern as well, and it's a fair one. The challenge is that public outreach on AI has already begun (witness Elon Musk's warnings) and holding back won't stop that.
Our approach is to engage with people across the political spectrum (framing the issue accordingly) and reinforce the message that when it comes to ASI risks we're quite literally all in this together.
As for the specific government actions we'd advocate for, this is something we are currently defining, but the three areas we've flagged as most likely to contribute to human success this century are technology governance, societal resilience, and global coordination.
Very interesting! Please share your findings when they're ready. Would love to know more.
Good question. Right now, our team has a wealth of organisational knowledge, but the political experience comes from me - I am a former climate change advocate and three-time political candidate. To get a sense of what that involved, this is a speech I gave at a climate rally in 2015: https://vimeo.com/124727472 (note: I am no longer a member of any party and the CHS is strictly non-partisan)
I also have a bachelor's in mechanical engineering, am fluent in French (important in national media & politics), and have a track record of leading teams of volunteers.
I learned the hard way how difficult it can be to get a complex global challenge like climate change into the political debate, and there are many lessons I intend to apply to campaigning on AI and technological unemployment.
All that said, expanding our team and circle of advisors is essential for us to succeed, and this is our #1 priority at this stage.
Indeed. Getting in early in the debate also means taking on extra responsibility when it comes to framing and being able to respond to critics. It is not something we take lightly.
Our current strategy is to start with technological unemployment, and to experiment, build capacity, and grow our network with that issue first before taking on ASI, similar to your suggestion.
This also fits with the election cycle here as there is a provincial election in Ontario in 2018 (which has more jurisdiction over labour policies) before the federal one in 2019 (where foreign policy/global governance is addressed).
The challenge remains that no one knows when the issue of ASI will become mainstream. There are rumours of an "Inconvenient Truth"-type documentary on ASI coming out soon, and with Elon Musk regularly making news and the plethora of books & TED talks being produced, no one has the time to wait for a perfect message, team or strategy. Some messiness will have to be tolerated (as is always the case in politics).
Great questions! My name is Wyatt Tessari and I am the founder.
1) We are doing that right now. Consultation is a top priority for us before we start our advocacy efforts. It's also part of the reason we're reaching out here.
2) Our main comparative advantage is that, as far as our research shows, no one else in the political/advocacy sphere is openly talking about the issue in Canada. If there are better organisations than us, where are they? We'd gladly join or collaborate with them.
3) There are plenty of risks - causing fear or misunderstanding, getting hijacked by personalities or adjacent causes, causing backlash or counterproductive behaviour - but the reality is they exist anyway. The general public will eventually clue in to the stakes around ASI and AI safety and the best we can do is get in early in the debate, frame it as constructively as possible, and provide people with tools (petitions, campaigns) that will be an effective outlet for their concerns.
4) This is a tough question. There would likely be a number of metrics: feedback from AI & governance experts, popular support (or lack thereof), and a healthy dose of ongoing critical thought. But if you (or anyone else reading this) have better ideas, we'd love to hear them.
In any case, thanks again for your questions and we'd love to hear more (that's how we're hoping to grow...).