Note: Aaron Gertler, a Forum moderator, is posting this with Toby's account. (That's why the post is written in the third person.)
This is a Virtual EA Global AMA: several people will be posting AMAs on the Forum, then recording their answers in videos that will be broadcast at the Virtual EA Global event this weekend.
Please post your questions by 10:00 am PDT on March 18th (Wednesday) if you can. That's when Toby plans to record his video.
About Toby
Toby Ord is a moral philosopher focusing on the big-picture questions facing humanity. What are the most important issues of our time? How can we best address them?
His earlier work explored the ethics of global health and global poverty, which led him to create Giving What We Can, whose members have pledged hundreds of millions of pounds to the most effective charities helping to improve the world. He also co-founded the wider effective altruism movement.
His current research is on avoiding the threat of human extinction, which he considers to be among the most pressing and neglected issues we face. He has advised the World Health Organization, the World Bank, the World Economic Forum, the US National Intelligence Council, the UK Prime Minister’s Office, Cabinet Office, and Government Office for Science. His work has been featured more than a hundred times in the national and international media.
Toby's new book, The Precipice, is now available for purchase in the UK and for pre-order in other countries. You can learn more about the book here.
You break down a "grand strategy for humanity" into reaching existential security, undertaking the long reflection, and then actually achieving our potential. I like this and think it would be a good strategy for most risks.
But do you worry that we might not get a chance for a long reflection before having to "lock in" certain things to reach existential security?
For example, perhaps to reach existential security given a vulnerable world, we put in place "greatly amplified capacities for preventive policing and global governance" (Bostrom), and this somehow prevents a long reflection, whether through permanent totalitarianism or simply by locking in extreme norms of caution that stifle free thought. Or perhaps, in order to avoid disastrously misaligned AI systems, we have to make choices that are hard to reverse later, so we need at least some idea up front of what we should ultimately choose to value.
(I've only started the book; this may well be addressed there already.)
I had a similar question myself. It seems that believing in a "long reflection" period requires denying that there will be a human-aligned AGI. My understanding would have been that once a human-aligned AGI is developed, there would not be much need for human reflection; whatever reflection did take place could be accelerated through interactions with the superintelligence, and so would not be "long." I would have thought, then, that most of the reflection on our values would need to be completed before the creation of an AGI. From what I've read of The Precipice so far, it does not explain how a long reflection is compatible with the creation of a human-aligned AGI.