Note: Aaron Gertler, a Forum moderator, is posting this with Toby's account. (That's why the post is written in the third person.)
This is a Virtual EA Global AMA: several people will be posting AMAs on the Forum, then recording their answers in videos that will be broadcast at the Virtual EA Global event this weekend.
Please post your questions by 10:00 am PDT on March 18th (Wednesday) if you can. That's when Toby plans to record his video.
About Toby
Toby Ord is a moral philosopher focusing on the big picture questions facing humanity. What are the most important issues of our time? How can we best address them?
His earlier work explored the ethics of global health and global poverty, which led him to create Giving What We Can, whose members have pledged hundreds of millions of pounds to the most effective charities helping to improve the world. He also co-founded the wider effective altruism movement.
His current research is on avoiding the threat of human extinction, which he considers to be among the most pressing and neglected issues we face. He has advised the World Health Organization, the World Bank, the World Economic Forum, the US National Intelligence Council, the UK Prime Minister’s Office, Cabinet Office, and Government Office for Science. His work has been featured more than a hundred times in the national and international media.
Toby's new book, The Precipice, is now available for purchase in the UK and pre-order in other countries. You can learn more about the book here.
In an 80,000 Hours interview, Tyler Cowen states:
How likely do you think it is that humans (or post-humans) will get to a point where existential risk becomes extremely low? Have you looked into the question of whether interstellar colonization will be possible in the future, and if so, do you broadly agree with Nick Beckstead's conclusion in this piece? Do you think Cowen's argument should push EAs towards forms of existential risk reduction (referenced by you in your recent 80,000 Hours interview) that are "not just dealing with today’s threats, [but] actually fundamentally enhancing our ability to understand and manage this risk"? Does positively shaping the development of artificial intelligence fall into that category?
Edit (likely after Toby recorded his answer): This comment from Pablo Stafforini also mentions the idea of "reduc[ing] the risk of extinction for all future generations."
I've seen and liked that book. But I don't think there is really enough information about risks (e.g., Earth being hit by a comet or meteor that kills everything) to say much. Maybe if cosmology or other fields make major advances, one could say something, but that might take centuries.