Epistemic status: I recently read Toby Ord's "The Precipice" as part of a local EA book club. I'm very convinced by EA arguments on global poverty and animal welfare, but less so on the long-term future. I decided to write up my thoughts as a book-review-style post. I'd like to improve my blog writing, possibly as a way to do EA outreach, so I would appreciate any feedback on my writing, as well as on the points I've made.


Toby Ord rates the probability of human extinction in the next century as 1 in 6. That figure assumes radical collective action by humanity to avoid extinction; without such action, Ord places the risk at 1 in 3. These numbers are not plucked out of thin air: half of The Precipice consists of appendices detailing analysis by some of the leading experts in the field of existential risk. Such numbers are shocking, especially when stated in such a measured manner by a member of the mainstream academic establishment (Ord is a senior research fellow at Oxford University's Future of Humanity Institute). Ord’s manner and background are a big change from the more radical groups who talk of human extinction – be that Extinction Rebellion activists, 1960s nuclear disarmament protesters or even cult leaders proselytising an upcoming apocalypse.

While Ord’s numbers may be radical, his solutions are not. He warns specifically against things that those who worry about existential risk should not do: "don't act unilaterally", "don't act without integrity", "don’t go on about it". Probably his most controversial policy recommendation is his endorsement of a form of world government to coordinate the actions needed for humanity to avoid extinction. Such a recommendation will not be popular in the current political climate of nationalist protectionism and increasing scepticism of international institutions.

Ord’s chief reason for worrying about human extinction is that it would wipe out our ‘future potential’. This argument will resonate most deeply with people who see human existence as a positive, flourishing thing that must be preserved at all costs. But the end of humanity would also mean an end to a lot of suffering: both the suffering of humans and the suffering humans inflict on animals. Ord may have to work harder to get readers whose primary concern is reducing suffering on board with his vision of a positive ‘future potential’ that must be protected.

There is still much in Ord’s predictions that should give such people cause for concern, because any of the risks Ord describes – from a future planet made unlivable by climate change to a killer engineered pandemic – would bring about suffering on an unimaginable scale. But for Ord this is not the main point. He goes to great lengths to emphasise that his focus is not on merely catastrophic risks but on existential risks: those that would completely wipe out the future of humanity. For Ord, an event that wiped out 100% of the human population would be much worse than one that wiped out only 99%, because the former would destroy our "future potential".

A particular problem Ord and the ‘future protectionists’ face is that their vision of a positive future for humanity is poorly defined. This ambiguity about what a flourishing future humanity could look like is probably an intentional feature of Ord’s writing, to avoid creating a polarising or one-sided vision of humanity’s potential. Indeed, Ord writes of not wanting to lock our current values into the future, as we may have a better moral understanding by then. While this open-minded approach to the future is commendable, it does make it hard for the reader to imagine what we’re fighting to save. How do we fight to preserve humanity’s potential if we can’t imagine what that potential may look like? Without a shared vision, it may be hard for Ord to redirect a reader’s attention from the more emotionally salient problems of the present day, and to create the sense of urgency that Ord argues we need if we are to avoid our own downfall.

Another problem with the idea of ‘protecting humanity’s potential’ is touched upon briefly, in a section titled ‘Population Ethics’. The problem can be illustrated with the classic sci-fi time-travelling paradox – the main character goes back in time, inadvertently interferes in his parents’ budding romance, and prevents himself from ever coming into existence. A similar line of thought can be carried forward from the present: our actions now may determine who does and doesn’t come into existence. How do we think about ethics regarding people who don’t exist yet?

One solution to this problem is to say that an action can only have moral value if it affects someone – and since future generations don’t exist (yet), we shouldn’t worry about the effects of our actions on them. This is known as the ‘person-affecting’ view in population ethics. Even if the reader has not explicitly formed her views in this way, she might find it hard to empathise with humans who haven’t been born yet, let alone those who may or may not be born for thousands of years. Favouring the present in this way makes perfect sense when thinking about money: having £100 today is more useful than having £100 in a year’s time, because you can invest the money you have today and earn interest on it. But Ord argues that this kind of ‘temporal discounting’ should not be applied to morality, and that all lives are equally valuable no matter where in time they stand. Getting a reader to overcome this indifference towards people who do not yet exist may be one of the biggest challenges Ord’s philosophy faces. It’s another reason why a reader may prioritise improving the well-being of current beings over fighting against the possible non-existence of future generations.
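To make the financial analogy concrete (the 5% interest rate here is my own illustrative assumption, not a figure from the book): discounting values a future sum at present value = future value / (1 + r)^t. At r = 5%, £100 due in one year is worth £100 / 1.05 ≈ £95.24 today, and £100 due in fifty years only £100 / 1.05^50 ≈ £8.72. Applied to lives rather than money, even a modest discount rate like this would make a life a few centuries hence worth almost nothing today – which is precisely the move Ord rejects.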

One of the biggest surprises of the book is the discrepancy between naturally occurring risks and human-caused risks. Ord rates natural risks as having a 1 in 10,000 chance of causing human extinction in the next century, while rating human-created risks at between 1 in 6 and 1 in 3 – making the dangers we build ourselves over a thousand times more likely to end us than the ones nature throws at us. Given this, you might expect Ord to advise applying the brakes to human technological progress. But Ord is reluctant to advocate slowing technological progress, which seems remarkably risky when contrasted with Ord’s own view on the dangers of advanced AI: he rates the risk of humanity being wiped out by a future superintelligent AI at 1 in 10. Ord rightly points out that halting or slowing technological progress would be difficult, as it would require almost everyone in the world to refrain from developing technology. But he has perhaps underestimated the strength of feeling in the recent popular movement against tech giants and social media companies. This is not a luddite movement, and you don’t have to be a technophobe to think that advocating the slowing of technological development could be wise.

But Ord writes that it may not be advisable, for it would involve curtailing our future potential. Here again, the slippery concept of ‘humanity’s potential’ appears. Humanity’s potential, it seems, relies on technological innovation. And this is perhaps the central moral concern of The Precipice: the idea that humanity should reach its ‘full potential’. Ord gives a clear outline of the risks that we as a species face, and begins to sketch some of the steps we can take to preserve our future. But instilling the urgency to do so may require another type of writing – that of science fiction, of more creative visionaries who are willing to paint in vivid detail a picture of what a flourishing human future could be.

Comments
But instilling the urgency to do so may require another type of writing – that of science fiction, of more creative visionaries who are willing to paint in vivid detail a picture of what a flourishing human future could be.

If it's emotive force you're after, you may be interested in this - Toby Ord just released a collection of quotations on Existential risk and the future of humanity, everyone from Kepler to Winston Churchill (in fact, a surprisingly large number are from Churchill) to Seneca to Mill to the Aztecs - it's one of the most inspirational things I have ever read, and makes it clear that there have always been people who cared about humanity as a whole. My all-time favourite is probably this by the philosopher Derek Parfit:

Life can be wonderful as well as terrible, and we shall increasingly have the power to make life good. Since human history may be only just beginning, we can expect that future humans, or supra-humans, may achieve some great goods that we cannot now even imagine. In Nietzsche’s words, there has never been such a new dawn and clear horizon, and such an open sea. 
If we are the only rational beings in the Universe, as some recent evidence suggests, it matters even more whether we shall have descendants or successors during the billions of years in which that would be possible. Some of our successors might live lives and create worlds that, though failing to justify past suffering, would have given us all, including those who suffered most, reasons to be glad that the Universe exists.