
I've written the following as a response to Steven Pinker's new book "Enlightenment Now", and specifically to the chapter he dedicates to existential threats.

 

Enlightenment Now

Steven Pinker’s works span the fields of cognitive psychology, linguistics and human nature. His book “The Better Angels of Our Nature: Why Violence Has Declined” provided a strong voice against the prevailing rhetoric that human violence has been increasing. His assertions have not been without criticism, but for many, his evidence-based arguments have been a welcome dissident voice filled with seemingly justified optimism against prevailing unsupported cynicism about human nature.

Pinker’s latest book “Enlightenment Now: The Case for Reason, Science, Humanism and Progress” is a continuation within the theme of optimism. Starting with a summary of the building block ideals of the Enlightenment, he weaves data and statistics to tell a story of positive human progress. Pinker lays a foundation for discussing contemporary topics of concern, ranging from inequality to terrorism, the environment to the quality of life, arguing that by all relevant measures most human affairs have improved drastically, and asserting that knowledge and technology will alleviate most of our persisting worries in time. He writes, “… there is no limit to the betterments we can attain if we continue to apply knowledge to enhance human flourishing.”

 

Existential Threats

However, carrying this optimism forward brazenly, he is quick to dismiss the notion that humans are at risk of future catastrophes. In a dedicated chapter entitled “Existential Threats” he caricatures those worried about global catastrophic risks as Luddites who are sounding an unfounded alarm about emerging and anthropogenic threats to humanity. He lists the Future of Humanity Institute as one of the “Techno-philanthropist […] bankrolled research institutes dedicated to discovering new existential threats and figuring out how to save the world from them.”

This focus on existential threats seems potentially dangerous from Pinker’s point of view, and he argues that concerns about catastrophic risks can themselves have unintended negative consequences. He cites the 1960s nuclear arms race and the 2003 invasion of Iraq as examples of times when fears about hypothetical disasters endangered humanity instead of protecting it. He later argues that human behaviour changes for the worse when contemplating possible demise, and that the “cumulative psychological effects of the drumbeat of doom” should be given consideration. He does not, however, suggest a mechanism by which to weigh these potentially negative consequences against possible catastrophic events, or a threshold beyond which we should concern ourselves with potential existential threats.

Pinker goes on to argue that the notion of a civilisation able to destroy itself is misconceived. He likens present concerns about existential risks from Artificial Intelligence (AI) to a “21st-century version of the Y2K bug.” He seems to misunderstand some basic tenets of AI safety research, writing that “Understanding does not obey Moore’s Law: knowledge is acquired by formulating explanations and testing them against reality, not by running an algorithm faster and faster.” He suggests that the main concern in AI safety is the development of Artificial General Intelligence (AGI) and infers that the relative temporal distance of AGI makes it of little concern. With regard to developing a ‘Doomsday Computer’ he suggests, “The way to deal with this threat is straightforward: don’t build one.” However, this ignores the concern within the AI safety community that such technological developments may not be intentional or foreseen, but inadvertently created.

 

Global Catastrophic Biological Risks

In his discussion of bioterrorism, Pinker becomes ensnared in the normalcy bias, purporting that the minimal role biological weapons have played in modern warfare since their international prohibition by the 1972 Biological Weapons Convention is sufficient reason to believe they will never pose an existential threat. This is a common problem when thinking about unprecedented risks, because their never-before-seen character makes them difficult for most humans to consider. Nonetheless, the counsel of science suggests our intuition can be an unreliable guide to reality, and the counsel of history warns that the past does not always augur the future well.

Engineered pathogens of pandemic potential are a novel threat with consequences that are difficult to ascertain, but may be trajectory-altering for humankind. While Pinker discusses possible augmentations to pathogens that increase their transmissibility, virulence and durability, he also states “… breeding such a fine-tuned germ would require Nazi-like experiments on living humans that even terrorists (to say nothing of teenagers) are unlikely to carry off.” This ignores the recent apposite case of horsepox virus being synthesized de novo and resurrected by a team of scientists in Canada for $100,000. Small-scale actors engineering “fine-tuned germs” seem more conceivable every day.

Pinker asserts that because risk assessments for highly improbable events vary by orders of magnitude, we should set aside worries about global catastrophic events altogether. He states bluntly, “You can’t worry about everything.” This is perhaps true in a narrow technical sense, but there is a large difference between “everything” and low-probability, high-impact risks. Indeed, it has been demonstrated that uncertainty in risk assessments does not negate cause for consideration, but instead adds to the reason for concern.1 Furthermore, there is a strong case for considering low-probability, high-impact risks that may threaten human extinction and all future generations.2 Given the astronomical number of lives that could exist in the future,3 reducing the risk of their non-existence even by a small factor is a worthwhile endeavour.
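To make the last point concrete, here is a minimal back-of-the-envelope sketch in Python. All of the numbers are hypothetical placeholders chosen purely for illustration, not estimates taken from Pinker or the cited references; the point is only that a tiny absolute reduction in extinction risk multiplies against an enormous number of potential future lives.

```python
# Back-of-the-envelope expected-value sketch.
# Every figure below is a hypothetical placeholder, for illustration only.

future_lives = 1e16      # assumed number of potential future lives (order of magnitude only)
risk_reduction = 1e-6    # assumed absolute reduction in extinction probability

# Expected number of future lives preserved by the risk-reduction effort
expected_lives_preserved = future_lives * risk_reduction

print(f"{expected_lives_preserved:.0e}")  # → 1e+10
```

Even under far more pessimistic placeholder values, the product remains enormous, which is the essence of the argument for taking low-probability, high-impact risks seriously.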

 

Conclusion

Global catastrophic risks are an emerging phenomenon in a fast-evolving risk landscape filled with uncertainty. As with all new technologies and unprecedented advances, the lack of historical precedent carries little weight against their consideration as plausible existential threats to humanity. Criticism is a valuable tool that invites debate and refinement; hasty dismissal, however, does not allow for balanced discourse on a topic of potentially existential importance.

Pinker concedes that he can offer no assurance that global catastrophe will never happen, instead insisting that “we can treat them not as apocalypses in waiting but as problems to be solved.” However, these solutions rely upon the hard work and due concern of individuals to mitigate risks that threaten the existence of civilisation and future generations. 

 

References

1 Ord T, Hillerbrand R, Sandberg A. Probing the improbable: methodological challenges for risks with low probabilities and high stakes. Journal of Risk Research. 2010 Mar 1;13(2):191-205.

2 Millett P, Snyder-Beattie A. Existential Risk and Cost-Effective Biosecurity. Health Security. 2017 Aug 1;15(4):373-83.

3 Bostrom N. Existential risk prevention as global priority. Global Policy. 2013 Feb 1;4(1):15-31. 

 

Thanks to Greg Lewis for his comments.

Comments

Oh, I didn't expect Pinker to hold that position; it's quite disappointing. But it's hopefully a topic we will see addressed in a future conversation with Sam Harris who should push back on the "AI cannot be a threat"-narrative. Have you tweeted/mailed/whatnot him this response?

I agree, I found it surprising as well that he has taken this view. It seems like he has read a portion of Bostrom's Global Catastrophic Risks and Superintelligence, has become familiar with the general arguments and prominent examples, but then has gone on to dismiss existential threats on reasons specifically addressed in both books.

He is a bit more concerned about nuclear threats than other existential threats, but I wonder if this is the availability heuristic at work given the historical precedent instead of a well-reasoned line of argument.

Great suggestion about Sam Harris - I think he and Steven Pinker had a live chat just the other day (March 14), so this opportunity may have been missed. I'm still waiting for the audio to be uploaded to Sam's podcast, but given Sam's positions I wonder if he questioned Pinker on this as well.

I think part of the problem is that he expressed a very dismissive stance towards AI/x-risk positions publicly, seemingly before he'd read anything about them. Now people have pushed back and pointed out his obvious errors and he's had to at least somewhat read about what the positions are, but he doesn't want to backtrack at all from his previous statement of extreme dismissiveness.

I agree, and that appears to be the likely sequence of events. I find it a bit disappointing that he went into this topic with his view already formed, and used the prominent contentious points and counterarguments to reinforce his preconceptions without becoming familiar with the detailed refutations already out there. It's great to have good debate and opposing views presented, but his broad-stroke dismissal makes it really difficult.

Sam Harris did ask Steven Pinker about AI safety. If anybody gets around to listening to it, that segment starts at 1:34:30 and ends at 2:04, so that's about 30 minutes on risks from AI. Harris wasn't at his best in that discussion, and Pinker came off as much more nuanced and evidence- and reason-based.

I agree with the characterization of the discussion, but regardless, you can find it here: https://www.youtube.com/watch?v=H_5N0N-61Tg&t=86m12s

Hey Cassidy,

Very well-written post! I didn't read his book, but just going off your summary of his view, where you characterize him as "asserting that knowledge and technology will alleviate most of our persisting worries in time" and quote him saying, “… there is no limit to the betterments we can attain if we continue to apply knowledge to enhance human flourishing,” I am curious how much weight Pinker, as well as you, give to

1) empathy (i.e. the ability to imagine oneself in the shoes of another - to imagine what it might be like for another) and/or

2) caring for strangers and/or

3) fair-mindedness (e.g., intellectual humility, critical thinking skills, listening skills, etc.)

in the solution to making the world a better place of a lasting nature.

My own opinion is that knowledge and technology alone cannot solve many of the problems that make our world a less than ideal place, such as wars or long-standing conflicts like the Israeli-Palestinian conflict, the drug cartel problem, or religiously motivated terrorism. Knowledge and technology might solve poverty and disease, but I don't see them solving many great sources of suffering for innocent people.

From this point of view, I find that one of the biggest gaps in our education systems these days is a lack of emphasis on teaching/instilling the things I've mentioned above. Having said that, I am tempted by the idea that one of the best ways to make the world a better place in the future is to donate to organizations that try to promote those things in school. I wonder what your opinion on that is.