
Word count: ~1900

Reading time: ~9 mins

Keywords: Human progress, existential risk, geopolitics, climate change, nuclear war, artificial intelligence, cognition, democracy, surveillance, fake news, technological disruption, politicisation of academia.


Harari and Pinker are well-known authors of macro-history, and I think their discussion has interesting implications for how we think about the long-run future. I found the conversation interesting and wanted to share the key points in an accessible format that is quicker to absorb than a 43-minute video.

I wanted to produce this article in as short a time as possible. I ordered a transcription from Rev.com's algorithmic speech recognition (which ended up being free), and spent 5 hours writing this summary and formatting the post.


Steven Pinker is a cognitive psychologist, linguist, and popular science author. He is best known for The Better Angels of Our Nature, which argues that violence in the world has declined and suggests explanations for why this has occurred. The book has been a bestseller, with endorsements from people including Bill Gates and Mark Zuckerberg.

Yuval Noah Harari is a lecturer at the Department of History at the Hebrew University of Jerusalem. His books Sapiens, Homo Deus, and 21 Lessons for the 21st Century have sold over 23 million copies worldwide. His writings examine global history, technology, free will, consciousness, suffering, intelligence and happiness.

Both Pinker and Harari have written macro-histories with significant thematic breadth that have influenced popular culture. Better Angels is Nick Beckstead’s top recommended audiobook. Sapiens is in Robert Wiblin’s Top 9 books.

Key points

Both writers share concerns about climate change, nuclear war, and technological disruption. Pinker tends to take optimistic stances, arguing that past improvements suggest humanity could continue to make progress in the future. He is sceptical of the potential speed of technological development, and sees human society as robust and progressive.

Harari raises long-term questions, and frets that we are approaching potential tipping points of technological disruption. He voices concerns about the loss of individual autonomy and the potential rise of digital dictatorships.

Pinker and Harari find agreement on several topics, chiefly that they share long-run uncertainty over the future. Harari’s website recommends Enlightenment Now as part of his list, A Haphazard Guided Tour of Humanity on the Brink:

“Pinker extols the amazing achievements of modernity, and demonstrates that humankind has never been so peaceful, healthy and prosperous. There is of course much to argue about, but that’s what makes this book so interesting”.

Part of this disagreement is resolved by Harari allowing for flexibility over timescales, arguing that 50-100 year timescales are short relative to human history. My view is that what distinguishes them is tone and emphasis. Should we be optimists, pessimists, or realists?

Potential implications for the effective altruism community

If we see the future of humanity as positive, then, as Nick Bostrom suggests, we should act to reduce existential risk. Nick Beckstead’s PhD (p. 85) makes a similar claim:

'The key claims are that humanity could survive for a very long time, with an expected duration on the order of billions of years or more; that the future is overwhelmingly important if my normative assumptions are true; that we could potentially shape the future for the better by speeding up progress, reducing existential risk, or producing other positive trajectory changes; and that what matters most for shaping the far future is creating positive trajectory changes. The best ways of shaping the far future could be very broad or very targeted, and knowing which would be very valuable.'

If we assume that a Pinker trajectory continues, and things keep getting better, then reducing big, sexy risks like AI, nuclear war, and biosecurity seems important. But if we take a Harari view that ‘things might get much, much worse’, then perhaps some EAs might also prioritise shaping the trajectories of topics like democracy and surveillance, while others focus on AI, bio, and nuclear.

See my further reading list below!

Selected quotes

Optimism vs pessimism

Pinker:
“We have the ability to think up solutions to problems and to share them via language”
“Our lifespans have more than doubled… rates of death in war have come down, [and declining] rates of death and homicide, violence against women, disease [all point] out that we have made progress in the past.”
“Whether [there’s] cause for optimism in the future is impossible to say. No one is a prophet [who can know] that we're doomed… Maybe things will get worse, but they won't necessarily get worse, given that we know that we've solved problems in the past.”

Harari:
“[I would] summarize the current human condition in three brief sentences: things for humans are better than ever; things are still quite bad; and things can get much, much worse.”

Outlook on the future

Pinker:
“Climate change is the most obvious [threat to humanity]. We're not on track to solving it, and there's every reason to believe that the consequences could be terrible.”
“And the threat of nuclear war… it's not negligibly unlikely. It's a high enough probability that we should worry about it. As with climate change, the direction that we've moved in over the last five years has not been positive.”

Harari:
“The risk of disruptive technologies, especially artificial intelligence, which of course also holds enormous promise for humankind, but also poses some very serious threats: whether it's complete social upheaval as a result of changing the job market very, very quickly, or the rise of new digital dictatorships and totalitarian regimes worse than anything we've seen before in history.”
“And maybe the biggest problem is that for all three threats, whether we talk about nuclear war or climate change or the rise of disruptive technologies, to do something effective against the threat, you need global cooperation.”
“And I sometimes have a suspicion that we are running on the last gas in our philosophical gas tank… climate change and nuclear war are in a way easy problems, because we know what to do about them: we need to prevent them. Maybe not everybody agrees that it's a real threat, and maybe not everybody agrees how to stop it, but in principle nobody says, ‘Hey, climate change, that's great, let's have more of that,’ or ‘Nuclear war? Yes, I'm in favour.’ Nobody says that. But with technological disruption, what to do with AI and bio-engineering, there is absolutely no agreed goal.”

Surveillance states, fake news

Pinker:
“I'm a bit more skeptical of how rapidly there'll be advances in artificial intelligence, genetic engineering of humans, and psychological manipulation”
“Humans have a lot of squeamishness and taboos that often will retard technological progress”
“The issue is, are the ordinary expectations of people in it who are not subject to occupation, who are living in a democracy going to be robust enough… to rise to the occasion of resisting that kind of constant surveillance?”
“Even the simple [AI] problems turn out to be harder than we think. When it comes to hacking human behaviour, it's all the more complex”
“Studies of the effects of fake news on social media show that the effects are very small and probably did not influence the election. Most of the fake news went to people who were already highly partisan and whose minds weren't going to change. It's not as easy to manipulate human behaviour as we might fear in our dystopian nightmares.”

Algorithmic discrimination

Pinker:
“Clinical decision making… five predictors [can] make a decision much better than a typical human judge, or diagnosing disease… we've known this for almost 70 years… subjective impressions are subject to bias and error, including racist bias… But we don't hand it over to algorithms.”

Harari:
“I do think that there is a chance we'll see some version of digital dictatorships and totalitarian regimes based on this massive surveillance and analysis of humans”
“You just have machines going over all the data. And again, this is not science fiction. This is happening in various parts of the world. It's happening now in China. It's happening now in my home country, in Israel… you just have these very sophisticated algorithms going over enormous amounts of data over millions of people. And that's a complete game changer.”
“But what will happen if and when efficiency and ethics go in different directions, so that totalitarianism becomes very efficient but is still extremely unethical? Would our ethical constraints and ideas hold in that situation?”
“So I'm not thinking about this science fiction scenario of an AI that micromanages every movement of your day. It starts with far simpler things of just shifting more and more authority to the AI to decide whom to accept to the university, whom to hire for the job, and whom to date.”

Politicisation of academia

Pinker:
“There certainly is cause for concern about intellectual openness in the institutions that are supposed to promote it, namely universities. There has been an ideological narrowing: universities are becoming mono-cultures of left-wing thought.”
“On the other hand, there's some optimism for those of us who are worried about authoritarian populism, in that it is kind of an old person's ideology, and support for populism falls off with generational cohort.”

Harari:
“We shouldn't generalize from the culture wars in the US to the world as a whole… far worse things are happening in places like Hungary, like Russia, where the suppression is definitely the other way around, and entire departments are being closed”
“Gender studies is being blamed: it's not science, it's politics, it's ideology. But this will happen to more and more departments. We shouldn't abandon the gender studies department in its fight, because it will come to more and more departments. Now climate science is also politicised, and soon computer science will be politicised.”

Responsibility of scientists

Harari:
“First of all, scientists need to educate… you do need a better understanding of what's happening and what's coming, because it's very relevant to political decisions.”
“Secondly, scientists have to take greater responsibility for what they are doing. For example, if you're an engineer and you're developing some new tool in any field, I would say take a few minutes or a few hours, think about the politician you most fear in the world, and now think: what will he or she do with my invention? The general tendency of engineers and entrepreneurs is to think about the best case scenarios.”

Further reading

Of the recommendations above, these seem most relevant:

I recently enjoyed this, and it picks up many similar themes:

A talk on EAs and surveillance by Ben Garfinkel

Risk typology and cognitive biases