It's been a year, but I finally wrote up my critique of "longtermism" (of the Bostrom / Toby Ord variety) in some detail. I explain why this ideology could be extremely dangerous -- a claim that, it seems, some others in the community have picked up on recently (which is very encouraging). The book is on Medium here and PDF/EPUB versions can be downloaded here.


See my response to AlexHT for some of my overall thoughts. A couple other things that might be worth quickly sketching: 

The real meat of the book, from my perspective, was a pair of contentions: (1) that longtermist ideas, and particularly the idea that the future is of overwhelming importance, may in the future be used to justify atrocities, especially if these ideas become more widely accepted; and (2) that those concerned about existential risk should be advocating that we decrease current levels of technology, perhaps to pre-industrial levels. I would have preferred if the book had focused more on arguing for these contentions.

Questions for Phil (or others who broadly agree):  

  • On (1) from above, what credence do you place on 1 million or more people being killed sometime in the next century in a genocidal act whose public or private justifications were substantially based on EA-originating longtermist ideas?
  • To the extent you think such an event is unlikely to occur, is that mostly because you think that EA-originating longtermists won't advocate for it, or mostly because you think that they'll fail to act on it or persuade others?
  • On (2) from above, am I interpreting Phil correctly as arguing in Chapter 8 for a return to pre-industrial levels of technology?  (Confidence that I'm interpreting Phil correctly here: Low.)
  • If Phil does want us to return to a pre-industrial state, what is his credence that humanity will eventually make this choice? What about in the next century?

P.S. - If you're feeling dissuaded from checking out Phil's arguments because they are labeled as a 'book', and books are long, don't be - it's a bit long for an article, but certainly no longer than many SSC posts, for example. That said, I'm also not endorsing the book's quality. 

Worth highlighting the passage that the "mere ripples" in the title refers to, for those skimming the comments:

Referring to events like “Chernobyl, Bhopal, volcano eruptions, earthquakes, draughts [sic], World War I, World War II, epidemics of influenza, smallpox, black plague, and AIDS,” Bostrom writes that
 
  these types of disasters have occurred many times and our cultural attitudes towards risk have been shaped by trial-and-error in managing such hazards. But tragic as such events are to the people immediately affected, in the big picture of things—from the perspective of humankind as a whole—even the worst of these catastrophes are mere ripples on the surface of the great sea of life. They haven’t significantly affected the total amount of human suffering or happiness or determined the long-term fate of our species. 

Mere ripples! That’s what World War II—including the forced sterilizations mentioned above, the Holocaust that killed 6 million Jews, and the death of some 40 million civilians—is on the Bostromian view. This may sound extremely callous, but there are far more egregious claims of the sort. For example, Bostrom argues that the tiniest reductions in existential risk are morally equivalent to the lives of billions and billions of actual human beings. To illustrate the idea, consider the following forced-choice scenario:

Bostrom’s altruist: Imagine that you’re sitting in front of two red buttons. If you push the first button, 1 billion living, breathing, actual people will not be electrocuted to death. If you push the second button, you will reduce the probability of an existential catastrophe by a teeny-tiny, barely noticeable, almost negligible amount. Which button should you push?

For Bostrom, the answer is absolutely obvious: you should push the second button! The issue isn’t even close to debatable. As Bostrom writes in 2013, even if there is “a mere 1 per cent chance” that 10^54 conscious beings living in computer simulations come to exist in the future, then “the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives.” So, take a billion human lives, multiply that by 100 billion, and what you get is the moral equivalent of reducing existential risk by a “one billionth of one billionth of one percentage point,” on the assumption that there is a 1 per cent chance that vast simulations in which 10^54 happy people reside will come to exist. This means that, on Bostrom’s view, you would be a grotesque moral monster not to push the second button. Sacrifice those people! Think of all the value that would be lost if you don’t!
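To spell out the arithmetic behind this passage, here is a rough back-of-the-envelope reconstruction (the figures are Bostrom's; the presentation is mine, so treat it as a sketch rather than a quote):

$$0.01 \times 10^{54} = 10^{52} \ \text{expected future lives}$$

$$\underbrace{10^{-9} \times 10^{-9} \times 10^{-2}}_{\text{one billionth of one billionth of one percentage point}} \times 10^{52} = 10^{32} \ \text{lives saved in expectation}$$

Since $10^{32}$ comfortably exceeds $10^{11} \times 10^{9} = 10^{20}$, the risk reduction is indeed worth (at least) “a hundred billion times as much as a billion human lives” on these assumptions.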

I've skimmed the book and it looks very interesting and relevant. It surprises me that people have downvoted this post - could someone who did so explain their reasoning?

I don’t have time to write a detailed and well-argued response, sorry. Here are some very rough quick thoughts on why I downvoted. Happy to expand on any points and have a discussion.

In general, I think criticisms of longtermism from people who 'get' longtermism are incredibly valuable to longtermists.

One reason is that if the criticisms carry entirely, you'll save them from basically wasting their careers. Another reason is that you can point out weaknesses in longtermism, or in their application of longtermism, that they wouldn't have spotted themselves. And a third reason is that in the worlds where longtermism is true, this helps longtermists work out better ways to frame the ideas so as not to put off potential sympathisers.

Clarity

In general, I found it hard to work out the actual arguments of the book and how they interfaced with the case for longtermism. 

Sometimes I found that there were some claims being implied but they were not explicit. So please point out any incorrect inferences I’ve made below!

I was unsure what was being critiqued: longtermism, Bostrom’s views, utilitarianism, consequentialism, or something else. 

The thesis of the book (for people reading this comment, and to check my understanding)

“Longtermism is a radical ideology that could have disastrous consequences if the wrong people—powerful politicians or even lone actors—were to take its central claims seriously.”

“As outlined in the scholarly literature, it has all the ideological ingredients needed to justify a genocidal catastrophe.”

Utilitarianism (Edit: I think Tyle has added a better reading of this section below)

  • This section seems to caution against naive utilitarianism, which seems to form a large fraction of the criticism of longtermism. I felt a bit like this section was throwing intuitions at me, and I just disagreed with the intuitions being thrown at me. Also, doing longtermism better obviously means better accounting for all the effects of our actions, which naturally pushes away from naive utilitarianism
  • In particular, there seems to be a sense of derision at any philosophy where ‘the ends justify the means’. I didn't really feel like this was argued for (please correct me if I'm wrong!)
  • I don’t know whether that meant the book was arguing against consequentialism in general, or arguing that longtermism overweights consequences in the longterm future compared to other consequences, but is right to focus on consequences generally
  • I would have preferred if these parts of the book were clear about exactly what the argument was
  • I would have preferred if these parts of the book did less intuition-fighting (there’s a word for this but I can’t remember it)

Millennialism

  • “A movement is millennialist if it holds that our current world is replete with suffering and death but will soon “be transformed into a perfect world of justice, peace, abundance, and mutual love.” (pg.24 of the book)
  • Longtermism does not say our current world is replete with suffering and death
  • Longtermism does not say the world will be transformed soon
  • Longtermism does not say that if the world is transformed it will be into a world of justice, peace, abundance, and mutual love.
  • Therefore, longtermism does not meet the stated definition of a millennialist movement
  • Granted, there are probably longtermists that do hold these views, but these views are not longtermism. I don’t know whether Bostrom (whose views seem to be the focus of the book) holds these views. Even if he does, these views are not longtermism

Mere Ripples

  • Some things are bigger than other things
  • That doesn’t mean that the smaller things aren’t bad or good or important- they are just smaller than the bigger things
  • If you can make a good big thing happen or make a good small thing happen, you can make more good by making the big thing happen
  • That doesn't mean the small thing is not important, but it is smaller than the big thing
  • I feel confused

White Supremacy

  • The book quotes this section from Beckstead’s Thesis:

Saving lives in poor countries may have significantly smaller ripple effects than saving and improving lives in rich countries. Why? Richer countries have substantially more innovation, and their workers are much more economically productive. By ordinary standards—at least by ordinary enlightened humanitarian standards—saving and improving lives in rich countries is about equally as important as saving and improving lives in poor countries, provided lives are improved by roughly comparable amounts. But it now seems more plausible to me that saving a life in a rich country is substantially more important than saving a life in a poor country, other things being equal.

The book goes on to say:

In a phrase, they support white supremacist ideology. To be clear, I am using this term in a technical scholarly sense. It denotes actions or policies that reinforce “racial subordination and maintaining a normalized White privilege.” As the legal scholar Frances Lee Ansley wrote in 1997, the concept encompasses “a political, economic and cultural system in which whites overwhelmingly control power and material resources,” in which “conscious and unconscious ideas of white superiority and entitlement are widespread, and relations of white dominance and non-white subordination are daily reenacted across a broad array of institutions and social settings.”

On this definition, the claims of Mogensen and Beckstead are clearly white supremacist: African nations, for example, are poorer than Sweden, so according to the reasoning above we should transfer resources from the former to the latter. You can fill in the blanks. Furthermore, since these claims derive from the central tenets of Bostromian longtermism itself, the very same accusation applies to longtermism as well. Once again, our top four global priorities, according to Bostrom, must be to reduce existential risk, with the fifth being to minimize “astronomical waste” by colonizing space as soon as possible. Since poor people are the least well-positioned to achieve these aims, it makes perfect sense that longtermists should ignore them. Hence, the more longtermists there are, the worse we might expect the plight of the poor to become.

  • I'm pretty sure the book isn't using 'white supremacist' in the normal sense of the phrase. For that reason, I'm confused about this, and would appreciate answers to these questions
    • The Beckstead quote ends ‘other things being equal’. Doesn't that imply that the claim is not 'overall, it's better to save lives in rich countries than poor countries' but 'here is an argument that pushes in favour of saving lives in rich countries over poor countries'?
    • Imagine longtermism did imply helping rich people instead of helping poor people, and that that made it white supremacist. Does that mean that anything that helps rich people is white supremacist (because the resources could have been used to help poor people)?
      • What if the poor people are white and the rich people are not white?
      • Why do rich-nation government health services not meet this definition of white supremacy?
  • I'd also have preferred if it was clear how this version of white supremacy interfaces with the normal usage of the phrase

Genocide (Edit: I think Tyle and Lowry have added good explanations of this below)

  • The book argues that a longtermist would support a huge nuclear attack to destroy everyone in Germany if there was a less than one-in-a-million chance of someone in Germany building a nuclear weapon. (Ch.5)
  • The book says that maybe a longtermist could avoid saying that they would do this if they thought that the nuclear attack would decrease existential risk
  • The book says that this does not avoid the issue though and implies that because the longtermist would even consider this action, longtermism is dangerous (please correct me if I’m misreading this)
  • It seems to me that this argument is basically saying that because a consequentialist weighs up the consequences of each potential action against other potential actions, they at least consider many actions, some of which would be terrible (or at least would be terrible from a common-sense perspective). Therefore, consequentialism is dangerous. I think I must be misunderstanding this argument as it seems obviously wrong as stated here. I would have preferred if the argument here was clearer

I had left this for a day and had just come back to write a response to this post but fortunately you've made a number of the points I was planning on making.

I think it's really good to see criticism of core EA principles on here, but I did feel that a number of the criticisms might have benefited from being fleshed out more fully.

OP made it clear that he doesn't agree with a number of Nick Bostrom's opinions, but I wasn't entirely clear (I only read it the once and quite quickly, so it may be the case that I missed this) where precisely the main disagreement lay. I wasn't sure whether OP was disagreeing with:

  1. That there was a theoretical case to be made for orienting our actions with a view to the long term future/placing a high value on future human potential
  2. High profile longtermists' subsequent inferences based on longtermist values and/or the likelihoods they assign to achieving 'high human flourishing'/transhumanist outcomes (ie. we should place a much lower probability on realising these high-utility futures and therefore many longtermist arguments are weakened)
  3. The idea that longtermism can work as a practical guide in reality (ie. that longtermism may correctly identify the 'best' actions to take but due to misinterpretation and 'slippery slope' factors it acts as an information hazard and should therefore be avoided)

Re your response to the 'Genocide' section, Alex: I think Phil's argument was that longtermism/transhumanist potential leads to a Pascal's mugging in this situation, where very low probabilities of existential catastrophe can be weighted as so undesirable that they justify extraordinary behaviour (in this case, killing large numbers of individuals in order to reduce existential risk by a very small amount). This doesn't seem to me to be an entirely ridiculous point, but I believe it paints a slightly absurd picture where longtermists do not see the value in international laws/human rights and would be happy to support their violation in aid of very small reductions in existential risk.
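To make the structure of that alleged mugging concrete, here is a toy expected-value comparison; the numbers are illustrative placeholders of mine (Bostrom's $10^{52}$ expected-lives figure, a made-up $10^{-6}$ risk reduction, and a population of order $10^{8}$), not figures from the book:

$$\text{EV}(\text{attack}) \approx \underbrace{10^{-6}}_{\text{x-risk reduction}} \times \underbrace{10^{52}}_{\text{expected future lives}} - \underbrace{8 \times 10^{7}}_{\text{lives lost now}} \approx 10^{46} \gg 0$$

On these made-up inputs a naive expected-value maximiser would endorse the attack; the live question is whether any actual longtermist reasons this way, which is where the point below about generalised rules comes in.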

In the same way that consequentialists see the value in having a legal system based on generalised common laws, I think very few longtermists would argue for a wholesale abandonment of human rights.

As a separate point: I do think the use of 'white supremacist' is misleading, and is probably more likely to alienate than clarify. I think it could risk becoming a focus and detracting from some of the more substantial points being raised in the book.

I thought the book was an interesting critique though and forced me to clarify my thinking on a number of points. Would be interested to hear further.
 

I upvoted Phil's post, despite agreeing with almost all of AlexHT's response to EdoArad above. This is because I want to encourage good faith critiques, even those which I judge to contain serious flaws. And while there were elements of Phil's book that read to me more like attempts at mood affiliation than serious engagement with his interlocutor's views (e.g. 'look at these weird things that Nick Bostrom said once!'), on the whole I felt that there was enough effort at engagement that I was glad Phil took the time to write up his concerns. 

Two aspects of the book that I interpreted somewhat differently than Alex: 

  • The genocide argument that Alex expressed confusion about: I thought Phil's concern was not that longtermism would merely consider genocide while evaluating options, but that it seems plausible to Phil that longtermism (or a future iteration of it encountering different facts) could endorse genocide - i.e. that Phil is worried about genocide as an output of longtermism's decision process, not as an input. My model of Phil is that if he were confident that longtermism would always reject genocide, then he wouldn't be concerned merely that such possibilities are evaluated. Confidence: Low/moderate. 
  • The section describing utilitarianism: I read this section as merely aiming to describe an aspect of longtermism and to highlight features which might be wrong or counter-intuitive, not to actually make any arguments against the views he describes. This could explain Alex's confusion about what was being argued for (nothing) and feeling that intuitions were just being thrown at him (yes). I think Phil's purpose here is to lay the groundwork for his later argument that these ideas could be dangerous.  The only argument I noticed against utilitarianism comes later - namely, that together with empirical beliefs about the possibility of a large future it leads to conclusions that Phil rejects. Confidence: Low. 

I agree with Alex that the book was not clear on these points (among others), and I attribute our different readings to that lack of clarity. I'd certainly be happy to hear Phil's take. 

I have a couple of other thoughts that I will add in a separate comment. 

Granted, there are probably longtermists that do hold these views, but these views are not longtermism. I don’t know whether Bostrom (whose views seem to be the focus of the book) holds these views. Even if he does, these views are not longtermism

I haven't read the top-level post (thanks for summarising!); but in general, I think this is a weak counterargument. If most people in a movement (or academic field, or political party, etc.) hold a rare belief X, it's perfectly fair to criticise the movement for believing X. If the movement claims that X isn't a necessary part of their ideology, it's polite for a critic to note that X isn't necessarily endorsed by the stated ideology, but it's important that their critique of the movement is still taken seriously. Otherwise, any movement can choose a definition that avoids mentioning the most objectionable parts of their ideology without changing their beliefs or actions. (Similar to the motte-and-bailey fallacy.) In this case, the author seems to be directly worried about longtermists' beliefs and actions; he isn't just disputing the philosophy.

Thanks for your comment, it makes a good point. My comment was hastily written, and I think the argument of mine that you're referring to is weak, but not as weak as you suggest.

At some points the author is specifically critiquing longtermism the philosophy (not what actual longtermists think and do) eg. when talking about genocide. It seems fine to switch between critiquing the movement and critiquing the philosophy, but I think it'd be better if the switch was made clear. 

There are many longtermists that don't hold these views (eg. Will MacAskill is literally about to publish the book on longtermism and doesn't think we're at an especially influential time in history, and patient philanthropy gets taken seriously by lots of longtermists). 

I'm also not sure that lots of longtermists (even of the Bostrom/hinge-of-history type) would agree that the quoted claim accurately represents their views:

 our current world is replete with suffering and death but will soon “be transformed into a perfect world of justice, peace, abundance, and mutual love.”

But I do agree that some longtermists do think 

  • there are likely to be very transformative events soon eg. within 50 years
  • in the long run, if they go well, these events will massively improve the human condition 

And there's some criticisms you can make of that kind of ideology that are similar to the criticisms the author makes. 

It seems fine to switch between critiquing the movement and critiquing the philosophy, but I think it'd be better if the switch was made clear.

Agreed.

There are many longtermists that don't hold these views (eg. Will MacAskill is literally about to publish the book on longtermism and doesn't think we're at an especially influential time in history, and patient philanthropy gets taken seriously by lots of longtermists).

Yeah this seems right, maybe with the caveat that Will has (as far as I know) mostly expressed skepticism about this being the most influential century, and I'd guess he does think this century is unusually influential, or at least unusually likely to be unusually influential.

And yes, I also agree that the quoted views are very extreme, and that longtermists at most hold weaker versions of them.

Thanks, this comment saved me time/emotional energy from reading the post myself.

[Responding to Alex HT above:]

I'll try to find the time to respond to some of these comments. I would strongly disagree with most of them. For example, one that just happened to catch my eye was: "Longtermism does not say our current world is replete with suffering and death."

So, the target of the critique is Bostromism, i.e., the systematic web of normative claims found in Bostrom's work. (Just to clear one thing up, "longtermism" as espoused by "leading" longtermists today has been hugely influenced by Bostromism -- this is a fact, I believe, about intellectual genealogy, which I'll try to touch upon later.)

There are two main ingredients of Bostromism, I argue: total utilitarianism and transhumanism. The latter absolutely does see our world the way many religious traditions have: wretched, full of suffering, something ultimately to be transcended (if not via the rapture or Parousia, then via cyborgization and mind-uploading). This idea, this theme, is so prominent in transhumanist writings that I don't know how anyone could deny it.

Hence, if transhumanism is an integral component of Bostromism (and it is), and if Bostromism is a version of longtermism (which it is, on pretty much any definition), then the millennialist view that our world is in some sort of "fallen state" is an integral component of Bostromism, and hence of at least one prominent version of longtermism, since this millennialist view is central to the normative aspects of transhumanism.

Just read "Letter from Utopia." It's saturated in a profound longing to escape our present condition and enter some magically paradisiacal future world via the almost supernatural means of radical human enhancement. (Alternatively, you could write a religious scholar about transhumanism. Some have, in fact, written about the ideology. I doubt you'd find anyone who'd reject the claim that transhumanism is imbued with millennialist tendencies!)