Proving too much: A response to the EA forum

You're correct, in practice you wouldn't - that's the 'instrumentalist' point made in the latter half of the post 

Proving too much: A response to the EA forum

Both actually! See section 6 in Making Ado Without Expectations - unmeasurable sets are one kind of expectation gap (6.2.1) and 'single-hit' infinities are another (6.1.2)

Were the Great Tragedies of History “Mere Ripples”?

Worth highlighting the passage that the "mere ripples" in the title refers to, for those skimming the comments:

 Referring to events like “Chernobyl, Bhopal, volcano eruptions, earthquakes, draughts [sic], World War I, World War II, epidemics of influenza, smallpox, black plague, and AIDS”, Bostrom writes that
  these types of disasters have occurred many times and our cultural attitudes towards risk have been shaped by trial-and-error in managing such hazards. But tragic as such events are to the people immediately affected, in the big picture of things—from the perspective of humankind as a whole—even the worst of these catastrophes are mere ripples on the surface of the great sea of life. They haven’t significantly affected the total amount of human suffering or happiness or determined the long-term fate of our species. 

Mere ripples! That’s what World War II—including the forced sterilizations mentioned above, the Holocaust that killed 6 million Jews, and the death of some 40 million civilians—is on the Bostromian view. This may sound extremely callous, but there are far more egregious claims of the sort. For example, Bostrom argues that the tiniest reductions in existential risk are morally equivalent to the lives of billions and billions of actual human beings. To illustrate the idea, consider the following forced-choice scenario:

Bostrom’s altruist: Imagine that you’re sitting in front of two red buttons. If you push the first button, 1 billion living, breathing, actual people will not be electrocuted to death. If you push the second button, you will reduce the probability of an existential catastrophe by a teeny-tiny, barely noticeable, almost negligible amount. Which button should you push?

For Bostrom, the answer is absolutely obvious: you should push the second button! The issue isn’t even close to debatable. As Bostrom wrote in 2013, even if there is “a mere 1 per cent chance” that 10^54 conscious beings living in computer simulations come to exist in the future, then “the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives.” So, take a billion human lives, multiply that by 100 billion, and what you get is the moral equivalent of reducing existential risk by that sliver, on the assumption that there is a “mere 1 per cent chance” that we run vast simulations in which 10^54 happy people reside. This means that, on Bostrom’s view, you would be a grotesque moral monster not to push the second button. Sacrifice those people! Think of all the value that would be lost if you don’t!
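The arithmetic in the quoted passage can be checked directly. A minimal sketch of the expected-value comparison, using only the figures that appear in the quote (the comparison is a lower bound, since the computed expected value comes out far larger than the stated benchmark):

```python
# Expected-value arithmetic behind the quoted Bostrom (2013) passage.
# Every figure below comes from the quote itself.

future_beings = 10**54                # simulated conscious beings
credence = 0.01                       # "a mere 1 per cent chance" they come to exist
risk_reduction = 1e-9 * 1e-9 * 1e-2   # one billionth of one billionth of one percentage point

# Expected number of future lives saved by that tiny risk reduction.
expected_lives = future_beings * credence * risk_reduction  # ~1e32

# The benchmark: a hundred billion times a billion human lives.
benchmark = 100e9 * 1e9  # 1e20

# The expected value exceeds the benchmark by twelve orders of magnitude,
# which is why Bostrom phrases it as "worth [at least] a hundred billion
# times as much as a billion human lives".
print(expected_lives >= benchmark)
```

This is just the quote's own numbers multiplied out; the point of the surrounding discussion is whether such multiplications should carry any moral weight at all.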

A case against strong longtermism

Nice, yeah - Ben and I will be there!

Strong Longtermism, Irrefutability, and Moral Progress

What is your probability distribution across the size of the future population, provided there is not an existential catastrophe? 

Do you for example think there is a more than 50% chance that it is greater than 10 billion?


I don't have a probability distribution across the size of the future population. That said, I'm happy to interpret the question in the colloquial, non-formal sense, and just take >50% to mean "likely". In that case, sure, I think it's likely that the population will exceed 10 billion. Credences shouldn't be taken any more seriously than that - they're epistemologically equivalent to survey questions where the respondent is asked to tick a "very unlikely / unlikely / unsure / likely / very likely" box.

Strong Longtermism, Irrefutability, and Moral Progress

Granted, any focus on AI work necessarily reduces the amount of attention going towards near-term issues, which I suppose is your point.

 Yep :) 

Strong Longtermism, Irrefutability, and Moral Progress

I don't consider human extermination by AI to be a 'current problem' - I think that's where the disagreement lies. (See my blog post for further comments on this point.)

Strong Longtermism, Irrefutability, and Moral Progress

(as far as I can tell their entire point is that you can always do an expected value calculation and "ignore all the effects contained in the first 100" years)


Yes, exactly. One can always find some expected value calculation that allows one to ignore present-day suffering. And worse, one can keep doing this between now and eternity, to ignore all suffering forever. We can describe this using the language of "falsifiability" or "irrefutability" or whatever - the word choice doesn't really matter here. What matters is that this is a very dangerous game to be playing.

Strong Longtermism, Irrefutability, and Moral Progress

Firstly, you and vadmas seem to assume number 2 is the case.


Oops, nope - the exact opposite! Couldn't possibly agree more strongly with:

Working on current problems allows us to create moral and scientific knowledge that will help us make the long-run future go well

Perfect, love it, spot on. I'd be 100% on board with longtermism if this is what it's about - hopefully conversations like these can move it there. (Ben makes this point near the end of our podcast conversation, fwiw.)

Do you in fact think that knowledge creation has strong intrinsic value? I, and I suspect most EAs, only think knowledge creation is instrumentally valuable. 

Well, both. I do think it's intrinsically valuable to learn about reality, and I support research into fundamental physics, biology, history, mathematics, ethics, etc. for that reason. I think it would be intellectually impoverishing to only support research that has immediate and foreseeable practical benefits. But fortunately knowledge creation also has enormous instrumental value. So it's not a one-or-the-other thing.
