vadmas

173 karma · Joined Dec 2020

Comments (32)

Hey! Can't respond to most of your points right now unfortunately, but just a few quick things :) 

(I'm working on a followup piece at the moment and will try to respond to some of your criticisms there) 

My central point is the 'inconsequential in the grand scheme of things' one you highlight here. This is why I end the essay with this quote:

> If among our aims and ends there is anything conceived in terms of human happiness and misery, then we are bound to judge our actions in terms not only of possible contributions to the happiness of man in a distant future, but also of their more immediate effects. We must not argue that the misery of one generation may be considered as a mere means to the end of securing the lasting happiness of some later generation or generations; and this argument is improved neither by a high degree of promised happiness nor by a large number of generations profiting by it. All generations are transient. All have an equal right to be considered, but our immediate duties are undoubtedly to the present generation and to the next. Besides, we should never attempt to balance anybody’s misery against somebody else’s happiness. 

The "undefined" bit also "proves too much"; it basically says we can't predict anything ever, but actually empirical evidence and common sense both strongly indicate that we can make many predictions with better-than-chance accuracy

Just wanted to flag that I responded to the 'proving too much' concern here:  Proving Too Much

Very balanced assessment! Nicely done :) 

Oops sorry haha neither did I! "this" just meant low-engagement, not your excellent advice about title choice. Updated :) 

Hehe taking this as a sign I'm overstaying my welcome. Will finish the last post of the series though and move on :) 

You're correct, in practice you wouldn't - that's the 'instrumentalist' point made in the latter half of the post 

Both actually! See section 6 in Making Ado Without Expectations - unmeasurable sets are one kind of expectation gap (6.2.1) and 'single-hit' infinities are another (6.1.2)

Worth highlighting the passage that the "mere ripples" in the title refers to for those skimming the comments:

Referring to events like "Chernobyl, Bhopal, volcano eruptions, earthquakes, draughts [sic], World War I, World War II, epidemics of influenza, smallpox, black plague, and AIDS," Bostrom writes that

> these types of disasters have occurred many times and our cultural attitudes towards risk have been shaped by trial-and-error in managing such hazards. But tragic as such events are to the people immediately affected, in the big picture of things—from the perspective of humankind as a whole—even the worst of these catastrophes are mere ripples on the surface of the great sea of life. They haven’t significantly affected the total amount of human suffering or happiness or determined the long-term fate of our species.

Mere ripples! That’s what World War II—including the forced sterilizations mentioned above, the Holocaust that killed 6 million Jews, and the death of some 40 million civilians—is on the Bostromian view. This may sound extremely callous, but there are far more egregious claims of the sort. For example, Bostrom argues that the tiniest reductions in existential risk are morally equivalent to the lives of billions and billions of actual human beings. To illustrate the idea, consider the following forced-choice scenario:

Bostrom’s altruist: Imagine that you’re sitting in front of two red buttons. If you push the first button, 1 billion living, breathing, actual people will not be electrocuted to death. If you push the second button, you will reduce the probability of an existential catastrophe by a teeny-tiny, barely noticeable, almost negligible amount. Which button should you push?

For Bostrom, the answer is absolutely obvious: you should push the second button! The issue isn’t even close to debatable. As Bostrom writes in 2013, even if there is “a mere 1 per cent chance” that 10^54 conscious beings living in computer simulations come to exist in the future, then “the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives.” So, take a billion human lives, multiply it by 100 billion, and what you get is the moral equivalent of reducing existential risk by “one billionth of one billionth of one percentage point,” on the assumption that there is a 1 per cent chance that we run vast simulations in which 10^54 happy people reside. This means that, on Bostrom’s view, you would be a grotesque moral monster not to push the second button. Sacrifice those people! Think of all the value that would be lost if you don’t!
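
To make the arithmetic in the quoted passage concrete, here is a minimal sketch of the expected-value comparison it describes, using only the figures quoted above (10^54 future beings, a 1 per cent credence, and a risk reduction of one billionth of one billionth of one percentage point). The variable names are mine, not Bostrom's:

```python
# Expected-value comparison described in the quoted passage.
# All figures are taken from the passage; variable names are illustrative.

future_beings = 10**54                 # hypothesised simulated future population
credence = 0.01                        # "a mere 1 per cent chance" it is realised
risk_reduction = 1e-9 * 1e-9 * 0.01    # one billionth of one billionth of one percentage point

# Expected number of future lives gained from that tiny risk reduction.
expected_lives = future_beings * credence * risk_reduction   # = 1e32

# The quoted benchmark: "a hundred billion times as much as a billion human lives".
benchmark_lives = 100e9 * 1e9                                # = 1e20

print(f"expected lives from the tiny risk reduction: {expected_lives:.0e}")
print(f"benchmark (100 billion x 1 billion lives):   {benchmark_lives:.0e}")
print(f"claim holds a fortiori: {expected_lives >= benchmark_lives}")
```

Under these stated inputs the expected-value figure comes out roughly twelve orders of magnitude above the "hundred billion times a billion lives" benchmark, which is presumably why the passage treats the choice as not even close to debatable.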

Nice yeah Ben and I will be there! 

> What is your probability distribution across the size of the future population, provided there is not an existential catastrophe?
>
> Do you for example think there is a more than 50% chance that it is greater than 10 billion?

I don't have a probability distribution across the size of the future population. That said, I'm happy to interpret the question in the colloquial, non-formal sense, and just take >50% to mean "likely". In that case, sure, I think it's likely that the population will exceed 10 billion. Credences shouldn't be taken any more seriously than that: they're epistemologically equivalent to survey questions where the respondent is asked to tick a "very unlikely / unlikely / unsure / likely / very likely" box.
