I am trying to understand what it means to say that the future could contain a lot of value "in expectation." Does this mean that if the universe were played over and over again from right now to whenever it ends, the average value that it contains could or would be super-large? (I think that's a frequentist interpretation. What would a Bayesian interpretation look like?)

I think your description is exactly the meaning of “good in expectation”.
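
To make that "run it over and over" reading concrete, here is a minimal sketch (the probabilities and payoff numbers are invented purely for illustration): the average value across many simulated reruns converges on the expected value.

```python
import random

# Toy model of one "rerun" of the future; the numbers are invented.
def simulate_future():
    # Say a 1% chance of a flourishing future worth 1000, else worth 1.
    return 1000 if random.random() < 0.01 else 1

n = 1_000_000
sample_average = sum(simulate_future() for _ in range(n)) / n
analytic_expectation = 0.01 * 1000 + 0.99 * 1  # 10.99

# By the law of large numbers, the sample average approaches 10.99.
print(sample_average, analytic_expectation)
```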

Here is the wiki page on Bayesian probability, though I'm confused about what Bayesianism specifically has to do with this question, or with the broader line of thinking: https://en.wikipedia.org/wiki/Bayesian_probability

Okay, and thank you very much. But how do we know that, if the universe timeline were run over and over again, it would be positive in value? Why not think that the future's value "in expectation" is neutral or very negative? Everyone seems to assume that the future will be good! Why?

Charles He:
This is really important and valid, but there's a lot going on in this question! For clarity, note that your first question seemed to be about statistics or math; this new question is valid too.

The short answer is that there is no way to be certain whether the universe is good or bad in expectation (and this also depends on what you value; for example, "negative utilitarians" are worried about suffering more than other people are).

Yes, many people have this belief, and there are principled answers for why. For example, if you think humans are basically good and can get more organized, then good human activity and values could spread and produce better things in the future, as opposed to there being a lot of rocks in the universe or something.

Maybe what you are getting at is some of the topics or focuses of "longtermism", which focuses on the long-term future, and longtermists often talk positively about its value. Yes, this is valid to believe. Note that longtermism doesn't actually need a positive future, just a future we can impact meaningfully. For example, some people focus on "s-risk", because even a tiny chance of some system of suffering spreading across the universe seems incredibly important. Those people, or people who have "very short timelines", may actually put zero or even negative value on the future, but there still seem to be ways for many of them to impact it.
Comments

As a note, it's only ever the case that something is good "in expectation" from a particular person's point of view or from a particular epistemic state. It's possible for someone to disagree with me because they know different facts about the world, and so for instance think that different futures are more or less likely. 

In other words, the expected value referred to by the term "expectation" is subtly an expected value conditioned on a particular set of beliefs.
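
To illustrate (with invented outcomes, values, and probabilities), the same set of possible futures can have a positive expected value under one belief state and a negative one under another:

```python
# The same outcomes and values, evaluated under two belief states.
values = {"flourishing": 100, "mediocre": 1, "catastrophic": -100}

optimist = {"flourishing": 0.5, "mediocre": 0.4, "catastrophic": 0.1}
pessimist = {"flourishing": 0.1, "mediocre": 0.4, "catastrophic": 0.5}

def expected_value(beliefs):
    return sum(beliefs[o] * values[o] for o in values)

print(expected_value(optimist))   #  40.4
print(expected_value(pessimist))  # -39.6
```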

As a sanity check, do you understand what it means, from a Bayesian perspective, for a one-off bet to be a "good deal" in expectation? If so, can you explain it in your own words? 

(just wanted to quickly check if the issue is more with big futures specifically, or with Bayesian reasoning in general).

I don’t think they can answer your question, at least with the stipulation of coming from the “Bayesian perspective”. I think that the meaning of that stipulation is the thing they are unsure about.

I think you calculate expected value the same way in either case, frequentist or Bayesian: it's just the probabilities multiplied by the values, summed?
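
For example, a toy one-off bet with made-up numbers:

```python
# (probability, value) pairs for a one-off bet; numbers invented.
outcomes = [(0.5, 10.0), (0.5, -4.0)]
ev = sum(p * v for p, v in outcomes)
print(ev)  # 3.0 -- a "good deal" in expectation
```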

I think the distinctly Bayesian part is in how you construct or update the probabilities themselves, i.e. the underlying beliefs or the parameters behind them?
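
For instance, here is a minimal sketch of such an update, via Bayes' rule, with made-up numbers:

```python
# Update P(hypothesis) after one observation, via Bayes' rule.
prior = 0.5            # P(coin is biased toward heads)
p_heads_biased = 0.9   # P(heads | biased)
p_heads_fair = 0.5     # P(heads | fair)

# Observe heads once.
p_heads = prior * p_heads_biased + (1 - prior) * p_heads_fair
posterior = prior * p_heads_biased / p_heads
print(posterior)  # ~0.643; any expected value built on this probability shifts too
```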

"probabilities" have a different meaning to the average Bayesian than the average frequentist, I think.