Hello again! A few months ago I posted A case against strong longtermism and it generated quite a lot of interesting feedback. I promised to write a response "in a few weeks", where by "few" I meant 9. 

Anyway, the response ballooned out into multiple posts, and so this piece is the first in a three-part series. In the next post I'll discuss alternatives to decision theory, and the post after that will be on the subject of knowledge and long-term prediction.

Looking forward to the discussion!

https://vmasrani.github.io/blog/2021/proving_too_much/

Comments

Nice post and useful discussion. I did think this post would be a meta-comment about the EA forum, not a (continued) discussion of arguments against strong longtermism. 

If, between your actions, you can carve out the undefined/infinite welfare parts so that they're (physically) subjectively identically distributed, then you can just ignore them, as an extension of expected value maximizing total utilitarianism, essentially using a kind of additivity/separability axiom. For example, if you're choosing between two actions A and B, and their payoffs are distributed like

A: X + Z, and

B: Y + Z,

then you can just ignore Z and compare the expected values of X and Y, even if Z is undefined or infinite, or its expectation is undefined or infinite. I would only do this if Z actually represents essentially the same distribution of local events in spacetime for each of A and B, though, since otherwise you can include more or less into X and Y arbitrarily and independently, and the reduction isn't unique.
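Here's a minimal simulation sketch of that point (the distributions and variable names are my own illustrative assumptions, not anything from the comment): Z is heavy-tailed with no defined mean, so the sample means of A and B are dominated by Z's tails and vary wildly across runs, but because the same Z enters both actions it cancels in the comparison, and E[A − B] = E[X − Y] is perfectly well behaved.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Hypothetical payoff components: X and Y are the parts that differ between
# actions A and B; Z is a shared, heavy-tailed part with undefined expectation.
x = rng.normal(loc=1.0, scale=1.0, size=n)   # action A's distinctive part
y = rng.normal(loc=0.0, scale=1.0, size=n)   # action B's distinctive part
z = rng.standard_cauchy(size=n)              # shared part; E[Z] does not exist

a = x + z
b = y + z

# Sample means of A and B are driven by Z's tails and are unstable across seeds...
print("mean(A):", a.mean(), " mean(B):", b.mean())
# ...but Z cancels in the difference, so E[A - B] = E[X - Y] ≈ 1 is estimated reliably.
print("mean(A - B):", (a - b).mean())
```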

Unfortunately, I think complex cluelessness should usually prevent us from being able to carve out matching problematic parts so cleanly. This seems pretty catastrophic for any attempts to generalize expected utility theory, including using stochastic dominance.

EDIT: Hmm, this might be saved in general even if A's and B's Zs are not identical, but similar enough that their expected difference is dominated by the expected difference between X and Y. You'd be allowed to choose the two Zs' dependence on each other (their joint distribution) so that they match as closely as possible, as long as you preserve their individual marginal distributions.

Technical nitpick: I don't think it's the fact that the set of possible futures is infinite that breaks things, it's the fact that the set of possible futures includes futures which differ infinitely in their value, or have undefined values or can't be compared, e.g. due to infinities, or conditional convergence and no justifiably privileged summation order. Having just one future with undefined value, or a future with value +∞ and another with value −∞, is enough to break everything; that's only 1 or 2 futures. You can also have infinitely many futures without things breaking, e.g. as long as the expectations of the positive and negative parts are finite, which doesn't require bounded value, but is guaranteed by it.
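To make the conditional-convergence point concrete, here's a small sketch (my own illustration, not part of the comment) using expected-value contributions of the form (−1)^(n+1)/n, the shape that shows up in Pasadena-style games: summed in their natural order they give ln 2, while an equally legitimate reordering of the very same terms gives 1.5·ln 2, so there is no privileged summation order and no privileged "the" expected value.

```python
import math

def natural_order(n_terms):
    # Partial sum of the alternating harmonic series 1 - 1/2 + 1/3 - 1/4 + ...
    return sum((-1) ** (k + 1) / k for k in range(1, n_terms + 1))

def rearranged(n_blocks):
    # Same terms, different order: two positive (odd-denominator) terms,
    # then one negative (even-denominator) term, repeated.
    total, odd, even = 0.0, 1, 2
    for _ in range(n_blocks):
        total += 1 / odd + 1 / (odd + 2) - 1 / even
        odd += 4
        even += 2
    return total

print(natural_order(200_000))            # ≈ 0.6931, i.e. ln 2
print(rearranged(200_000))               # ≈ 1.0397, i.e. 1.5 * ln 2
print(math.log(2), 1.5 * math.log(2))    # targets for comparison
```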

If a Bayesian expected utility maximizing utilitarian accepts Cromwell's rule, as they should, they can't rule out infinities, and expected utility maximization breaks. Stochastic dominance generalizes EU maximization and can save us in some cases.
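As an illustrative sketch of how stochastic dominance can still rank options whose expectations don't exist (the Cauchy payoffs and the shift of 1 are assumptions for the example, not anything claimed above): compare empirical survival functions directly rather than means.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical actions whose payoffs are Cauchy, so neither has a defined
# expected value, yet one is clearly better: the same noise shifted up by 1.
payoff_b = rng.standard_cauchy(100_000)
payoff_a = payoff_b + 1.0

# First-order stochastic dominance: P(A > t) >= P(B > t) at every threshold t.
thresholds = np.linspace(-50, 50, 1001)
surv_a = np.array([(payoff_a > t).mean() for t in thresholds])
surv_b = np.array([(payoff_b > t).mean() for t in thresholds])

print("A dominates B at every checked threshold:", bool(np.all(surv_a >= surv_b)))
print("Sample means (unstable, uninformative):", payoff_a.mean(), payoff_b.mean())
```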

Both actually! See section 6 in Making Ado Without Expectations - unmeasurable sets are one kind of expectation gap (6.2.1) and 'single-hit' infinities are another (6.1.2)

When would you need to deal with unmeasurable sets in practice? They can't be constructed explicitly, i.e. with just ZF without the axiom of choice, at least for the Lebesgue measure on the real numbers (and I assume this extends to ℝⁿ, but I don't know about infinite-dimensional spaces). I don't think they're a problem.

You're correct, in practice you wouldn't - that's the 'instrumentalist' point made in the latter half of the post.

Thanks for posting a follow-up. My understanding of your claim is something like:

It's true that there is a nonzero probability of infinitely good or bad things happening over any timescale, making expected value calculations equally meaningless for short-term and long-term decisions. However, it's fine to just ignore those infinities in the short-term but incorrect to ignore them in the long term. Therefore, short-term thinking is okay but long-term thinking is not.

Is that accurate? If so, could you elaborate on why you see this distinction?

I see no particular reason to think Pasadena games are more likely one thousand years from now than they are today (and indeed even using the phrase "more likely today" seems to sink the approach of avoiding probability).
