Epistemic status: Just a thought that I have, nothing too rigorous
The reason longtermism is so enticing (to me at least) is that the existence of so many future lives hangs in the balance right now. It just seems like a pretty good deed to me to bring 10^52 people (or whatever the real number turns out to be) into existence.
This hinges on the belief that utility scales linearly with the number of QALYs, so that twice as many people are also twice as morally valuable. My belief in this was recently shaken by the following thought experiment:
***
You are a traveling EA on a trip to St. Petersburg. In a dark alley, you meet a demon with the ability to create universes and a serious gambling addiction. He says he was about to create a universe with 10 happy people, but he gives you three fair dice and offers you a bet: you can throw the three dice, and if they all come up 6, he refrains from creating a universe. If you roll anything else, he will double the number of people in the universe he creates.
You do the expected value calculation (sketched after the story) and figure out that by throwing the dice you will create 696.8 QALYs in expectation. You take the bet and congratulate yourself on your ethical decision.
After the good deed is done and the demon is now committed to creating 20 happy people, he offers you the same bet again: roll the three dice; at 6-6-6 he creates no universe, and at anything else he doubles the number again. The demon tells you he will keep offering you this bet. You do your calculations and throw the dice again and again until, eventually, you roll all sixes. The demon vanishes in a cloud of sulfurous mist, without having to create any universe, and leaves you wondering whether you should have done anything differently.
***
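For concreteness, here is the expected-value arithmetic for the first bet, as a sketch. The per-person figure is an assumption on my part rather than something stated above; 35 QALYs per happy person (so 700 QALYs for the promised universe of 20) reproduces the number in the story:

$$\mathbb{E}[\text{QALYs}] \;=\; \frac{215}{216}\cdot 700 \;+\; \frac{1}{216}\cdot 0 \;\approx\; 696.8,$$

versus a guaranteed $10 \times 35 = 350$ QALYs for declining, so each individual bet looks clearly worthwhile in expectation.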
There are a few ways to weasel out of the demon's bets. You could say that the strategy “always take the demon's bet” almost surely ends with no universe at all, and so has an expected value of 0 QALYs (the simulation below illustrates this), so you should go with some tactic like “take the first 20 bets, then call it a day”. But I think that if you refuse a bet, you should be able to justify refusing that particular bet without reference to the bets you have taken in the past or will take in the future.
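To make the “expected value of 0” point concrete, here is a minimal Monte Carlo sketch (my own illustration of the setup above): each single bet has positive expected value, yet every run of the “always accept” strategy ends with the demon creating nobody, because triple sixes eventually comes up with probability 1.

```python
import random

def always_accept(start=10):
    """Keep taking the demon's bet until triple sixes is rolled.

    Returns the number of happy people the demon actually creates:
    0 as soon as triple sixes comes up, which happens eventually
    with probability 1.
    """
    people = start
    while True:
        if [random.randint(1, 6) for _ in range(3)] == [6, 6, 6]:
            return 0      # the demon vanishes without creating anyone
        people *= 2       # any other roll doubles the promised population

# Each single bet is favourable in expectation: (215/216) * 2n > n for any n.
# But the "always accept" strategy realizes 0 people on every single run.
runs = [always_accept() for _ in range(10_000)]
print(sum(runs), max(runs))   # both print 0
```

The tension is exactly that the round-by-round calculation endorses every individual bet, while the policy of following it forever is guaranteed to create nothing.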
I think the only consistent way to refuse the demon's bets at some point is to have a bounded utility function. You might think it would be enough to have a utility function that does not scale linearly with the number of QALYs but, say, logarithmically. But in that case, the demon can offer to double the amount of utility instead of doubling the amount of QALYs, and we are back in the paradox. At some point, you have to be able to say: “There is no possible universe that is twice as good as the one you have promised me already.” So at some point, adding more happy people to the universe must have a negligible ethical effect. And once we accept that this must happen at some point, how confident are we that 10^52 people are that much better than 8 billion?
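To illustrate (with one arbitrary choice of functional form, nothing canonical), take a bounded utility function over the number of happy people $n$:

$$U(n) \;=\; U_{\max}\left(1 - 2^{-n/k}\right),$$

where $k$ sets how quickly utility saturates. Since $U(n) < U_{\max}$ for every $n$, accepting the bet yields at most $\frac{215}{216}\,U_{\max}$ in expected utility, so you refuse as soon as $U(n) \ge \frac{215}{216}\,U_{\max}$, i.e. once $n \ge k \log_2 216 \approx 7.8\,k$. And the demon's counter-move of “doubling the utility” becomes outright impossible once $U(n) > U_{\max}/2$, because no universe is worth more than $U_{\max}$.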
Overall I am still pretty confused about this subject and would love to hear more arguments/perspectives.
I think this is an important point, so I've given you a strong upvote. Still, I think total utilitarians aren't rationally required to endorse EV maximization or longtermism, even approximately, except under certain further assumptions.
Tarsney has also written that stochastic dominance doesn't lead to EV maximization or longtermism under total utilitarianism if the probabilities (or probability differences) are low enough, and has said it's plausible the probabilities are in fact that low (though not that this is his best guess). See "The Epistemic Challenge to Longtermism", especially footnote 41.
It's also not clear to me that we shouldn't just ignore background noise that's unaffected by our actions, or more generally balance other concerns against stochastic dominance, such as risk aversion or ambiguity aversion, particularly with respect to the difference one makes, as discussed in section 7.5 of "The Case for Strong Longtermism" by Greaves and MacAskill. Greaves and MacAskill do argue that ambiguity aversion with respect to outcomes doesn't point against existential risk reduction and, if I recall correctly from following their citations, that ambiguity aversion with respect to the difference one makes is too agent-relative.
On the other hand, using your own precise subjective probabilities to define rational requirement seems pretty agent-relative to me, too. Surely, if the correct ethics is fully agent-neutral, you should be required to do what actually maximizes value among available options, regardless of your own particular beliefs about what's best. Or, at least, precise subjective probabilities seem hard to defend as agent-neutral, when different rational agents could have different beliefs even with access to the same information, due to different priors or because they weigh evidence differently.
Plus, without separability (ignoring what's unaffected) in the first place, the case for utilitarianism itself seems much weaker, since the representation theorems that imply utilitarianism, like Harsanyi's (and a generalization here) and deterministic ones like the one here, require separability or something similar.