New GPI paper: Teruji Thomas, 'The Asymmetry, Uncertainty, and the Long Term'. Abstract:
The Asymmetry is the view in population ethics that, while we ought to avoid creating additional bad lives, there is no requirement to create additional good ones. The question is how to embed this view in a complete normative theory, and in particular one that treats uncertainty in a plausible way. After reviewing the many difficulties that arise in this area, I present general ‘supervenience principles’ that reduce arbitrary choices to uncertainty-free ones. In that sense they provide a method for aggregating across states of nature. But they also reduce arbitrary choices to one-person cases, and in that sense provide a method for aggregating across people. The principles are general in that they are compatible with total utilitarianism and ex post prioritarianism in fixed-population cases, and with a wide range of ways of extending these views to variable-population cases. I then illustrate these principles by writing down a complete theory of the Asymmetry, or rather several such theories to reflect some of the main substantive choice-points. In doing so I suggest a new way to deal with the intransitivity of the relation ‘ought to choose A over B’. Finally, I consider what these views have to say about the importance of extinction risk and the long-run future.
Interesting!
What if we redefine rationality to be relative to choice sets? We might not have to depart too far from vNM-rationality this way.
The axioms of vNM-rationality are justified by Dutch books/money pumps and by stochastic dominance, but the latter can be weakened too, since many outcomes are irrelevant to a given decision and there's no need to compare against all of them. For example, there's no Dutch book or money pump that only involves changing the probabilities over the size of the universe, and there isn't one that only involves changing the probabilities of logical statements in standard mathematics (ZFC); it doesn't make sense to ask me to pay you to change the probability that the universe is finite. We don't need to consider such lotteries. So, if we can generalize stochastic dominance to be relative to a set of possible choices, then we just need to make sure we never choose an option that is stochastically dominated by another, relative to that choice set. That would be our new definition of rationality.
Here's a first attempt:
Let C be a set of choices, i.e. probabilistic lotteries over outcomes (random variables), and let O be the set of all possible outcomes which have nonzero probability in some choice from C (or something more general to accommodate arbitrary probability measures). Then for X, Y ∈ C, we say X stochastically dominates Y with respect to C if:

P(z <_C X) ≥ P(z <_C Y)

for all z ∈ O, and the inequality is strict for some z ∈ O. This lifts comparisons between elements of O, made using <_C, a relation ⊆ O×O, to comparisons between random variables over the elements of O. <_C need not even be complete over O or transitive, but stochastic dominance thus defined will be transitive (perhaps at the cost of losing some comparisons). <_C could also be specific to C, not just to O.
We could play around with the definition of O here.
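To make the definition concrete, here's a minimal sketch in Python of dominance relative to a choice set. It assumes each choice is a finite lottery represented as a dict from outcomes to probabilities; the function names (outcome_set, prob_exceeds, dominates, undominated), the toy integer outcomes, and the use of the usual < order as <_C are all hypothetical illustrations, not anything from the post or Tarsney's paper.

```python
# Sketch of choice-set-relative stochastic dominance.
# A "choice" is a finite lottery: a dict mapping outcomes to probabilities.
# less_c(z, x) stands in for the relation z <_C x on outcomes; it can be any
# relation, even partial or intransitive.


def outcome_set(choices):
    """O: all outcomes with nonzero probability in some choice from C."""
    return {z for lottery in choices for z, p in lottery.items() if p > 0}


def prob_exceeds(lottery, z, less_c):
    """P(z <_C X) for a lottery X: total probability of outcomes above z."""
    return sum(p for x, p in lottery.items() if less_c(z, x))


def dominates(x, y, choices, less_c):
    """True iff X stochastically dominates Y with respect to C:
    P(z <_C X) >= P(z <_C Y) for all z in O, strict for some z."""
    strict = False
    for z in outcome_set(choices):
        px, py = prob_exceeds(x, z, less_c), prob_exceeds(y, z, less_c)
        if px < py:
            return False
        if px > py:
            strict = True
    return strict


def undominated(choices, less_c):
    """The options allowed by the proposed rationality criterion:
    choices not stochastically dominated by any other choice in C."""
    return [x for x in choices
            if not any(dominates(y, x, choices, less_c)
                       for y in choices if y is not x)]


if __name__ == "__main__":
    # Toy example: outcomes are integers and <_C is the usual order.
    a = {0: 0.5, 10: 0.5}
    b = {0: 0.5, 5: 0.5}
    c = {-1: 0.1, 10: 0.9}
    choices = [a, b, c]
    print(dominates(a, b, choices, lambda u, v: u < v))   # True
    print(len(undominated(choices, lambda u, v: u < v)))  # 2: a and c are incomparable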
When considering which choices to make now, we need to model the future and the new choices we will face later; this is how we would avoid Dutch books and money pumps. Perhaps this would be better done in terms of decision policies rather than one decision at a time, though.
(This approach is based in part on "Exceeding Expectations: Stochastic Dominance as a General Decision Theory" by Christian Tarsney, which also helps to deal with Pascal's wager and Pascal's mugging.)