I tend to be sceptical of appeals to option-set-dependent value as a means of defending person-affecting views, because we needn't imagine outcomes as things that someone is able to choose to bring about, as opposed to things that just happen to be the case. If you imagine the possible outcomes this way, then you can't appeal to option-set dependence to block the various arguments, since the outcomes are not options for anyone to realize. And if, say, it makes the outcome better when an additional happy person happens to exist without anyone making it so, then it is hard to see why it should be otherwise when someone brings it about that the additional happy person exists. (Compare footnote 9 in this paper/report.)
If we assume that natural risks are negligible, I would guess that this probably reduces to something like the question of what probability you put on extinction or existential catastrophe due to anthropogenic biorisk. Since biorisk is likely to leave much of the rest of Earthly life unscathed, it also hinges on what probability you assign to something like human-level intelligence evolving anew. I find it reasonably plausible that a cumulative technological culture of the kind that characterizes human beings is unlikely to be a convergent evolutionary outcome (for the reasons given in Powell, Contingency and Convergence), and thus that if human beings are wiped out, there is very little probability of similar traits emerging in other lineages. So human extinction due to a bioengineered pandemic strikes me as maybe the key scenario for the extinction of Earth-originating intelligent life. Does that seem plausible?
I don't remember 100%, but I think Thomas and Pummer might both be arguing for or articulating not an axiological theory that ranks outcomes as better or worse, but rather a non-consequentialist theory of moral obligations/oughts. For my own part, I think views like that are a lot more plausible, but the view that it doesn't make the outcome better to create additional happy lives seems to me very hard to defend.
One reason you might believe in a difference in terms of tractability is the stickiness of extinction, and the lack of stickiness attaching to things like societal values. Here's very roughly what I have in mind, running roughshod over certain caveats and the like.
The case where we go extinct seems highly stable, of course. Extinction is forever. If you believe some kind of 'time of perils' hypothesis, surviving through such a time should also result in a scenario where non-extinction is highly stable. And the case for longtermism arguably hinges considerably on such a time of perils hypothesis being true, as David argues.
By contrast, I think it's natural to worry that efforts to alter values and institutions so as to beneficially affect the very long run, by nudging us closer to the very best possible outcomes, are far more vulnerable to wash-out. The key exception would be if you suppose that there will be some kind of lock-in event.
So does the case for focusing on better futures work hinge crucially, in your view, on assigning significant confidence to lock-in events occurring in the near term?
I tend to think that the arguments against any theory of the good that encodes the intuition of neutrality are extremely strong. Here's one that I think I owe to Teru Thomas (who may have got it from Tomi Francis?).
Imagine the following outcomes, A - D, where the columns are possible people, the numbers represent the welfare of each person when they exist, and # indicates non-existence.
A: 5, -2, #, #
B: 5, #, 2, 2
C: 5, 2, 2, #
D: 5, -2, 10, #
I claim that if you think it's neutral to make happy people, there's a strong case that you should think that B isn't better than A. In other words, it isn't better to prevent someone from coming to exist and enduring a life that's not worth living if you simultaneously create two people with lives worth living. And that's absurd. I also find that verdict really hard to believe if you accept the other side of the asymmetry: that it's bad to create people whose lives are overwhelmed by suffering.
Why is there pressure on you to accept that B isn't better than A? Well, first off, it seems plausible that B and C are equally good, since they have the same number of people at the same welfare levels. So let's assume this is so.
Now, if you accept utilitarianism for a fixed population, you should think that D is better than C, since all the same people exist in these outcomes, and there's more total/average welfare (13 vs. 9 in total). (I'm pretty sure you can support this kind of verdict on weaker assumptions if necessary.)
So let's suppose, on this basis, that D is better than C. B and C are equally good. I assume it follows that D is better than B.
Suppose that B were better than A. Since D is better than B, it would follow that D is better than A as well. But that can't be so if it's neutral to make happy people, because D and A differ only in the existence of an extra person who has a life worth living: the neutrality principle entails that D isn't better than A. So, if neutrality holds, B isn't better than A. But it's absurd to think that B isn't better than A.
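To lay the structure out schematically (writing $\succ$ for 'better than' and $\sim$ for 'equally good'; this is just the argument above restated):

$$
\begin{aligned}
&\text{(1) } B \sim C && \text{same number of people at the same welfare levels}\\
&\text{(2) } D \succ C && \text{same people, more total welfare}\\
&\text{(3) } D \succ B && \text{from (1) and (2)}\\
&\text{(4) } B \succ A \Rightarrow D \succ A && \text{from (3) plus transitivity}\\
&\text{(5) } \lnot(D \succ A) && \text{neutrality: } D \text{ is just } A \text{ plus one extra happy person}\\
&\text{(6) } \lnot(B \succ A) && \text{from (4) and (5)}
\end{aligned}
$$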
Arguments like this make me feel pretty confident that the intuition of neutrality is mistaken.
I wasn't sure it's really useful to think of value as being linear in resources on some views. If you have a fixed population and imagine increasing the resources they have available, I assume the value of the outcome is a strictly concave function of the resource base. Doubling the population might double the value of the outcome, although it's not clear that this constitutes a doubling of resources. And why should it matter whether the relationship between value and resources is strictly concave? Isn't the key question something like whether there are potentially realizable futures that are many orders of magnitude more valuable than the default, or than where we are now? Answering yes seems compatible with thinking that the function relating resources to value is strictly concave and asymptotes, so long as it asymptotes somewhere suitably high up on the scale of value.
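As a purely toy illustration (the functional form and numbers here are made up): suppose value relates to resources by

$$
V(R) = V_{\max}\left(1 - e^{-R/R_0}\right).
$$

This is strictly concave in $R$ and asymptotes to $V_{\max}$; but if $V_{\max}$ is, say, $10^{10}$ times the value of the default future, there are still realizable futures many orders of magnitude more valuable than the default, even though value is nowhere near linear in resources.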
Could you clarify what you mean by 'converge'? One thing that seems somewhat tricky to square is believing that convergence is unlikely but that value lock-in is likely. Should we understand convergence as agreement in views facilitated by broadly rational processes, or something along those lines, to be contrasted with general agreement in values facilitated by irrational or arational forces, of the kind that might ensure uniformity of views following a lock-in scenario?
I'm pretty sure that Broome gives an argument of this kind in Weighing Lives!