Do you intend for the population to recover in B, or for there to be extinction with no future people? In the post, you write that the second virus "will kill everybody on earth". I'd assume that means extinction.
If B (killing 8 billion necessary people) does mean extinction and you think B is better than A, then you prefer extinction to extra future deaths. And your argument seems general, e.g. we should just go extinct now to prevent the deaths of future people. If they're never born, they can't die. You'd be assigning negative value to additional deaths, but no positive value to additional lives. The view would be antinatalist.
Or, if you think B is just no worse than A (equivalent or incomparable), then extinction is permissible, in order to prevent the deaths of future people.
If you allow population recovery in B, then (symmetric) wide person-affecting views can say B is better than A, although it could depend on how many future/contingent people will exist in each scenario. If the number is the same or larger in B and dying earlier is worse than dying later, then B would be better. If it's lower in B, then you may need to discount some of the extra early deaths in A.
If additional human lives have no value in themselves, that implies that the government would have more reason to take precautionary measures against a virus that would kill most of us than one that would kill all of us, even if the probabilities were equal.
Maybe I'm misunderstanding, but if
then, conditional on the given virus mutating
2 kills more present/necessary people, so we'd want to prevent it.
EDIT: It looks like you pointed out something similar here.
I don't think 6 follows. Preventing the early deaths of future people does not imply creating new lives or making happy people. The two statements separated by the "but" in each version of the intuition of neutrality here aren't actually exhaustive of how we should treat future people.
Your argument would only establish that we shouldn't be indifferent to (or discount or substantially discount) future lives, not that we have reason to ensure future people are born in the first place or to create people. Multiple views that don't find extinction much worse than almost everyone dying + population recovery could still recommend avoiding the extra deaths of future people. Especially "wide" person-affecting views.[1]
On "wide" person-affecting views, if you have one extra person Alice in outcome A, but a different extra person Bob in outcome B, and otherwise the same people in both, then you treat Alice and Bob like the same person across the two outcomes. They're "counterparts". For more on this, and how to extend to different numbers of non-overlapping people between A and B, see Meacham, 2012, section 4 (or short summary in Koehler, 2021) and Thomas, 2019, section 5.3. I also discuss some different person-affecting views here.
Under wide views, with the virus that kills more people, the necessary people+matched counterparts are worse off than with the virus that kills fewer people.
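Here's a toy sketch of that kind of comparison (the welfare numbers, the names other than Alice and Bob, and the exact matching rule are all made up for illustration; real wide views handle matching and unequal numbers of contingent people more carefully):

```python
# Toy sketch of a "wide" person-affecting comparison. Welfare numbers and the
# matching rule are made up for illustration; real wide views handle matching
# (and different numbers of contingent people) more carefully.

def wide_difference(necessary_a, necessary_b, contingent_a, contingent_b):
    """Welfare in A minus welfare in B, over necessary people plus matched counterparts."""
    diff = sum(necessary_a.values()) - sum(necessary_b.values())
    # Pair up contingent people across outcomes (e.g. Alice in A with Bob in B)
    # and treat each pair like one person who exists in both outcomes.
    for w_a, w_b in zip(sorted(contingent_a.values()), sorted(contingent_b.values())):
        diff += w_a - w_b
    return diff

# Same necessary people in both outcomes, but one of them dies earlier in B.
necessary_A = {"Carol": 50, "Dave": 50}
necessary_B = {"Carol": 50, "Dave": 20}
# Different contingent people: Alice only exists in A, Bob only in B; Bob dies young.
contingent_A = {"Alice": 40}
contingent_B = {"Bob": 10}

print(wide_difference(necessary_A, necessary_B, contingent_A, contingent_B))  # 60 > 0, so A is better here
```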
(I'd guess there are different ways to specify the intuition of neutrality; your argument might succeed against some but not others.)
Some versions of negative preference utilitarianism, or views that minimize aggregate DALYs, might too. But if the extra early deaths prevent additional births, then killing more people with the viruses could in fact prevent more deaths overall, and that could be better on these views. These are pretty antinatalist views. That being said, I am fairly sympathetic to antinatalism about future people, but more so because I don't think good lives can make up for bad ones.
Dasgupta's view makes ethics about what seems unambiguously best first, and then about affecting persons second. It's still person-affecting, but less so than necessitarianism and presentism.
It could be wrong about what's unambiguously best, though. E.g., if we should reject full aggregation and prioritize larger individual differences in welfare between outcomes, then A+' (and maybe A+) looks better than Z.
Do you think we should be indifferent in the nonidentity problem if we're person-affecting? I.e. between creating a person with a great life and creating a different person with a marginally good life (and no other options).
For example, we shouldn’t care about the effects of climate change on future generations (maybe after a few generations ahead), because future people's identities will be different if we act differently.
But then also see the last section of the post.
I largely agree with this, but
2 motivates applying impartial norms first, like fixed-population comparisons insensitive to who currently or necessarily exists, to rule out options; in this case, A+ is ruled out because it's worse than Z. After that, we pick among the remaining options using person-affecting principles, like necessitarianism, which gives us A over Z. That's Dasgupta's view.
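To make the two steps concrete, here's a toy sketch with made-up welfare numbers; only the option labels A, A+ and Z come from the post, and the view itself doesn't depend on these particular figures:

```python
# Toy sketch of the two-step procedure, with made-up welfare numbers
# (populations in millions; only the option names come from the post).

options = {
    # name: (welfare per necessary person, millions of necessary people,
    #        welfare per extra person, millions of extra people)
    "A":  (4, 1, 0, 0),
    "A+": (4, 1, 1, 99),
    "Z":  (3, 1, 3, 99),
}

def total(o):
    wn, nn, we, ne = options[o]
    return wn * nn + we * ne

def necessary_total(o):
    wn, nn, _, _ = options[o]
    return wn * nn

def size(o):
    _, nn, _, ne = options[o]
    return nn + ne

# Step 1 (impartial, fixed-population comparisons): rule out any option with a
# lower total than some other option of the same population size.
# Here A+ is ruled out: same size as Z, but total 103 vs 300.
remaining = [o for o in options
             if not any(size(p) == size(o) and total(p) > total(o) for p in options)]

# Step 2 (necessitarianism): among what's left, pick the best for the necessary people.
print(remaining, max(remaining, key=necessary_total))  # ['A', 'Z'] A
```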
You could also do brain imaging to check for pain responses.
You might not even need to know what normal pain responses in the species look like, because you could just compare responses to normally painful stimuli vs control stimuli.
However, knowing what normal pain responses in the species look like would help. Also, across mammals, including humans and raccoons, the substructures responsible for pain (especially the anterior cingulate cortex) seem roughly the same, so I think we'd have a good idea of where to check.
Maybe one risk is that the brain would just adapt and recruit a different subsystem to generate pain, or use the same one in a different way. But control stimuli could help you detect that.
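As a very rough sketch of the kind of noxious-vs-control check I have in mind, with simulated data (the region, effect sizes and test are placeholders, and a real imaging analysis would be much more involved):

```python
# Rough sketch of the kind of check described above, with simulated data.
# The region, effect sizes and test are placeholders; a real imaging analysis
# would be far more involved.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
noxious = rng.normal(1.0, 0.5, size=30)  # region's responses to normally painful stimuli
control = rng.normal(0.2, 0.5, size=30)  # responses to matched control stimuli

t, p = stats.ttest_ind(noxious, control)
print(f"t = {t:.2f}, p = {p:.3g}")
# A reliably larger response to the noxious stimuli is (weak) evidence the region is
# still involved in pain; no difference might mean another subsystem was recruited,
# which you could then look for with the same noxious-vs-control contrast.
```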
Another behavioural indicator would be (learned) avoidance of painful stimuli.
The arguments for unfairness of X relative to Y I gave in my previous comment (with the modified welfare levels, X=(3, 6) vs Y=(5,5)) aren't sensitive to the availability of other options: Y is more equal (ignoring other people), Y is better according to some impartial standards, and better if we give greater priority to the worse off or larger gains/losses.
All of these also apply when substituting A+ for X and Z for Y, telling us that Z is more fair than A+ regardless of the availability of other options, like A, except for priority for larger gains/losses (each of the 1 million people has more to lose than each of the extra 99 million people, between A+ and Z).
Fairness is harder to judge between populations of different sizes (the number of people who will ever exist), and so may often be indeterminate. Different impartial standards, like total, average and critical-level views, will disagree about A vs A+ as well as about A vs Z. But A+ and Z have the same population size, so there's much more consensus in favour of Z>A+ (although necessitarianism, presentism and views that give especially strong priority to those with more to lose can disagree, finding A+>Z).
X isn't so much bad because it's unfair, but because they don't want to die. After all, fairly killing both people would be even worse.
Everyone dies, though, and their interests in not dying earlier trade off against others, as well as other interests. And we can treat those interests more or less fairly.
There are also multiple ways of understanding "fairness", not all of which would say killing both is more fair than killing one:

1. equality in the distribution of welfare
2. what's better according to impartial standards, like total or average lifetime welfare
3. prioritarianism, i.e. greater priority for the worse off
4. greater priority for larger individual losses/gains
Y is more fair than X under 1, just considering the distribution of welfares. But Y is also more fair according to prioritarianism (3). I can also make it better according to other impartial standards (2), like average lifetime welfare or total lifetime welfare, and with greater priority for bigger losses/gains (4):
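Here's a quick worked check with those numbers (the square-root weighting below is just one illustrative prioritarian function, not anything from the post):

```python
# Worked numbers for X = (3, 6) vs Y = (5, 5); the square-root weighting is just
# one illustrative prioritarian function.
import math

X, Y = (3, 6), (5, 5)

print(sum(X), sum(Y))                                  # total: 9 vs 10
print(sum(X) / len(X), sum(Y) / len(Y))                # average: 4.5 vs 5.0
print(sum(map(math.sqrt, X)), sum(map(math.sqrt, Y)))  # prioritarian: ~4.18 vs ~4.47

# Priority for bigger losses/gains: going from Y to X, one person loses 2 (5 -> 3)
# while the other gains only 1 (5 -> 6), so that also favours Y.
```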
But in cases where it is decided whether lives are about to be created, the subjects don't exist yet, and not creating them can't be unfair to them.
What I'm interested in is A+ vs Z, but when A is also an option. If it were just between A+ and Z, then the extra people exist either way, so it's not a matter of creating them or not, but just whether we have a fairer distribution of welfare across the same people in both futures. And in that case, it seems Z is better (and more fair) than A+, unless you are specifically a presentist (not a necessitarian).
When A is an option, there's a question of its relevance for comparing A+ vs Z. Still, maybe your judgement about A+ vs Z is different. Necessitarians would instead say A+>Z. The other person-affecting views I covered in the post still say Z>A+, even with A.
Then, I think there are ways to interpret Dasgupta's view as compatible with "ethics being about affecting persons", step by step:
These other views also seem compatible with "ethics being about affecting persons":
Anyway, I feel like we're nitpicking here about what deserves the label "person-affecting" or "being about affecting persons".