As per usual, Scott Alexander has a humorous take on this problem here (you need to be an ACX subscriber).
But as a general response, this is why we need to try to develop an accepted theory of consciousness. The problem you raise isn't specific to digital minds; it's the same problem when considering non-adult-human consciousness. Maybe most animals aren't conscious and their wellbeing is irrelevant? Maybe plants are conscious to a certain degree and we should be concerned with their welfare (they have action potentials, after all)? Open Philanthropy also ...
The short reply to this is that there are already circumstances where people's brains have completely ceased all (electrical) activity, and we don't normally consider people who've gone through these processes to have been "destroyed" and then "recreated".
This can happen in both cold-water drowning and in a surgical procedure called deep hypothermic circulatory arrest. In both circumstances, a person's body temperature is brought below 20°C and their brain completely stops all electrical activity for ~30 minutes. When later brought out of this stat...
Thanks for the effortful post, Andy! I agree so strongly with the importance of exploring this topic that I am halfway through writing a book on the subject. I'll respond to the technical points first, then the ethical ones.
Regarding some of the technical points:
Holden, have you had a look at the Terra Ignota series by Ada Palmer? It's one of the better explorations of a not-quite-Utopia-but-much-better-than-our-world that I've come across, and it certainly contains a large degree of diversity. It also doesn't escape being alien, but perhaps it's not so alien as to lose people completely. My one caveat is that it comprises four substantial books, so it's quite the commitment to get through if you're not doing it for your own leisure.
This is an interesting essay, but I think it's norms and outcome probabilities that really drive the difference in intuition between the two cases, rather than a difference in what matters to the victim or an omission/commission distinction, and the essay doesn't focus on them.
* In case 1, Maria is imminently dying and Wilfred is imminently dying. Both need to make it to the hospital to live. In the real world, this means both have a pretty good chance of dying - medical care isn't that great, there's no guarantee either will survive even if they make it. If Maria's ...
I go back and forth between person-affecting (hedonic) consequentialism and total (hedonic) utilitarianism on about a six-monthly basis, so I sure understand what you're struggling with here.
I think there's a stronger intuition that can be made to argue for a person-affecting view though, which is that the idea of standing 'outside of the universe' and judging between worlds A, B, and C is entirely artificial and impossible. In reality, no moral choices that impact axiological choices can be made outside of a world where agents already...
I accept the first premise for the same reason as I'd accept the second premise: positive or negative wellbeing is, axiomatically, better or worse than no experience at all.
I don't need to reason as to why having happy feelings is better than feeling neutral; it just is, in an immediate sense.
I struggle to understand why you don't believe there should be symmetry between positive and negative experiences. I understand that it may be easier to achieve higher-magnitude negative feelings (e.g. it's easier to torture someone than to make them ecstatic), but given experiences of symmetric magnitude, why don't they have the same relevance with respect to not existing?
Thanks! The link to Ara & Brazier (2010) is particularly helpful, as Figure 1 contains the information I need to calculate it for at least a UK citizen.
UK life expectancy is ~80. Eyeballing the figure suggests those under 30 accrue ~0.95 QALYs/year, while those from 30–80 accrue ~0.85. Putting that together (30 × 0.95 + 50 × 0.85) would suggest ~71 undiscounted QALYs, which agrees with your estimate of 70.
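For anyone who wants to check or tweak the arithmetic, here's a minimal sketch of the calculation. The utility weights and the 30-year cutoff are my eyeballed assumptions from the figure, not exact values from the paper:

```python
# Back-of-envelope undiscounted QALY estimate for a UK citizen.
# Weights are eyeballed from Ara & Brazier (2010), Figure 1 -- assumptions, not exact figures.
UTILITY_UNDER_30 = 0.95   # assumed QALYs accrued per year, ages 0-29
UTILITY_30_TO_80 = 0.85   # assumed QALYs accrued per year, ages 30-79
LIFE_EXPECTANCY = 80      # approximate UK life expectancy in years

total_qalys = 30 * UTILITY_UNDER_30 + (LIFE_EXPECTANCY - 30) * UTILITY_30_TO_80
print(total_qalys)  # 71.0
```

Swapping in different weights or a different life expectancy is just a matter of changing the constants.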
I'm aware that this is an extremely crude and rough way of doing things, but it's still a helpful sanity check for the problem I'm currently working on. Thanks again!