All of Ariel_ZJ's Comments + Replies

Thanks! The link to Ara & Brazier (2010) is particularly helpful, as Figure 1 contains the information I need to calculate it for at least a UK citizen. 

UK life expectancy is ~80. Eyeballing the figure suggests those <30 accrue ~0.95 QALYs/year, while those from 30-80 accrue ~0.85. Putting that together would suggest ~71 undiscounted QALYs, which agrees with your estimate of 70.
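For anyone who wants to reproduce the arithmetic, here it is as a minimal Python sketch. The 0.95 and 0.85 weights are my eyeballed values from Figure 1, not published figures, so treat the result as a rough sanity check only:

```python
# Back-of-the-envelope undiscounted lifetime QALY estimate for a UK citizen.
# Weights are eyeballed from Ara & Brazier (2010), Figure 1 - rough values only.
life_expectancy = 80       # years
weight_under_30 = 0.95     # approx. QALYs accrued per year, ages 0-29
weight_30_plus = 0.85      # approx. QALYs accrued per year, ages 30-79

lifetime_qalys = 30 * weight_under_30 + (life_expectancy - 30) * weight_30_plus
print(lifetime_qalys)      # 28.5 + 42.5 = 71.0 undiscounted QALYs
```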

I'm aware that this is an extremely crude and rough way of doing things, but it's still helpful as a sanity check for the problem I'm currently working on. Thanks again!

A simple yet inspiring post, much like the good work that you have wrought. Good job Henry!

As per usual, Scott Alexander has a humorous take on this problem here (you need to be an ACX subscriber).

But as a general response, this is why we need to try to develop an accepted theory of consciousness. The problem you raise isn't specific to digital minds; it's the same problem whenever we consider non-adult-human consciousness. Maybe most animals aren't conscious and their wellbeing is irrelevant? Maybe plants are conscious to a certain degree and we should be concerned with their welfare (they have action potentials, after all)? Open Philanthropy also ... (read more)

tobytrem · 2y
Yep- I was going to have a 'what we should do' section and then realised that I had nothing very helpful to say. Thanks for those resources, I'll check them out. 

The short reply to this is that there are already circumstances in which a person's brain completely ceases all (electrical) activity, and we don't normally consider people who've gone through these processes to have been "destroyed" and then "recreated".

This can happen in both cold-water drowning and in a surgical procedure called deep-hypothermic circulatory arrest. In both circumstances, a person's body temperature is brought below 20 °C and their brain completely stops all electrical activity for ~30 min. When later brought out of this stat... (read more)

Thanks for the effortful post Andy! I agree so strongly with the importance of exploring this topic that I am halfway through writing a book on the subject. I'll respond to the technical points first, then the ethical ones.

Regarding some of the technical points:

  • Cryopreservation with cryoprotective agents, but without prior aldehyde fixation, produces unavoidable brain shrinkage of around 50%. Although it's possible that all important structural and biochemical information survives this shrinkage, it's very plausible that critical synaptic connection inform
... (read more)

Holden, have you had a look at the Terra Ignota series by Ada Palmer? It's one of the better explorations of a not-quite-Utopia-but-much-better-than-our-world that I've come across, and it certainly contains a large degree of diversity. It also doesn't escape being alien, but perhaps it's not so alien as to lose people completely. My one caveat is that it comprises four substantial books, so it's quite the commitment to get through if you're not reading it for your own leisure.

This is an interesting essay, but I feel it overlooks norms and outcome probabilities, which are what really drive the difference in intuition between the two cases, rather than a difference in what matters to the victim or an omission/commission distinction.

 * In case 1, both Maria and Wilfred are imminently dying, and both need to make it to the hospital to live. In the real world, this means both have a pretty good chance of dying - medical care isn't that great, and there's no guarantee either will survive even if they make it. If Maria's ... (read more)

I go back and forth between person-affecting (hedonic) consequentialism and total (hedonic) utilitarianism on about a six-monthly basis, so I sure understand what you're struggling with here.

I think there's a stronger argument that can be made for a person-affecting view, though, which is that the idea of standing 'outside of the universe' and judging between worlds A, B and C is entirely artificial and impossible. In reality, no moral choices that impact axiological outcomes can be made outside of a world where agents already... (read more)

Answer by Ariel_ZJ · Aug 21, 2019

I accept the first premise for the same reason as I'd accept the second premise - positive or negative wellbeing is, axiomatically, better or worse than no experience at all.

I don't need to reason as to why having happy feelings is better than feeling neutral - it just is in an immediate sense.

I struggle to understand why you don't believe there should be symmetry between positive and negative experiences. I understand that it may be easier to achieve negative feelings of higher magnitude (e.g. it's easier to torture someone than to make them ecstatic), but given experiences of symmetric magnitude, why don't they have the same relevance with respect to non-existence?

Tom_Alps · 5y
I do accept that a pleasurable experience is better than a neutral experience; I'm not a negative Utilitarian. I just don't think that pleasurable experiences are preferable to nonexistence. For example, I would prefer a universe with a million happy people to a universe with a million neutral people. But, I'd be ambivalent about choosing between a universe with a million happy people and a universe with no people. (I would prefer a universe with no people to a universe with a million suffering people.) I don't have a "good" reason for treating positive and negative experiences asymmetrically when it comes to nonexistence - it's simply my intuition.