
Ariel_ZJ

60 karma · Joined Apr 2018

Comments (10)

Thanks! The link to Ara & Brazier (2010) is particularly helpful, as Figure 1 contains the information I need to calculate it, at least for a UK citizen.

UK life expectancy is ~80. Eyeballing the figure suggests those <30 accrue ~0.95 QALYs/year, while those from 30-80 accrue ~0.85. Putting that together (30 × 0.95 + 50 × 0.85) gives ~71 undiscounted QALYs, which agrees with your estimate of 70.
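
In case it's useful to anyone else, here's the same back-of-the-envelope calculation as a small Python sketch (the age bands and per-year QALY weights are just my eyeballed values from Figure 1, so adjust as needed):

```python
# Back-of-the-envelope undiscounted QALY estimate for a UK citizen.
# The age bands and per-year QALY weights are rough values eyeballed
# from Figure 1 of Ara & Brazier (2010), not exact figures.
age_bands = [
    (0, 30, 0.95),   # ages 0-30: ~0.95 QALYs accrued per life-year
    (30, 80, 0.85),  # ages 30-80 (~UK life expectancy): ~0.85 QALYs per life-year
]

total_qalys = sum((end - start) * weight for start, end, weight in age_bands)
print(f"Undiscounted lifetime QALYs: {total_qalys:.0f}")  # -> 71
```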

I'm aware that this is an extremely crude and rough way of doing things, but it's still helpful as a sanity check for the problem I'm currently working on. Thanks again!

A simple yet inspiring post, much like the good work that you have wrought. Good job, Henry!

As per usual, Scott Alexander has a humorous take on this problem here (you need to be an ACX subscriber).

But as a general response, this is why we need to try to develop an accepted theory of consciousness. The problem you raise isn't specific to digital minds; it's the same problem when considering any non-adult-human consciousness. Maybe most animals aren't conscious and their wellbeing is irrelevant? Maybe plants are conscious to a certain degree and we should be concerned with their welfare (they have action potentials, after all)? Open Philanthropy also has a report on this issue.

For the moment, the best we've got for determining whether a system is conscious is:

  • Subjective reports (do they say they're conscious?)
  • Analogous design to things we already consider conscious (i.e. adult human brains)

The field of consciousness science is very aware that this is not a great position to be in. Here's a paper on how this leads to theories of consciousness being either trivial or non-falsifiable. Here's a debate between various leading theories of consciousness that is deeply unsatisfying in providing a resolution to these issues.

Anyway, this is just to say that a great many problems stem from 'we don't know which systems are conscious', and sadly we're not close to having a solution.

The short reply to this is that there are already circumstances where people's brains have completely ceased all (electrical) activity, and we don't normally consider people who've gone through these processes to have been "destroyed" and then "recreated".

This can happen in both cold-water drowning and in a surgical procedure called deep hypothermic circulatory arrest. In both circumstances, a person's body temperature is brought below 20°C and their brain completely stops all electrical activity for ~30 min. When later brought out of this state, people retain their memories and sense of personal identity. Nobody typically treats these people as 'mere copies' of their previous selves.

Anyway, it's a reasonable question and not a "non-issue", but this and other considerations make it seem much less problematic. Another consideration is that over time you replace essentially all the components of your body through consumption and excretion, so survival can't be based purely on physical continuity either.

Thanks for the effortful post, Andy! I agree so strongly with the importance of exploring this topic that I'm halfway through writing a book on the subject. I'll respond to the technical points first, then the ethical ones.

Regarding some of the technical points:

  • Cryopreservation with cryoprotective agents, but without prior aldehyde fixation, produces unavoidable brain shrinkage of around 50%. Although it's possible that all important structural and biochemical information survives this shrinkage, it's very plausible that critical synaptic connection information will be lost to non-uniform shrinkage, tearing, or receptor hyperconcentration. It's perhaps better than no option at all, but aldehyde-stabilised cryopreservation provides a much higher guarantee of successful information preservation.
  • The long-term storage cost of an aldehyde-preserved brain is currently an open question, as we're unclear on how cold a brain has to be kept to prevent lipid drift over the long run. You do need to keep the brain below room temperature, as otherwise the lipids in the neuronal cell membranes will slowly drift and obscure the synaptic information. However, it might be possible to slow this sufficiently at only -20°C or so, which would make storage much cheaper than the current requirement of -135°C.
  • The point of information-theoretic death relative to current legal death very much depends on the condition of the patient during their final dying phase. For someone who suffers a sudden unexpected cardiac arrest, but who was otherwise healthy previously, evidence from the time to synapse degradation following loss of blood supply suggests it may be around 21 hours. For those who already had extensive health issues including poor circulation and liver failure, it may be much sooner. If you're interested, see my sample chapter below on 'What is Death?'

Regarding the ethical points, I mostly just agree with your comments. Deciding whether lives are fungible is a key part of the debate between 'person-affecting' and 'total' utilitarians, and as I see it that debate remains unsettled in the EA community. Even if one takes the total view, though, your points that 1) 'people don't like dying' and 3) 'it might improve their long-term planning' are very compelling.

I also strongly agree with Robin Hanson's comment that the current paucity of uptake both reduces the chances of neuropreservation being successfully implemented (due to a lack of robust infrastructure and auditing) and makes everything far more expensive (due to a lack of economies of scale). I'm fairly certain that at mass scale the preservation procedure could be done for <$5,000 and that storage would cost only a few dollars per year, meaning it would certainly be a competitive intervention; see the rough sketch below.
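
To make that concrete, here's a rough lifetime-cost sketch under those figures (the 100-year storage duration and the $5/year storage rate are hypothetical assumptions of mine, purely for illustration):

```python
# Rough lifetime cost sketch for mass-scale neuropreservation.
# The <$5,000 procedure figure and few-dollars-per-year storage figure
# come from the comment above; the exact storage rate and duration are
# hypothetical assumptions for illustration only.
procedure_cost = 5_000        # one-off preservation procedure, USD
storage_cost_per_year = 5     # long-term storage, USD per year (assumed)
years_in_storage = 100        # hypothetical: one century of storage

total_cost = procedure_cost + storage_cost_per_year * years_in_storage
print(f"Total cost over {years_in_storage} years: ${total_cost:,}")  # -> $5,500
```

Even over a century, storage barely moves the total; on these assumptions the one-off procedure dominates the cost.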

If any comment readers are interested in reading a bit more on this, here are two sample chapters from my upcoming book: '1. Why Don't We Get More Time?' and '6. What Is Death?'. Also, I'm currently in the process of looking for a literary agent to get a publishing house to take up my book proposal, so please PM me if you know any agents who would be interested.

Holden, have you had a look at the Terra Ignota series by Ada Palmer? It's one of the better explorations of a not-quite-Utopia-but-much-better-than-our-world that I've come across, and it certainly contains a large degree of diversity. It also doesn't escape being alien, but perhaps it's not so alien as to lose people completely. My one caveat is that it comprises four substantial books, so it's quite the commitment to get through if you're not reading it for your own leisure.

This is an interesting essay, but I think what really drives the difference in intuition between the two cases is norms and outcome probabilities (which the essay doesn't focus on), rather than a difference in what matters to the victim or an omission/commission distinction.

 * In case 1, Maria and Wilfred are both imminently dying, and both need to make it to the hospital to live. In the real world, this means both have a pretty good chance of dying: medical care isn't that great, and there's no guarantee either will survive even if they make it. If Maria's already the one in the car, it might as well be her that makes it, as getting her to pull over and switch out for Wilfred probably just increases the chance that neither survives.

 * In case 2, Maria is imminently dying but Wilfred is not. Given that Maria has a pretty good chance of dying soon anyway (medical care isn't that great), while Wilfred will be fine if he can just get off the road (which seems eminently plausible), it seems like a bad choice from a consequentialist perspective to kill Wilfred to try to save Maria.

To reframe the case in a way that doesn't lead to such strong intuitions:

 * Case A: Maria and Wilfred are both dying of thirst on an island, but they know rescue will arrive in a few days. There is enough water to sustain one of them, but not both, for that time (if they split the water, they will both die). Maria and Wilfred are matched in every way that would predict life expectancy. Maria already has the water bottle. I have no intuition about who should get the water, but if Maria already has it, it doesn't seem unreasonable for her to be the one to live.

 * Case B: Same as Case A, but this time Wilfred has the water bottle among his possessions but hasn't realised it. Maria notices and wants to take the bottle and drink it to survive. Again, I don't have the intuition that it's unreasonable for her to do this and be the one to live.

In general, I think what guides intuitions in these scenarios is norms and outcome probabilities sneaking in, even when we're supposed to assume these people live in a frictionless (social) vacuum with no broader consequences for society from their actions. Reframing the situation often leads to a different intuition, which implies something fishy is going on...

I go back and forth between person-affecting (hedonic) consequentialism and total (hedonic) utilitarianism on about a six-monthly basis, so I sure understand what you're struggling with here.

I think there's a stronger case to be made for a person-affecting view, though, which is that the idea of standing 'outside of the universe' and judging between worlds A, B, and C is entirely artificial and impossible. In reality, no moral choice with axiological consequences can be made outside a world where agents already exist, so it's possible to question the fundamental assumption that comparisons between possible worlds are even legitimate, rather than only comparisons between 'the world as it is' and 'the world as it would be if these different choices were made'. From that perspective, it's possible to imagine a world with one singular happy individual and a world with a billion happy individuals being literally incomparable.

I ultimately don't buy this, as I do think divergence between axiology and morality is incoherent and that it is possible to compare possible worlds. I'm just very uncomfortable with the resulting 'obligation to procreate', but I'm less uncomfortable with that than with the claim that a world with a billion happy people is incomparable to a world with a single happy person (or indeed a world full of suffering).
