
Summary

I've written a book called The Future Loves You: How and Why We Should Abolish Death. It makes the case for providing the terminally ill with access to preservation procedures capable of putting them into a kind of stasis, with the expectation that future medical advances will be capable of restoring them to health. The book argues that this is a reasonable position from neuroscientific, medical, and philosophical standpoints, and also explores the related ethical, social, and economic complications.

It will be out in hardback/ebook/audiobook format on November 28.

Book Overview

Ask people how long they'd like to live (with no further qualification), and most say they'd like at least ten years longer than they're statistically likely to get. Specify that these years would be spent in good mental and physical health, and the majority say 150+ years. Go so far as to survey the remaining desire for life among the terminally ill, and you will find that around 70% of them still hold a strong will to live right up until the bitter end.

People do not get to live as long and as healthily as they would like. The 60 million human deaths that occur each year are no less tragic for their normality.

These innumerable personal tragedies unfold because, even though medicine has developed greatly between the 1920s and the 2020s, it is still far from maturity. Sure, we should be grateful for the many advances that have helped global life expectancy more than double, from thirty-two in 1900 to seventy-two today. But, even so, the breakthroughs are not coming fast enough to give everyone as much time as they might want. Our grandparents, our parents, our friends, and sometimes even our children are still dying all around us. Metastatic cancer, dementia, end-stage heart failure, and every other invariably fatal disease stand as testimony to the fact that one day we, and everyone we love, will die.

Some disagree with this pessimism. For example, take people who pursue cryonics, the practice of having one’s body pumped full of antifreeze and suspended upside down in a flask of liquid nitrogen. Motivated by what they see as a clear historical trend of medical progress, its adherents believe that freezing the bodies of the clinically dead will allow for their eventual resurrection when more advanced technology one day becomes available. 

Still, the problem is that cryonics as practised has many of the hallmarks of a pseudoscience or cult, including: quasi-religious claims of an afterlife for those who perform arcane rituals, a lack of engagement with mainstream medicine or endorsement by esteemed scientists, and large upfront payments required for services that have no guarantee of success. None of these features inspire confidence that the frozen clients of cryonics will ever see another spring.

But even so, the fact that the claims of cryonicists are unsubstantiated doesn’t make the core idea of preserving the dying to enable their future revival fundamentally unsound. Within this science fiction is a kernel of truth: with sufficient understanding of how the brain enables a person to be who they are, it might be possible to place a dying individual in a state from which they could one day be revived. For it to actually work, though, we would need answers to two critical questions:

  1. How exactly does the human brain enable a person to be who they are?
  2. How does the brain decay during death?

Armed with answers to both of these, it’s possible that we really could devise a way to halt the decay of brain structures crucial to personhood, and keep someone in indefinite stasis until sufficient medical advances could one day restore them to health. Cryonicists likely fail here, as their methods harm the brain's structural integrity and do visible damage to its crucial circuitry. But perhaps neuroscientists could do better?

Certainly, in the past decade, it has become clear that neuroscience can offer sufficient answers to the two critical questions about personhood and death, even to the point of being able to directly manipulate the relevant brain structures. Neuroscientists now know how to erase, insert, and force the recall of specific memories. Doctors are increasingly providing prosthetic implants to functionally replace portions of the brain. The last few years have even seen the successful development of procedures arguably capable of perfectly preserving a human brain.

Perhaps, then, it is time to seriously evaluate the potential of a brain preservation procedure to buy more time for those we would otherwise consider beyond medical help. At the very least, the required starting assumption – that a person is to a large degree defined by the unique structure of their brain – is now entirely uncontroversial among neuroscientists and doctors. 

My book is a report of just such an investigation. And in the end, I conclude that:

  • Aldehyde-stabilised cryopreservation, a technique that already exists today, provides a credible possibility of indefinitely delaying death.
  • It does so at a marginal cost of ~$10,000/procedure and ~$1,000/year in storage, well below the $50,000/QALY threshold used in many developed countries and arguably competitive with GiveWell's top recommendations.
  • Revival of individuals will likely only occur in a flourishing future, and having a personal stake in the future going well is naturally synergistic with longtermist thinking.
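The cost-effectiveness claim above can be sanity-checked with a back-of-the-envelope calculation: total expected cost divided by expected QALYs gained. The sketch below uses the procedure and storage costs stated above, but the revival probability, storage duration, and healthy years regained are purely illustrative placeholders, not figures from the book.

```python
# Back-of-the-envelope cost-per-QALY sketch for brain preservation.
# Procedure and storage costs are from the post; the other parameters
# (storage duration, revival probability, QALYs regained) are placeholders.

def cost_per_qaly(procedure_cost, storage_per_year, storage_years,
                  p_revival, qalys_if_revived):
    """Expected cost per quality-adjusted life year gained."""
    total_cost = procedure_cost + storage_per_year * storage_years
    expected_qalys = p_revival * qalys_if_revived
    return total_cost / expected_qalys

# Example: $10,000 procedure, $1,000/year storage for 100 years,
# a 10% chance of successful revival, and 40 healthy years regained.
estimate = cost_per_qaly(10_000, 1_000, 100, 0.10, 40)
print(f"${estimate:,.0f} per QALY")  # $27,500 per QALY under these assumptions
```

Under these placeholder assumptions the estimate lands below the $50,000/QALY threshold, but it is clearly sensitive to the assumed revival probability; chapter 8 of the book is where the actual health-economic analysis is carried out.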

Table of Contents

UK Book Tour

Pre-Order Now - Out November 28 - (obviously I can't make you, but I'd really appreciate it!)

UK (Hardcover, Ebook, Audiobook)

US (Audiobook) - Hardcover/Ebook links to come

Australia/New Zealand (Hardcover, Ebook, Audiobook)

Questioning assumptions: Why the EA community should lead in the debate on brain preservation (May 17 2024)

Brain preservation to prevent involuntary death: a possible cause area (Mar 22 2022)


Comments (4)



Congrats! Your table of contents seems very thoughtfully structured and piques my interest. I'm curious how you arrived at the marginal cost of ~$10,000/procedure and ~$1000/year in storage costs and QALY estimations, but I assume that will be discussed more in detail in the book. 

Also as a heads up, your link to the study about erasing memories leads to a server error (at least on my end).

Thanks! Yes, chapter 8 is essentially an overview of how QALY calculations are performed in health economics and how brain preservation techniques fare against other therapies. Lots more details there.

Weird that the erasure link isn't working for you, it works fine when I click on it? Either way, the paper is: https://www.nature.com/articles/nature15257 'Labelling and optical erasure of synaptic memory traces in the motor cortex'

Congrats! One way I've been thinking about this recently -- if we expect most people will permanently die now (usually without desiring to do so), but at some point in the future, humanity will "cure death," then interventions to allow people to join the cohort of people who don't have to involuntarily die could be remarkably effective from a QALY perspective. As I've argued before, I think that key questions for this analysis are how many QALYs individuals can experience, whether humans are simply replaceable, and what is the probability that brain preservation will help people get there. Another consideration is that if it could be performed cheaply enough -- perhaps with robotic automation of the procedure -- it could also be used for non-human animals, with a similar justification. 

Yeah, as I see it, the motivations to pursue this differ in strength dramatically depending on whether one's flavour of utilitarianism is more inclined to a person-affecting view or a total hedonic view.

If you're inclined towards the person-affecting view, then preserving people for revival is a no-brainer (pun intended, sorry, I'm a terrible person).

If you hold more of a total hedonic view, then you're more likely to be indifferent to whether one person is replaced for any other. In that case, abolishing death only has value in so far as it reduces the suffering or increases the joy of people who'd prefer to hold onto their existing loved ones rather than have them changed out for new people over time. From this perspective, it'd be equally efficacious to just ensure no-one cared about dying or attachments to particular people, and a world in which everyone was replaced with new people of slightly higher utility would be a net improvement to the universe.

Back in the real world though, outside of philosophical thought experiments, I suspect most people aren't indifferent to whether they or their loved ones die and are replaced, so for humans at least I think the argument for preservation is strong. That may well hold for great ape cousins too, but it's perhaps a weaker argument when considering something like fish?
