I think I basically agree that if someone can identify a way to reduce extinction risk by 0.01% for $100M-1B, then that would be a better use of marginal funds than the direct effects of brain preservation.
Great post. I fully agree that this seems to be a worthwhile area of funding. Although it was written too soon to be included in the Open Phil prize, I wrote a post on a similar topic here: https://forum.effectivealtruism.org/posts/sRXQbZpCLDnBLXHAH/brain-preservation-to-prevent-involuntary-death-a-possible
I wonder if the EA community feels it has already spent too many "weirdness points" on other areas -- mainly AGI x-risk alignment research -- and doesn't want to distribute them elsewhere. Evidence for this would be that other new cause areas criticized as "sci-fi", or discounted via the absurdity heuristic, are also selected against; evidence against it would be the opposite.
It's also possible that the EA community doesn't think it's a very good idea for technical reasons, although in that case, you would at least expect to see arguments against it or research funded into whether it could work.
Hi Jeremy, as far as I can tell, nearly all of the QALYs depend on the idea that it's better to extend someone's life than to replace them with a new person, because by the time revival is possible, we will likely be able to create new people at will. (This assumes that society does not decide to stop creating more people before reaching Malthusian limits.)
Basically, we get rapidly into population ethics if you want to debate whether lives are fungible. As Ariel points out elsewhere in the comments - I was not aware of this connection, but it seems fruitful - "Deciding whether lives are fungible is a key part of the debate between 'person-affecting' and 'total' utilitarians, and as of-yet unsettled as I see it in the EA community."
To me, the idea that humans are fungible and that it doesn't matter if someone dies because we can just create a new person goes so strongly against my altruistic intuitions that the whole notion is difficult to think about. There is a reason that similar reasoning leads to the repugnant conclusion.
This is part of why I said "I think the field may be among the most cost-effective ways to convert money into long-term QALYs, given certain beliefs and values"; the idea that humans are not fungible is one of those values. I'm not sure how to calculate the QALYs without assuming that value. I don't think it's possible to quantify the "sadness". Do you have any ideas?
Hi Peter, I agree with you that right now there are not any obvious high-value ways to donate money to this area. Although as I just wrote in a comment elsewhere in this thread, I am hoping to do more research on this question in the future, and hopefully others can contribute to that effort as well.
I also agree with you that the history of cryonics suggests it's hard to get people to sign up. But, I do think that the cost of signing up is an obvious area where interventions can be made. My understanding is that the general public's price sensitivity has not really been tested very thoroughly.
Thanks for your interest in this topic!
I agree with you that it is hard as an outsider to tell what the current scope of the situation is regarding the need for more funding. This post was more of a high-level overview of the problem to see whether people agreed with me that this was a reasonable cause area for effective altruism.
Since it seems that a good number of people do agree (please tell me if you don't!), I am hoping to work on the practical area more in the future. For now, I don't think I know enough to publicly say with any confidence whether I think that any particular organization could benefit from more EA-level funding. If pressed, my guess is that the most important thing would be to get more researchers and people in general interested in the field.
I also agree with you about the chicken-and-egg problem of lack of interest and lack of quality of the service. One approach is to start locally, rather than trying to achieve high-quality preservation all over the world. This makes things much cheaper. An obvious problem with the local approach is that any local area may not have enough people interested to get the level of practical feedback needed, although this also can be addressed.
Thanks for the kind feedback!
The main counter-argument to the idea that there is limited space is that in the future, if humanity ever progresses to the point that revival is possible, then we will almost certainly not have the same space constraints we do now. For example, this may be because of whole brain emulation and/or because we have become a multi-planetary species. Many people, myself included, think that there is a high likelihood this will happen in the next century or sooner: https://www.cold-takes.com/most-important-century/
There is also an argument that we actually do not have limited space or resources on the planet now. For example, this was explained by Julian Simon: https://en.wikipedia.org/wiki/The_Ultimate_Resource. But that is a little bit more controversial and not necessary to posit for the sake of counter-argument, in my opinion.
A related question is: what is the point of (a) extending an existing person's life when you could just (b) create a new person instead? I think (a) is much better than (b), because of what I described as "the psychological and relational harms caused by involuntary death" in the post. But others might disagree; it depends on whether they think that humans are replaceable or not.
There is also a discussion about this on r/slatestarcodex that you might be interested in: https://www.reddit.com/r/slatestarcodex/comments/tk2krv/brain_preservation_to_prevent_involuntary_death_a/i1o2s1d/
As @Ariel_ZJ wrote, it is already possible for brain activity to fully cease and then restart, and people don't typically think that they were "destroyed" and "recreated" after that.
With some revival strategies, such as whole brain emulation, some people are concerned about a "copy problem", because it would not be the same atoms/molecules instantiated, just the same patterns. Personally, I don't think that the copy problem is an actual concern, for reasons explained here: https://www.brainpreservation.org/content-2/killed-bad-philosophy/
My expectation is that in the future, with anti-aging technology or whole brain emulation, aging will not significantly add to the marginal cost of providing another year of life.
Does this address your hesitation? I'm not sure if you're referring to something else.
Thanks for your kind comments! Much appreciated.
I agree that brain preservation could potentially be cost-saving for healthcare systems if combined with medical aid in dying and people were interested in this rather than pursuing painful care that is likely futile. However, my guess is that healthcare systems in general are not very cost-efficient from an effective altruism perspective, so it's hard to see how this would affect overall QALYs.
I asked GPT-3 your question 10 times. Answers:
- Hitler 7
- Judas Iscariot 1
- Napoleon Bonaparte 1
- Genghis Khan 1
I then tried to exclude Hitler by saying "Aside from Adolf Hitler" and asked this 10 times as well (some answers gave multiple people). Answers:
- Stalin 5
- Mao Zedong 3
- Pol Pot 2
- Christopher Columbus 1
- Bashar al-Assad 1
The answer to the bonus question is basically always of the form: "The obvious counterfactual to this harm is that Stalin never came to power, or that he was removed from power before he could do any damage. The ideal counterfactual is that Stalin never existed. As for what an ambitious, altruistic, and talented person at the time could have done to mitigate this harm, it is difficult to say. More hypothetically, an EA-like community could have worked to remove Stalin from power, or to prevent him from ever coming to power in the first place."
Not sure how helpful this is, but perhaps it is interesting to get a sense of what the "typical" answer might be.
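For what it's worth, the sample-and-tally procedure above is easy to script. This is a minimal sketch: `tally_answers` and the replayed answer list are hypothetical stand-ins (the real version would call the GPT-3 API in place of the stub), and the replay just reproduces the first run's reported distribution.

```python
from collections import Counter

def tally_answers(sample_fn, n=10):
    """Ask the same question n times and count the distinct answers."""
    return Counter(sample_fn() for _ in range(n))

# Stub standing in for an actual GPT-3 API call, replaying the
# distribution reported above for the first run of 10 samples.
_replay = iter(["Hitler"] * 7
               + ["Judas Iscariot", "Napoleon Bonaparte", "Genghis Khan"])
counts = tally_answers(lambda: next(_replay))
print(counts.most_common())
```

Running many samples and looking at `most_common()` gives a rough sense of the model's "typical" answer rather than relying on a single completion.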