In effective altruism, anti-aging research is usually discussed as an exclusively short-term, human-focused cause area. Most existing discussion of anti-aging focuses on the direct effects that would result if effective therapies were released: we would get longer, healthier lifespans.
However, it seems reasonable to think that profound structural changes would occur at all levels of society if aging were cured, especially if this happened before something more transformative, such as the creation of superintelligent AI. The effects would likely go beyond those mentioned in this prior piece, and I think anything with the potential for "profound social changes" merits discussion in its own right, independent of the direct effects. Here I discuss both negative and positive aspects of anti-aging research: even if the effects of anti-aging turn out to be negative, that is itself a reason to think about it.
Many effective altruists have focused their attention on electoral reform, governance, economic growth, and other broad societal interventions. The usual justification for research of this kind is its potential for large flow-through effects beyond the straightforward, visible moral arguments.
I think this argument is reasonable, but if you buy it, you should also think that anti-aging has been neglected. Even among short-term, human-focused cause areas, it is striking how little attention anti-aging receives. For instance, searching Open Philanthropy's grants database for "aging" and comparing it to criminal justice reform (both conceived of as short-term, human-focused cause areas) reveals that aging research has captured $7,672,300 in funding, compared to $108,555,216 for criminal justice reform.
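To put that funding gap in perspective, here is the ratio implied by the two dollar figures above (the figures come from the grants database as quoted; the comparison itself is just my own arithmetic):

```python
# Open Philanthropy funding totals quoted above (USD)
aging = 7_672_300
criminal_justice_reform = 108_555_216

# Criminal justice reform has received roughly 14x as much funding as aging research
ratio = criminal_justice_reform / aging
print(f"{ratio:.1f}")  # 14.1
```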
Pablo Stafforini has proposed one explanation for this discrepancy:
Longevity research occupies an unstable position in the space of possible EA cause areas: it is very "hardcore" and "weird" on some dimensions, but not at all on others. The EAs in principle most receptive to the case for longevity research tend also to be those most willing to question the "common-sense" views that only humans, and present humans, matter morally. But, as you note, one needs to exclude animals and take a person-affecting view to derive the "obvious corollary that curing aging is our number one priority". As a consequence, such potential supporters of longevity research end up deprioritizing this cause area relative to less human-centric or more long-termist alternatives.
This explanation sounds right. However, it seems clear to me that the long-term indirect effects of anti-aging would be large if the field met with success any time soon. Therefore "weird" people can and should take it seriously. A success in anti-aging would likely mean:
- An end to both the healthcare system and the retirement system as we currently understand them, which in America are responsible for at least 22% of GDP, and probably much more once you account for everything people do to prepare for retirement.
- A very different population trajectory worldwide than the one that the United Nations and several other international bodies currently forecast.
- Substantially more rapid economic growth worldwide.
- An acceleration of environmental destruction and climate change (but also probably an acceleration of a solution).
- Faster intellectual progress as young people are able to work for many decades without mental decline.
- A big shift in attitudes surrounding the natural life cycle, including when it's appropriate to have children, whether it's acceptable for someone to die, the value of a human life, etc.
- A mental shift among both elites and regular citizens about the best way to prepare for the future. Here, I imagine that politicians and other elites would regularly talk about the future thousands of years hence, because it's reasonable to expect that people will be around that long.
- Slower value drift in deeply entrenched values and institutions. E.g., imagine if the people running the top 50 tech companies are pretty much the same people who ran them 30 years ago, and the older generations are more cognitively capable than the new rather than the other way around. (I am writing a post on the effects of this one; if anyone is interested, I will try to finish it.)
- An increase in social and economic inequality, absent massive reforms aimed at reducing such inequality.
But maybe there's a good reason why even longtermists don't always seem to be interested in anti-aging? Another explanation is that people have long timelines for anti-aging and have mostly concluded that it's not worth thinking seriously about right now. I actually agree that timelines are probably long, in the sense that I'm very skeptical of Aubrey de Grey's prediction of longevity escape velocity within 17 years. If you think that anti-aging timelines are long but AI timelines are short-to-medium, then it makes a lot of sense to focus on the latter.
But it also seems that timelines for anti-aging could quite easily be short if the field suddenly gains mainstream attention. Anti-aging proponents have historically given arguments for why they expect funding to pick up rapidly at some point; see, e.g., what happens in Nick Bostrom's fable of the dragon tyrant, or the Aubrey de Grey predictions I quoted in this Metaculus question (and keep in mind that, at the time of writing, Metaculus thinks there's a 75% chance of the question resolving positively!). Consistent with these predictions, funding has increased considerably in the last five years, though the prospect of curing aging still remains distant in mainstream thought.
To illustrate one completely made up scenario for short timelines, consider the following:
For the first few decades of the 21st century, anti-aging remained strictly on the periphery of intellectual thought. Most people, including biologists, did not give much thought to the idea of developing biotechnology to repair the molecular and cellular damage of natural aging, even though they understood that aging was a biological process that could in principle be reversed. Then, in the late 2020s, an unexpected success combining senolytics, stem cell therapy, and other treatments produces a lab mouse that lives many years beyond its natural lifespan. This Metaculus question resolves positively. Almost overnight, the field is flooded with multi-billion dollar grants to test the treatments on primates and eventually humans. While early results are not promising, in the mid-2030s a treatment is finally discovered that seems to work in humans and is predicted to reliably extend human lifespan by 5-10 years.
Then anti-aging becomes a political issue. People realize the potential of this technology and don't want to die, whether from lack of access or from waiting for it to be developed further. Politicians promise to give the treatment away for free and to put government money into researching better treatments, and economists concur, since doing so would reduce healthcare costs. By the early 2040s, a comprehensive suite of treatments shows further promise, and mainstream academics now think we are entering a life-expectancy revolution.
Of course, my scenario is extremely speculative, but it's meant to illustrate how quickly things can turn around.
Perhaps you still think that anti-aging is far away and there's not much we can do about it anyway. It's worth noting that this argument applies equally to climate change, since the biggest effects of climate change are more than 50 years away, and that field is neither neglected nor particularly tractable. Direct research on biotechnology to defeat aging is, of course, much more neglected than climate change.
If you don't think EAs should be talking about anti-aging, due to timelines or whatever, you should at least be explicit in your reasoning.
Am I missing something?
Also interested. I hadn't thought about it before, but since the old generation dying is one way scientific and intellectual changes come to be accepted, curing aging would probably have some big impact on our intellectual landscape and culture.
While the old generation dying is one way of getting scientific and intellectual change enacted, there are longer-term trends towards reduced gatekeeping that may reduce the cost of training (e.g., people proving they're scientifically competent WITHOUT having to go through the entire wasteful K-12 + PhD pipeline). This could inhibit the gatekeeper socialization effects by which the old generation prevents the new generation from feeling free to express itself without permission. (Programming, at the very least, is much less hierarchical, because people don't depend as much on the validation of an adviser or famous person to get their ideas out, or to gain access to resources critical to experimentation; the code just has to work.) Similarly, reductions in the cost of doing biological experiments could also inhibit this effect.
There are power dynamics associated with scientific training and scientific publishing (not to mention that the training seems to help scientists get through publishing, blind review be damned), and there are SOME trends towards funding people who do work without needing access to gatekeepers (look at trends in funding from Emergent Ventures, or the Patrick Collison network). I've also noticed that people are growing
It looks like this comment was cut off mid-sentence.
I'm also interested.
Anders Sandberg discusses the issue a bit in one of his conversations with Rob Wiblin for the 80k Podcast.
related, so posting just as a reference: https://axiomaticdoubts.wordpress.com/2020/03/22/how-would-anti-aging-medicine-change-the-world/
Will most of the (dis)value in the future come from nonbiological consciousness?
If I had to predict, I would say yes: ~70% chance that most suffering (or whatever other disvalue you have in mind) will exist in artificial systems rather than natural ones. But it's not actually clear whether this particular fact is relevant. Like I said, the effects of curing aging extend beyond the direct effects on biological life. In this sense, studying anti-aging can be just like studying electoral reform or climate change.
Thanks for this. As someone who worked in the ageing field and has been thinking about this for a while it's good to see more explicitly longtermist coverage of this cause area.
I've taken this as an opportunity to lay down some of my thoughts on the matter; this turned out to be quite long. I can expand and tidy this into a full post if people are interested, though it sounds like it would overlap somewhat with what Matthew's been working on. I haven't tried too hard to make this non-redundant with other comments, so apologies if you feel you've already covered something I discuss here.
TL;DR: I'm very uncertain about lots of things and think everyone else should be too; social-science research to address these uncertainties seems very valuable for both optimists and pessimists. That said, I'm still quite optimistic that life-extension will be net-positive from a long-termist perspective, assuming AI timelines are long.
Longevity, AI timelines, and high-level uncertainty
Longevity and technological progress
Belatedly, I'd also be very interested in seeing this become a full post!
Thanks for the bullet points and thoughtful inquiry!
I am very interested in a full post, as right now I think this area is quite neglected and important groundwork can be completed.
My guess is that most people who think about the effects of anti-aging research don't think very seriously about it, because they are either trying to come up with reasons to instantly dismiss it, or to instantly dismiss objections to it. As a result, most of the "results" we have about what would happen in a post-aging world come from the two sides of a very polarized arena. This is not epistemically healthy.
In wild animal suffering research, most people assume that there are only two possible interventions: destroy nature, or preserve nature. This sort of binary thinking infects discussions about wild animal suffering, as it prevents people from thinking seriously about the vast array of possible interventions that could make wild animal lives better. I think the same is true for anti-aging research.
Most people I've talked to seem to think there are only two positions you can take on anti-aging: throw our whole support behind medical biogerontology, or abandon it entirely and focus on other cause areas. This is crazy.
In reality, there are many ways we can make a post-aging society better. If we correctly forecast the impacts on global inequality (say), and we'd prefer inequality to go down in a post-aging world, then we can start talking now about ways to mitigate those effects. The idea that not talking about the issue, or dismissing anti-aging outright, is the best way to make these problems go away is a very common reaction that I cannot understand.
I'm currently writing a post about this, because I see it as one of the most important variables affecting our evaluation of the long-term impact of anti-aging. I'll bring forward arguments both for and against what I see as "value drift" slowed by ending aging.
Overall, I see no clear arguments for either side, but I currently think the "slower moral progress isn't that bad" position is more promising than it first appears. I'm actually quite skeptical of many of the arguments philosophers and laypeople have brought forward about the necessary role of generational death in moral progress.
And as you mention, it's unclear why we should expect better value drift from an aging population, given evidence that the aging process itself makes people more prejudiced and closed-minded in a number of ways.
I'm not sure it's all that crazy. EA is all about prioritisation. If something makes you believe that anti-ageing is 10% less promising as a cause area than you thought, that could lead you to cut your spending in that area by far more than 10% if it made other cause areas more promising.
I've spoken to a number of EAs who think anti-ageing research is a pretty cool cause area, but not competitive with top causes like AI and biosecurity. As long as there's something much more promising you could be working on it doesn't necessarily matter much how valuable you think anti-ageing is.
Now, some people will have sufficient comparative advantage that they should be working on ageing anyway: either directly or on the meta-level social-science questions surrounding it. But it's not clear to me exactly who those people are, at least for the direct side of things. Wetlab biologists and bioinformaticians could work on medical countermeasures for biosecurity. AI/ML people (who I expect to be very important to progress in anti-ageing) could work on AI safety (or biosecurity again). Social scientists could work on the social aspects of X-risk reduction, or on some other means of improving institutional decision-making. There's a lot competing with ageing for the attention of well-suited EAs.
I'm not saying ageing will inevitably lose out to all those alternatives; it's very neglected and (IMO) quite promising, and some people will just find it more interesting to work on than the alternatives. But I do generally back the idea of ruthless prioritisation.
Right, I wasn't criticizing cause prioritization. I was criticizing the binary attitude people have towards anti-aging. Imagine if people dismissed AI safety research by saying, "It would be fruitless to ban AI research; we shouldn't even try." That's what it often sounds like to me when people fail to think seriously about anti-aging research. They aren't even considering that there are other things we could do.
Thanks for this. Regarding moral and cultural progress, I think there is some research that suggests that this largely occurs through generational replacement.
Regarding the selfish incentives:
Potentially, but initially lifespan extension would be much more muted and would not give particularly strong selfish incentives for people to care about the long-term future. My sense is that this factor would initially be swamped by the negative effects of slower generational replacement on moral progress.
Thanks for this post, strongly upvoted. The amount of attention (and funding) aging research gets within EA is unbelievably low. That's why I wrote an entire series of posts on this cause-area. A couple of comments:
1) Remember: if a charity finances aging research, it hastens it rather than enabling it. Aging will be brought under medical control at some point; we can only influence when. This translates into the main impact factor: hastening the arrival of longevity escape velocity (LEV).
2) Now look again at your bulleted list of "big" indirect effects, and remember that you can only hasten them, not enable them. To me, this consideration makes the impact we can have on them seem like no more than a rounding error compared to the impact we can have via LEV: each year you bring LEV closer saves 36,500,000 lives of 1,000 QALYs each (a conservative estimate I made here).
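As a sanity check on that headline number, here is a back-of-the-envelope reconstruction of where it plausibly comes from (my own arithmetic, not taken from the linked estimate; the 100,000 daily figure is a commonly cited approximation of aging-related deaths):

```python
# Assumed: roughly 100,000 deaths per day are attributable to aging-related
# causes, so each year LEV is delayed costs about a year's worth of those deaths.
deaths_per_day = 100_000
lives_per_year_of_delay = deaths_per_day * 365
print(lives_per_year_of_delay)  # 36500000, matching the figure quoted above

# At the stated 1,000 QALYs per life saved, one year of hastening LEV is worth:
qalys_per_year_of_delay = lives_per_year_of_delay * 1_000
print(qalys_per_year_of_delay)  # 36500000000 QALYs
```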
Small correction: Aubrey de Grey estimates only a 50/50 chance of LEV within 17 years. This is also conditional on funding: before private money started pouring in five years ago, his estimate had been stuck for many years at a 50/50 chance of LEV within 20-22 years.
This isn't clear to me. In Hilary Greaves and William MacAskill's paper on strong longtermism, they argue that unless what we do now affects a critical lock-in period, most of what we do will "wash out" and have little impact on the future.
If a lock-in period never comes, then there's no compelling reason to focus on the indirect effects of anti-aging, and I'd agree with you that these effects are small. But if there is a lock-in period, then the lives directly saved by ending aging could be tiny compared to the lasting, billion-year impact that shifting to a post-aging society could have.
What a strong long-termist should mainly care about are these indirect effects, not merely the lives saved.
Do Long-Lived Scientists Hold Back Their Disciplines? It's not clear reducing cognitive decline can make up for this or the effects of people becoming more set in their ways over time; you might need relatively more "blank slates".
Similarly, a lot of moral progress is made because of people with wrong views dying. People living longer will slow this trend, and, in the worst case, could lead to suboptimal value lock-in from advanced AI or other decisions that affect the long-term future.
I think speciesism is one of the most individually harmful and most widespread prejudices, and we need a relatively larger percentage of the population to have grown up eating plant-based and cultured animal products to reduce speciesism, since animal product consumption seems to cause speciesism (and not just speciesism causing animal product consumption). For the long-term future, antispeciesism may translate to concern for artificial sentience, from which most of the value in the future might come. Of course, there are also probably more direct effects on concern for artificial sentience like this unrelated to speciesism.
In addition to what I wrote here, I'm also just skeptical that scientific progress decelerating in a few respects is actually that big a deal. The case where it would matter most is if medical doctors themselves held incorrect theories, or if engineers (such as AI developers) were using outdated ideas. In the first case, it would be ironic to avoid curing aging in order to stop doctors from using bad theories. In the second, I would have to do more research, but I'm still leaning skeptical.
I have another post in the works right now, and I actually take the opposite perspective. I won't argue it fully here, but I don't believe the thesis that humanity makes consistent moral progress thanks to the natural cycle of birth and death. There are many cognitive biases that make us think we do, though (such as the fact that most people who say this are young and disagree with their elders; but when you are old, you will disagree with the young. Who's correct?).
I think newer generations will tend to grow up with better views than older ones (although older generations could have better views than younger ones at any moment, because they're more informed), since younger generations can inspect and question the views of their elders, alternative views, and the reasons for and against with less bias and attachment to them. Curing aging doesn't cure confirmation bias or belief perseverance/the backfire effect.
This view assumes that moral progress is a real thing rather than an illusion. I could personally understand this point of view if younger generations shared the same terminal values and merely refined instrumental values, or became better at discovering logical inconsistencies, or something like that. However, it seems just as likely that what we call moral progress could be described as moral drift.
Personally, I'm a moral anti-realist. Morals are more like preferences and desires than science. Each generation has preferences, and the next generation has slightly different preferences. When you put it that way, the idea of fundamentally better preferences doesn't quite make sense to me.
More concretely, we could imagine several ways that future generations disagree with us (and I'm assuming a suffering reduction perspective here, as I have identified you as among that crowd):
I'm not trying to say that these are particularly likely outcomes, but it would seem strange to put full faith in a consistent direction of moral progress when nearly every generation before us has experienced the opposite, i.e., take any generation from prior centuries and they would hate what we value these days. The same will probably be true for you too.
I'm a moral anti-realist, too. You can still be a moral anti-realist and believe that your own ethical views have improved in some sense, although I suppose you'll never believe they're worse now than before, since you wouldn't hold them if you did. Some think of it as what you would endorse if you were less biased, had more information, and reflected more. I think my views are better now because they're more informed, but it's possible that I've been so biased in dealing with new information that my views are in fact worse now than before.
In the same way, I think the views of future generations can end up better than my views will ever be.
So I don't expect such views to be very common over the very long-term (unless there are more obstacles to having different views in the future), because I can't imagine there being good (non-arbitrary) reasons for those views (except the 2nd, and also the 3rd if future robots turn out to not be conscious) and there are good reasons against them. However, this could, in principle, turn out to be wrong, and an idealized version of myself might have to endorse these views or at least give them more weight.
I think where idealized versions of myself and idealized versions of future generations will disagree is due to different weights given to opposing reasons, since there is no objective way to weight them. My own weights may be "biases" determined by my earlier experiences with ethics, other life experiences, genetic predisposition, etc., and maybe some weights could be more objective than others based on how they were produced, but without this history, no weights can be more objective than others.
Finally, just in practice, I think my views are more aligned with those of younger generations and generations to come, so views more similar to my own will be relatively more prominent if we don't cure aging (soon), which is a reason against curing aging (soon), at least for me.
Sure. There are a number of versions of moral anti-realism. It makes sense for some people to think that moral progress is a real thing. My own version of ethics says that morality doesn't run that deep and that personal preferences are pretty arbitrary (though I do agree with some reflection).
Again, that makes sense. I personally don't really share the same optimism as you.
One of the frameworks I propose in my essay that I'm writing is the perspective of value fragility. Across many independent axes, there are many more ways that your values can get worse than better. This is clear in the case of giving an artificial intelligence some utility function, but it could also (more weakly) be the case in deferring to future generations.
You point to idealized values. My hypothesis is that letting everyone currently alive die and putting future generations in control is not a reliable idealization process. There are many ways I'd be OK with deferring my values to someone else, but I don't see how generational death is one of them.
By contrast, there are a multitude of human biases that make people have more rosy views about future generations than seems (to me) warranted by the evidence:
I personally think that the moral circle expansion hypothesis is highly important as a counterargument, and I want more people to study this. I am very worried that people assume that moral progress will just happen automatically, almost like a spiritual force, because well... the biases I gave above.
This makes sense if you are referring to the current generation, but I don't see how you can possibly be aligned with future generations that don't exist yet?
There are more ways, yes, but I think they're individually much less likely than the ways in which they can get better, assuming they're somewhat guided by reflection and reason. This might still hold once you aggregate all the ways they can get worse and, separately, all the ways they can get better, but I'm not sure.
I expect future generations, compared to people alive today, to be less religious, less speciesist, less prejudiced generally, more impartial, more consequentialist and more welfarist, because of my take on the relative persuasiveness of these views (and the removal of psychological obstacles to having these views), which I think partially explains the trends. No guarantee, of course, and there might be alternatives to these views that don't exist today but are even more persuasive, but maybe I should be persuaded by them, too.
I don't expect them to be more suffering-focused (beyond what's implied by the expectations above), though. Actually, if current EA views become very influential on future views, I might expect those in the future to be less suffering-focused and to cause s-risks, which is concerning to me. I think the asymmetry is relatively more common among people today than it is among EAs, specifically.
Again, I seem to have different views about the extent to which moral views are driven by reflection and reason. For example, is the recent trend towards Trumpian populism driven by reflection and reason? (If you think this is not a new trend, I'd ask you to point to previous politicians who share the values of the current administration.)
I agree with that.
This is also likely. However, I'm very worried that caring about farm animals doesn't imply an anti-speciesist mindset. Most vegans aren't concerned about wild animal suffering, and the primary justification most vegans give for their veganism comes from an exploitation framework (or an environmentalist one) rather than a harm-reduction framework. This might not transfer robustly to future sentience.
This isn't clear to me. From this BBC article: "Psychologists used to believe that greater prejudice among older adults was due to the fact that older people grew up in less egalitarian times. In contrast to this view, we have gathered evidence that normal changes [i.e. aging] to the brain in late adulthood can lead to greater prejudice among older adults." Furthermore, "prejudice" is pretty vague, and I think there are many ways young people are prejudiced without even realizing it (though of course this applies to old people too).
I don't really see why we should expect this personally. Could you point to some trends that show that humans have become more consequentialist over time? I tend to think that Hansonian moral drives are really hard to overcome.
The second reason is a good one (I agree that when people stop eating meat they'll care more about animals). The relative-persuasiveness argument seems weak to me, because I hold plenty of moral views that I find persuasive and yet they don't seem to be adopted by the general population. Why would we expect this to change?
It sounds like you are not as optimistic as I thought you were. Out of all the arguments you gave, I think the argument from moral circle expansion is the most convincing. I'm less sold on the idea that moral progress is driven by reason and reflection.
I also have a strong prior against positive moral progress relative to any individual parochial moral view, given what looks like strong historical evidence against it (the communists of the early 20th century probably thought everyone would adopt their perspective by now; the same goes for Hitler, alcohol prohibitionists, and many other movements).
Overall, I think there are no easy answers here and I could easily be wrong.
I don't really have a firm idea of the extent reflection and reason drives changes in or the formation of beliefs, I just think they have some effect. They might have disproportionate effects in a motivated minority of people who become very influential, but not necessarily primarily through advocacy. I think that's a good description of EA, actually. In particular, if EAs increase the development and adoption of plant-based and cultured animal products, people will become less speciesist because we're removing psychological barriers for them, and EAs are driven by reflection and reason, so these changes are in part indirectly driven by reflection and reason. Public intellectuals and experts in government can have influence, too.
Could the relatively pro-trade and pro-migration views of economists, based in part on reflection and reason, have led to more trade and migration, and caused us to be less xenophobic?
Minimally, I'll claim that, all else equal, if the reasons for one position are better than the reasons for another (and especially if there are good reasons for the first and none of the other), then the first position should gain more support in expectation.
I don't think short-term trends can usually be explained by reflection and reason, and I don't think Trumpian populism is caused by them; but I think the general trend throughout history is away from such tribalistic views, and the fact that there are basically no good reasons for tribalism might play a part in that, although not necessarily a big one.
That's a good point. However, is this only in social interactions (which, of course, can reinforce prejudice in those who would act on it in other ways)? What about when they vote?
We're talking about maybe 20 years of prejudice inhibition lost at most on average (so at worst about a third of adults at any given moment), but also a faster-growing proportion of people growing up without any given prejudice they'd need to inhibit in the first place, versus many extra people biased towards views they formed possibly hundreds of years ago. The average age in both cases should trend towards half the life expectancy, assuming replacement birth rates.
This judgement was based more on the arguments than on trends. That being said, I think social liberalism and social democracy are more welfarist, flexible, pragmatic and outcome-focused than most political views, and I think there's been a long-term trend towards them. Those further left are more concerned with exploitation and positive rights despite the consequences, and those further right are more concerned with responsibility, merit, property rights and rights to discriminate. Some of this might be driven by deference to experts and the views of economists, who seem more outcome-focused. This isn't something I've thought a lot about, though.
Maybe communists were more consequentialist (I don't know), but if they had been right empirically about the consequences, communism might be the norm today instead.
I actually haven't gotten a strong impression that most ethical vegans are primarily concerned with exploitation rather than cruelty specifically, but they are probably primarily concerned with harms humans cause, rather than harms generally that could be prevented. It doesn't imply antispeciesism or a transfer of concern to future sentience, but I think it helps more than it hurts in expectation. In particular, I think it's very unlikely we'll care much about wild animals or future sentience that's no more intelligent than nonhuman animals if we wouldn't care more about farmed animals, so at least one psychological barrier is removed.
On moral progress - I think it's highly plausible that future generations will not be okay with people dying due to natural causes in the same way that they're not okay with people dying from cancer or infectious diseases.
Eliminating aging also has the potential for strong negative long-term effects. Both of the ones I'm worried about are extensions of your point about eliminating long-term value drift. Without aging, autocrats could stay in power indefinitely, as it is often their eventual death that leads to the failure of their regimes. Given that billions worldwide currently live under autocratic or authoritarian governments, this is a very real concern.
Another potentially major downside is the stagnation of research. If Kuhn is to be believed, a large part of scientific progress comes not from individuals changing their minds, but from outdated paradigms being displaced by more effective ones. This one is less certain, as it's possible that knowing they have indefinite futures may lead to selection for people who are willing to change their minds. Both of these are cases where progress probably *requires* value drift.
Among the largest nations most relevant to the world (or with a disproportionate ability to shape what happens to the world relative to their ability to be shaped by other countries), this only applies to China and Russia, and it's unclear whether Xi or Putin strongly cares about immortality (and even if they did, the technology would be unlikely to arrive quickly enough to save them). Given that the next 100 years might be the most important in human history, this concern is largely bounded by what happens in the next 100 years, and there aren't many dictators in that position. It's also unlikely that China would become less autocratic/flexible even after Xi dies (the CCP will just have other ways to maintain its power, probably similar to how North Korea barely changed after Kim Jong-il died). When an autocrat's closest associates also die off over time, it can weaken the strong beliefs held by some of the previous generation, which might facilitate regime change.
I think this concern has the potential for strong downsides in the tails, but it's unclear whether it is strongly negative in the median case (given that we know who the most relevant dictators are, and there aren't many). Given the increasing power disparity between China and the West, what happens in China becomes uniquely important, so this concern may narrow down to whether the death of Xi's successor (and everyone in Xi's generation of the CCP) would significantly increase the chances of China transitioning away from the strongest downsides of autocracy or authoritarianism. (I believe a Chinese transition away from authoritarianism is unlikely no matter what, though the death of its autocrats over the next 100 years might increase China's chances of ultimately moving away from the most negative effects of authoritarian government, such as censorship of thoughtcrimes.) Conditioning everything on the far future could also time-localize (or impose an upper bound on) much of the "suffering" that comes from the mission of "transforming the identities of unreceptive people into Han Chinese" (e.g. people in Hong Kong now will most likely suffer in the present, but future people born in Hong Kong probably won't "suffer" as much from not having something tangible "taken away" from them), though what China is doing now with respect to stifling dialogue is certainly not making China's future more robust.
It's also possible that AI may ultimately improve social dialogue to the point where it helps the CCP get what it wants without feeling threatened if it relaxes some of its more draconian measures, such as censorship. I'm not sure that prolonging the lives of China's authoritarians is guaranteed to be a strong negative: the regime is certainly insensitive about what it's doing to Xinjiang/Tibet/Hong Kong (and possibly eventually Taiwan), but these issues are mostly happening now and will be unaffected by life extension in the future. What China is doing to increase its influence/power elsewhere will be done irrespective of who is in power, and it probably doesn't have a strong desire to "take over" other countries in the way that Hitler or Stalin did (ultimately, it is constrained more by what other countries can do to it than by the potential deaths of its dictators, unless it had an unusually powerful/effective/ruthless dictator, which I'm not sure it has).
It may turn out that anti-aging technology arrives at just the right time to save us from the worst of authoritarianism (given that we no longer have a Stalin or a Mao).
2021 edit: Though who knows, democracies can easily turn into authoritarian regimes, and all it takes is a single terrorist or bioterrorist attack that forces universal surveillance...
Agreed. One way you can frame what I'm saying is that I'm putting forward a neutral thesis: anti-aging could have big effects. I'm not necessarily saying they would be good (though personally I think they would be).
Even if you didn't want aging to be cured, it still seems worth thinking about it because if it were inevitable, then preparing for a future where aging is cured is better than not preparing.
I think this is real, and my understanding is that empirical research supports it. But the theories I have read also assume a normal aging process. It is quite probable that bad ideas stay alive mostly because their proponents are too old to change their minds. I know for a fact that researchers in their early 20s change their minds quite a lot, and so a cure for aging would also mean more of that.
As I wrote here, I think this could be due (in part) to biases accumulated by being in a field (and being alive) longer, not necessarily (just) brain aging. I'd guess that more neuroplasticity or neurogenesis is better than less, but I don't think it's the whole problem. You'd need people to lose strong connections, to "forget" more often.
Also, people's brains up until their mid 20s are still developing and pruning connections.
George Church is over 60, and I've heard some people refer to him as a "child", since he doesn't seem to strongly identify with strongly held beliefs or connections (he's also not especially attached to a particular identity). When I talked to him, he said he cares more about regeneration/rejuvenation, that is, maintaining the continuity of consciousness and the basic gist of his personality/mode of being, than about maintaining specific memories (regeneration/rejuvenation research may ultimately come down to replacing old parts of your brain or identity with new untrained tissue; this is where developmental biology/SCRB becomes especially relevant). In fact, he's unironically bullish about anti-aging therapies arriving in his lifetime.
I'm not convinced there is actually that much of a difference between the long-term crystallization of habits and natural aging, though I'm not qualified to say this with any sort of confidence. It's also worth being cautious about confidently predicting the effects of something like this in either direction.
There are some scientists who roamed around and never really crystallized (famous examples being Freeman Dyson and Francis Crick).