
The recent Oxford University Press anthology Essays on Longtermism: Present Action for the Distant Future includes a chapter titled “Prudential Longtermism” by the philosophers Johan E. Gustafsson and Petra Kosonen. There is a response to the essay that I think is obvious but that the authors didn’t anticipate.

Here’s the gist of what the essay says: normally, since humans live at most around 100 years, when I take an action that might directly affect my personal well-being, it only makes sense to think at most about 100 years minus my age into the future. But what if technologies such as rejuvenation biotechnology, cryonics, or mind uploading allowed me to live much longer? Then my actions might directly affect my personal well-being much further in the future. (When I’m thinking about how my actions affect me, philosophers call that prudential, in contrast with moral, which refers to how my actions affect others.)

The authors note that we don’t know whether these technologies will ever be invented or proven to work, and, if so, when. So, what practical implications does prudential longtermism have, if I accept the premise? It’s hard to imagine it having any, since the strategy of procrastination (or delay, or deferral) is so strong here. Within about 100 years minus my age, I will find out whether I’m going to die within a normal human lifespan. So, I can just wait and see.

The costs of not preparing, within the next 100 years minus my age, to live for many centuries or millennia longer than the typical life expectancy are presumably quite small. What actions that would affect my personal well-being over the next, say, 1,000 years couldn’t safely be delayed by about 100 years? By contrast, the costs of living as if I were going to live much, much longer than 100 years when I’m really not could be quite large. So, the logical strategy is to kick the can down the road 100 years minus my age.

Simply procrastinating thinking about prudential longtermism until it clearly becomes relevant is the common-sense thing to do. There’s barely any downside to procrastinating, but there is a big downside to not procrastinating, namely, wasting some irreplaceable part of my precious life. The fact that procrastinating (delaying, deferring) is the common-sense thing to do removes any practical implication prudential longtermism might have.

This post could end right here, since that fully encapsulates the main reason prudential longtermism is not practically important, but there are a few more things to say which unravel the conceptual problems at the heart of both prudential longtermism and moral longtermism.[1]

Life extension and “rolling longtermism”

Gustafsson and Kosonen raise the idea that prudential longtermism could imply I should be motivated to make sure healthy life extension biotechnologies are invented in time to benefit me by, say, donating to life extension research. But I don’t think that concept should be called “longtermism”. If my remaining life expectancy is less than 100 years, and if I want to add years onto that life expectancy, then, at the margin, we’re talking about outcomes within the next 100 years.

Sure, the logical endpoint of this is that I’ll care about outcomes affecting my personal survival and well-being indefinitely into the future, but that’s not longtermism. If you have kids and grandkids who you hope will have kids and grandkids of their own, and so on, as human beings have done since the beginning of time, this, too, implies a project that extends indefinitely into the future, for a theoretically unlimited number of generations. But this kind of “rolling longtermism” or “relay race longtermism” is not novel and didn’t have to be invented at Oxford University in 2017.[2] It has existed since the Stone Age. If the conclusion of this scholarship is that almost all humans have been longtermists since humans evolved in the African savannah, what was the point of any of it?

What if, hypothetically, “longevity escape velocity” — the point at which more than 1 year of life expectancy is added per year, due to continuous improvements in rejuvenation biotechnology — were achieved today? Would prudential longtermism have any practical implications then? What would they be? The problem here for prudential longtermism mirrors a similar problem for moral longtermism.

The version of “rolling longtermism” or “relay race longtermism” for moral longtermism is where we plan for, say, the next 100 years, with the intention of leaving the world in a state where people 100 years from now will be able to plan for their next 100 years, and so on — or, to put it another way, grandparents plan to give a good world to their grandchildren that will allow those grandchildren to give a good world to their own grandchildren, and so on. Unless we count plans for this rolling window of time as longtermism — in which case, longtermism has always been the status quo (after all, the term “longtermism” was coined by philosophers at a university that is nearly 1,000 years old) — it is hard to think of what moral longtermism implies we should do other than what we already knew we should do before the term “longtermism” was coined in 2017.[3]

What’s a longtermist to do?

Just as it’s hard to come up with new ideas about what the world should do to plan for 1,000 years or 10,000 years or longer in the future, over and above following the status quo approach of planning for the next 100 years at a time (apart from existential risk; see the relevant footnote[3]), it’s hard to imagine, even if I were somehow guaranteed a lifespan of 1,000 years or 10,000 years or longer, what kind of long-term planning I should do beyond a time horizon of 100 years. Since, like humanity at large, I can continually adjust my plans based on new information, the benefit of planning more than 100 years in advance is low, and the cost of procrastinating (or delaying, or deferring) making such plans is low.

For humanity, there are very few, if any, actions we could take that would have a direct impact on the far future, since almost all actions we can take will only have an impact on the far future through long chains of human actions stretching forward in time. Since human plans can always be adjusted in response to new information,[4] the only situations where we need to plan longer-term than a century or so are situations where we can’t rely on people in the future having the ability to adjust their plans, or where that ability isn’t adequate to solve the problems they face because we left them with too little time, too few resources, too degraded an environment, too small a population, or whatever it is — in other words, because we didn’t leave them with a better world and more capability to shape it than we have. (As noted in an earlier footnote,[3] the big exception where our actions do have a direct impact on the far future is when we affect the risk of human extinction or a global catastrophe causing the irrecoverable collapse of human civilization.)

For an individual like me, if I am given a life expectancy of 1,000 years or more, other than avoiding death or an irrecoverable injury or disease (such as one that erases information in my brain that is part of my personal identity, like memory or personality), what can I do to affect my own well-being more than 100 years in the future that I can’t safely delay until the future that my actions affect is much less than 100 years away? This is a mirror of the same question that troubles moral longtermism, with death and irrecoverable injury/disease standing in for extinction and irrecoverable collapse. What could I possibly do to “lock in” a benefit for myself in, say, 900 years, rather than start a very long chain of actions that will have some much earlier payoff and will also require me to continually renew my commitment, in which case nothing is locked in at all?

Long-term projects and the problem of credit assignment

There may, hypothetically, be some projects that an individual would want to work on for 900 years that would take that long to complete. But let’s think carefully about whether this should be called longtermism. The Cologne Cathedral in Germany was under construction continuously for about 300 years, then work stalled for 250 years, and finally it was completed about 600 years after the first stone was laid. Is this a longtermist project, or simply a long-term project, and is there a difference? And if there isn’t a difference, is longtermism an important idea? The way the term longtermism is used in practice connotes a concern primarily with whether future lives exist or not, and, secondarily, with economic outcomes for future people, with the progress of science and technology (which, in combination with economics, presumably determine health outcomes), and with moral progress. Cathedrals do impact people’s well-being for the better, but if building a cathedral over 600 years could be considered a longtermist project, could building a cathedral over 60 years (or 6 years) be considered an effective altruist project?

Gustafsson and Kosonen define Weak Prudential Longtermism as the view that most of the prudential expected value of a person’s present actions will be in how they affect that person’s distant future. It’s hard to understand how to think of this idea in relation to hypothetical projects, undertaken by an individual, that span centuries. It’s particularly complicated because we’re not talking about one discrete action that will have an impact far in the future, but taking one step in a continuous series of thousands or millions of steps, all of which depend on the steps taken before. Laying the first stone of the cathedral has practically no expected value considered as a discrete action.
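To make the difficulty concrete, here is one way to write the definition down (the notation here is mine, not Gustafsson and Kosonen’s). Fix a threshold $t$ that separates the near future from the distant future, and split the prudential expected value of an action $a$ by when its effects on the agent arrive:

$$\mathrm{EV}(a) = \mathrm{EV}_{\leq t}(a) + \mathrm{EV}_{>t}(a)$$

Weak Prudential Longtermism then claims that, for a person’s present actions taken together, $\mathrm{EV}_{>t}(a) > \mathrm{EV}_{\leq t}(a)$. The trouble is that when $a$ is one step in a chain of thousands of interdependent steps, there is no obvious way to compute $\mathrm{EV}_{>t}(a)$ for that step in isolation.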

You could object that actions related to long-term projects of this kind couldn’t plausibly account for most of the prudential expected value of that person’s actions overall — which might make Weak Prudential Longtermism false. However, the conceptual problem here also applies to matters of life and death. For example, considered one way, the food I eat this month will support my life for a month. Considered another way, it will support my life for (hopefully!) many more decades, since without that food, I would starve to death. So, which is it? Does a month of food provide a month of life or decades? It only provides decades of life conditional on all the other food that will be required to sustain my life for that time. So, is the first stone laid of the cathedral responsible for the whole cathedral, or just one stone of it?

This is related to a problem with how impact is often measured in effective altruism. If I tell my friend with a high-paying job about effective altruism and then, over the next year, they donate enough money to the Against Malaria Foundation to save a life, many people in effective altruism seem to want to count this as me saving a life. Does my friend also get credit for saving a life? Surely yes. But then I’m credited with saving a life and so are they, which means the total number of lives we’ve saved is supposedly two, but only one life has been saved in actuality, so we’re overcounting by 2x. If you consider longer, more complicated chains of causality, you could end up overcounting by much more. 

The mathematically more intuitive way to apportion credit and blame is to divide, rather than multiply, such that maybe I saved one-tenth of a life and my friend saved nine-tenths. (But then what about the Against Malaria Foundation employees? What about AMF’s partners who distribute the anti-malarial bednets? What about the mother who puts the bednets over her kids’ beds? What about the person who invented the bednets in the first place? What about the Stone Age people who prevented human extinction by reproducing and teaching their kids and grandkids the necessary skills for life?)
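To make the overcounting arithmetic concrete, here is a minimal sketch in Python. The actors and their shares are invented for illustration; picking principled shares is exactly the hard part:

```python
# Minimal sketch of the credit-assignment problem. One life is saved;
# several actors each played a necessary role in saving it.

actors = ["me", "friend", "AMF staff", "distributor", "mother"]

# Counterfactual accounting: each actor was necessary, so each claims
# full credit. Total claimed credit exceeds the one life actually saved.
counterfactual_credit = {actor: 1.0 for actor in actors}
print(sum(counterfactual_credit.values()))  # 5.0 "lives saved" for 1 life

# Fractional accounting: divide the one life among the actors so that
# the shares sum to the actual outcome. (These shares are arbitrary.)
fractional_credit = {"me": 0.1, "friend": 0.4, "AMF staff": 0.2,
                     "distributor": 0.2, "mother": 0.1}
print(sum(fractional_credit.values()))  # 1.0 life saved in total
```

(Cooperative game theory’s Shapley value is one principled way to choose such fractional shares, but it requires counterfactual estimates for every possible subset of contributors, which is rarely feasible in practice.)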

Does prudential longtermism matter?

The standard I have been holding both prudential longtermism and moral longtermism to is not a precise, technical definition like Gustafsson and Kosonen’s definition of Weak Prudential Longtermism, but the looser, less precise standard of whether longtermism is a novel, actionable idea, that is, whether it tells us to do something that we didn’t already know we should do anyway. Weak Prudential Longtermism, as defined, could be true, depending on how we assign credit, simply because human life is cumulative, if the person under consideration still has most of their life ahead of them. It would be plausible to argue that for a person who lives to 80, most of the prudential expected value of what happens in the first 20 years of their life is in how it affects the subsequent 60 years of their life.

But then if Weak Prudential Longtermism is only saying the equivalent of this, it’s a completely non-novel idea and not interesting or useful. Whether people’s lifespans increase or not would not be particularly relevant to the general point, so this wouldn’t actually, fundamentally be about the long term in the sense meant by “longtermism”. It would just be a view that the first 25% of a person’s life has a greater impact on the last 75% of a person’s life than on the first 25%.

Similarly, let’s think about how the concept of prudential longtermism might apply to an individual’s projects. The filmmaker Ken Burns has made multiple documentaries that each took about a decade of work. Perhaps if his life expectancy were over 1,000 years, he might make documentaries that take over a century. But is this longtermism? And, more importantly, is this a novel, useful philosophical idea? Are we saying something more than that if people’s lifespans increased by a lot, the time horizon of their projects might expand commensurately? And then, so what? If that’s all we’re saying, is this an important topic worth thinking about?

What is philosophy for?

The space of possible thoughts that a human being can think is practically infinite, if not literally infinite,[5] which means that, in the words of the philosopher Daniel Dennett, there is an infinitude of wrong trees to go barking up.[6] Dennett’s example was the hypothetical game of chmess, a variation of chess in which the king can move two squares instead of one. How much time should be spent studying the strategy of chmess, or the strategy of the endless other possible variations of chess?

The point here is that philosophical ideas have to be held to a higher standard than truth. It’s not enough that philosophical ideas are true; they also have to be important. “Important” doesn’t necessarily mean practically useful; it could also mean that an idea matters for our understanding of the world, that it’s highly interesting to a large number of non-specialists, or that it satisfies (and/or feeds) our curiosity about some question or topic that humans tend to care about. The precision of analytic philosophy is a strength, but it seems like there’s a slippery slope — not just hypothetically, but actually, in practice — from applying the precise habits of mind of analytic philosophy to getting tangled up in puzzles that are no more philosophically important than puzzle games.

Moreover, ideas in applied ethics or normative ethics — which is where moral longtermism seems to belong — in addition to being theoretically important, probably should also have some sort of practical implication. I also think it’s fair to criticize longtermism for failing to give clear, actionable, novel, important advice on what to actually do because it’s been advertised as being morally and practically important, and philosophy should be held to the standard of truth in advertising as much as anything else.

  1. ^

    By moral longtermism, I just mean what typically gets called longtermism. The philosopher and effective altruism co-founder Will MacAskill's book on longtermism is called What We Owe the Future — you can see that longtermism is about moral obligation right in that title.

  2. ^

    The term “longtermism” was coined in or around 2017 by the philosophers Will MacAskill and Toby Ord, both co-founders of effective altruism and both at Oxford until MacAskill's departure in 2024.

  3. ^

    Existential risk stands out as the one clear example where thinking about the sheer number of possible future lives might lead us to take some action we otherwise wouldn’t. If there’s a very small chance of a dinosaur-killer-sized asteroid hitting Earth, there might be a level of probability below which the monetary cost of trying to prevent its impact is not justified based on the value of a statistical life for all people currently living (or even if you add in people who will be born within the next 100 years, or the 100 years after that), but is justified based on the value of a statistical life for all the people who will ever be born if we avoid the asteroid. (The philosopher Toby Newberry published a report in 2021 that attempted to give cost-effectiveness estimates for asteroid defense efforts given various assumptions, including whether future lives or only present lives were accounted for. A toy version of this kind of calculation appears at the end of this footnote.)

    However, the philosopher Nick Bostrom made arguments of this form at least 15 years before the term “longtermism” was coined, and the concept of existential risk and this kind of argument for caring about it were well-known long before 2017. (The philosopher Derek Parfit made an argument of this kind about existential risk and future lives in his book Reasons and Persons, published in 1984, so the intellectual lineage goes back even further.) Given the history of the scholarship, it would probably be a good idea to split the terminology into two distinct subject areas. Maybe we could differentiate “existential risk concerned with far future lives” and “longtermism excluding existential risk”, for the sake of clarity. (Or “x-risk CWFFL” and “LTism ex-x-risk”, just to be really clear and catchy.)[7]
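    For concreteness, here is a toy version of that threshold calculation in Python. Every number below is invented purely for illustration:

    ```python
    # Toy asteroid cost-effectiveness comparison. All numbers are invented.
    vsl = 7e6              # value of a statistical life in USD (illustrative)
    present_people = 8e9   # people alive today
    future_people = 1e12   # assumed count of all people who will ever be born
    cost = 1e11            # assumed cost of an asteroid-deflection program, USD
    p = 1e-7               # assumed probability of a dinosaur-killer impact

    benefit_present = p * present_people * vsl  # 5.6e9 USD, less than cost
    benefit_future = p * future_people * vsl    # 7.0e11 USD, more than cost
    print(benefit_present < cost < benefit_future)  # True: only counting
                                                    # future lives justifies it
    ```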

  4. ^

    This is a trait unique to humans, not shared by, for example, big rocks hurtling through space, which makes predicting human behaviour or things affected by it different from predicting practically anything else. This is an insight from the physicist David Deutsch’s explosively creative book The Beginning of Infinity, which is a thrilling read for anyone interested in these kinds of cosmic ideas.

    I would add to Deutsch's point that biological evolution might be the only other thing in the universe besides the human mind that responds to new information with new ideas and so poses a similar difficulty for prediction as human behaviour does. The key difference is evolution takes thousands, millions, or billions of years to come up with new ideas.

  5. ^

    In theory, you could think of any number from zero to infinity, but that’s kind of a silly technicality. The key question here is how many substantive ideas it is possible for humans to come up with and think about. I think it’s sufficient to just stipulate that the number is much, much larger than the number we have the time to think about.

  6. ^

    I couldn’t find the exact quote, and I’m probably misremembering the wording.

  7. ^

    Just kidding.


