Lundgren and Kudlek (2024), in their recent article, discuss several challenges to longtermism as it currently stands. Below is a summary of these challenges.
The Far Future is Irrelevant for Moral Decision-Making
- Longtermists have not convincingly shown that taking the far future into account impacts decision-making in practice. In the examples given, the moral decisions remain the same even if the far future is disregarded.
- Example: Slavery
We didn’t need to consider the far future to recognize that abolishing slavery was morally right, as its benefits were evident in the short term.
- Example: Existential Risk
The urgency of addressing existential risks does not depend on the far future; the importance of avoiding these risks is clear when focusing on the present and the next few generations.
- As a result, the far future has little relevance to most moral decisions. Policies that are good for the far future are often also good for the present and can be justified based on their benefits to the near future.
The Far Future Must Conflict with the Near Future to be Morally Relevant
- For the far future to be a significant factor in moral decisions, it must lead to different decisions compared to those made when only considering the near future. If the same decisions are made in both cases, there is no need to consider the far future.
- Given the vastness of the future compared to the present, focusing on the far future risks harming the present. Resources spent on the far future could instead be used to address immediate problems like health crises, hunger, and conflict.
We Are Not in a Position to Predict the Best Actions for the Far Future
- There are two main reasons for this:
- Unpredictability of Future Effects
It's nearly impossible to predict how our actions today will influence the far future. For instance, antibiotics once seemed like the greatest medical discovery, but estimating the long-term effects of medical research 10,000 years from now, or even millions of years from now, is beyond our capacity.
- Unpredictability of Future Values
Technological advancements significantly change moral values and social norms over time. For example, contraceptives contributed to shifts in values regarding sexual autonomy during the sexual revolution. We cannot reliably predict what future generations will value.
Implementing Longtermism is Practically Implausible
- Human biases and limitations in moral thinking lead to distorted and unreliable judgments, making it difficult to meaningfully care about the far future.
- Our moral concern is naturally limited to those close to us, and our capacity for empathy and care is finite. Even if we care about future generations in principle, our resources are constrained.
- Focusing on the far future comes at a cost to addressing present-day needs and crises, such as health issues and poverty.
- Implementing longtermism would require radical changes to human psychology or to social institutions, which is a major practical hurdle.
I'm interested to hear your opinions on these challenges and how they relate to understanding longtermism.
Yeah, perhaps I am subtly misrepresenting the argument. Trying again, I interpret it as saying:
People have justified longtermism by pointing to actions that seem sensible, such as the claim that it made sense in the past to end slavery and that it makes sense now to prevent existential risk. But both of these examples can be justified with far more certainty by appealing to the short-term future. So in order to justify longtermism in particular, you have to point to proposed policies that seem a lot less sensible and rely on a lot less certainty.
It might help to clarify that in the article they define the “long-term future” on a scale of millions of years.