
Introduction

My thoughts on temporal discounting in longtermist ethics.

 

Two Senses of Normative Ethics

There are two basic senses of a normative moral system:
1. Criterion of judgment: "what is right/good; wrong/evil?"
2. Decision procedure: "how should I act? What actions should I take?" 

We can — and I argue we should — distinguish between these two senses.


Consider consequentialism:

Criterion of Judgment

Using consequentialism as a criterion of judgment, we can evaluate the actual ex post consequences of actions (perhaps over a given timeframe: e.g. to date, or over all of time, if you assume omniscience) to decide whether an action was right or wrong.

 

Decision Procedure

However, when deciding what action to take, we cannot know what the actual consequences will be. 

For a decision procedure to be useful at all — for it to even qualify as a decision procedure — it must be actionable. That is, it must be possible, not only in principle but also in practice, to act in accordance with it. In particular, the decision procedure must be:

  • Directly evaluable or
  • Approximately evaluable or 
  • Robustly estimable or
  • Etc.

We should have a way of determining what course of action the procedure actually recommends in a given scenario. As such, the procedure must be something we can evaluate/approximate/estimate ex ante, before we know the actual consequences of our actions.

Because of coherence arguments, I propose that a sensible decision procedure for consequentialists is the "ex ante expected consequences[1] of (policies over) actions"[2].
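As a minimal sketch of what such a procedure could look like in practice (the actions, outcome probabilities, and moral values below are entirely hypothetical):

```python
# A minimal "ex ante expected consequences" decision procedure.
# The actions, outcome probabilities, and moral values are made up
# purely for illustration.

def expected_value(outcomes):
    """outcomes: list of (probability, moral_value) pairs."""
    return sum(p * v for p, v in outcomes)

actions = {
    "donate":     [(0.9, 10.0), (0.1, -2.0)],  # EV = 8.8
    "do_nothing": [(1.0, 0.0)],                # EV = 0.0
}

best_action = max(actions, key=lambda a: expected_value(actions[a]))
print(best_action)  # -> donate
```

Policies over actions (footnote 2) would replace the action set with a set of policies, but the expected-value comparison works the same way.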


Against Discounting

When considering how we should value the longterm future, I feel like MacAskill and Ord conflate (or at least elide the distinction between) the two senses[3].

They make a compelling argument that we shouldn't discount the interests (wellbeing/preferences) of future people:

If you assign a fixed discount rate per year (e.g. 1%) to the interests of future people and extrapolate back in time, you get conclusions like: the interests of Ancient Egyptian royalty (e.g. Cleopatra) outweigh the interests of everyone alive today. That conclusion seems obviously wrong.
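To make the reductio concrete, here's a quick back-of-the-envelope calculation (the ~2050-year gap and both rates are my own illustrative assumptions):

```python
# Weight of one past person relative to one present person under a
# fixed annual discount rate, extrapolated back to Cleopatra's era.
YEARS = 2050  # rough gap between Cleopatra and today (an assumption)

for rate in (0.01, 0.02):
    weight = (1 + rate) ** YEARS
    print(f"rate {rate:.0%}: Cleopatra counts as ~{weight:.1e} present people")

# rate 1%: ~7.2e+08 present people (hundreds of millions)
# rate 2%: ~4.3e+17 present people (vastly more than the ~8e9 alive today)
```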

 

Temporal Discounting and the Two Senses of Normative Ethics

I agree that from a perspective of "morality as criterion of judgment", we should not discount the interests of future people. Plausibly, in a consequentialist-criterion-of-judgment-framework almost all the value of any action is determined by its impact on far future people.


However, it does not follow that in a consequentialist-decision-procedure-framework almost all the value of any action is determined by its impact on far future people.

Rather, it seems to me that this is quite unlikely to be the case.


It is difficult for us to evaluate the effects of our actions on future people (which future people will counterfactually exist, depending on which actions we take? What are their interests? What will be the effects of our actions on those interests? Etc.), and the further out those people are, the greater our uncertainty about those effects.

Alan Hájek makes the case for the difficulty of objective consequentialism as a decision procedure in his interview with Robert Wiblin.
 

To a first approximation[4], the uncertainty about the aggregate moral value of a particular action grows exponentially with how far out in time our evaluation window extends[5].


As such, I think a temporal discount rate does make sense within a consequentialist-decision-procedure-framework, at least if you agree that the relevant consideration when deciding what action to take is "ex ante expected consequences".
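As a sketch of how such a discount could fall out of uncertainty alone (a toy model; every parameter below is a made-up illustration, not an estimate): suppose our estimate of an action's effect at horizon t is corrupted by noise whose standard deviation grows exponentially until it saturates at a maximum-entropy ceiling. The standard Bayesian shrinkage weight on such an estimate then behaves like an effective temporal discount.

```python
import math

# Toy model: all parameters are illustrative assumptions, not estimates.
SIGNAL = 1.0          # typical size of an action's true effect
NOISE_0 = 0.1         # estimation error at t = 0
NOISE_GROWTH = 0.05   # 5%/year error growth (footnote 5 guesses well above 1%)
NOISE_CEILING = 50.0  # "maximum entropy" uncertainty level (footnote 4)

def effective_weight(t: float) -> float:
    """Bayesian shrinkage factor: how much an ex ante estimate of an
    effect at horizon t should count, relative to a known effect."""
    noise = min(NOISE_0 * math.exp(NOISE_GROWTH * t), NOISE_CEILING)
    return SIGNAL**2 / (SIGNAL**2 + noise**2)

for t in (0, 25, 50, 100, 200, 500):
    print(f"t = {t:>3} years: weight = {effective_weight(t):.4f}")
# Roughly exponential decay at first, flattening to a small constant
# once the noise hits its ceiling.
```

Note the shape under these assumptions: steep early decay (much steeper than 1% per year), flattening to a small constant beyond the maximum-entropy horizon, which is the hyperbolic-ish behaviour footnote 4 gestures at.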


The conclusion of the above is that while the interests of Cleopatra in her time do not in actuality outweigh the interests of everyone alive today, Cleopatra should not have considered the people alive today in her moral decision making.


And I endorse that conclusion. (If it's a bullet, then it must be the easiest bullet I've ever bitten.) I don't think Cleopatra could have usefully evaluated the consequences of her actions on people alive today. The current global geopolitical macrostate is probably something Cleopatra's accessible world models could not have readily conceived of.

In her decision making, Cleopatra should have considered the interests of her direct subjects and those in the near future; they are the only people about whom she could usefully reason.

 

To summarise this argument in a less nuanced but more memetically fit form:

We should care about the interests of our children and grandchildren, and leave the interests of our great-grandchildren to our grandchildren; they are better positioned to evaluate and act upon them.

 

Conclusions/Summary

  • We shouldn't discount the interests of future people within a consequentialist-criterion-of-judgment-framework
    • Actual, existent people don't intrinsically matter any more or less based on their temporal location
    • Cleopatra's interests do not outweigh the interests of the eight billion people alive today
  • We should discount the interests of future people within a consequentialist-decision-procedure-framework
    • Due to our uncertainty about:
      • The counterfactual existence of such people
      • Their interests
      • The effects of our actions upon them
      • Etc.
    • This uncertainty grows (exponentially? hyperbolically?) the more distant they are from us in time
  1. ^

    Subject to bounded-compute constraints: evaluating the full ex ante expected consequences may be computationally intractable, so some sort of "best effort" estimate of said consequences may be needed.

  2. ^

    More sophisticated consequentialists may want higher order abstractions in their decision procedures (policies for selecting policies for ... for selecting actions).

  3. ^

    Note, I'm going off my vague recollections of their writings, so this may be somewhat inaccurate.

  4. ^

    There is a maximum level of uncertainty (maximum entropy), so beyond some time horizon we have roughly constant (maximum) uncertainty about future people, and wouldn't further discount people who come into existence after that horizon based on their temporal displacement.

    As such, a proper temporal discount due to uncertainty may not be exponential, but perhaps hyperbolic or similar?

  5. ^

    I suspect the growth rate of said uncertainty is significantly higher than 1% per year (at least before the maximum-entropy time horizon).


Comments (5)

I had a pretty sophisticated/elaborate framework for longtermist ethics within a consequentialist metaframework, but I think this distinction between consequentialism-as-a-criterion-of-judgment and consequentialism-as-a-decision-procedure obviates the need for such ~~sophistry~~ sophistication.

Once we make that distinction, the case for temporal discounting falls out naturally as a function of our uncertainty.

This is an interesting post, thank you! What do you think is a reasonable discount rate to account for uncertainty in our ability to affect the future?

That sounds like an empirical question that I don't have the data (or expertise) to evaluate/estimate at this point.

To start with, I'd try to familiarise myself with the economics literature on discounting, understand the methodology and the motivations behind particular recommended discount rates, and work from there.

Nailed it. I really am surprised this is at all controversial.

I've been thinking similar things. A few comments:

  • We might be able to justify a specifically exponential decay by assuming that the impact of our intervention will eventually be nullified by some (unknown) future event. If that event arrives as a Poisson process (i.e. it has an equal probability of occurring in any given year), the probability that the event hasn't occurred by a given point in the future decays exponentially (see the sketch after this list).
  • The rate of decay is probably not universal, but depends on the intervention. For example, in evaluating the impact of preventing X tonnes of carbon emissions, we expect the carbon to be absorbed into the ocean over hundreds or thousands of years. On the other hand, in trying to influence politics, our impact becomes very uncertain beyond a time-scale of 5-10 years. We can therefore have a more certain impact further into the future by influencing climate change than politics. (There are obviously some big caveats here, but it illustrates the point.)
  • Chaos theory is relevant here: it's impossible to predict the outcome of actions taken now very far into the future due to the complexity of political/social systems, meaning that the expected value decays over time. In theory, by clapping your hands you create atmospheric turbulence which will eventually drastically change weather systems on the other side of the world (e.g. causing/preventing tornadoes!). Of course, the expected impact is zero - there is an equal probability of causing a tornado as there is of preventing one.
  • Influencing extinction would have an impact for a much longer (indefinite?) period of time.
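
A minimal illustration of the Poisson argument in the first bullet (the 10%/year nullification rate is an arbitrary assumption, purely for illustration):

```python
import math

# If a nullifying event arrives as a Poisson process with rate LAM per
# year, the probability our intervention's impact is still "live" after
# t years is exp(-LAM * t): an exactly exponential discount.
LAM = 0.1  # assumed 10%/year chance of nullification (illustrative)

for t in (1, 5, 10, 25, 50):
    print(f"after {t:>2} years: P(impact survives) = {math.exp(-LAM * t):.3f}")
```

And as the second bullet notes, the rate itself would be intervention-dependent.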

I might write a separate post going into a bit more detail at some point.
