In "The Case for Strong Longtermism," Hilary Greaves and William MacAskill offer a rough way of evaluating whether or not strong longtermism (SL) is true.

In this post, I will attempt to offer a more precise way of evaluating whether SL is true, drawing heavily on ideas from their essay. A formulation similar to mine is roughly suggested there, but I thought it would be valuable to write out my version in full.

Throughout this essay, I will refer to Greaves and MacAskill as G&M for concision.

To borrow from G&M, let's define a time t that is a "surprisingly" long time from now, such as 100 years.

With this definition,

  • The near-term is all time until time t.
  • The long-term is all time after time t.

Additionally,

  • Neartermist actions, which we will write as $A_N$, are those which seek to maximize expected value across the near-term.
  • Longtermist actions, which we will write as $A_L$, are those which seek to maximize expected value across the long-term.

SL, then, is the idea that, of all the actions one can take, the best ones are longtermist rather than neartermist.

As a classical utilitarian, I think we should evaluate the truth of SL using expected value. As such, SL is true iff the expected value of longtermist actions is greater than the expected value of neartermist actions. In mathematical notation, this means that SL is true iff:

$$\mathbb{E}[V(A_L)] > \mathbb{E}[V(A_N)]$$

In this post, I'm going to expand on the first part of this inequality, $\mathbb{E}[V(A_L)]$.

In my view, there are roughly six domains that humans can inhabit:

  1. The Earth
  2. The Solar System
  3. The Milky Way Galaxy
  4. The Virgo Supercluster
  5. Beyond the Virgo Supercluster
  6. An infinite amount of space (if the universe contains an infinite amount of reachable space)

Let's let $D = \{d_1, d_2, \ldots, d_6\}$ be the set of these domains.

We are less likely to reach each domain than the one before it, but the value of each domain, conditional on reaching it, is likely much higher than that of the one before it. G&M note (in reference to a slightly different set of domains) that "in any such expected-value calculation, it tends to be the ‘largest’ scenario in which one has any non-zero credence that drives the overall estimate," meaning that the largest domain usually accounts for almost all of the expected value of longtermist actions.

Notably, if humans don't inhabit a domain, its value does not necessarily default to zero, because it is possible that aliens, for instance, could inhabit it instead.

With that said, let's say that:

  • $p_i$ is the probability that humanity inhabits domain $d_i$.
  • $v_i$ is the expected value of humanity's presence in domain $d_i$, given that humanity inhabits it.

Then the expected value of the long-term future can be expressed as:

$$\mathbb{E}[V_{\text{long-term}}] = \sum_{i=1}^{6} p_i \, v_i$$
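To make this concrete, here is a minimal numerical sketch of the sum above. Every probability and value is a made-up placeholder (estimating the real numbers is, of course, the entire difficulty); the only point it illustrates is G&M's observation that the largest domain in which one has any non-zero credence ends up driving the overall estimate.

```python
# A minimal numerical sketch of E[V_long-term] = sum_i p_i * v_i.
# All probabilities and values are hypothetical placeholders in arbitrary units.

domains = {
    # name:                 (p_i,   v_i)
    "Earth":                (0.8,   1e15),
    "Solar System":         (0.3,   1e20),
    "Milky Way":            (0.05,  1e30),
    "Virgo Supercluster":   (0.001, 1e38),
    "Beyond Virgo":         (1e-5,  1e45),
}

expected_value = sum(p * v for p, v in domains.values())

# Show how heavily the largest domain dominates the total.
for name, (p, v) in domains.items():
    print(f"{name:>20}: {p * v / expected_value:.4%} of the total")

print(f"E[V_long-term] = {expected_value:.3e}")
```

With these placeholder numbers, the largest domain contributes well over 99% of the total, even though it is by far the least likely to be reached.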

But we're not actually concerned with the expected value of the long-term future itself. Instead, we're concerned with the effect that longtermist actions have on that expected value. Longtermists are usually concerned with either reducing the risk of extinction or increasing the value of futures in which humanity survives. As such, let's say:

  • $\Delta p_i$, meaning how much an action reduces the risk of extinction or otherwise increases the probability that humanity inhabits and persists within domain $d_i$
  • $\Delta v_i$, meaning how much an action increases the expected value of humanity's presence in domain $d_i$, given that humanity inhabits it

Using this, the expected value of a longtermist action can be expressed as:

$$\mathbb{E}[V(A_L)] = \sum_{i=1}^{6} \left( \Delta p_i \, v_i + p_i \, \Delta v_i \right)$$

Hence, SL is true iff:

$$\sum_{i=1}^{6} \left( \Delta p_i \, v_i + p_i \, \Delta v_i \right) > \mathbb{E}[V(A_N)]$$

The reason I like this expression is that it allows us to quantify the expected value of longtermist actions across different domains more precisely, and to account for counterfactual outcomes. Even if you think it's extraordinarily unlikely that we will spread across the Virgo Supercluster, you can still use this equation to account for how actions affect human life within the Milky Way, across the Solar System, and on Earth. It also allows us to distinguish between which domains an action affects and what kinds of effects it creates.
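To show how the pieces fit together, here is a sketch of the final inequality using the same hypothetical domains and equally made-up values for $\Delta p_i$, $\Delta v_i$, and $\mathbb{E}[V(A_N)]$. Nothing here is an estimate; it only demonstrates the bookkeeping, and how quickly the $\Delta p_i \, v_i$ term of the largest domain takes over.

```python
# Sketch of the comparison: E[V(A_L)] = sum_i (delta_p_i * v_i + p_i * delta_v_i) > E[V(A_N)].
# Columns: p_i, v_i, delta_p_i, delta_v_i -- all hypothetical placeholders.

domains = {
    "Earth":                (0.8,   1e15, 1e-6, 1e10),
    "Solar System":         (0.3,   1e20, 1e-6, 0.0),
    "Milky Way":            (0.05,  1e30, 1e-7, 0.0),
    "Virgo Supercluster":   (0.001, 1e38, 1e-8, 0.0),
    "Beyond Virgo":         (1e-5,  1e45, 1e-9, 0.0),
}

def longtermist_ev(domains):
    """Expected value of a longtermist action summed across all domains."""
    return sum(dp * v + p * dv for p, v, dp, dv in domains.values())

neartermist_ev = 1e12  # hypothetical E[V(A_N)] for the best neartermist action

ev_long = longtermist_ev(domains)
print(f"E[V(A_L)] = {ev_long:.3e}")
print(f"E[V(A_N)] = {neartermist_ev:.3e}")
print("SL holds under these placeholder numbers:", ev_long > neartermist_ev)
```

In practice, of course, the whole difficulty lies in estimating these coefficients rather than in the arithmetic.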

Comments

It seems like most of the additional coefficients you've added are impossible to estimate with any degree of confidence, particularly when it is plausible the impact may be negative. Whether it was the intention or not, that is the main message I get from your formulation.

As someone who is not a strong longtermist, I note that an advantage of using non-longtermist heuristics to evaluate impact is that identifying whether an action appears robustly positive for aggregate utility [for humans] on Earth up to time t is much easier than anticipating the effect on the Virgo Supercluster after time t.

(A more sophisticated approach might use discounting for temporal and extreme spatial distance rather than time bounding, but it amounts to the same thing: attaching zero weight to the estimated impact of my actions on the Virgo Supercluster a thousand years from now.)
