The original paper argued for axiological strong longtermism (AL), providing examples of interventions that avoided the “washing-out hypothesis”. It claimed that AL was robust to plausible deviations from popular axiological and decision-theoretic assumptions, considered which decision situations fall within the scope of AL, and argued for deontic strong longtermism on account of very large axiological stakes.
The new paper strengthens existing points and introduces some new content. I briefly summarise (what I see as) some of the most interesting/important differences between the new and old papers below, with a focus on what is new rather than what is strengthened. In the comments section I provide some of my own thoughts on these differences.
What is new?
A new view on the best objections to strong longtermism
In the original paper, Greaves and MacAskill state that they regard the washing-out hypothesis as “the most serious objection to axiological strong longtermism”. The washing-out hypothesis states that the expected instantaneous value differences between available actions decay with time from the point of action, and decay sufficiently fast that the near-future effects tend to be the most important contributor to expected value. In other words, Greaves and MacAskill considered intractability to be the strongest objection.
However, in the new paper this is no longer the case. Instead, the authors state that “the weakest points in the case for axiological strong longtermism” are:
- The assessment of numbers for the cost-effectiveness of particular attempts to benefit the far future (I take this to be the “arbitrariness” issue)
- The appropriate treatment of cluelessness
- The question of whether an expected value approach to uncertainty is too “fanatical” in this context
They state that these issues in particular would benefit from further research.
As outlined in the comments, I think that this identification of the biggest weaknesses in the argument for ASL is the most useful contribution of this second version of the paper.
A new definition for ASL
The original paper defined axiological strong longtermism (AL) as:
In a wide class of decision situations, the option that is ex ante best is contained in a fairly small subset of options whose ex ante effects on the very long-run future are best.
The new paper abbreviates axiological strong longtermism as ‘ASL’ and defines it as:
In the most important decision situations facing agents today,
(i) every option that is near-best overall is near-best for the far future, and
(ii) every option that is near-best overall delivers much larger benefits in the far future than in the near future.
As I discuss in the comments, as far as I can tell this new definition does pretty much the same job as the old definition, but is clearer and more precise.
An attempt to address cluelessness
The new paper attempts to address the concern that we are clueless about the very long-run effects of our actions, which, if true, would undermine both ASL and DSL (deontic strong longtermism). The authors note that there are several quite distinct possibilities in the vicinity of the “cluelessness” worry:
- Simple cluelessness
- Conscious unawareness
- Ambiguity aversion
For each of these they argue that, in their view, ASL is not in fact undermined. However, they express some uncertainty with regard to arbitrariness, stating that they “regard the quantitative assessment of the crucial far-future-related variables as a particularly important topic for further research”.
An attempt to address fanaticism
The new paper attempts to address the concern that the arguments for ASL and DSL rest problematically on tiny probabilities of enormous payoffs.
The authors deny that, at a societal level, the probabilities of positively affecting the very long-term future are problematically small. However, they concede that these probabilities may be problematically small at an individual level, i.e. when an individual donates money to try to positively affect the very long-term future.
Ultimately, however, the authors tentatively suggest that this is all a moot point, as they imply that they are happy to be fanatical: avoiding fanaticism comes at too high a price (extreme “timidity”), and they believe that our intuitions around how to handle very low probabilities are unreliable.
Anything else?
Yes! To just give a few more:
- The authors raise the possibility that far-future impacts may be more important than near-future impacts in a much wider class of decision situations than they cover in the main body of the text, for instance decisions about whether or not to have a child, and government policy decisions within a relatively narrow ‘cause area’. They therefore suggest that strong longtermism could set a methodology for further work in applied ethics and applied political philosophy, and could lead to some “surprisingly revisionary” answers.
- The authors present a more fleshed out argument for the vastness of the future, including consideration of digital sentience.
- The authors present a more fleshed out argument for DSL, considering potential objections.
- “Attractor state” becomes “persistent state”.