
On 14th June 2021 Hilary Greaves and Will MacAskill published an update of their September 2019 paper, The case for strong longtermism. The original paper also had a video counterpart.

The original paper argued for axiological strong longtermism (AL), providing examples of interventions that avoid the “washing out” problem. It claimed that AL is robust to plausible deviations from popular axiological and decision-theoretic assumptions, considered which decision situations fall within the scope of AL, and argued for deontic strong longtermism on account of the very large axiological stakes.

The new paper strengthens existing points and introduces some new content. I briefly summarise (what I see as) some of the most interesting/important differences between the new and old paper below, with a focus on what is new rather than what is strengthened. In the comments section I provide some of my own thoughts on these differences.

What is new?

A new view on the best objections to strong longtermism

In the original paper, Greaves and MacAskill state that they regard the washing-out hypothesis as “the most serious objection to axiological strong longtermism”. The washing-out hypothesis states that the expected instantaneous value differences between available actions decay with time from the point of action, and decay sufficiently fast that the near-future effects tend to be the most important contributor to expected value. In other words, Greaves and MacAskill considered the intractability objection to be the strongest objection.
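One rough way to formalise the hypothesis (a sketch in my own notation, not the paper’s): write Δv(t) for the expected difference in value realised at time t between two options; washing out says Δv(t) decays quickly enough that the total expected-value difference is dominated by its near-future portion.

```latex
% Sketch only; the notation is mine, not Greaves and MacAskill's.
% \Delta v(t): expected difference in value realised at time t between two options.
\[
  \underbrace{\int_{0}^{\infty} \Delta v(t)\,dt}_{\text{total EV difference}}
  \;\approx\;
  \underbrace{\int_{0}^{t_{\mathrm{near}}} \Delta v(t)\,dt}_{\text{near-future contribution}},
  \quad\text{because } \Delta v(t) \to 0 \text{ rapidly as } t \text{ grows}.
\]
```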

However, in the new paper this is no longer the case. Instead, the authors state that “the weakest points in the case for axiological strong longtermism” are: 

  • The assessment of numbers for the cost-effectiveness of particular attempts to benefit the far future (I take this to be the “arbitrariness” issue)
  • The appropriate treatment of cluelessness
  • The question of whether an expected value approach to uncertainty is too “fanatical” in this context

They state that these issues in particular would benefit from further research.

As outlined in the comments, I think that this identification of the biggest weaknesses in the argument for ASL is the most useful contribution of this second version of the paper.

A new definition for ASL

The original paper defined axiological strong longtermism (AL) as:

In a wide class of decision situations, the option that is ex ante best is contained in a fairly small subset of options whose ex ante effects on the very long-run future are best.

The new paper abbreviates axiological strong longtermism as ‘ASL’ and defines it as:

In the most important decision situations facing agents today, 

(i) every option that is near-best overall is near-best for the far future, and

(ii) every option that is near-best overall delivers much larger benefits in the far future than in the near future.

As I discuss in the comments, as far as I can tell this new definition does pretty much the same job as the old definition, but is clearer and more precise.

Addressing cluelessness

The new paper attempts to address the concern that we are clueless about the very long-run effects of our actions which, if true, would undermine both ASL and DSL (deontic strong longtermism). The authors note that there are several quite distinct possibilities in the vicinity of the “cluelessness” worry:

  1. Simple cluelessness
  2. Conscious unawareness
  3. Imprecision
  4. Arbitrariness
  5. Ambiguity aversion

For each of these they argue that, in their view, ASL is not in fact undermined. However, they express some uncertainty with regard to arbitrariness, stating that they “regard the quantitative assessment of the crucial far-future-related variables as a particularly important topic for further research”.

Addressing fanaticism

The new paper attempts to address the concern that the arguments for ASL and DSL rest problematically on tiny probabilities of enormous payoffs.

The authors deny that, on a societal level, the probabilities of positively affecting the very long-term future are problematically small. However, they concede that these probabilities may be problematically small on an individual level, i.e. when an individual is donating money to positively affect the very long-term future.

Ultimately, however, the authors tentatively suggest that this is all a moot point, as they appear happy to be fanatical: avoiding fanaticism comes at too high a price (extreme “timidity”), and they believe our intuitions about how to handle very low probabilities are unreliable.

Anything else?

Yes! To give just a few more:

  • The authors raise the possibility that far-future impacts may be more important than near-future impacts in a much wider class of decision situations than they cover in the main body of the text - for instance, decisions about whether or not to have a child, or government policy decisions within a relatively narrow ‘cause area’. They therefore suggest that strong longtermism could set a methodology for further work in applied ethics and applied political philosophy, and could lead to some “surprisingly revisionary” answers.
  • The authors present a more fleshed out argument for the vastness of the future, including consideration of digital sentience.
  • The authors present a more fleshed out argument for DSL, considering potential objections.
  • “Attractor state” becomes “persistent state”.
Comments

On their (new) view on which objections to strong longtermism are strongest - I think this may be the most useful update in the paper. Pinpointing the strongest objections to a thesis is very important for focusing further research.

It is interesting that the authors appear to have essentially dismissed the intractability objection. It isn’t clear whether they no longer think it is a valid objection, or whether they just don’t think it is as strong as the other objections they highlight this time around. I would like to ask them about this in an AMA.

The authors concede that there needs to be further research to tackle these new objections. Overall, I got the impression that the authors are still “strong longtermists”, but are perhaps less confident in the longtermist thesis than they were when they wrote the first version of the paper - something else I would like to ask them about.

On addressing cluelessness - for the most part I agree with the authors’ views, including the view that further research is needed in this area.

I do find it odd, however, that they attempt to counter the worry of ‘simple cluelessness’ but not that of ‘complex cluelessness’, i.e. the possibility that there could be semi-foreseeable unintended consequences of longtermist interventions that leave us ultimately uncertain about the sign of the expected-value assessment of these interventions. Maybe they see this as obviously not an issue...but I would have appreciated some thoughts on this.

I think complex cluelessness is essentially covered by the other subsections in the Cluelessness section. It's an issue of assigning numbers arbitrarily to the point that what you should do depends on your arbitrary beliefs. I don't think they succeed in addressing the issue, though, since they don't sufficiently discuss and address ways each of their proposed interventions could backfire despite our best intentions (they do discuss some in section 4, though). The bar is pretty high to satisfy any "reasonable" person.

Thanks, I really haven't given sufficient thought to the cluelessness section which seems the most novel and tricky. Fanaticism is probably just as important, if not more so, but is also easier to get one's head around.  

I agree with you in your other comment though that the following seems to imply that the authors are not "complexly clueless" about AI safety:

For example, we don’t think any reasonable representor even contains a probability function according to which efforts to mitigate AI risk save only 0.001 lives per $100 in expectation.

I guess it is probably the case that if you’re saying it’s unreasonable for a probability function associated with a very small positive expected value to be contained in your representor, you’ll also say that a probability function associated with a negative expected value isn’t contained in it. This does seem to me to be a slightly extreme view.

Ya, maybe your representor should be a convex set, so that for any two functions in it, any probabilistic mixture of them is also in your representor. This way, if you have one function with expected value x and another with expected value y, you should have functions with every expected value in between. So, if you have positive and negative EVs in your representor, you would also have 0 EV.
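A minimal sketch of this point (the outcomes, values and credences below are made up purely for illustration, nothing from the paper): expected value is linear in probabilistic mixtures of credence functions, so a convex representor containing a positive-EV and a negative-EV function must also contain a zero-EV one.

```python
# Toy illustration: mixtures of two credence functions over three outcomes.
outcomes = {"backfire": -100.0, "no_effect": 0.0, "big_success": 1000.0}

p = {"backfire": 0.00, "no_effect": 0.99, "big_success": 0.01}  # optimistic credence
q = {"backfire": 0.20, "no_effect": 0.80, "big_success": 0.00}  # pessimistic credence

def expected_value(credence):
    return sum(credence[o] * v for o, v in outcomes.items())

ev_p, ev_q = expected_value(p), expected_value(q)  # +10.0 and -20.0

# A mixture alpha*p + (1-alpha)*q is itself a probability function, and its EV
# is alpha*ev_p + (1-alpha)*ev_q, so some alpha in (0, 1) gives an EV of exactly 0.
alpha = ev_q / (ev_q - ev_p)  # -20 / -30 = 2/3
mixture = {o: alpha * p[o] + (1 - alpha) * q[o] for o in outcomes}

print(ev_p, ev_q, round(expected_value(mixture), 9))  # 10.0 -20.0 ~0.0
```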

Do you mean negative EV is slightly extreme or ruling out negative EV is slightly extreme?

I think neglecting to look into and address ways something could be negative (e.g. a probability difference, or an EV) often leads us to unjustifiably assume a positive lower bound, and I think this is an easy mistake to make or miss. Combining a positive lower bound with astronomical stakes would make the argument appear very compelling.

Yeah I meant ruling out negative EV in a representor may be slightly extreme, but I’m not really sure - I need to read more.

Thanks for this post Jack, I found it really useful as I haven’t got round yet to reading the updated paper. This breakdown in the cluelessness section was a new arrangement to me. Does anyone know if it has been used elsewhere? If not, this seems like useful progress in better defining the cluelessness objections to longtermism.

Thanks Robert. I’ve never seen this breakdown of cluelessness before, and it could be a useful way for further research to frame the issue.

The Global Priorities Institute raised the modelling of cluelessness in their research agenda and I'm looking forward to further work on this. If interested, see below for the two research questions related to cluelessness in the GPI research agenda. I have a feeling that there is still quite a bit of research that could be conducted in this area.

------------------

Forecasting the long-term effects of our actions often requires us to make difficult comparisons between complex and messy bodies of competing evidence, a situation Greaves (2016) calls “complex cluelessness”. We must also reckon with our own incomplete awareness, that is, the likelihood that the long-run future will be shaped by events we’ve never considered and perhaps can’t fully imagine. What is the appropriate response to this sort of epistemic situation? For instance, does rationality require us to adopt precise subjective probabilities concerning the very-long-run effects of our actions, imprecise probabilities (and if so, how imprecise?), or some other sort of doxastic state entirely?

Faced with the task of comparing actions in terms of expected value, it often seems that the agent is ‘clueless’: that is, that the available empirical and theoretical evidence simply supplies too thin a basis for guiding decisions in any principled way (Lenman 2000; Greaves 2016; Mogensen 2020) (INFORMAL: Tomasik 2013; Askell 2018). How is this situation best modelled, and what is the rational way of making decisions when in this predicament? Does cluelessness systematically favour some types of action over others?

Thanks, I found this useful - I’m not sure when I’ll get around to reading the new version since I read the old version recently, so it’s useful to have a summary of some key changes.

Links that some readers may find useful:

If, for instance, one had credences such that the expected number of future people was only 10^14, the status quo probability of catastrophe from AI was only 0.001%, and the proportion by which $1 billion of careful spending would reduce this risk was also only 0.001%, then one would judge spending on AI safety equivalent to saving only 0.001 lives per $100 – less than the near-future benefits of bednets. But this constellation of conditions seems unreasonable.

(...)

For example, we don’t think any reasonable representor even contains a probability function according to which efforts to mitigate AI risk save only 0.001 lives per $100 in expectation.
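For reference, here is a rough unpacking of how the quoted figures combine into the 0.001 figure (my own reconstruction of the arithmetic, assuming the three factors simply multiply):

```python
# Rough reconstruction, not the paper's own worked example.
expected_future_people = 1e14   # pessimistic credence about the size of the future
p_ai_catastrophe = 0.001 / 100  # status quo probability of AI catastrophe: 0.001%
risk_reduction = 0.001 / 100    # proportional risk reduction from $1bn of careful spending
spend_dollars = 1e9

expected_lives_saved = expected_future_people * p_ai_catastrophe * risk_reduction
lives_per_100_dollars = expected_lives_saved / (spend_dollars / 100)

print(round(expected_lives_saved), round(lives_per_100_dollars, 6))  # 10000 0.001
```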

This isn't central so they don't elaborate much, but they are assuming here that we will not do more harm than good in expectation if we spend "carefully", and that seems arbitrary and unreasonable to me. See some discussion here.

On the new definition - as far as I can tell it does pretty much the same job as the old definition, but is clearer and more precise, bar a small nitpick I have...

One deviation is from “a wide class of decision situations” to “the most important decision situations facing agents today”. As far as I can tell, Greaves and MacAskill don’t actually narrow the set of decision situations they argue ASL applies to in the new paper. Instead, I suspect the motivation for this change in wording is that “wide” is quite imprecise and subjective (Greaves concedes this in her 80,000 Hours interview). So instead of categorising the set of decision situations as wide, which was supposed to communicate the important decision-relevance of ASL, the authors describe these same decision situations as “the most important faced by agents today”, on the grounds that they have particularly great significance for the well-being of both present and future sentient beings. In doing so they still communicate the important decision-relevance of ASL, whilst being slightly more precise.

The authors also change from “fairly small subset of options whose ex ante effects on the very long-run future are best” to “options that are near-best for the far future”. It is interesting that they don’t specify “ex ante” best - as if it were simply obvious that this is what they mean by “best”… (maybe you can tell I’m not super impressed by this change… unless I’m missing something?).

Otherwise, splitting the definition into two conditions seems to have just made it easier to understand.

Strongtermism?
