weeatquince

Comments

Possible misconceptions about (strong) longtermism

Hi Jack, Thank you for your thoughts. Always a pleasure to get your views on this topic.

I agree with your overall point that the case isn’t as airtight as it could be

I think that was the main point I wanted to make (the rest was mostly to serve as an example). The case is not yet made with rigour, although maybe it soon will be. Glad you agree.

I would also expect (although I can't say for sure) that if you went and hung out with GPI academics and asked how certain they are about various claims within longtermism, you would find less certainty than comes across from the outside, or than you might find on this forum, and it would be useful for people to realise that.

Hence I thought it might be one for your list.

 

– – 

The specific points 1. and 2. were mostly to serve as examples for the above (the "etc" was entirely in that vein, just to imply that there may be things that a truly rigorous attempt to prove CL would throw up).

Main point made, and even roughly agreed on :-), so I'm happy to offer a few thoughts on the truth of 1. and 2. anyway:

 

– – 

1. The actions that are best in the short run are the same as the ones that are best in the long run

Please assume that by short-term I mean within 100 years, not within 10 years.

A few reasons you might think this is true:

  • Convergence: See your section on "Longtermists won't reduce suffering today". Some of the examples in the paper, such as speeding up progress and preventing climate change, are quite possibly also the best things you could do to maximise benefit over the next 100 years. AllFed justifies working on extreme global risks based on expected lives saved in the short run. (If this is suspicious convergence, it goes both ways: why are many of the examples in the paper so suspiciously close to what is best in the short run?)
  • Try it: Try making the best plan you can, accounting for all the souls in the next 1x10^100 years, but no longer. Great, done. Now make the best plan but only take into account the next 1x10^99 years. Done? Does it look any different? Now try 1x10^50 years. How different does that look? What about the best plan for 100,000 years? Does that plan look different? What about 1,000 years, or 100 years? At what point does it look different? Based on my experience of working with governments on long-term planning, my guess is that it would start to differ significantly after about 50-100 years. (Although it might well be the case that this number is higher for philanthropists than for policymakers.)
  • Neglectedness: Note that the final two thirds of the next century (everything beyond about 33 years out) barely features in any planning done today. That means most of the next 100 years is almost as neglected as the long-term future (and easier to impact).

On:

Even if there are some cases where the actions that have the best short run effects are also the ones that have the best long-run effects ... the value of these actions will in fact be coming from the long-run effects

I think I agree with this (at least intuitively; I have not given it deep thought). I raised 1. because I think it is a useful example of where the Case for Strong Longtermism paper focuses on AL rather than CL. See section 3, p9 – the authors say that if the best short-term actions are also the best long-term actions then AL is trivially true, and then move on. The point you raise here is simply not raised by the authors, as it is not relevant to the truth of AL.

 

– – 

2. Making decisions solely by evaluating ex ante effects is not a useful way of making decisions or otherwise interacting with the world.

I agree that AL leads to 'deontic strong longtermism'.

I don’t think the expected value approach (the dominant approach used in their paper) or the other approaches they discuss fully engage with how to make complex decisions about the far future. I don’t think we disagree much here (you say more work could be done on decision-theoretic issues, and on tractability).

I would need to know more about your proposed alternative to comment.

Unfortunately, I am running out of time and weekend to go into this in much depth, so I hope you don’t mind if, instead of a lengthy answer here, I just link you to some reading.

I have recently been reading the following, which you might find an interesting introduction to how one might go about thinking about these topics; it is fairly close to my views:

https://blog.givewell.org/2014/06/10/sequence-thinking-vs-cluster-thinking/

https://www.givewell.org/modeling-extreme-model-uncertainty
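A toy sketch of the contrast those GiveWell posts draw between "sequence thinking" (one explicit chain of best-guess estimates) and "cluster thinking" (weighting the verdicts of several rough, independent perspectives). All numbers, weights, and perspectives here are hypothetical, purely to illustrate the two styles:

```python
# Sequence thinking: multiply through a single explicit chain of
# best-guess estimates to get one expected-value number.
p_success = 0.01          # hypothetical chance the intervention works
people_helped = 1e9       # hypothetical scale if it works
value_per_person = 1.0    # hypothetical value per person helped
sequence_estimate = p_success * people_helped * value_per_person

# Cluster thinking: several rough perspectives each give a verdict
# (+1 = "this action looks good", -1 = "it looks bad") and a
# robustness weight; combine them rather than trusting one chain.
perspectives = [
    (+1, 0.2),  # the explicit EV model says yes, but it is fragile
    (-1, 0.5),  # common-sense heuristics say no, fairly robustly
    (-1, 0.3),  # outside-view expert opinion leans no
]
cluster_verdict = sum(sign * weight for sign, weight in perspectives)

# The sequence estimate is huge, yet the cluster verdict is negative:
# one fragile model can dominate a chain but not a cluster.
print(sequence_estimate, cluster_verdict)
```

The design point is simply that cluster thinking bounds the influence of any single fragile model, which is one reason expected-value chains about the far future may deserve less than full trust.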

 

– –

Always happy to hear your views. Have a great week.

Possible misconceptions about (strong) longtermism

Thank you for this Jack.

Floating an additional idea here, in the form of another misconception that I sometimes see. Very interested in your feedback:

 

Possible misconception: Someone has made a thorough case for "strong longtermism"

Possible misconception: “Greaves and MacAskill at GPI have set out a detailed argument for strong longtermism.”

My response: “Greaves and MacAskill argue for 'axiological strong longtermism' but this is not sufficient to make the case that what we ought to do is mainly determined by focusing on far future effects”

Axiological strong longtermism (AL) is the idea that: “In a wide class of decision situations, the option that is ex ante best is contained in a fairly small subset of options whose ex ante effects on the very long-run future are best.”

The colloquial use of strong longtermism on this forum (CL) is something like: “In most of the ethical choices we face today we can focus primarily on the far-future effects of our actions.”

Now there are a few reasons why this might not follow (why CL might not follow from AL):

  1. The actions that are best in the short run are the same as the ones that are best in the long run (this is consistent with AL, see p10 of the Case for Strong Longtermism paper) in which case focusing attention on the more certain short term could be sufficient.
  2. Making decisions solely by evaluating ex ante effects is not a useful way of making decisions or otherwise interacting with the world.
  3. Etc

Whether or not you agree with these reasons it should at least be acknowledged that the Case for Strong Longtermism paper focuses on making a case for AL – it does not actually try to make a case for CL. This does not mean there is no way to make a case for CL but I have not seen anyone try to and I expect it would be very difficult to do, especially if aiming for philosophical-level rigour.

 

– – 

This misconception can be used in discussions for or against longtermism. If you happen to be a super strong believer that we should focus mainly on the far future, it whispers caution; if you think that Greaves and MacAskill's arguments are poor, it suggests being careful not to overstate their claims.



(PS. Both 1 and 2 seem likely to be true to me)
 

What Helped the Voiceless? Historical Case Studies

Yes sorry my misunderstanding. You are correct that this would still be non-ideal. 

I don’t think in most cases it would be a big problem, but yes, it would be a problem.

 

Also, another very clear problem with all of this is that humans do not naturally plan in their own long-term self-interest. So, for example, enfranchising the young would not necessarily lead to less short-termism just because they have longer to live. The policies would have to be more nuanced and complex than that.

 

Either way, I think the lesson I am drawing is to lean towards strategies that focus a bit more on policies that empower creating a good world for the next generation rather than for all future generations, although of course both matter.

Introduction to Longtermism

Thank you Jack, very useful. Thank you for the reading suggestion too. Some more thoughts from me:

"Discounting for the catastrophe rate" should also include discounting for sudden positive windfalls or other successes that would make current actions less useful. E.g. if we find out that the universe is populated by benevolent intelligent non-human life anyway, or if a future unexpected invention suddenly solves societal problems, etc.

There should also be an internal project discount rate (not mentioned in my original comment). So the general discount rate (discussed above) applies after you have discounted the project you are currently working on for the chance that the project itself becomes of no value – capturing internal project risks or windfalls, as opposed to catastrophic risk or windfalls.
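The two layers of discounting described above can be sketched numerically. This is a minimal illustration, not a model from the comment: the rates below are hypothetical placeholders, and the structure simply compounds a general rate (catastrophes or windfalls that make current actions moot) with an internal project rate (the project itself failing or being superseded):

```python
# Hypothetical annual rates, purely for illustration.
general_rate = 0.002  # chance/year the world changes so the action is moot
project_rate = 0.05   # chance/year the project itself loses its value

def survival_weight(years: int) -> float:
    """Probability that the project's value still matters after `years` years,
    compounding the general and internal-project discounts independently."""
    return (1 - general_rate) ** years * (1 - project_rate) ** years

# Expected value of a project yielding 1 unit of benefit per year for
# 100 years: each year's benefit is weighted by its survival probability.
expected_value = sum(survival_weight(t) for t in range(1, 101))
print(round(expected_value, 1))
```

Even with these small rates, most of the expected value accrues in the first few decades, which is one way such discounts become relevant to how longtermists make decisions.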

I am not sure I get the point about "discount the longterm future as if we were in the safest world among those we find plausible".

I don’t think any of this (on its own) invalidates the case for longtermism but I do expect it to be relevant to thinking through how longtermists make decisions.

Introduction to Longtermism

Yes that is true and a good point. I think I would expect a very small non-zero discount rate to be reasonable, although still not sure what relevance this has to longtermism arguments. 

Introduction to Longtermism

I guess some of the "AI will be transformative, therefore deserves attention" arguments are among the oldest and most generally accepted within this space.

For various reasons I think the arguments for focusing on x-risk are much stronger than other longtermist arguments, but how best to do this, what x-risks to focus on, etc, is all still new and somewhat uncertain.

What Helped the Voiceless? Historical Case Studies

Hi, the point isn't about what is "predictable", it is about what is "plannable". Predictions are only useful insofar as they let us decide how to act. What we want is to be able to robustly and positively affect the world in the future.

So the adapted version of your version of my argument would be:

  • We can sometimes take actions that robustly and positively affect the world on a timescale of 30+ years, BUT as the future is so uncertain, most such long-term plans involve being flexible and adaptable to change, and in practice they look very similar to plans made for <30 year effects (with the additional caveat that in 30 years you want to be in as good a position as possible to keep making long-term positive changes going forward).
    • E.g. the global technologies and trends that could lead to brutal totalitarianism over the very long-term future are so uncertain that addressing the trends and technologies that might lead to totalitarianism in the next 30 years, whilst keeping an ongoing watchful eye on emerging trends and technologies and adapting to concerning changes and/or to opportunities to strengthen democracy, is likely the best plan you can make.
  • Therefore there is a lot of overlap between the policies that (as best we can tell) have the best effects on the world in 30+ years (or 60+ years) and those that have the best effects on the world in 0-30 years (and also leave the world in 30 years' time ready for the next 30 years).
  • Therefore we can achieve most longtermist policy goals by adopting the policies that are best for the world over the next 0-30 years (and also leave the world in 30 years' time ready for the next 30 years).

 

I think this mostly (although not quite 100%) addresses the two concerns that you raise.


NOTES:

I would note that 30 years is not some magic number. Much of policy, including some long-term policy, is time-independent. Where plans are made, they might cover the next 1, 3, 5, 10, 20, 25, 30 or 50 years, as appropriate given the topic at hand. Over each length of time the aim should not be solely to maximise benefit over the planned period but also to leave the world in a good end state so that it can continue to maximise benefit going forward (e.g. your one-year budget shouldn't say "let's spend all the money this year").

There are plans that go beyond 30 years, but according to Roman Krznaric's book The Good Ancestor, plans for more than 30 years ahead are very rare. My own experience suggests 25 years is the maximum in most (UK) government work, and even at that length of time it is often poorly done. Hence I tend to settle on 30 years as a reasonable maximum. The plans that do go beyond 30 years tend to be on issues where long-term thinking is both necessary and simple (e.g. tree planting), or they use adaptive planning techniques to allow for various changes in circumstances (e.g. the Thames Estuary 2100 flood plan).

What Helped the Voiceless? Historical Case Studies

Any Future Generations institution should be explicitly mandated to consider long-term prosperity, in addition to existential risks arising from technological development and environmental sustainability

Yes I fully agree with this. 

 [...] advocates of future generations can lastingly diminish the opposition of business interests—or turn it into support—by designing pro-future institutions so that they visibly contribute to areas where future generations and far-sighted businesses have common interests, such as long-term trends in infrastructure, research and development, education, and political/economic stability.

I also agree with this – although take my agreement with a pinch of salt, as I don’t feel I have enough specific expertise on how far-sighted businesses can be to take a strong view on this.

Introduction to Longtermism

To add a more opinionated, less factual point: as someone who researches and advises policymakers on how to think about and make long-term decisions, I tend to be somewhat disappointed by the extent to which the longtermist community lacks discussion and understanding of how long-term decision making is done in practice. If worded strongly, this could be framed as an additional community-level objection to longtermism, along the lines of:

Objection: The longtermist idea makes quite strong, somewhat counterintuitive claims about how to do good, but the longtermist community has not yet demonstrated appropriately strong intellectual rigour (other than in the field of philosophy) about these claims and what they mean in practice. Individuals should therefore be sceptical of the claims longtermists make about how to do good.

If worded more politely, the objection would be that the ideas of longtermism are very new, somewhat untested, and may still change significantly, so we should be cautious about adopting the conclusions of longtermists for a while longer.
