elliottthornley

Philosophy DPhil at Oxford and Parfit Scholar at GPI https://www.elliott-thornley.com/

Comments

Towards a Weaker Longtermism

I remember Toby Ord gave a talk at GPI where he pointed out the following:

Let L be long-term value per unit of resources and N be near-term value per unit of resources. Then spending 50% of resources on the best long-term intervention and 50% of resources on the best near-term intervention will lead you to split resources equally between A and C, the interventions that are best on each dimension taken separately (see the diagram below). But the best thing to do on the value function 0.5N + 0.5L is to devote 100% of resources to B, the intervention that maximises that weighted sum.

[Diagram: interventions A, B, and C plotted by near-term against long-term value]
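To make the point concrete with some illustrative per-unit values of my own (not Ord's), suppose A scores (N, L) = (10, 0), B scores (7, 7), and C scores (0, 10):

```latex
% Hypothetical per-unit values, chosen only for illustration:
%   A: N = 10, L = 0      B: N = 7, L = 7      C: N = 0, L = 10
% Value function: V = 0.5N + 0.5L per unit of resources.
\begin{align*}
V(\text{50\% on A, 50\% on C}) &= 0.5\,(0.5 \cdot 10 + 0.5 \cdot 0) + 0.5\,(0.5 \cdot 0 + 0.5 \cdot 10) = 5 \\
V(\text{100\% on B})           &= 0.5 \cdot 7 + 0.5 \cdot 7 = 7
\end{align*}
```

Because the value function is linear in the allocation, the optimum always concentrates all resources on whichever single intervention maximises 0.5N + 0.5L.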

The Impossibility of a Satisfactory Population Prospect Axiology

Yes, that all sounds right to me. Thanks for the tip about uniformity and fanaticism! Uniformity also comes up here, in the distinction between the Quantity Condition and the Trade-Off Condition.

The Impossibility of a Satisfactory Population Prospect Axiology

Thanks! This is a really cool idea and I'll have to think more about it. What I'll say now is that I think your version of lexical totalism violates RGNEP and RNE. That's because of the order in which I have the quantifiers. I say, 'there exists p such that for any k...'. I think your lexical totalism only satisfies weaker versions of RGNEP and RNE with the quantifiers the other way around: 'for any k, there exists p...'.
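Schematically, with φ standing in for the relevant condition on p and k, the two readings are:

```latex
% As RGNEP and RNE are stated in the paper (stronger reading):
\exists p \; \forall k : \; \varphi(p, k)
% The weaker reading I think your lexical totalism satisfies:
\forall k \; \exists p : \; \varphi(p, k)
```

The first entails the second but not vice versa, since in the second reading p is allowed to depend on k.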

The Impossibility of a Satisfactory Population Prospect Axiology

Ah no, that's as it should be!  is saying that  is one of the very positive welfare levels mentioned on page 4.

The Impossibility of a Satisfactory Population Prospect Axiology

Thanks! Your points about independence sound right to me.

The Impossibility of a Satisfactory Population Prospect Axiology

Thanks for your comment! I think the following is a closer analogy to what I say in the paper:

Suppose apples are better than oranges, which are in turn better than bananas. And suppose your choices are:

  1. An apple and n bananas for sure.
  2. An apple with probability 1 − p and an orange with probability p, along with n oranges for sure.

Then even if you believe:

  • One apple is better than any amount of oranges

It still seems as if, for some large n and small p, 2 is better than 1. 2 slightly increases the risk that you miss out on an apple, but it compensates you for that increased risk by giving you many oranges rather than many bananas.
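In prospect notation (with n and p as above), the comparison is roughly:

```latex
% Option 1: an apple plus n bananas, with certainty.
% Option 2: n oranges for sure, plus a lottery over the remaining item.
\text{Option 1} = \langle \text{apple} + n \cdot \text{banana} ,\; 1 \rangle
\qquad
\text{Option 2} = \langle \text{apple} + n \cdot \text{orange} ,\; 1 - p \;;\; \text{orange} + n \cdot \text{orange} ,\; p \rangle
```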

On your side question, I don't assume completeness! But maybe if I did, then you could recover the VNM theorem. I'd have to give it more thought.

The Impossibility of a Satisfactory Population Prospect Axiology

Thanks! 

And agreed! The title of the paper is intended as a riff on the title of the chapter where Arrhenius gives his sixth impossibility theorem: 'The Impossibility of a Satisfactory Population Ethics.' I think that an RC-implying theory can still be satisfactory.

A case against strong longtermism

Thanks!

Your point about time preference is an important one, and I think you're right that people sometimes make too quick an inference from a zero rate of pure time preference to a future-focus, without properly heeding just how difficult it is to predict the long-term consequences of our actions. But in my experience, longtermists are very aware of the difficulty. They recognise that the long-term consequences of almost all of our actions are so difficult to predict that their expected long-term value is roughly 0. Nevertheless, they think that the long-term consequences of some very small subset of actions are predictable enough to justify undertaking those actions.

On the dice example, you say that the infinite set of things that could happen while the die is in the air is not the outcome space about which we're concerned. But can't the longtermist make the same response? Imagine they said: 'For the purpose of calculating a lower bound on the expected value of reducing x-risk, the infinite set of futures is not the outcome space about which we're concerned. The outcome space about which we're concerned consists of the following two outcomes: (1) Humanity goes extinct before 2100, (2) Humanity does not go extinct before 2100.'
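To spell out the structure of that lower-bound calculation with placeholder figures of my own (not numbers from the original post): if an intervention reduces the probability of outcome (1) by Δp, and V is a conservative lower bound on the value of outcome (2), then the coarse two-outcome partition already yields a lower bound on expected value, with no need for probabilities over the infinite space of detailed futures:

```latex
% Placeholder figures, purely illustrative:
%   \Delta p = 10^{-4}   (reduction in the probability of extinction before 2100)
%   V \ge 10^{12}        (conservative lower bound on the value of outcome (2))
\mathbb{E}[\text{value of the intervention}] \;\ge\; \Delta p \cdot V \;=\; 10^{-4} \cdot 10^{12} \;=\; 10^{8}
```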

And, in any case, it seems like Vaden's point about future expectations being undefined still proves too much. Consider instead the following two hypotheses and suppose you have to bet on one of them: (1) The human population will be at least 8 billion next year, (2) The human population will be at least 7 billion next year. If the probabilities of both hypotheses are undefined, then it would seem permissible to bet on either. But clearly you ought to bet on (2): any future in which the population is at least 8 billion is also one in which it is at least 7 billion, so (2) is at least as likely as (1). So it seems like these probabilities are not undefined after all.
