Hello! My name is Vaden Masrani and I'm a grad student in machine learning at UBC. I'm a friend of the community and have been very impressed with all the excellent work done here, but I've become very worried about the longtermist trend that has been developing recently.
I've written a critical review of longtermism here in hopes that bringing an 'outsider's' perspective might help stimulate some new conversation in this space. I'm posting the piece on the forum in the hope that William MacAskill and Hilary Greaves might see and respond to it. There's also a small reddit discussion forming that might be of interest to some.
Cheers!
Hi Vaden,
Cool post! I think you make a lot of good points. Nevertheless, I think longtermism is important and defensible, so I’ll offer some defence here.
First, your point about future expectations being undefined seems to prove too much. There are infinitely many ways of rolling a fair die (someone shouts ‘1!’ while the die is in the air, someone shouts ‘2!’, etc.). But there is clearly some sense in which I ought to assign a probability of 1/6 to the hypothesis that the die lands on 1.
Suppose, for example, that I am offered a choice: either bet on a six-sided die landing on 1 or bet on a twenty-sided die landing on 1. If both probabilities are undefined, then it seems I can permissibly bet on either. But clearly I ought to bet on the six-sided die.
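To make the point concrete, here's a toy sketch in Python (my own illustration; the £1 stake and £10 payout are arbitrary). Once the 1/6 and 1/20 measures are granted, the six-sided bet is straightforwardly better; without them, the comparison can't even get started.

```python
# Expected value of staking £1 that a fair n-sided die lands on 1,
# with a hypothetical £10 payout on a win (both figures arbitrary).
def expected_value(n_sides, stake=1.0, payout=10.0):
    p_win = 1.0 / n_sides  # the uniform measure a fair die admits
    return p_win * payout - (1.0 - p_win) * stake

print(expected_value(6))   # ~0.83: worth taking
print(expected_value(20))  # ~-0.45: worth refusing
```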
Now you may say that we have a measure over the set of outcomes when we’re rolling a die and we don’t have a measure over the set of futures. But it’s unclear to me what measure could apply to die rolls but not to futures.
And, in any case, there are arguments for the claim that we must assign probabilities to hypotheses like ‘The die lands on 1’ and ‘There will exist at least 10^16 people in the future.’ If we don’t assign probabilities, we are vulnerable to getting Dutch-booked and accuracy-dominated.
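To illustrate the Dutch-book worry, here's a toy example of my own (not drawn from the literature): an agent whose credences in a hypothesis and its negation sum to less than 1, and who bets at its own fair prices, loses money however the world turns out.

```python
# Toy Dutch book. The agent prices a bet paying £1 if X at £credence(X),
# and is willing to take either side at that price. With credences in H
# and not-H summing to less than 1, selling both bets is a sure loss.
def agent_net(cred_h, cred_not_h, h_is_true):
    income = cred_h + cred_not_h  # prices collected for selling both bets
    payout = (1.0 if h_is_true else 0.0) + (1.0 if not h_is_true else 0.0)
    return income - payout        # exactly one of the two bets pays out

# Incoherent credences: P(H) = 0.3, P(not-H) = 0.5.
for h_is_true in (True, False):
    print(agent_net(0.3, 0.5, h_is_true))  # -0.2 either way: a guaranteed loss
```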
Suppose, then, that you accept that we must assign probabilities to the relevant hypotheses. Greaves and MacAskill’s point is that all reasonable-sounding probability assignments imply that we ought to pursue longtermist interventions (given that we accept their moral premise, which I discuss later). Consider, for example, the hypothesis that humanity spreads into space and that 10^24 people exist in the future. What probability assignment to this hypothesis sounds reasonable? Opinions will differ to some extent, but it seems extremely overconfident to assign this hypothesis a probability of less than one in a billion. On a standard view about the relationship between probabilities and rational action, that assignment would imply a willingness to bet against the hypothesis at corresponding odds: staking £1 billion, losing it all if the hypothesis turns out true, and winning an extra £2 if it turns out false (assuming, for illustration’s sake only, that utility is linear with respect to money across this interval).
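Spelling out that arithmetic (my own rendering of the standard betting argument): the bet has positive expected value precisely when your probability is below about 2 in a billion.

```python
# Betting against H ("humanity spreads into space and 10^24 people exist"):
# lose £1 billion if H is true, win £2 if H is false.
def ev_of_betting_against(p_h):
    return p_h * -1e9 + (1.0 - p_h) * 2.0

print(ev_of_betting_against(1e-9))  # ~+1: a one-in-a-billion credence
                                    # commits you to taking the bet
print(ev_of_betting_against(1e-7))  # ~-98: a slightly higher credence does not
```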
The case is the same with other empirical hypotheses that Greaves and MacAskill consider. To get the result that longtermist interventions don’t maximise expected value, you have to make all kinds of overconfident-sounding probability assignments, like ‘I am almost certain that humanity will not spread to the stars,’ ‘I am almost certain that smart, well-motivated people with billions of pounds of resources would not reduce extinction risk by even 0.00001%,’ ‘I am almost certain that billions of pounds of resources devoted to further research on longtermism would not unearth a viable longtermist intervention,’ etc. So, as it turns out, accepting longtermism does not commit us to strong claims about what the future will be like. Instead, it is denying longtermism that commits us to such claims.
So, to summarise the above, we have to assign probabilities to empirical hypotheses, on pain of getting Dutch-booked and accuracy-dominated. And all reasonable-seeming probability assignments imply that we should pursue longtermist interventions.
Now, this final sentence is conditional on the truth of Greaves and MacAskill’s moral premises. In particular, it depends on their claim that we ought to have a zero rate of pure time preference.
The first thing to note is that the word ‘pure’ is important here. As you point out, ‘we should be biased towards the present for the simple reason that tomorrow may not arrive.’ Greaves and MacAskill would agree. Longtermists incorporate this factor in their arguments, and it does not change their conclusions. Ord calls it ‘discounting for the catastrophe rate’ in The Precipice, and you can read more about the role it plays there.
When Greaves and MacAskill claim that we ought to have a zero rate of pure time preference, they are claiming that we ought not care less about consequences purely because they occur later in time. This pattern of caring really does seem indefensible. Suppose, for example, that a villain has set a time-bomb in an elementary school classroom. You initially think it is set to go off in a year’s time, and you are horrified. In a year’s time, 30 children will die. Suppose that the villain then tells you that they’ve set the bomb to go off in ten years’ time. In ten years’ time, 30 children will die. Are you now less horrified? If you had a positive rate of pure time preference, you would be. But that seems absurd.
As Ord points out, positive rates of pure time preference seem even less defensible when we consider longer time scales: ‘At a rate of pure time preference of 1 percent, a single death in 6,000 years’ time would be vastly more important than a billion deaths in 9,000 years. And King Tutankhamun would have been obliged to value a single day of suffering in the life of one of his contemporaries as more important than a lifetime of suffering for all 7.7 billion people alive today.’
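Ord’s first figure is easy to check. A quick sketch (my own arithmetic, assuming the comparison spans 3,000 years at a 1% annual rate of pure time preference):

```python
# Weight of a harm t years in the future under a 1%/year rate of
# pure time preference.
def weight(t_years, rho=0.01):
    return (1.0 + rho) ** -t_years

print(weight(6000) * 1)    # one death in 6,000 years:        ~1.2e-26
print(weight(9000) * 1e9)  # a billion deaths in 9,000 years: ~1.3e-30
# The single earlier death gets ~9,000 times the weight of the billion later ones.
```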
Thanks again for the post! It’s good to see longtermism getting some critical examination.
The Dutch-Book argument relies on your willingness to take both sides of a bet at given odds (see Sec. 1.2 of your link). It doesn't tell you that you must assign probabilities, but if you do and are willing to bet on them, they must be consistent with probability theory.