Intro/summary:

Will MacAskill, arguably the biggest proponent of longtermism, summarises the argument for it as:

1. Future people count.
2. There could be a lot of them.
3. We can make their lives go better.

On the face of it, this is a convincing argument.

However, this post outlines my objections to it, summarised as:

1. Future people count, but less than present people.
2. There might not be that many future people.
3. We might not be able to help future people much.

To these, I will add a fourth: this work comes with trade-offs.

Comments

I have a post where I address what I see as misconceptions about longtermism. In response to "Future people count, but less than present people", I would recommend you read the "Longtermists have to think future people have the same moral value as people today" section. In short, I don't think future people counting for less really dents longtermism much at all, as it isn't reasonable to discount that heavily. You seem to accept that we can't discount that much, so if you accept the other core claims of the argument, longtermism will still go through. Giving future people somewhat less weight is pretty irrelevant, in my opinion.

I want to read that Thorstad paper, and until I do I can't really respond. I would say, however, that even if the expected number of future people isn't as high as many longtermists have claimed, it still has to be at least somewhat large, and large enough to mean that GiveWell charities focused on near-term effects aren't the best we can do. One could imagine being a 'medium-termist' and wanting to, say, address climate change and boost economic growth, which affect the medium and long term. Moving to GiveWell would seem to me to be overcorrecting.

The assumption that future people will be happy isn't required for longtermism (as you seem to imply). The value of reducing extinction risk does depend on future people being happy (or at least above the zero level of wellbeing), but there are longtermist approaches that don't involve reducing extinction risk. My post touches on some of these in the "Sketch of the strong longtermist argument" section. For example: mitigating climate change, ensuring good institutions develop, and ensuring AI is aligned to benefit human wellbeing.

You say that some risks, such as those from AGI or biological weapons, are "less empirical and more based on intuitions or unverifiable claims, and hence near-impossible to argue against". I think one can argue against these risks. For example, David Thorstad argues that various assumptions underlying the singularity hypothesis are substantially less plausible than its advocates suppose, and that this should allay fears about existential risk from AI. You can point out weaknesses in the arguments for specific existential risks; it just takes some effort! Personally I think the risks are credible enough to take them seriously, especially given how bad the outcomes would be.

Thank you for the feedback on both the arguments and the writing (something I am aiming to improve through these posts). Sorry for being slow to respond; it's been a busy few weeks!

In response to your points:

In short, I don't think future people counting for less really dents longtermism much at all, as it isn't reasonable to discount that heavily. You seem to accept that we can't discount that much, so if you accept the other core claims of the argument, longtermism will still go through. Giving future people somewhat less weight is pretty irrelevant, in my opinion.

I suspect this depends strongly on the overall shape you assume for the value of the future. If you assume unbounded exponential growth, you're correct. For what I consider more reasonable shapes of future value, discounting will probably start to matter. In any case, it damages the case for prioritising future people to some extent, but I agree it is not fatal.
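To illustrate what I mean (the notation and numbers here are mine, purely for illustration): write the value realised in year $t$ as $v(t)$ and apply a per-year discount factor $\delta < 1$, so the present weight of the whole future is

$$\sum_{t=0}^{\infty} \delta^{t}\, v(t).$$

If $v(t)$ grows exponentially, say $v(t) \propto (1+g)^{t}$, the sum diverges whenever $(1+g)\,\delta \ge 1$, so any discount rate below the growth rate still leaves the far future dominant. If instead $v(t)$ is roughly constant or plateaus, even a mild discount such as $\delta = 0.99$ gives value 500 years out a weight of $0.99^{500} \approx 0.007$, and the case for prioritising the far future weakens considerably.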

I would say, however, that even if the expected number of future people isn't as high as many longtermists have claimed, it still has to be at least somewhat large, and large enough to mean that GiveWell charities focused on near-term effects aren't the best we can do. One could imagine being a 'medium-termist' and wanting to, say, address climate change and boost economic growth, which affect the medium and long term. Moving to GiveWell would seem to me to be overcorrecting.

Interesting claim. I would be very interested in a cost-effectiveness analysis (even at BOTEC level) to support this. I don't think we can resolve this without being quantitative.

The assumption that future people will be happy isn't required for longtermism (as you seem to imply). The value of reducing extinction risk does depend on future people being happy (or at least above the zero level of wellbeing), but there are longtermist approaches that don't involve reducing extinction risk. My post touches on some of these in the "Sketch of the strong longtermist argument" section.

I'm pretty sceptical of the tractability of non-x-risk work and our ability to shape the future in broad terms.

You can point out weaknesses in the arguments for specific existential risks; it just takes some effort!

You can, and sometimes (albeit rarely) these arguments are productive, but I still think any numeric estimate you end up with is based largely on intuition and depends heavily on priors.

Personally I think the risks are credible enough to take them seriously, especially given how bad the outcomes would be.

Yes, we should certainly take them seriously. But "seriously" is too imprecise to tell us how many resources we should be willing to devote to them.

For what I consider more reasonable shapes of future value, discounting will probably start to matter.

Did you read the link I sent? I don't see how it is reasonable to discount very much. I would discount distant future people about as much as I would discount geographically distant people (people who are alive today but are not near me). That is to say, not very much.

Interesting claim. I would be very interested in a cost-effectiveness analysis (even at BOTEC level) to support this. I don't think we can resolve this without being quantitative.

That is fair, and something I think would be worthwhile. It might be something I try to do at some point. However, I would also note the problem of cluelessness, which I think is a particular issue for neartermist interventions (see here for my short description of the issue and here for a slightly longer one). In short: I don't think we actually have a clear sense of the cost-effectiveness of neartermist interventions. I could do a BOTEC and compare it to GiveWell's estimates, but I also think GiveWell's estimates miss out far too many effects to be very meaningful.

I'm pretty sceptical of the tractability of non-x-risk work and our ability to shape the future in broad terms.

It feels weird to dismiss a whole class of interventions without justification. Mitigating climate change is certainly tractable. Boosting technological progress and economic growth also seems tractable. I can also think of ways to improve values.

Yes, we should certainly take them seriously. But "seriously" is too imprecise to tell us how many resources we should be willing to devote to them.

I do personally think that, on the margin, all resources should be going to longtermist work.

Hey Josh,

Is there a reason you haven't copied the whole post? I was surprised not to be able to read it here. 

Good point, I will consider this for next time. Thank you.

Thanks for sharing the link, Josh; I enjoyed reading the post (and I agree with Nathan that it definitely seems worth sharing here in its entirety). I think it's a great example of good-faith criticism and a lack of deference, and it's very clearly written :)


As for areas of slight disagreement, where I would welcome your perspective and/or corrections:

1: Future people count, but less than present people.

The first time I read the post, I thought you were actually saying that future people just matter less intrinsically, which I think is about as plausible as finding people living in a certain geographic region more morally worthwhile than others simply because of where they live. You point out in your 4th footnote that this is not what you believe, and I think that footnote should probably be part of the main post.

As for differing obligations, I do agree that from an individual perspective we have special moral obligations to those close to us, but I don't think this extends very far beyond that circle, and other differences have to be justified by people in the present being easier to causally affect (your counterargument 3) rather than by applying some discount rate to obligations in future years. Maybe these amount to the same thing: that closeness of causal connection is what you mean by the 'connectedness' of your network. Otherwise you might open yourself up to some repugnant implications.[1]

2: There might not be that many future people.

I also think that the Thorstad paper you link here is an important one for longtermists to deal with (the blog posts you link to are also really good). As you point out, though, it does have weird, counter-intuitive conclusions. For example, in Thorstad's simple model, the less likely you think x-risk is, the higher the value you get from reducing it even further! As for your second point about how much value there is in the future, I think for the sake of clarity this probably deserves its own sub-section?[2] But in principle, even if the future is not a source of great value, as long as it is large enough this effect should cancel out on a totalist utilitarian axiology.[3]
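To make that counter-intuitive result concrete, here is a rough sketch of the kind of simple model at issue (my own notation, not lifted directly from the paper): suppose each century delivers a constant value $v$ and faces a constant extinction risk $r$. The expected value of the future is then the geometric series

$$\mathbb{E}[V] = \sum_{t=1}^{\infty} v\,(1-r)^{t} = v\,\frac{1-r}{r},$$

and the gain from reducing the per-century risk from $r$ to $r' < r$ is

$$\Delta V = v\left(\frac{1-r'}{r'} - \frac{1-r}{r}\right) = v\,\frac{r - r'}{r\,r'},$$

which, for a fixed reduction, grows as the baseline risk $r$ shrinks: the safer you already believe the world to be, the more valuable further risk reduction looks.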

3: We might not be able to help future people much.

So I think this is actually where the rubber really hits the road in terms of objections to longtermism (along with your later point about trade-offs). I think one of the weaker parts of What We Owe the Future is its account of the concrete ways longtermism differs from other philosophies of doing good in terms of being action-guiding. As you point out, in practice x-risk dominates here because, by definition, its effects are permanent and so will definitely affect the future.

I do think Thorstad overstates the strength of the 'regression to the inscrutable' claim. We have to make choices and trade-offs even in the absence of clear and dispositive empirical evidence, though I do think that EA should act with humility in the face of Deep Uncertainty, and that actions should be more exploratory than committed.[4]

4: Remember the trade-offs

I don't think I disagree with you much here in terms of thinking trade-offs are important, and the counterarguments you raise mean that the value longtermists place on the future should be interrogated more. I do want to take slight issue with:

Longtermism commits us to using these resources to help far-off humans, necessarily at the cost of those alive today.

This reads like a knock-down case against longtermism. But such objections can be raised against any moral system, since they all commit us to some moral trade-off. The moral-network framework you introduce in section 1 commits you to using resources to help those close to you, necessarily at the expense of those further away from you in your network.

But even as stated, I also think the claim doesn't follow. I think that, to be a longtermist, you only really need to accept the first of MacAskill's premises. Someone who thinks that the future has great moral value, but who doesn't see a clear causal way to ensure or improve that value, would be justified in being a longtermist without being committed to ignoring the plight of those in the present for the sake of a possible future.


Overall, I agreed with a fair amount of what you wrote. As I wrote this comment up, I came to think that the best frame for longtermism actually isn't as a brand-new moral theory, but as a 'default' assumption. In your moral network, it'd be the default setting where all weights and connections are set equally and impartially across space and time. One could disagree with the particular weights, but you could also disagree about what we can causally affect. I think, under this kind of framing, you have a lot more in common with 'longtermism' than it might seem at the moment, but I'd welcome your thoughts.

  1. Though this wouldn't necessarily be disqualifying for your theory, as it seems to affect every theory of population ethics!

  2. It is an interesting point though, and I agree it probably needs more concrete work from longtermists (or more publicity for the works where that case is already made!)

  3. For the record, I am not a (naïve) totalist utilitarian.

  4. There is a separate object-level discussion about whether AI/bio-risks actually are inscrutable, but I don't particularly want to get into that debate here!

Thank you for the feedback on both the arguments and the writing (something I am aiming to improve through these posts). Sorry for being slow to respond; it's been a busy few weeks!

I don't think there's actually any disagreement here, except on this point:

But even as stated, I also think the claim doesn't follow. I think that, to be a longtermist, you only really need to accept the first of MacAskill's premises. Someone who thinks that the future has great moral value, but who doesn't see a clear causal way to ensure or improve that value, would be justified in being a longtermist without being committed to ignoring the plight of those in the present for the sake of a possible future.

I disagree, at least taking MacAskill's definition of longtermism as "the view that we should be doing much more to protect future generations". This is not just a moral conclusion but also a conclusion about how we should use marginal resources. If you do not think there is a causal way to affect future people, I think you must reject that conclusion.

However, I think sometimes longtermism is used to mean "we should value future people roughly similarly to how we value current people". Under this definition, I agree with you.
