
TL;DR

  • Longtermism does not hold that a far future containing many people is a certain future.
  • The moral status of individuals is not a proxy for the moral status of a species or the moral value of producing more individuals.
  • It is a mistake to create a theory of moral value that is contingent on actions that repeat human failures to manage our population size effectively.
  • Longtermists could be more cautious and deliberate in discussing future population sizes and goals relevant to them.
  • Longtermism with a goal of a far-off future with fewer humans is morally preferable, all other things equal.
  • Longtermists should self-efface, on the assumption that if they believe that their actions will increase their control over future humans, then they are making errors.
  • One such error is failure to respect the moral status of the humans they control.
  • Longtermism does not provide moral clarity about preventing causes of harm to future humans as opposed to achieving causes of help to future humans.

Introduction

Drawing from a few sources about longtermism, including MacAskill's own summary of longtermism and Ezra Klein's recent podcast, though I browsed several more, I want to offer my objections to longtermism as I understand it. Hopefully any interested readers will inform me if the concerns I raise are addressed elsewhere.

The fundamental beliefs of longtermism

So, taking the fundamental beliefs of longtermism to be:

  1. future people have moral status
  2. there can be a lot of future people
  3. we can make their lives better

let's turn them into questions:

  1. Do future people have moral status?
  2. Can there be a lot of future people?
  3. Can we make their lives better?

and I will provide my answers:

  1. Yes, if you believe that a future person will exist, then that person has moral status.
  2. Sure, it's plausible that the future will contain lots of future people.
  3. Yes, it's plausible that people now can make the lives of future people better.

My concerns about the fundamental beliefs of longtermism

My concerns about the beliefs include:

  1. Longtermists want to protect against human extinction. That means that longtermism does not hold that future people will exist. Rather, it means that longtermism holds that future people could exist, perhaps contingent on longtermist actions. 

    Depending on what beliefs longtermists hold and what conditions obtain, longtermists could maximize the likelihood of large future populations by working against the well-being of present populations. In other words, longtermist moral calculations weighing the moral status of future humans against that of present humans could favor actions that bring future humans into existence in ways that work against the welfare or longevity of present humans (a toy sketch of this kind of calculation follows this list). While such choices might seem appropriate for other reasons, morality shouldn't be one of them.
  2. Longtermists do not guarantee that the far future will contain lots of people, but only that it could. It is obvious that any plan that sacrifices the well-being of the present human population to serve a presumed larger future human population will not be morally justifiable, per se, even assuming all sorts of factored-in discount rates. While individuals have moral status, that moral status is not a proxy for the moral status of a species or the moral value of producing more of a species.

    I wonder if the longtermist assumption of a far future containing lots of people with moral status is intended to slip in a theory of value supporting the idea that, all other things equal, a future containing more people is morally preferable to one that contains fewer people. I like humans and the human species and other species but I oppose any theory of moral value that proposes that conception of individuals or ensuring species continuity is, per se, a moral act. My reason is that the potential existence of a being does not endow that being with moral status in advance of its conception.
  3. Longtermists do not guarantee the welfare of future people, but only point to the value of contributing to their welfare. Now that I know that longtermists consider the size and welfare of the future human population to be contingent to some extent on longtermist actions, I'm much more interested in longtermism that reduces the number of far-future humans added to my moral calculations. 

    Longtermism should bring about a smaller far-future population of beings with moral status relevant to my altruism toward humans. That preference might seem pessimistic or selfish or anthropocentric, but we are currently experiencing resource limits on planet Earth that are extinguishing other species at rates equivalent to a great extinction, an event that has only occurred five times previously in the planet's 4+ billion-year history. Homo sapiens face an extinction threat from our own mistakes in managing our resources and population size. 

    It is a mistake to create a theory of moral value that is contingent on actions that repeat human failures to manage our population size. Of course, there are workarounds for that concern. Longtermists could be more cautious and deliberate in discussing future population sizes and moral goals relevant to future humans.
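
To make the "moral calculations" mentioned in concern 1 concrete, here is a minimal toy sketch of the kind of expected-value comparison at issue. Every number in it (the population sizes, the probability that the large future population ever exists, the welfare weights) is invented purely for illustration; none of it comes from any longtermist source, and no longtermist is committed to these figures.

```python
# Toy expected-value comparison, illustrating concern 1 above.
# All numbers are hypothetical placeholders, not figures from any longtermist source.

present_people = 8e9            # people alive today (approximate)
possible_future_people = 1e15   # a hypothetical far-future population
p_future_exists = 1e-4          # assumed probability that this future is ever realized
welfare_gain_per_person = 1.0   # arbitrary welfare units per person helped

# Expected welfare from an action that helps everyone alive now.
value_present = present_people * welfare_gain_per_person

# Expected welfare from an action aimed at the hypothetical future population,
# discounted only by the probability that the population ever exists.
value_future = possible_future_people * p_future_exists * welfare_gain_per_person

print(f"present: {value_present:.2e}, future: {value_future:.2e}")
# Output: present: 8.00e+09, future: 1.00e+11
# Even with a tiny probability, the future term dominates, which is the kind of
# weighting the post objects to when the future population's existence is optional
# rather than assumed.
```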

Longtermists should self-efface

But I have a final concern, and a hope that longtermism will self-efface to address it. In particular, I hope that longtermists will presume that creation of some utility in the experience of a being with moral status, when accomplished through control of that being in context, will contain errors of one or more of the following kinds:

  1. errors in longtermist accounts of the experience caused for the being.
  2. errors in longtermist beliefs about the control achieved over the being.
  3. errors in longtermist recognition of the moral status of the being.

Total control, as a goal, will suffer at least those three types of error; the least obvious is the last.

You cannot totally control someone that you believe has moral status

Errors of recognition of moral status become clear in thought experiments about ongoing total control over another person's behavior and experience. One of the implications of that control is the subtraction of any degree of autonomy and independent consciousness from that person. A person subject to control to such a degree that the person has no autonomy and no independent consciousness is also a person without intrinsic value to their controller. A person without intrinsic value is also a person without moral status. A person under total control and without moral status is supremely vulnerable to the instrumental whims of their human controller.

While I don't doubt the good intentions behind longtermism, there is a practical reality to exercising influence over others, to do with the degree of influence required over time. In practice, as the future behavior of other humans becomes less certain (regardless of why), a plausible longtermist response will be to seek increased control over those others and their circumstances. The consequence is one of the errors listed earlier. Maintaining control despite those errors has knock-on effects that either increase uncertainty or require additional control, generating more errors.

To accord with reality, I advocate that longtermists self-efface about their attempts to control humans. Longtermists should self-efface by acknowledging that if they believe that their actions will increase their control over future people, then they are making some sort of error already. 

I doubt whether longtermists will self-efface in this way, but hopefully they will acknowledge that the life of a person that will never be conceived has no moral status. That acknowledgement will let them avoid some obvious errors in their moral calculations.

Seeking moral clarity about what longtermists cause

Keep in mind that the only requirement for you to control a future person is for you to cause something for that person's experience or context.

For example, consider a woman walking along a path in a forested park. Many years ago, some dude threw a bottle onto that path, shattering it and leaving shards along the path. The woman is wearing sandals and, not looking down, walks through some shards that cut her feet.

Now let's rewind that story. A longtermist is walking along the path, carrying an empty glass water bottle. Out of consideration for possible future people on the path, the longtermist puts the empty bottle in his pack rather than throwing it on the path. Years later, a woman walking in sandals down the path finishes her walk without injury on a path free from shattered glass.

Here are some questions about that thought experiment:

* Did the longtermist cause anything in that woman's experience of her walk? 
* How about the dude who threw the bottle down on the path? 
* Should someone have caused anything in advance for some other person who went off the path and trampled in flip-flops through some brambles and poison ivy? 
* Should someone plan to do something for all the nonexistent people who walk the park paths after the park is turned into a nature preserve closed to tourism? What about the people that will actually still walk the paths? 
* Do safe walking paths in that park still have value if no one walks them?

Conclusion

I don't think that continuation of the species is a moral activity. It is in fact a selfish one if it is undertaken at all. However, our grief over our losses or desire for our gains, current or pending, does not grant us dominion, by right or by fitness, over humans who live now or who might live in the future.

 When I take a longtermist view, it is morally preferable to me that fewer humans exist in the far future than are alive now, perhaps a few million in four or five hundred years, accomplished through long-term family planning, and maintained for millennia afterward. My preferences reflect my interests and altruistic designs for everyone else now living, including those in the womb. 

My belief is that while the longtermist project to ensure a valuable far future containing large populations of humans living valuable lives has no moral justification, a similar project built on selfish preferences is no less feasible or appropriate. I doubt the project's feasibility either way. I only support it to the extent that it adds value in my selfish calculations of my circumstances and life conditions.

Having children is either a selfish or an altruistic act, from the perspective of parents and others. The decisions of prospective parents are not mine to control, but I wish them well.

Comments

I appreciate the fact that you took the time to reflect on what you've heard about longtermism. That said, I'll highlight areas where I strongly disagree.

It is obvious that any plan that sacrifices the well-being of the present human population to serve a presumed larger future human population will not be morally justifiable

This is not at all obvious to me. We make justifiable sacrifices to well-being all the time. Consider a hospital that decides not to expend all of its resources on its current patients because it knows there will be future patients in a month, or a year, or a decade in the future. This seems entirely sensible. I see no difference in principle between this and a more longtermist attitude.

I wonder if the longtermist assumption of a far future containing lots of people with moral status is intended to slip in a theory of value supporting the idea that, all other things equal, a future containing more people is morally preferable to one that contains fewer people.

I don't think longtermism requires adopting the assumption that more people is preferable to fewer people. One memorable thought experiment from MacAskill's book that you reference involves dropping and breaking a glass bottle in a forest. It would be good to pick up the broken glass if you know someone in the future might step on it. This suggests that helping future people is good, and this judgment need not commit us to total utilitarianism or any other broad ethical theory.

I hope that longtermists will presume that creation of some utility in the experience of a being with moral status, when accomplished through control of that being in context, will contain errors

Endorsing longtermism doesn't commit a person to favoring any kind of totalitarian control. In fact, you might think that a future of totalitarian control is bad, which might mean that it would be important to take actions to prevent that for the sake of the longterm future... Moreover, there are longterm causes that don't involve "controlling" people in any standard sense at all (e.g. advocating for reductions in carbon emissions, direct work on AI alignment, creating institutions that can more easily detect the spread of pathogens in a population).

Thanks for the response.

My writing was a bit sloppy, so I need to add the context from the previous sentence of my original post. The full quote is:

 "Longtermists do not guarantee that the far-future will contain lots of people, but only that it could. It is obvious that any plan that sacrifices the well-being of the present human population to serve a presumed larger future human population will not be morally justifiable..."

and what I meant was that longtermists would like the future to contain lots of people. That seems evident to me in their hopeful discussions of our potential as a space-colonizing civilization numbering in trillions, or our transcendence into virtual worlds populated by trillions of conscious AI, etc. 

If the size or presence of that future population is a choice, then sacrifice of the already large present human population for the sake of a future population of optional size is not morally justifiable. 

I am referring to situations where the 8 billion people on the planet are deemed morally less important than a larger future population whose existence is contingent on plans that ignore the well-being of the present population to some extent.  For example, plans that allocate resources specifically to a (relatively) small subset of the global population in order to ensure against the collapse of civilization for that subset. 

I believe that our population size raises both our suffering and extinction risks unless we endorse and pursue lifestyle efficiencies specifically to reduce those risks (for example, by changing our food production practices). If we do, then I believe that we are doing what is morally right in protecting our entire global population. 

The alternative appears to depend on the presumption that so long as we protect our species, each subset of our civilization can hoard resources and fend for itself.  In our globally connected economies, that approach will create unintentionally small, and vulnerable, subsets. The approach is immoral and impractical. I don't want to associate that approach with longtermism, but it does appear to be a plausible development when longtermism is applied in practice over the next several decades. If so, then the approach should not have a moral veneer. It is self-serving and will likely fail.

I don't think I'm following your reasoning.

It's true that longtermists expect for there to be many people in the future, but as far as I'm aware, no one has suggested taking any actions to make that number as large as possible. And no one has suggested we sacrifice the current 8 billion people alive today for some potential future benefit.

The main recommendations are to make sure that we survive the next century and that values aren't terrible in the future. This doesn't at all entail that people should hoard resources and fend for themselves. 

So, if longtermists believe that the far future will contain many people, then they do not feel the need to work against our extinction, correct? 

When I say believe that the future will contain a lot of people, that is in fact what I mean. If longtermists truly believe that the future will contain a lot of people, then they consider that future inevitable.

Is that what you think longtermists believe? If you understand the question, I think that your answer will be no, that longtermists do not in fact believe that the future will contain a lot of people, or else they would not include human extinction as a plausible scenario to take actions to avoid.

It is an implication of longtermist thought that a future of extremely large numbers of people, when adopted as a goal, provides moral weight to longtermist actions taken toward that goal. Those longtermist actions can be in contradiction to the well-being of present people, but on balance be moral if:

you consider a hypothetical future person to have moral status. Such hypothetical future people are people who are not yet conceived, that is, have not yet been a fetus.

A concern to me is whether longtermists believe that hypothetical future people have moral status. If they do, then any longtermist action can potentially be justified in terms of the well-being of those presumed people. Furthermore, if you do in fact believe that those people will exist, (not could exist, will exist), then it makes complete sense to give those people moral status.

It's when the existence of those presumed people is a choice, or a possibility, or a desired scenario, but not the only plausible future, that the decision to give those people moral status in moral decision-making is  in error, and self-serving besides. 

I will offer without evidence, because I consider it empirically obvious, that for selfish reasons people will hoard resources and allow outsiders to fend for themselves, particularly in a situation in which resources are constrained and sharing with outsiders is only an option, not a requirement. 

How does a longtermist approach such a situation? If their approach is to propose a far future in which large numbers of people exist, but only if we make sacrifices among our present population (sacrifices among the global population who are outsiders to the longtermists), then their justifications should not be framed as moral. 

It is not moral to sacrifice the well-being of definite, currently existent people, for the possible well-being of hypothetical future people, unless:

  • longtermists consider the survival of the human species to have moral weight
  • longtermists consider the act of conception to have moral weight

For me the survival of the human species, distinct from the well-being of existent members of the human species (including fetuses), has no moral status or weight. If it did, then we could sacrifice the well-being of members of the human species for the sake of survival of the human species.

For me the action of conception (procreation), fun though it can be, has no moral weight. If it did, then acts of procreation would be moral acts and add some moral weight (or  altruistic value) to bringing more people into existence.

I'm not following the reasoning for most of your claims, so I'll just address the main claims I understand and disagree with.

If longtermists truly believe that the future will contain a lot of people, then they consider that future inevitable.

This doesn't follow. There's a difference between saying "X will probably happen" and "X will inevitably happen."

Compare: Joe will probably get into a car accident in the next 10 years, so he should buy car insurance.

This is analogous to the longtermist position: There will probably be events that test the resilience of humanity in the next 100 years, so we should take actions to prepare for them.

For me the action of conception (procreation), fun though it can be, has no moral weight.

Although some longtermists think that it's good to bring additional people into the world, this is not something that longtermists need to commit to. It's possible to say, "Given that many billions of people will (probably) exist in the future, it's important to make sure they don't live in poverty/under a totalitarian regime/at risk of deadly pandemics." In other words, we don't have an obligation to create more people, but we do have an obligation to ensure the wellbeing of the people who live in the future.

Moreover, there are actions we can take that would not require any sacrifice to present people's wellbeing (e.g. pandemic prevention, reducing carbon emissions, etc.). In fact, these would benefit both present and future generations.

For a defense of why it's good to make happy people, I'd just refer to the chapter in MacAskill's book.

It is not contradictory for you or for longtermists to work against the extinction of the human race while believing that the human race will continue, provided you think that those actions to prevent extinction are a cause of the continuation of the human race and that those actions will be performed (not merely could be performed). A separate question is whether those actions should be performed.

I believe that longtermists believe that the future should contain many billions of people in a few hundred years, and that those hypothetical future people have moral status to longtermists. But why do longtermists think that the future should contain many billions of people and that it is our task to make those people's lives happier?

I think the normal response is "But it is good to continue the human race. I mean, our survival is good, the survival of the species is good, procreating is good, we're good to have in the universe. Taking action toward saving our species is good in the face of uncertainty even if the actions could fail; maybe some people would have to sacrifice so that our species continues, but our species is worth it. Eventually there can be trillions of us, and more of us is better provided humans are all doing well then," but those are not my morals.

I want to be clear: we current humans could all live long happy lives, existing children could grow up, and also live long happy lives, existing fetuses could mature to term and be born, and live to a ripe old human age, long and happily. So long as no one had any more children, because we all used contraception, our species would die out. I am morally ok with that scenario. I see no moral contradiction in it. If you do, let me know.

What is worrisome to me is that the above scenario, if it occurred in the context of hypothetical future people having moral status, would include the implication that those people who chose to live well but die childless, were all immoral. I worry that longtermists would claim that those childless humans ended the human species and prevented a huge number of people from coming into existence, people who have moral status. I don't believe those childless humans were immoral, but my belief is that longtermists do, in some contexts.

There is the thought experiment about making people happy vs. making happy people. Well, first off, I am not morally or personally neutral toward the making of future people. And why would someone concerned about improving the welfare of existing people consider a future of more people a neutral possibility? It's fairly obvious that anyone interested in the happiness of the world's population would prefer that the population were smaller because that population would be easier to help.

In the far future, a small but steady population of a few million is one that altruists within that far-future population would find reasonable. That's my belief right now, but I haven't explored the numbers in enough detail.

In practice, many scenarios of altruism do not satisfy standards of selfish interest. Serving an ever-growing population is one of those scenarios. You don't have to like or prefer your moral standards or their requirements. A bigger population is therefore a scary thing to altruists who give each person moral status because they can't decide to develop moral uncertainty or shift their moral standards whenever that's more convenient. 

I still have to read MacAskill's book though, and will carefully read the chapter you referenced.

But why do longtermists think that the future should contain many billions of people and that it is our task to make those people's lives happier?

Different longtermists will have different answers to this.  For example, many people think they have an obligation to make sure their grandchildren's lives go well. It's a small step from there to say that other people in the future besides one's grandchildren are worth helping.

Or consider someone who buries a bomb in a park and sets the timer to go off in 200 years. It seems like that's wrong even though no one currently alive will be affected by that bomb. If you accept that, you might also accept that there are good things we can do to help future generations who don't yet exist.

What is worrisome to me is that the above scenario, if it occurred in the context of hypothetical future people having moral status, would include the implication that those people who chose to live well but die childless, were all immoral.

No, this doesn't follow. The mere fact that it's good to do X doesn't entail that anyone who doesn't do X is immoral. Example: I think it's good to grow crops to feed other people. But I don't think everyone is morally obligated to be a farmer.

And again, longtermists are not committed to the claim that it's good or necessary to create future people. It's possible to be a longtermist and just say that it's good to help the people who will be alive in the future, for example, by stopping the bomb that was placed in the park.

It's fairly obvious that anyone interested in the happiness of the world's population would prefer that the population were smaller because that population would be easier to help. 

It seems like an important crux for you is that you think the world is overpopulated. I disagree.  I think there are plenty of resources on Earth to support many billions more people, and the world would be better with a larger population. A larger population means more people to be friends with, more potential romantic partners, more taxpayers, more people generating new ideas, more people helping others.

OK, thanks for the response.

Yes, well, perhaps it's true that longtermists expect that the future will contain lots of future people, many billions or trillions.

I do not believe:

  •  that such a future is a good or moral outcome. 
  •  that such a future is a certain outcome. 

I'm still wondering:

  • whether you believe that the future will contain future people.
  • whether people whom you believe to be hypothetical or possible future people have moral status.

I think I've said this a few times already, but the implication of a possible future person having moral status is that the person has moral status comparable to people who are actually alive and people who will definitely be alive. Do you believe that a possible future person has moral status?

Yes, I do expect the future to contain future people. And I think it's important to make sure their lives go well.

Another crux seems to be that you think helping future people will involve some kind of radical sacrifice of people currently alive. This also doesn't follow.

Consider: People who are currently alive in Asia have moral status. People who are currently alive in Africa have moral status. It doesn't follow that there's any realistic scenario where we should sacrifice all the people in Asia for the sake of Africans or vice versa.

Likewise, there are actions we can take to help future generations without the kind of dramatic sacrifice of the present that you're envisioning. 

Yes, I do expect the future to contain future people. And I think it's important to make sure their lives go well.

OK then! If you believe that the future will contain future people, then I have no argument with you giving those future people moral status equivalent to those alive today. I disagree with the certainty you express, I'm not so sure, but that's a separate discussion, maybe for another time.

I do appreciate what you've offered here, and I applaud your optimistic certainty. That is what I call belief in a future. 

I assume then that you feel assured that whatever steps you take to prevent human extinction are also steps that you feel certain will work, am I right?
EDIT: Or you feel assured that one of the following holds:

  • whatever steps someone takes will prevent human extinction
  • humanity will survive catastrophic events, no matter the events
  • existential risks will not actually cause human extinction, maybe because they are not as threatening as some think

I disagree with the certainty you express, I'm not so sure, but that's a separate discussion, maybe for another time.

I haven't expressed certainty. It's possible to expect X to happen without being certain X will happen. Example: I expect for there to be another pandemic in the next century, but I'm not certain about it.

I assume then that you feel assured that whatever steps you take to prevent human extinction are also steps that you feel certain will work, am I right?

No, this is incorrect for the same reason as above.

The whole point of working on existential risk reduction is to decrease the probability of humanity's extinction. If there were already a 0% chance of humanity dying out, then there would be no point in that work.

OK, so you aren't so sure that lots of humans will live in the future, but those possible humans still have moral status, is that right?

I think they will have moral status once they exist, and that's enough to justify acting for the sake of their welfare.

Do you believe that:

  1. possible future people have moral status once they exist
  2. it's enough that future people with moral status are possible to justify acting on their behalf 

I believe point 1. 

If you believe point 2, is that because you believe that possible future people have moral status now?

No, it's because future moral status also matters.

Huh. "future moral status" Is that comparable to present moral status in any way?

Longtermists think we should help those who do (or will) have moral status.

Oh, I agree with that, but is "future moral status" comparable to or the same as "present moral status"?

If you agree we should help those who will have moral status, that's it. That's one of the main pillars of longtermism. Whether or not present and future moral status are "comparable" in some sense is beside the point. The important point of comparison is whether they both deserve to be helped, and they do.

I agree that we should help those who have moral status now, whether those people exist already or just will exist someday. People who will exist someday are people who will exist in our beliefs about the pathway into the future that we are on.

There is a set of hypothetical future people on pathways into the future that we are not on. Those pathways are of two types:

  • pathways that we are too late to start down (impossible future people)
  • pathways that we could still start down (possible future people or plausible future people)

If you contextualize something with respect to a past time point, then it is trivial to make it impossible. For example, "The child I had when I was 30 is an impossible future person." With that statement, I describe an impossible person because I contextualized its birth as occurring when I was 30. But I didn't have a child when I was 30, and I am almost two decades older than 30. Therefore, that hypothetical future person is impossible.

Then there's the other kind of hypothetical future person, for example, a person that I could still father. My question to you is whether that person should have moral status to me now, even though I don't believe that the future will be welcoming and beneficial for a child of mine. 

If you believe that a hypothetical future child does have moral status now, then you believe that I am behaving immorally by denying it opportunities for life because in your belief, the future is positive and my kid's life will be a good one, if I have the kid. I don't like to be seen as immoral in the estimation of others who use flawed reasoning.

The flaw in your reasoning is the claim that the hypothetical future child that I won't have has moral status and that I should act on its behalf even though I won't conceive it. You could be right that the future is positive. You are wrong that the hypothetical future child has any moral status by virtue of its future existence when you agree that the child might never exist.

If I had plans to have a child, then that future child would immediately take on a moral status, contingent on those plans, and my beliefs in my influence over the future. However, I have no such plans. And, in fact, not much in the way of beliefs about my influence over the future.

I think you keep misinterpreting me, even when I make things explicit. For example, the mere fact that X is good doesn’t entail that people are immoral for not doing X.

Maybe it would be more productive to address arguments step by step.

Do you think it would be bad to hide a bomb in a populated area and set it to go off in 200 years?
