

A few weeks ago, I was speaking with a student (call him Josh) who was skeptical of the importance of existential risk and the validity of longtermism. He said something like, "What matters to me is kids going hungry today, not hypothetical future people." I found this to be a moving, sincere objection, so I thought about where it seemed to go wrong and offered Josh a version of Case 1 below, and he seemed pretty convinced.[1] 

Josh's skepticism echoes the dismissal of "merely possible people" expressed by critics who hold presentist person-affecting views — that is, they believe that "an act can only be bad [or good] if it is bad [or good] for someone," where "someone" is a person who exists at the time of the act. The current non-existence of future people is a common objection to taking their well-being into moral consideration, and it would be good for longtermists to have cases ready to go that illustrate the weaknesses of this view.

I developed a couple more cases in Twitter threads and figured I'd combine them into a linkable forum post.

(Edit: since most of the comments have raised this point, yes, showing that improving the lives of future people seems morally good does not imply that causing more future people to exist is morally good. My intention with these cases is not to create an airtight case for total utilitarianism or to argue against the strongest steelman of person-affecting views. Instead, I want to provide some examples that drive intuitions against an objection that I, and presumably other EAs, commonly encounter "in the wild" — namely, that the interests of future people are counterintuitive or invalid interests on which to focus your actions. I am not super familiar with more sophisticated person-affecting views. I'll briefly say that I find Joe Carlsmith's arguments, along with the transitivity arguments linked in the conclusion, pretty persuasive that we should think creating happy lives is good, and I make a few more arguments in the comments.)

Case 1: The Reformer

You work in a department of education. You spend a full year working on a report on a new kindergarten curriculum that makes kids happier and helps them learn better. It takes a few years for the report to circulate and get approved, and a few more for teachers to learn the new curriculum.

By the time it's being taught, 6 years have passed since your work. I think your work, 6 years ago, was morally significant because of the happier, better-educated students now. But these kindergarteners are (mostly) 5 years old. They didn't even exist at the time of your work.

You remember a conversation you had, while working on the curriculum, with your friend who thinks that "hypothetical future people can't have interests" (and who is familiar with the turnaround times of education reform). The friend shook her head. "I don't know why you're working on this kindergarten curriculum for future people," she said. "You could be helping real people who are alive today. Why not switch to working on a second-grade curriculum?"

Indeed, if only already-existing people matter, you'd be in the weird position where your work would've been morally valuable if you'd written a second-grade curriculum, but your kindergarten curriculum is morally worthless. Why should the beneficiaries' birth year affect this evaluation?

Case 2: The Climate Resiliency Project

After finishing architecture school, you choose to work at a firm that designs climate resiliency projects. The government of Bangladesh has contracted the firm to design sea walls, on the condition that the work be expedited. You could have worked at a commercial firm for more pay and shorter hours, but you chose the climate-focused firm instead.

The team works for a year on the sea walls project. The Bangladeshi government builds the walls over the next 20 years. In 2042, a typhoon strikes, and the walls save thousands of lives.

Now, you consider how your choice to work at the climate resiliency firm compares to its alternatives. You think your work on the sea walls accounted for, say, 1% of the impact, saving dozens of lives. But maybe you could have donated a big share of your larger salary to the Against Malaria Foundation and saved dozens of lives that way instead.

If "nonexistent future people" don't matter, we are again in the absurd position of asking, "Well, how many of the lives saved were over the age of 20?" After all, those under 20 didn't exist yet, so you should not have taken their non-existent interests into consideration.

As the decades progress, the sea walls save more lives, as the effects of climate change get worse. But the "future people don't matter" view holds that these effects should matter less in your 2022 decision-making, because more and more of the beneficiaries don't yet exist to "have interests."

Case 3: The Exonerated Hiker

William MacAskill writes a crisp case in the New York Times: "Suppose that I drop a glass bottle while hiking. If I don’t clean it up, a child might cut herself on the shards. Does it matter when the child will cut herself — a week, or a decade, or a century from now? No. Harm is harm, whenever it occurs."

I propose a modification that shows the implausibility of the alternative view.

You drop the bottle and don't clean it up. Ten years later, you return to the same spot and remember the glass bottle. The shards are still there, and, to your horror, before your eyes, a child does cut herself on the shards.

You feel a pang of guilt, realizing that your lack of care 10 years ago was morally reprehensible. But then, you remember the totally plausible moral theory that hypothetical future people don't matter, and shout out: "How old are you?"

The child looks up, confused. "I'm eight."

"Whew," you say. Off the hook! While it's a shame that a child was injured, your decision not to clean up 10 years ago turns out not to have had any moral significance.

In your moment of relief, you accidentally drop another glass bottle. Since it turned out okay last time, you decide not to clean up this one either.


Looking beyond individual moral actions, these implications might play out in analogous policy or philanthropic decisions. Governments or grantmakers might have to decide whether it's worth undertaking a study or project that would help kindergarteners in 2028, hikers in 2032, or the residents of floodplains in 2042. But it's hard for the cost-benefit analysis to work out if you ignore the interests of anyone who doesn't yet exist. Taking the presentist person-affecting view seriously would lead to dramatically under-investing not only in the long-term future and the reduction of existential risk, but also in very familiar medium-term policies with fairly typical lead times.

Other flavors of person-affecting views might not have this problem, though they encounter transitivity problems. But critics who echo the refrain of "hypothetical non-existent future 'people'" should be reminded that this stance justifies policy decisions that they presumably disagree with.

  1. ^

    He remained skeptical that existential risks, especially from AI, are as high as I claimed, which seems reasonable, since he hadn't been given very strong evidence. He agreed that if he thought AI was as high of a risk as I did, he would change his mind, so this whole episode is further evidence that seemingly normative disagreements are often actually empirical disagreements.


I found this discussion, and these cases, objectionably uncharitable. It doesn't offer the strongest version of person-affecting views, explain why someone might believe them, then offer the objections and how the advocate of the view might reply. It simply starts by assuming a position is true and then proposes some quick ways to persuade others to agree with it.

An equivalent framing in a different moral debate would be saying something like "people don't realise utilitarianism is stupid. If they don't realise, just point out that utilitarians would kill someone and distribute their organs if they thought it would save more lives". I don't think the forum is the place for such one-sidedness.

I appreciate the intention of keeping argumentative standards on the forum high, but I think this misses the mark. (Edit: I want this comment's tone to come off less as "your criticism is wrong" and more like "you're probably right that this isn't great philosophy; I'm just trying to do a different thing.")

I don't claim to be presenting the strongest case for person-affecting views, and I acknowledge in the post that non-presentist person-affecting views don't have these problems. As I wrote, I have repeatedly encountered these views "in the wild" and am presenting this as a handbook for pumping the relevant intuitions, not as a philosophical treatise that shows the truth of the total view. The point of the post is to help people share their intuitions with skeptics, not to persuade moral philosophers.

In general, I'm confused by the standard of arguing against the strongest possible version of a view rather than the view people actually have and express. If someone said "I'm going to buy Home Depot because my horoscope said I will find my greatest treasure in the home," my response wouldn't be "I'll ignore that and argue against the strongest possible case for buying Home Depot stock"; it would be to argue that astrology is not a good way of making investment decisions. I'm also not sure where you're seeing the post "assuming a position is true." My methodology here is to present a case and see what conclusions we'd have to draw if the position weren't true. Utilitarians do in fact have to explain either why the organ harvesting is actually fine or why utilitarianism doesn't actually justify it, so it seems fine to ask those who hold presentist person-affecting views to either bite the bullet or explain why I'm wrong about the implication.

Finally, for what it's worth, I did initially include a response: Émile Torres's response to a version of Case 2. I decided including Torres's response — which was literally to "shrug" because if utilitarians get to not care about the repugnant conclusion then they get to ignore these cases — would not have enriched the post and indeed might have seemed combative and uncharitable towards the view. (This response, and the subsequent discussion that argued that "western ethics is fundamentally flawed," leads me to think the post wouldn't benefit much by trying to steelman the opposition. Maybe western ethics is fundamentally flawed, but I'm not trying to wade into that debate in this post.)

I agree that those examples are compelling. I'm not sure if presentist person-affecting views are a particularly common alternative to longtermism. I guess it's possible that a bunch of people consider themselves to have presentist views but they haven't worked out the details. (Or maybe some call their views "presentist" but they'd have arguments on why their view says to do the thing you want them to do in the examples you give.) And you might say "this reflects poorly on proponents of person-affecting views; it seems like they tend to be less philosophically sophisticated." I wouldn't completely agree with that conclusion... Sure, consistency seems important. But I think the person-affecting intuition is very strong for some people, and the way ethics works, you cannot get positions off the ground without some fundamental assumptions ("axioms"). If some people's person-affecting intuitions are so strong that the thought of turning an already existing small paradise into an instantiation of the repugnant conclusion seems completely unacceptable, that can function as (one of) someone's moral axiom(s). And so their views may not be totally developed, but that will still seem better to them (justifiably so) than adopting totalism, which – to them – would violate what feels like an axiom.

Other flavors of person-affecting views might not have this problem, though they encounter transitivity problems.

I recently published a post on why these "problems" don't seem like a big deal from a particular vantage point. (Note that the view in question is still compatible with the not-strong formulations of longtermism in MacAskill's definition, but for subtly different, more indirect reasons.) It's hard to summarize the point because the post presents a different reasoning framework ("population ethics without an objective axiology"). But here's an attempt at a summary (and some further relevant context) on why person-affecting views seem quite compelling to me within the particular framework "population ethics without an objective axiology:"

Before explaining what’s different about my proposal, I’ll describe what I understand to be the standard approach it seeks to replace, which I call “axiology-focused.”

The axiology-focused approach goes as follows. First, there’s the search for axiology, a theory of (intrinsic) value. (E.g., the axiology may state that good experiences are what’s valuable.) Then, there’s further discussion on whether ethics contains other independent parts or whether everything derives from that axiology. For instance, a consequentialist may frame their disagreement with deontology as follows. “Consequentialism is the view that making the world a better place is all that matters, while deontologists think that other things (e.g., rights, duties) matter more.” Similarly, someone could frame population-ethical disagreements as follows. “Some philosophers think that all that matters is more value in the world and less disvalue (“totalism”). Others hold that further considerations also matter – for instance, it seems odd to compare someone’s existence to never having been born, so we can discuss what it means to benefit a person in such contexts.”

In both examples, the discussion takes for granted that there’s something that’s valuable in itself. The still-open questions come afterward, after “here’s what’s valuable.”


My alternative account, inspired by Johann Frick [...], says that things are good when they hold what we might call conditional value – when they stand in specific relation to people’s interests/goals. On this view, valuing the potential for happiness and flourishing in our long-run future isn’t a forced move. Instead, it depends on the nature and scope of existing people’s interests/goals and, for highly-morally-motivated people like effective altruists, on one’s favored notion of “doing the most moral/altruistic thing.”


“There’s no objective axiology” implies (among other things) that there’s no goal that’s correct for everyone who’s self-oriented to adopt. Accordingly, goals can differ between people (see my post, The Life-Goals Framework: How I Reason About Morality as an Anti-Realist). There are, I think, good reasons for conceptualizing ethics as being about goals/interests. (Dismantling Hedonism-inspired Moral Realism explains why I don’t see ethics as being about experiences. Against Irreducible Normativity explains why I don’t see much use in conceptualizing ethics as being about things we can’t express in non-normative terminology.)


One arguably interesting feature of my framework is that it makes standard objections against person-affecting views no longer seem (as) problematic. A common opinion among effective altruists is that person-affecting views are difficult to make work.[6] In particular, the objection is that they give unacceptable answers to “What’s best for new people/beings.”[7] My framework highlights that maybe person-affecting views aren’t meant to answer that question. Instead, I’d argue that someone with a person-affecting view has answered a relevant earlier question so that “What’s best for new people/beings” no longer holds priority. Specifically, to the question “What’s the most moral/altruistic thing?,” they answered “Benefitting existing (or sure-to-exist) people/beings.” In that light, under-definedness around creating new people/beings is to be expected – it’s what happens when there’s a tradeoff between two possible values (here: the perspective of existing/sure-to-exist people and that of possible people) and someone decides that one option matters more than the other.


The transitivity of “better-than relations.”

For any ambitious morality, there’s an intuition that well-being differences in morally relevant others should always matter.[23] However, I think there’s an underappreciated justification/framing for person-affecting views where these views essentially say that possible people/beings are “morally relevant others” only according to minimal morality (so they are deliberately placed outside the scope of ambitious morality).

This part refers to a distinction between minimal morality and ambitious morality, which plays an important role in my reasoning framework:

Minimal morality is “don’t be a jerk” – it's about respecting that others’ interests/goals may be different from yours. It is low-demanding, therefore compatible with non-moral life goals. It is “contractualist”[11] or “cooperation-focused” in spirit, but in a sense that stays nice even without an expectation of reciprocity.[12]

Ambitious morality is “doing the most moral/altruistic thing.” It is “care-morality,” “consequentialist” in spirit. It’s relevant for morally-motivated individuals (like effective altruists) for whom minimal morality isn’t demanding enough.


[In my framework], minimal isn’t just a low-demanding version of ambitious morality. In many contexts, it has its own authority – something that wouldn’t make sense within the axiology-focused framework. (After all, if an objective axiology governed all aspects of morality, a “low-demanding” morality would still be directed toward that axiology.)[13] In my framework, minimal morality is axiology-independent – it protects everyone’s interests/goals, not just those of proponents of a particular axiology.

So, on the one hand, morality can be about the question "If I want to do 'the most moral/altruistic thing,' how can I best benefit others?" – that's ambitious morality. On the other hand, it can also be about the question "Given that others don't necessarily share my interests/goals, what follows from that in terms of fairness norms for a civil society?" – that's minimal morality ("contractualist" in spirit; "don't be a jerk").

I agree that person-affecting views don’t give satisfying answers to “what’s best for possible people/beings,” but that seems fine! It’s only within the axiology-focused approach that a theory of population ethics must tell us what’s best for both possible people/beings and for existing (or sure-to-exist) people/beings simultaneously.

There's no objective axiology that tells us what's best for possible people/beings and existing people/beings all at once. Therefore, since we're driven by the desire to better specify what we mean by "doing the most moral/altruistic thing," it seems like a defensible option to focus primarily on existing (and sure-to-exist) people/beings.

Even simpler counterexample: Josh's view would rule out climate change as important at all. Josh probably does not believe climate change is irrelevant just because it will mostly harm people in a few decades.

I suspect what's unarticulated is that Josh doesn't believe in lives in the far future, but hasn't explained why lives 1000 years from now are less important than lives 100 years from now. I sympathize because I have the same intuition. But it's probably wrong.

I have a similar intuition, but I think for me it isn’t that I think far future lives should be discounted. Rather, it’s that I think the uncertainty on basically everything is so large at that time scale (more than 1k years in the future) that it feels like the whole exercise is kind of a joke. To be clear: I’m not saying this take is right, but at a gut level I feel it very strongly.

I anticipate the response being "climate change is already causing suffering now," which is true, even though the same people would agree that the worst effects are decades in the future and mostly borne by future generations.

All that's required in all those cases is that you believe that some population will exist who benefits from your efforts.

It's when the existence of those people is your choice that it no longer makes sense to consider them to have moral status pre-conception.

Or should I feel guilty that I deprived a number of beings of life by never conceiving children in situations that I could have? 

It's everyone else having children that creates the population that I consider to have moral status. So long as they keep doing it, the population of beings with moral status grows.

The real questions are whether:

  • it is moral to sustain the existence of a species past the point of causing harm to the species' current members
  • the act of conceiving is a moral act

What do you think?

Let me see if I can build on this reasoning. Please tell me if I've misunderstood your position.

Since we're pretty sure there will indeed be people living in Bangladesh in the future, you're saying it's reasonable to take into account the future lives saved by the seawalls when comparing the choice of whether to invest in climate resiliency vs immediately spend on bednets.

But, your position implies, we can only consider the lives saved by the seawalls, not future children who would be had by the people saved by seawalls, right? Suppose we have considered every possible way to save lives, and narrowed it down to two options: give bednets to elderly folks in areas with endemic malaria, or invest in seawalls in Bangladesh. Saving the elderly from malaria is the most cost-effective present-day intervention you've found, and it would have a benefit of X. Alternatively, we've done some demographic studies of Bangladesh, and some detailed climate forecasting, and concluded that the seawalls would directly save some people who haven't been born yet but will be, and would be killed by extreme weather, for a benefit of Y.  Suppose further that we know those people will go on to have children, and the population will be counterfactually higher by an additional amount, for an additional benefit Z.

You're saying that the correct comparison is not X vs 0 (which would be correct if you ignore all benefits to hypothetical future people), nor X vs Y+Z (which is if you include all benefits to hypothetical future people), but X vs Y (which is the appropriate comparison if you do include benefits to future people, but not ones who are brought about by your choosing between the options).

Is this indeed your position?
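The three comparisons in the question above amount to a small calculation. The sketch below uses made-up benefit numbers purely for illustration (none of these figures come from the thread):

```python
# Illustrative comparison of three views on the bednets-vs-seawalls choice.
# X, Y, Z are hypothetical benefit units, invented for this sketch:
#   X = benefit of bednets to people who exist today
#   Y = benefit of seawalls to future people who will exist either way
#   Z = benefit to people whose existence depends on the seawall choice
X, Y, Z = 100, 80, 40

views = {
    "presentist (ignore all future people)": (X, 0),
    "totalist (count all future benefits)": (X, Y + Z),
    "necessitarian-style (count Y, not Z)": (X, Y),
}

for name, (bednets, seawalls) in views.items():
    choice = "bednets" if bednets >= seawalls else "seawalls"
    print(f"{name}: bednets={bednets} vs seawalls={seawalls} -> {choice}")
```

With these particular numbers the three views come apart: the totalist comparison (100 vs 120) favors the seawalls, while the presentist (100 vs 0) and the Y-only comparison (100 vs 80) both favor the bednets.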

Yes, that's right. (Edit: thinking in terms of lives saved from populations here, not benefits accruing to those populations.) X vs Y. If Y is chosen (i.e., if the lives of Y are saved) and the seawall is built, then Y+Z (those populations) have moral status for me, assuming I am certain that population Y will conceive population Z. The details are below, but that's my summary answer for you.

EDIT: Sorry, in the discussion below, my use of language confuses the original poster's meaning of X as a benefit with the population receiving benefit X. Hopefully you can understand what I wrote. I address the spirit of the question, namely: is a choice between populations, one of which leads to an additional contingent population, bound by my belief that only people who will exist have moral status? As a further summary, and maybe to untangle benefits from populations at some point, I believe in:

  • mathematical comparison: comparing benefits for size and multiplying population by benefit per capita (if meaningful)
  • actual people's moral status: giving only actual (not potential) people moral status
  • smallness: preferring smaller future populations that have benefits, all other things equal.
  • inclusive solutions: whenever feasible (for example, saving all at-risk populations)

And now, continuing on with my original discussion...


So, to use your example, I have to believe a few things:

  1. bednets extend lives of people who won't have children.
  2. seawalls extend lives of people who will have children.
  3. a life extended is an altruistic benefit to the person who lives longer.
  4. a life created is an altruistic benefit to the person created.

I will try to steelman where this argument goes with a caveat: 

  • as part of beliefs 3 and 4, an extended or created life does not reduce quality of life for any other human. NOTE: in a climate change context 20-30 years from now, I don't actually believe 3, 4, or this caveat will hold for the majority of the human global population.

I think your question is:

  • how do I decide the benefit in terms of lives extended or created?
    For me, that is roughly the same as asking what consequences the actions of installing seawalls and providing bednets each have. In the case where each action is an exclusive alternative and mine to take, I might for altruistic reasons choose the action with greater altruistic consequences.

So, your scenario goes:

  • X = total years of lives saved by bednets.
  • Y= total years of lives saved by seawalls. 
  • Z = total years of lives lived for children born behind seawalls if seawalls are built.

EDIT: below I will refer to X, Y,  and Z as populations, not the benefits accrued by the populations, since my discussion makes no mention of differences in benefits, just differences in populations and counts of lives saved.

Lets assume further that:

  •  in your scenario X and Y are based on the same number of people.
  • lives in population X and population Y are extended by the same amount of time.
  • people in populations X and Y each value their lives equally.
  • people in population X and Y experience the same amount of happiness. 

My answer comes down to whether I believe that it is my choice whether to cause (savings of lives of) X or Y+Z. If it is my choice, then I would choose X over Y+Z out of personal preference and beliefs, because: 

  • Z is a hypothetical population, while X and Y are not. Choosing against Z only means that Z are never conceived.
  • The numbers of people and the years given to them are the same for Y as they are for X. My impact on each population if I save them is the same.
  • Humans have less impact on the natural world and its creatures with a smaller population, and a future of X is smaller than a future of Y+Z.
  • A smaller population of humans causes less difficulty for those seeking altruistic ends for existing lives, for example, in case I want to be altruistic after saving one of the populations.

Aside from this scenario, however, what I calculate as altruistically beneficial is that X+Y are saved and children Z are never conceived, because family planning and common sense allow population Y to not have children Z. Returning to this scenario, though, I can only save one of X or Y, and if I save Y, they will have children Z. Then, for the reasons I listed, I would choose X over Y.

I just went through your scenario in a context where I choose whether to save population X or population Y, but not both. Now I will go through the same scenario in a context where I do not choose between population X and population Y.


Suppose that:

  • it is not my choice whether X or Y+Z is chosen.
  • other existing people chose to save population Y with a seawall.
  • if population Y has a seawall, they will have children Z.
  • population Y has or will get a seawall.

Then:

  • population Y will have children Z.
  • population Z is no longer hypothetical.
  • Y+Z have moral status even though Z are not conceived yet.

However, there is no way of choosing between X and Y+Z that ignores that the future occurrence of Z is contingent on the choice and thus hypothetical. Accordingly, population Z has no moral status unless population Y is saved by seawalls.

Notice that, even then, I must believe that population Y will go on to have children Z. This is not a question of whether children Z could be conceived or if I suspect that population Y will have children Z or if I believe that it is Y's option to have children Z. I really have to know that Y will have children Z.

Also notice that, even if I remove beliefs 3 and 4, that does not mean that X or Y populations lose their moral status. A person stuck in suffering has moral status. However, decisions about how to help them will be different. 

For example, if putting up seawalls saves Bangladesh from floods but not from drought and famine, I would say that the lives saved and the people's happiness while alive are in doubt. Similarly in the case of saving the elderly from malaria: if you save them from malaria but they now face worse conditions than suffering malaria, then your extending their lives and their happiness while alive are in doubt. Well, in doubt from my perspective.

However, I see nothing wrong with adding to potential for a good life, all other things equal. I'd say that the "all other things equal" only applies when you know very little about the consequences of your actions and your choices are not driven by resource constraints that force difficult decisions.


If:

  • you are altruistically-minded,
  • you have plenty of resources (for example, bednets and cement) so you don't have to worry about triage, and
  • you don't have beliefs about what else will happen when you save someone's life,

then it makes sense to help that person (or population). So yeah, supply the bednets and build the seawalls, because why not? Who knows who will have children or eat meat or cause others harm or suffer a worse disease or die from famine? Maybe everything turns out better, and even if it doesn't, you've done no harm by preventing a disease or stopping flooding from sea level rise.

I basically just sidestep these issues in the post except for alluding to the "transitivity problems" with views that are neutral to the creation of people whose experiences are good. That is, the question of whether future people matter and whether more future people is better than fewer are indeed distinct, so these examples do not fully justify longtermism or total utilitarianism.

Borrowing this point from Joe Carlsmith: I do think that, like, my own existence has been pretty good, and I feel some gratitude towards the people who took actions to make it more likely and personal anger towards those who made it less likely (e.g. nuclear brinksmanship). To me, it does seem like if there are people who might or might not exist in the future who would be glad to exist (though of course they would be neutral to nonexistence), it's good to make them exist.

I also think the linked "transitivity problems" are pretty convincing.

I basically think the stuff about personally conceiving/raising children brings in lots of counterproductive baggage to the question, related to the other effects of these actions on others' lives and my own. I think pretty much everything is a "moral act" in the sense that its good or bad foreseeable effects have moral significance, including like eating a cheeseburger, and conception isn't an exception; I just don't want to wade into the waters of whether particular decisions to conceive or not conceive are good or bad, which would depend on lots of context.

About MacAskill's Longtermism

Levin, let me reassure you that, regardless of how far in the future they exist, future people that I believe will exist do have moral status to me, or should.

However, I see no reason to find more humans alive in the far future to be morally preferable to fewer humans alive in the far future above a population number in the lower millions.

Am I wrong to suspect that MacAskill's idea of longtermism includes that a far future containing more people is morally preferable to a far future containing fewer people?

A listing of context-aware vs money-pump conditions

The money pump seems to demonstrate that maximizing moral value inside a particular person-affecting theory of moral value (one that is indifferent toward the existence of nonconceived future people) harms one's own interests.

  • In context, I am indifferent to the moral status of nonconceived future people that I do not believe will ever exist. In the money pump, there is no distinction between people who could someday exist and people who will someday exist.
  • In context, making people is morally dangerous. In the money pump, it is morally neutral.
  • In context, increasing the welfare of an individual is not purely altruistic (for example, with respect to everyone else). In the money pump, it is purely altruistic.
  • In context, the harm of preventing conception of additional life is only what it causes those who will live, just as in the money pump.

The resource that you linked on transitivity problems includes a tree of valuable links for me to explore. The conceptual background information should be interesting, thank you.

About the meaning of moral status outside the context of existent beings

Levin, which nonconceived humans (for example, humans that you believe will never be conceived) do not have moral status in your ethical calculations?

Are there any conditions in which you do not believe that future beings will exist but you give them moral status anyway?

I am trying to answer whether my understanding of what moral status allows or requires is flawed.

For me, another being having moral status requires me to include effects on that being in my calculations of the altruism of my actions. A being that will never exist will not experience the effects of my actions and so should be excluded from my moral calculations. However, I might use a different definition of moral status than you do.

Thank you.

I am generally not that familiar with the creating-more-persons arguments beyond what I've said so far, so it's possible I'm about to say something that the person-affecting-viewers have a good rebuttal for, but to me the basic problem with "only caring about people who will definitely exist" is that nobody will definitely exist. We care about the effects of our actions on people born in 2024 because there's a very high chance that lots of people will be born then, but it's possible that an asteroid, comet, gamma ray burst, pandemic, rogue AI, or some other threat could wipe us out by then. We're only, say, 99.9% sure these people will be born, but this doesn't stop us from caring about them.

As we get further and further into the future, we get less confident that there will be people around to benefit or be harmed by our actions, and this seems like a perfectly good reason to discount these effects.

And if we're okay with doing that across time, it seems like we should similarly be okay with doing it within a time. The UN projects a global population of 8.5 billion by 2030, but this is again not a guarantee. Maybe there's a 98% chance that 8 billion people will exist then, an 80% chance that another 300 million will exist, a 50% chance that another 200 million will exist (getting us to a median of 8.5 billion), a 20% chance for 200 million more, and a 2% chance that there will be another billion after that. I think it would be odd to count everybody who has a 50.01% chance of existing and nobody who's at 49.99%. Instead, we should take both as having a ~50% chance of being around to be benefited/harmed by our actions and do the moral accounting accordingly.
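The difference between the two accounting methods can be written out as a quick calculation. This is a minimal sketch using the illustrative probabilities above; the numbers are not real forecasts:

```python
# Two ways of counting uncertain future people, using the illustrative
# 2030 cohorts from the paragraph above: (probability of existing, size).
cohorts = [
    (0.98, 8_000_000_000),  # near-certain baseline population
    (0.80, 300_000_000),    # likely additional cohort
    (0.50, 200_000_000),    # coin-flip cohort (brings the median to 8.5B)
    (0.20, 200_000_000),    # unlikely additional cohort
    (0.02, 1_000_000_000),  # tail scenario
]

# Line-drawing: count a cohort fully iff it is more likely than not to exist.
threshold_count = sum(size for p, size in cohorts if p > 0.5)

# Expected-value accounting: weight each cohort by its probability of existing.
expected_count = sum(p * size for p, size in cohorts)

print(f"{threshold_count:,}")    # counts only the >50% cohorts
print(f"{expected_count:,.0f}")  # weights every cohort proportionally
```

Under the threshold rule, the 50%, 20%, and 2% cohorts vanish from the moral ledger entirely; under expected-value accounting, they each contribute in proportion to their probability.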

Then, as you get further into the future, the error bars get a lot wider and you wind up starting to count people who only exist in like 0.1% of scenarios. This is less intuitive, but I think it makes more sense to count their interests as 0.1% as important as people who definitely exist today, just as we count the interests of people born in 2024 as 99.9% as important, rather than drawing the line somewhere and saying we shouldn't consider them at all.

The question of whether these people born in 0.1% of future worlds are made better off by existing (provided that they have net-positive experiences) rather than not existing just returns us to my first reply to your comment: I don't have super robust philosophical arguments but I have those intuitions.

Thank you for the thorough answer.

To me it's a practical matter. Do I believe or not that some set of people will exist?

To motivate that thinking, consider the possibility that ghosts exist and that their interests deserve account. I consider its probability non-zero because I can imagine plausible scenarios in which ghosts will exist, especially ones in which science invents them. However, I don't factor those ghosts into my ethical calculations with any discount rate. Then there are travelers from parallel universes: again, a potentially huge population with a nonzero probability of existing (or appearing) in the future. They don't get a discount rate either; in fact, I don't consider them at all.

As for large numbers of people in the far future, that future is not on the path that humanity walks right now. It's still plausible, but I don't believe in it, so no discount rate for trillions of future people. And if I do come to believe in those trillions, still no discount rate; instead, those people are actual future people with full moral status.

Lukas Gloor's description of contractualism and minimal morality, mentioned in a comment on your post, appeals to me and is similar to my intuitions about morality in context, but I am not sure my views on deciding the altruistic value of actions match Gloor's.

I have a few technical requirements before I will accept that I affect other people, currently alive or not. Also, I only see those effects as running from present to future, not present to past. For example, I won't concern myself with the moral impacts of a cheeseburger, no matter what suffering was caused by its production, unless I somehow caused that production. However, I will concern myself with what suffering my eating of that burger will cause (not could cause, will cause) in the future. And I remain accountable for what I caused by the cheeseburgers I ate before.

Anyway, belief in a future is a binary thing to me. When I don't know what the future holds, I just act as if I do. Being wrong in that scenario tends not to have much impact on the consequences, most of the time.

Some thoughts:

  • You can think of the common-sense moral intuition (like Josh's) as a heuristic rather than "a pure value" (whatever that means): it subtly ties together a value with empirical beliefs about how to achieve that value
  • Discarding this intuition might mean you are discarding empirical knowledge without realizing it
  • Even if the heuristic is a "pure value," I'm not sure why it's not allowed for that value to just discount things more the farther they are away from you. If this is the case, then valuing the people in your cases is consistent with not valuing humans in the very far future.
  • And if it is a "pure value," I suppose you might say that some kind of "time egalitarianism" intuition might fight against the "future people don’t matter as much" intuition. I'm curious where the "time egalitarianism" intuition comes from in this case, and if it's really an intuition or more of an abstract belief.
  • And if it is a "pure value," perhaps the intuition shouldn't be discarded or completely discarded since agents with utility functions generally don't want those utility functions changed (though this has questionable relevance).

I think it is a heuristic rather than a pure value. My point in my conversation with Josh was to disentangle these two things — see Footnote 1! I probably should be more clear that these examples are Move 1 in a two-move case for longtermism: first, show that the normative "don't care about future people" thing leads to conclusions you wouldn't endorse, then argue about the empirical disagreement about our ability to benefit future people that actually lies at the heart of the issue.

I think I understood that's what you were doing at the time of writing, and mostly my comment was about bullets 2-5. E.g. yes "don't care about future people at all" leads to conclusions you wouldn't endorse, but what about discounting future people with some discount rate? I think this is what the common-sense intuition does, and maybe this should be thought of as a "pure value" rather than a heuristic. I wouldn't really know how to answer that question though, maybe it's dissolvable and/or confused.

Some people claim we should care about only those future people that will actually exist, not those that could have but won't.

It's a bit hard to make sense of what that means, but in any case, it's unclear what they want to say when we are uncertain about who will exist, whether because of uncertainty about our own future actions or uncertainty about how other events beyond our control will play out. 

Further, I wonder whether uncertainty about who will exist in the future should be treated differently from uncertainty about who currently exists.

Suppose there are between one and five people trapped at the bottom of a well, but we don't know exactly how many. It seems hard to argue that we should discount the uncertain existence of those people by more than their probability of existing.
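To make the well case concrete, here is a minimal sketch with a made-up credence distribution over how many people are trapped (the uniform distribution is my assumption, purely for illustration):

```python
# Hypothetical credences over how many people are trapped in the well.
credences = {1: 0.2, 2: 0.2, 3: 0.2, 4: 0.2, 5: 0.2}

# Discounting each possible person by exactly their probability of existing
# yields the expected number of people whose rescue is at stake.
expected_people = sum(n * p for n, p in credences.items())
print(expected_people)
```

Rescue effort that scales with this expectation discounts each uncertain person by their probability of existing, and no more: the same treatment the comment suggests for uncertain future people.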

I’m wondering where fetuses fit into this argument, as they are more than hypothetical people. I’m relatively neutral on the subject, so I’m not being political.

Also, just about everything alive today will be dead in a few hundred years, most of it horribly: starving, disease, predation, or territorial aggression. What does it matter if it’s replaced or not? It is nonexistent, so it knows or feels nothing. Because some DNA coding tells us to? If life is just a biochemical happenstance, what does anything really matter? Why should anyone care beyond the scope of their own lives? If every living thing went extinct 5 days or 5 billion years after I die, I won’t know the difference.

I know questions like this eventually lead to the mystical, but that would really be the only reason to care: if we somehow go on. Honestly, both seem absurd to me, but I lean towards something beyond us. What, I don’t know, but it’s a gut feeling, whatever that’s worth.

Science and religion have been at odds for a while, and it seems science is winning now, but it’s cold comfort. I think it’s important for the future of humanity to reconcile the two, without the judgment, the hypocrisy, and the exclusivity. I think a lot of people undervalue the effects that believing in a greater being or plan or whatever can have on society. It makes the future, and all the hypothetical life that goes along with it, seem more valuable if you believe you will in some way still be around to witness it, whether it’s true or not.

Presence of a fetus is presence of an actual person. However, my standard for what traits qualify someone as a person is fairly relaxed. To me, anyone not planning an abortion in a population with a high rate of successful births is going to bring that fetus to term, so that fetus is and will be a person.

I agree we should care about future people who we think are probably going to exist, for example caring about climate change as it will affect future people who we know will exist.

Where longtermism might go wrong is when one says there is a moral obligation to bring more people into existence. For example, under total utilitarianism one might argue that we have an obligation to bring an enormous number of people into existence. I think this is wrong. I've seen longtermists argue that extinction is bad not just because of the harm it might do to present people but because of the 10^n future people who don't get to exist. I see this as wrong: there's no harm done by not having children. This is a very dangerous pro-life-type argument. It says there is essentially infinite value in all these potential future people and would justify torturing everyone alive today if it guaranteed the existence of these future people.

While I don't necessarily agree with Matty's view that total utilitarianism is wrong, I think this comment highlights a key distinction between a) improving the lives of future people and b) bringing lives into existence.

The examples in this post are really useful to show that future people matter, but they don't show that we should bring people into existence. For example, if future people were going to live unhappy lives, it would still be good to do things that prevent their lives from being worse (e.g. improve education, prevent climate change, pick up glass), but this doesn't necessarily imply we should try to bring those unhappy people into existence (which may have been Josh's concern, if I understand correctly).
