
[epistemic status: thinking out loud]

Hey everyone, I'd like to receive some feedback on a line of thought that came up in a recent discussion.

In Oxford's introductory fellowship, this question came up as part of the exercise for the week on longtermism:

Imagine you could save 100 people today by burying toxic waste that will, in 200 years, leak out and kill thousands (for the purposes of the question, assume you know with an unrealistic level of certainty that thousands will die). Would you choose to save the 100 now and kill the thousands later? Does it make a difference whether the toxic waste leaks out 200 years from now or 2000?

My take is that the question is meant to tease out the intuition that it is better to save more people rather than fewer, even though those lives are yet to come. In other words, the question seems intended to make a case for longtermism, and there are many other thought experiments like it.

Here we could make all the usual arguments for longtermism and against pure time preference.

In my group, all the fellows at first gave the expected answer: they would save the thousands in the future.

Then one of the fellows came up with the following argument, which this post is about:

If we save the 100 people today, we can assume that they will have children. With reasonable assumptions about birth rates, we might expect that after 2000 years the combined number of the 100 original people and their descendants far exceeds a few thousand. Thus it would be better to save the 100 people now, as the total happiness of their lineage is greater in expectation than that of the thousands we would save 2000 years from now.

One might call this the “compound interest” argument for saving lives.

My guess is that this argument carries less weight right now, because birth rates are slowing down globally and many countries even have negative population growth. It could become more important should we manage to traverse “the precipice” and settle other planets. Then we could plausibly return to a rate of population growth that looks exponential, similar to its peak in the 1960s (1).

It seems likely that smart people have already thought about the argument but that I'm just not familiar enough with the literature on longtermism.

I invite you to point out why the argument is flawed or irrelevant.

Uncertainties:

  • Does the argument have any practical implications?
  • Wouldn't this line of thinking be implausible for reasons similar to those for which we dismiss discount rates? Could taking the happiness of all of a person's descendants into account imply that people who lived long ago were much more important, simply because they have had so many descendants? (2)
  • If you save a life, to what extent are you responsible for the happiness (or suffering) of all descendants of the person saved?
  • Does the argument work the other way around? Should we consider murder even worse because it might prevent a person from having many happy descendants?

Appendix: Math

The line of argument can also be illustrated with some math. Here is a simple growth-rate calculation:

Let

a = initial number of people saved,

r = annual growth rate,

n = number of years.

Then the population after n years is f(n) = a * (1 + r)^n.

If we take the 100 people originally saved, even a very small growth rate of 0.2% per year leads, after 2000 years, to a population of more than a few thousand:

f(2000) = 100 * 1.002^2000 ≈ 5438

Of course this is extremely simplified; real population projections depend on many more factors and are much harder. A small group of 100 people, for example, will not have the same average growth rate as the population at large. The calculation also doesn’t account for people who are born and die in the meantime; it just gives the population size after 2000 years. If we were to value the number of (happy) lives lived over the whole period in aggregate, we would arrive at a much larger number.

The calculation is just meant to show that it's plausible to arrive at a number higher than the couple thousand posited in the thought experiment.
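To make the arithmetic concrete, here is a minimal Python sketch of the calculation above, together with a rough estimate of aggregate lives lived. The 70-year average lifespan and the person-years approximation are my own illustrative assumptions, not part of the original thought experiment:

```python
# Compound-interest view of saving lives: population after n years,
# plus a rough count of distinct lives lived along the way.

def population(a: float, r: float, n: int) -> float:
    """Population after n years: f(n) = a * (1 + r) ** n."""
    return a * (1 + r) ** n

def lives_lived(a: float, r: float, n: int, lifespan: float = 70.0) -> float:
    """Approximate distinct lives lived over n years as total
    person-years divided by an assumed average lifespan."""
    person_years = sum(population(a, r, t) for t in range(n))
    return person_years / lifespan

print(round(population(100, 0.002, 2000)))   # ~5438, matching the appendix
print(round(lives_lived(100, 0.002, 2000)))  # roughly 38,000 lives in aggregate
```

Under these assumptions, counting everyone who lives and dies along the way gives a number several times larger than the final population.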

Acknowledgements

Thanks to Adam, Akash, Aaron and everyone else who discussed this with me!

Sources:

(1) https://ourworldindata.org/world-population-growth

(2) https://www.givingwhatwecan.org/post/2013/04/was-tutankhamun-a-billion-times-more-important-than-you/

Comments

Hey Max, I think this is a valid and important line of thought. As you suspect, the basic idea has been discussed, though usually not with a focus on exactly the uncertainties you list.

I'm afraid I don't have time to respond to your questions directly, but here are a couple of links that might be interesting:

Yeah, those are good links. To add to that, a key issue is that the value of saving lives now, and the effects this has on the future, depends on the more general question of where the Earth is in relation to its optimum population trajectory. However, as discussed in Hilary Greaves, Optimum Population Size, it's not clear on any of a range of models whether there are too many or too few people now. Hilary discusses this assuming totalism, but the results are more general, as I discuss in chapter 2.7 of my PhD thesis. (This discussion isn't the main point of the chapter, which is really noting and exploring the tension between believing both that the Earth is overpopulated and that saving lives is good.)

[anonymous]:

Michael, thanks for these links, I'm really enjoying reading both of them. Super interesting thesis! I am pretty puzzled by the idea of there being a single optimal population size, though. Even under a totalist view, well-being seems super dependent on who is alive and not just how many people are alive. E.g. a world of 20 billion people with a (learned or genetic) predisposition toward happiness would look very different from a world of 20 billion people with a predisposition toward misery (and might look similar to a world of 10 or 30 billion people with a predisposition toward neutral moods). So it strikes me as strange to imagine a single inverted-U function.

Hilary Greaves's piece mentions that she is considering the optimal population "under given empirical conditions," but I'm not really sure what that means, given that population could grow in any number of ways. I think it refers to something like "the optimal population level taking the world completely as is, offering no interventions that change either who is born or how happy anyone is given that they were born," which makes logical sense as an intellectual exercise but then doesn't tell us about the ethics of any particular intervention (offering free birth control, subsidizing births, or having a baby, for example, takes us out of the world of "existing empirical conditions" and changes both who is born and how happy existing people are). I'm sure both of you have considered this point, though. Would it be correct to say that you believe these considerations just don't empirically matter that much for our practical decision making?

Hello Monica. I agree there would be different optima given different assumptions. The natural thing to do is to take the world as we, in fact, expect it to be: we're trying to do ethics in the real world.

Hilary's paper focuses on where we are in relation to the optimum population assuming a 'business as usual' trajectory, i.e. one where we don't try to change what will happen. You need to settle your view on that to know whether you want to encourage or discourage extra people from being born. And, as Hilary quite rightly points out, this is not a straightforward question to answer.

[anonymous]:

That makes sense, thanks Michael!  

I like the framing of "optimum population trajectory"; that's an idea I haven't encountered before. Thanks!

Yeah, it's the natural way to think about it unless you're only concerned about the current population.

Hey Max, thank you for the links! I guess now I have some quality reading material over the holidays :) 

Michael Bitton has used this argument as a reductio against longtermism (search "Here's an argument").

It seems it could work for the medium term but not for the very long term, because (i) if the fertility rate is above replacement, the initial additional people stop having a population effect after humanity reaches carrying capacity, and (ii) if the fertility rate is below replacement, the number of additional people in each generation attributable to the initial additional people would eventually reach zero.
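For what it's worth, point (i) can be illustrated with a toy logistic-growth model. Everything here is an illustrative assumption (the growth rate, the carrying capacity K, and the time horizon are arbitrary):

```python
# Toy logistic model: extra people saved today stop mattering for
# population size once humanity approaches carrying capacity K.

def logistic_step(p: float, r: float, k: float) -> float:
    """One year of logistic growth: P += r * P * (1 - P / K)."""
    return p + r * p * (1 - p / k)

def simulate(p0: float, years: int, r: float = 0.01, k: float = 10e9) -> float:
    p = p0
    for _ in range(years):
        p = logistic_step(p, r, k)
    return p

base = simulate(1e9, 3000)          # without the 100 extra people
extra = simulate(1e9 + 100, 3000)   # with them
print(extra - base)                 # ~0: both trajectories converge to K
```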

Economists routinely discount the future because they expect the future to be richer. This seems analogous and might be worth looking into; I expect there is a fair amount written on the topic, although I don't have good links.

(Note: this is different from pure time-preference discounting, which is what many folks in the EA community object to and what I assume you mean when you say "we dismiss discount rates".)
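To illustrate the distinction: in the standard Ramsey framework, the discount rate is delta + eta * g, where delta is pure time preference, eta the elasticity of marginal utility, and g expected consumption growth. A minimal sketch with illustrative parameter values (my assumptions, not the commenter's):

```python
# Growth-based discounting with zero pure time preference (delta = 0):
# the future still gets discounted, purely because it is expected to be richer.

delta = 0.0   # pure time preference (the part many EAs reject)
eta = 1.5     # elasticity of marginal utility of consumption
g = 0.02      # expected annual consumption growth

rate = delta + eta * g          # Ramsey rule: 3% per year here
weight = 1 / (1 + rate) ** 100  # weight on benefits 100 years out
print(f"discount rate {rate:.1%}, 100-year weight {weight:.3f}")
```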
