
SaludCinderella

29 karma · Joined Apr 2022

Comments

I'm reposting this comment from my own post here, in case anyone finds it relevant.

In a sense, I agree with many of Greaves' premises but none of her conclusions. I do think we ought to be doing more modeling, because some things actually can be modeled reasonably accurately and others cannot (a mixture of Responses 1 and 3).

Greaves says one argument for longtermism is, “I don't know what the effect is of increasing population size on economic growth.” But we do! Sometimes it increases economic growth, and sometimes it decreases it; there are very well-thought-out macroeconomic models of both cases. In general, though, I think we ought to favor increasing population growth.

She also says, “I don't know what the effect [of population growth] is on tendencies towards peace and cooperation versus conflict.” But a similar thing to say would be, “Don’t invent the plow or modern agriculture, because we don’t know whether people will get into fights once villages have grown big enough.”

Her argument distresses me because its pivotal point seems to be that we can no longer agree that saving lives is good, only that extinction is bad. If we can no longer agree that saving lives is good, I really don’t know what we can agree upon.


Hi Max, 

Thanks so much for responding to my post, for pointing me toward very interesting further reading, and for providing valid critiques of my arguments. I’d like to offer a respectful rebuttal here; I think you’ll find that we hold very similar perspectives and share much middle ground.

You say that attempting to avoid infinities is fraught when explaining something to someone in the future. First, I think this misunderstands the proof by contradiction. The original statement need have no moral bearing on, or any connection with, the concluding statement: so long as I finish with something absurd, there must have been a problem somewhere in the preceding argument.

Formal logic aside, my argument is not a moral one but a pragmatic and empirical one (which I think you agree with). Confucius ought to teach his followers that people who are temporally close are more important, because helping them will improve their lives, and they will go on to improve the lives of others or to have children. This compounds, and it affects me in 2022 far more profoundly than if Confucius had taught people to look for interventions that might save my life in 2022.
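To make the compounding intuition concrete, here is a minimal sketch in Python. Every number in it (the spillover rate, the one-in-a-million relevance probability) is an assumption chosen purely for illustration, not an estimate:

```python
# Illustrative only: all numbers here are assumptions, not estimates.
years = 2022 - (-500)      # roughly 2,522 years from Confucius' era to 2022
annual_spillover = 0.001   # assumed 0.1%/year compounding benefit of one life improved

compounded = (1 + annual_spillover) ** years
print(f"Helping someone in ~500 BCE, compounded to 2022: {compounded:.1f}x the original benefit")

# Hypothetical alternative: Confucius instead targets an intervention at 2022
# directly; assume a 1-in-a-million chance it is still relevant 2,500 years on.
targeted = 1e-6
print(f"Targeting a specific person in 2022 directly: {targeted:.0e}x")
```

Under these assumed numbers the indirect, compounding route beats the targeted one by many orders of magnitude, which is the whole point of the Confucius example.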

Your points about UN projections of population growth and the physics of the universe are well taken. Even so, while I do not completely understand the physics behind a multiverse or a spatially infinite universe, I think that even if the universe is infinite, longtermism becomes extremely fraught when it comes to small probabilities and big numbers. Can we really say that the probability that a random stranger will kill everyone is 1/(total finite population of all of future humanity)? Moreover, the first two contradictions within Argument 1 rely loosely on an infinite number of future people, but Contradiction 3 does not.
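A toy calculation (all figures assumed) shows why that combination is fraught: probability guesses that look equally defensible, differing by a few orders of magnitude, swing the expected value just as wildly.

```python
# Assumed finite future population; the probabilities are placeholder guesses.
future_population = 1e16
for p in (1e-16, 1e-14, 1e-12):  # equally unverifiable, four orders of magnitude apart
    print(f"P(stranger kills everyone) = {p:.0e} -> expected lives at stake: {p * future_population:.0e}")
```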

The Reflective Disequilibrium post is fascinating, because it implies that perhaps the more important factor contributing to future well-being is not population growth but rather the accumulation of technology, which in turn enables health and population growth. If anything, though, I think the key message of that article ought to be that saving a life in the past was extremely important, because that one person could then develop technologies that help a much larger fraction of the population. The post does say this, but it does not extend the argument to discount rates, and I really think it should.

Of course, one very valid question is whether technological growth will always be a positive force for human population growth. In the past this seems to have been true: the positive effects of technology appear to have vastly outweighed its negative effects on, say, the ability to wage war. The longtermist argument would then be that, in the future, the negative effect of technological growth on population will outpace the positive effect. If this is indeed the longtermist argument, then adopting a near-zero discount rate may well be appropriate.

I do not want to advocate a constant discount rate across all time for all projects, in the same way that we ought not to assign the same value of a statistical life across all times, countries, and actors. However, one could model a discount rate that decreases into the future (if one assumes that population growth will continue to decline past 2.4 and that technological progress’s effect on growth will also slow) and then mathematically reduce that schedule to an equivalent constant discount rate.
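As a minimal sketch of that reduction (the declining schedule below is assumed for illustration): find the constant rate whose cumulative discount factor over the horizon matches the declining schedule’s.

```python
# Reduce a declining discount-rate schedule to an equivalent constant rate:
# the constant rate r satisfies (1 + r)^T = product over t of (1 + r_t).

def equivalent_constant_rate(rates):
    """Constant annual rate matching the cumulative effect of a rate schedule."""
    factor = 1.0
    for r in rates:
        factor *= 1.0 + r
    return factor ** (1.0 / len(rates)) - 1.0

# Assumed schedule: a rate declining linearly from 3% toward 0% over 100 years.
schedule = [0.03 * (1 - t / 100) for t in range(100)]
print(f"Equivalent constant rate: {equivalent_constant_rate(schedule):.4%}")
```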

I also agree with you that different interventions were available to people at different periods of history.

Overall, my point is that helping someone today will have some unknown compounding effect on people in the future, and there are reasonable ways of understanding and mathematically bounding that effect. So long as we ignore this, we will never be able to adequately prioritize between projects we believe are extremely cost-effective in the short term and projects that are extremely uncertain but could affect the long term.
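One way to do that bounding, as a sketch under assumed inputs: if the annual spillover rate of a saved life can be bracketed in an interval, then the compound effect over any horizon is bracketed too, which is exactly the kind of bound a prioritization exercise could use.

```python
# Bracket the unknown compounding effect of helping someone today.
# The bounds on the annual spillover rate are assumptions for illustration.

def compounding_bounds(g_lo, g_hi, years):
    """Lower and upper bounds on the compounded effect after `years`."""
    return (1 + g_lo) ** years, (1 + g_hi) ** years

lo, hi = compounding_bounds(g_lo=0.0005, g_hi=0.005, years=200)
print(f"One life improved today, after 200 years: between {lo:.2f}x and {hi:.2f}x")
```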

Given your discussion in the fourth bullet point from the end, it seems we are broadly in agreement. One way to rephrase the push of my post is not that longtermism is wrong per se, but that we ought to find more effective ways of prioritizing projects of any sort by assessing the empirical long-term effects of short-term interventions. So long as we ignore this, we will see nearly all funding shift from global health and development to esoteric long-run safety projects.

As you correctly pointed out, there are many flaws in my naïve approach to the calculation. But the very fact that so few have attempted to provide a way of comparing funding opportunities across time seems to me a serious gap.