The language of longtermism focuses on future generations of humans. Should it explicitly include the flourishing of moral agents in general, rather than just future generations of people?

Imagine we find intelligent life on another planet, where lives are long, rich, fulfilling and peaceful, making human lives look very poor in comparison. Would a longtermist want to ensure that alien life of that enviable type flourishes far into the future, even if this comes at the expense of human life?

The same thought experiment can be done with AI. If super-happy, super-moral artificial intelligences emerge, would a longtermist want to clear the way for their long-term proliferation in preference to humanity's?


I think there's room for divergence here (i.e., I can imagine longtermists who only focus on the human race) but generally, I expect that longtermism aligns with "the flourishing of moral agents in general, rather than just future generations of people." My belief largely draws from one of Michael Aird's posts.

This is because many longtermists are worried about existential risk (x-risk), which specifically refers to the curtailing of humanity's potential. That potential includes both our values (which could lead to wanting to protect alien life, if we consider aliens to be moral patients and so factor them into our moral calculations) and our potential super- or non-human descendants.

However, I'm less certain that longtermists worried about x-risk would be happy to let AI 'take over' and for humans to go extinct. That seems to get into more transhumanist territory. Cf. the disagreement over Max Tegmark's various AI aftermath scenarios, which run the spectrum of human/AI coexistence.

I consider helping all of Earth's creatures, extending our compassion, and dissolving inequity to be part of fulfilling our potential.

I don't think that the aliens enjoying life much more, and having higher and more sustained levels of happiness, would necessarily mean their continued existence should be prioritized over ours. I wouldn't consider one person's life more valuable than another's just because that person experienced substantially more enjoyment and happiness. I'm also not sure how to compare happiness or enjoyment between two different people. If a person had 20 years of unhappiness and then suddenly became happy, their new happiness might make up for all their past unhappiness, perhaps by putting the previous years of their life in a more positive perspective.

If the aliens had never had wars, or hadn't had one for the last two thousand years, it would seem incomprehensible to favor our own continued existence over theirs. If there were only two possibilities, our continued existence or theirs, and we favored our own, I imagine our future generations would view our generation as having gone through a moral catastrophe. Favoring our own species would have robbed the universe of great potential flourishing and peace.

A justification for favoring our own species might be that we expect to catch up to them and eventually be even happier and more peaceful than they are, and/or to live longer in such a state than they would. We would have to expect to surpass them rather than merely equal them, since the time spent catching up would add harm to the universe and make it less good overall.
