I have not researched longtermism deeply. However, what I have found out so far leaves me puzzled and skeptical. As I currently see it, you can divide what longtermism cares about into two categories:
1) Existential risk.
2) Common-sense long-term priorities, such as:
- economic growth
- environmentalism
- scientific and technological progress
- social and moral progress
Existential risk isn’t a new idea (it predates longtermism), and economic growth, environmentalism, and societal progress aren’t new ideas either. Suppose I already care a lot about low-probability existential catastrophes, and I already buy into common-sense ideas about sustainability, growth, and progress. Does longtermism have anything new to tell me?
Maybe? It depends on what you think the probability is that intelligent life re-evolves on Earth (it seems likely to me), and how you feel about the value of the next intelligent species on Earth compared to humans.
IMO, most x-risk from AI probably doesn't come from literal human extinction but instead AI s...
This is an interesting point, and I guess it’s important to make, but it doesn’t exactly answer the question I asked in the OP.
In 2013, Nick Bostrom gave a TEDx talk about existential risk in which he argued that it matters so much because of the 10^umpteen future lives at stake. In the talk, Bostrom referenced even older work by Derek Parfit. (From a quick Google, Parfit's discussion of existential risk is in his book *Reasons and Persons*, published in 1984.)
I feel like people in the EA community only started talking about "longtermism" in the la...