Phil Torres has an article criticizing Longtermism. I'm posting it here in the spirit of learning from serious criticism, and I'd love to hear others' reactions: https://www.currentaffairs.org/2021/07/the-dangerous-ideas-of-longtermism-and-existential-risk
Spelling out his biggest concern, he says: "Even more chilling is that many people in the community believe that their mission to 'protect' and 'preserve' humanity's 'longterm potential' is so important that they have little tolerance for dissenters." He asserts that "numerous people have come forward, both publicly and privately, over the past few years with stories of being intimidated, silenced, or 'canceled.'" This doesn't match my experience: I find the EA community loves debate and questioning assumptions. Have others had experiences like the ones Torres describes? Are there things we could do to improve as a community?
Another critique Torres makes comes down to Longtermism being intuitively bad. I don't agree, but I suspect it is a convincing argument to many people outside of EA, to whom Longtermism can sound crazy. Maybe this has implications for communications strategy. Torres gives examples of Longtermists minimizing global warming. A better framing for Longtermists to use could be something like "global warming is bad, but these other causes could be worse and are more neglected." I think many Longtermists, including Rob Wiblin of 80,000 Hours, already employ this framing. What do others think?
Here is the passage where Torres casts Longtermism as intuitively bad:
If this sounds appalling, it’s because it is appalling. By reducing morality to an abstract numbers game, and by declaring that what’s most important is fulfilling “our potential” by becoming simulated posthumans among the stars, longtermists not only trivialize past atrocities like WWII (and the Holocaust) but give themselves a “moral excuse” to dismiss or minimize comparable atrocities in the future.
To me, the question is "what are the logical conclusions that longtermism leads to?" The point that we have not yet exhausted every intervention available today becomes much less relevant when reasoning over timescales of hundreds of thousands or millions of years.
I agree. The debate would be whether or not to follow the moral reasoning of longtermism. An intervention that is "awful for people alive today" could be completely in line with longtermism; in that situation, declining to support it would constitute a break between theory and practice.
I think it is important to address the implications of this awkward situation sooner rather than later.