Phil Torres has written an article criticizing Longtermism. I'm posting it here in the spirit of learning from serious criticism, and I'd love to hear others' reactions: https://www.currentaffairs.org/2021/07/the-dangerous-ideas-of-longtermism-and-existential-risk
Spelling out his biggest concern, he writes: "Even more chilling is that many people in the community believe that their mission to 'protect' and 'preserve' humanity's 'longterm potential' is so important that they have little tolerance for dissenters." He asserts that "numerous people have come forward, both publicly and privately, over the past few years with stories of being intimidated, silenced, or 'canceled.'" This doesn't match my experience; I find the EA community loves debate and questioning assumptions. Does this match others' experience? Are there things we could do to improve as a community?
Another critique Torres makes comes down to Longtermism being intuitively bad. I don't agree with that, but I bet it is a convincing argument to many outside of EA. To a large number of people, Longtermism can sound crazy. Maybe this has implications for communications strategy. Torres gives examples of Longtermists minimizing global warming. A better framing for Longtermists to use could be something like "global warming is bad, but these other causes could be worse and are more neglected." I think many Longtermists, including Rob Wiblin of 80,000 Hours, already employ this framing. What do others think?
Here is the passage where Torres casts Longtermism as intuitively bad:
If this sounds appalling, it’s because it is appalling. By reducing morality to an abstract numbers game, and by declaring that what’s most important is fulfilling “our potential” by becoming simulated posthumans among the stars, longtermists not only trivialize past atrocities like WWII (and the Holocaust) but give themselves a “moral excuse” to dismiss or minimize comparable atrocities in the future.
Hmm, I guess I wasn't being very careful. Insofar as "helping future humans" is a different thing from "helping living humans," we could be in a situation where the interventions that are optimal for the former are very suboptimal (or even negative-value) for the latter. But that doesn't mean we must be in that situation, and in fact I think we're not.
I guess the reasoning goes something like this: (1) finding good longtermist interventions is generally hard, because predicting the far future is hard; (2) "preventing extinction (or AI s-risks) in the next 50 years" is an exception to that rule; (3) that category happens to be very beneficial for people alive today too; and (4) we haven't exhausted every intervention in that category, so we're not scraping the bottom of the barrel for other things. If you believe all of those, then it's not really surprising if we're in a situation where the tradeoffs are weak to nonexistent. Maybe I'm oversimplifying, but something like that, I guess?
I suspect that if someone had an idea for an intervention that they thought was super great and cost-effective for future generations but awful for people alive today, they would probably post that idea on the EA Forum just like anything else, and then people would have a lively debate about it. I mean, maybe there are such things... just nothing springs to my mind.