Epistemic Status: Personal view about longtermism and its critics.
Recently, there has been a series of attacks on longtermism. These largely focus on the (indisputable) fact that avoiding X-risks can be tied to racist or eugenic historical precedents. This should be worrying; a largely white, educated, western, and male group talking about how to fix everything should raise flags. And neglecting to address the roots of futurism is worrying - though I suspect that highlighting them and attempting apologetics would have been an even larger red flag to many critics.
At the same time, attacks on new ideas like longtermism are inevitable. New ideas, whether good or bad, are usually controversial. Moreover, any approaches or solutions that are proposed will have drawbacks, and when they are compared to the typical alternative (of ignoring the problem) they will inevitably be worse in some ways. Nonetheless, some portions of the attacks have merit.
In 2017, Scott Alexander wrote a post defending the ideas of the LessWrong community, Yes, We Have Noticed the Skulls. He says, in part, "the rationalist movement hasn’t missed the concerns that everybody who thinks of the idea of a 'rationalist movement' for five seconds has come up with. If you have this sort of concern, and you want to accuse us of it, please do a quick Google search to make sure that everybody hasn’t been condemning it and promising not to do it since the beginning."
Similarly, there is a real concern about the dangers of various longtermist approaches, but it is one which at least the majority of those who engage with longtermist ideas understand. These attacks look at some of the roots of longtermism, but ignore the actual practice and motives, the repeated assertions that we are still uncertain, and the clear evidence that we are and will continue to be interested in engaging with those who disagree.
As the Effective Altruism forum should make abundantly clear, the motivations for the part of the community which embraces longtermism still include Peter Singer's embrace of practical ethics and effective altruist ideas like the Giving Pledge, which are cornerstones of the community's behavior. Far from carrying on the racist roots of many past utopians, we are trying to address them. "What greater racism is there than the horrifically uneven distribution of resources between people all because of an accident of their birth?" as Sanjay noted. Still, defending existential risk mitigation and longtermism by noting its proximity to and roots in global health and other effective altruist causes is obviously less than a full response.
And in both areas, despite the best of intentions, there are risks that we cause harm, increase disparities in health and happiness, promote ideas which are flawed and dangerous, or otherwise fail to live up to our ideals. Yes, we see the skulls. And yes, some of the proposals that have been put forward have glaring drawbacks which need to be discussed and addressed. I cannot speak for others, though if there is one thing longtermism cannot be accused of, it's insufficient attention to risk.
So I remain wary of the risks - not just the farcical claim that transhumanism is the same as eugenics, or the more reasonable one that some proposed paths towards stability and safety have the potential to worsen inequalities rather than address them, but also immediate issues like gender and racial imbalance within the movement, and the problem of seeing effective altruism as a white man's burden. The community has a history of engaging with critics, and we should continue to take their concerns seriously.
But all the risks of failure aren't a reason to abandon the project of protecting and improving the future - they are a reason to make sure we continue discussing and planning. I hope that those who disagree with us are willing to join in productive conversation about how we can ensure our future avoids the pitfalls they see. If we do so, there is a real chance that our path forward will not just be paved with good intentions, but lead us towards a better and safer future for everyone.
I want to note not just the skulls of the eugenic roots of futurism, but also the "creepy skull pyramid" of longtermists suggesting actions that harm current people in order to protect hypothetical future value.
These range from suggestions to slow down AI progress - which seem comfortably within the Overton window, but risk slowing economic growth and thus slowing reductions in global poverty - to the extreme actions suggested in some Bostrom pieces. Quoting the Current Affairs piece:
Mind you, I don't think these tensions are unique to longtermism. In biosecurity, even if you're focused entirely on the near term, there are a lot of trade-offs and tensions between preventing harm and securing benefits.
You might have really robust export controls that never let pathogens be shipped around the world... but that will make it harder for developing countries to build up their biomanufacturing capacity. Under the Biological Weapons Convention you have a lot of diplomats arguing about balancing Article IV ("any national measures necessary to prohibit and prevent the development, production, stockpiling, acquisition or retention of biological weapons") and Article X ("the fullest possible exchange of equipment, materials and information for peaceful purposes"). That said, I think longtermist commitments can increase the relative importance of preventing harm.
Thanks - I largely agree, and am similarly concerned about the potential for such impacts, as was discussed in the thread with John Halstead.
As an aside, I think Harper's LARB article was being generous in calling Phil's Current Affairs article "rather hyperbolic," and think its tone and substance are an unfortunate distraction from the various more reasonable criticisms Phil himself has raised in the past.