Sabine Hossenfelder, a popular science channel host, just posted this video critiquing longtermism.
I don't think it was a fair assessment; it misunderstood key concepts of longtermism. It is sad to see longtermism misrepresented to thousands of general viewers, who might turn away from even critically engaging with these ideas based on a biased overview. It could be worth engaging with her and her viewers in the comment section, both to learn what longtermism might be getting wrong so we can update our own views, and to discuss some of the points she raises more critically. I'm also concerned that EA could become too strongly associated with longtermism, which might lead thousands of these viewers to avoid EA.
Some of the points I agree with:
- Her discomfort with the fact that many major ideas of EA and longtermism originated from a handful of philosophers located at Oxford or related institutes.
- This quote from Singer that she cites: “..just how bad the extinction of intelligent life on our planet would be depends crucially on how we value lives that have not yet begun and perhaps never will begin”. I agree that this is an important crux that makes for a good argument against longtermism, or at least for advancing the longtermist agenda more cautiously.
Some of the points that I disagree with:
- “Unlike effective altruists, longtermists don’t really care about famines or floods because those won’t lead to extinction”. She mistakes prioritizing the long-term future over the short term for an implication that longtermists don’t “care” about the short term at all. But it is a matter of deciding which has more impact among many extraordinary ways of doing good, which include caring about famines and floods.
- “So in a nutshell longtermists say that the current conditions of our living don’t play a big role and a few million deaths are acceptable, so long as we don’t go extinct”. Nope, I don’t think so. Longtermism merely states that causing the deaths of a few billion people might be even worse than causing the deaths of a few million. Both a million deaths and a billion deaths are absolutely unacceptable, but I think what she misses is the trade-offs involved in doing good, given the limited time and resources we have. I am surprised she misses the point that when one actually wants to do the most good, one has to go about it in a rational way.
- “Have you ever put away a bag of chips because you want to increase your chances of having more children so we can populate the entire galaxy in a billion years? That makes you a longtermist.” Nope, I don’t think longtermism advocates for people going out of their way to have more children.
- She quotes a few opinion pieces that criticize longtermism: “.. the fantasy that faith in the combined power of technology and the market could change the world without needing a role for the government”, “..longtermism seems tailor-made to allow tech, finance and philosophy elites to indulge their anti-humanistic tendencies..”. Ironically, I think longtermists are more humanistic, given that one of their primary goals is to ensure the long-term survival of humanity. Also, as far as I know, longtermism only says that, given that technology is going to be important, it is best that we develop it in safer ways. It does not necessarily promote pushing for technological progress for its own sake, nor does it impose a technocratic moral worldview when it advocates for extinction risk prevention or prioritizes the survival of humanity and life as a whole.
With regard to Pascal’s Mugging, which she brings up, I am uncertain how it can be resolved. I’ve read a few EA articles for and against, but I’m still confused.
Would love to hear your thoughts on this video and what it could mean for the non-EAs that might be watching it.