Sabine Hossenfelder, a popular science channel host, just posted this video critiquing longtermism.

I don't think it was a fair assessment; it misunderstands key concepts of longtermism. It is sad to see longtermism misrepresented to thousands of general viewers, who might turn away from even critically engaging with these ideas based on a biased overview. It could be worth engaging with her and her viewers in the comment section, both to learn what longtermism might be getting wrong (so as to update our own views) and to discuss some of the points she raises more critically. I'm also concerned that EA could become too strongly associated with longtermism, which might make thousands of these viewers avoid EA.

Some of the points I agree with:

  • Her discomfort with the fact that many major ideas of EA and longtermism originated from a handful of philosophers at Oxford or related institutes.
  • This quote from Singer that she cites: “...just how bad the extinction of intelligent life on our planet would be depends crucially on how we value lives that have not yet begun and perhaps never will begin”. I agree that this is an important crux that makes for a good argument against longtermism, or at least for more cautious advancement of longtermism.

Some of the points that I disagree with:

  • “Unlike effective altruists, longtermists don’t really care about famines or floods because those won’t lead to extinction.” She mistakes prioritizing the long-term future over the short term for an implication that longtermists don’t “care” about the short term at all. But it is a matter of deciding which has more impact among many extraordinary ways of doing good, which include caring about famines and floods.
  • “So in a nutshell longtermists say that the current conditions of our living don’t play a big role and a few million deaths are acceptable, so long as we don’t go extinct.” Nope, I don’t think so. Longtermism merely states that causing the deaths of a few billion people might be even worse. Both a million deaths and a billion deaths are absolutely unacceptable, but I think what she misses is the trade-offs involved in doing good and the limited time and resources we have. I am surprised she misses the point that when one actually wants to do the most good, one has to go about it in a rational way.
  • “Have you ever put away a bag of chips because you want to increase your chances of having more children so we can populate the entire galaxy in a billion years? That makes you a longtermist.” Nope, I don’t think longtermism advocates that people go out of their way to have more children.
  • She quotes a few opinion pieces that criticize longtermism: “...the fantasy that faith in the combined power of technology and the market could change the world without needing a role for the government”, “...longtermism seems tailor-made to allow tech, finance and philosophy elites to indulge their anti-humanistic tendencies...”. Ironically, I think longtermists are more humanistic, given that one of their primary goals is to ensure the long-term survival of humanity. Also, as far as I know, longtermism only says that, given that technology is going to be important, it is best that we develop it in safer ways. It does not necessarily promote technological progress for its own sake, nor does it impose a technocratic moral worldview when it advocates for extinction-risk prevention or prioritizing the survival of humanity and life as a whole.

Regarding the Pascal's Mugging objection she brings up, I am uncertain how it can be resolved. I've read a few EA articles for and against it, but I'm still confused.
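For readers unfamiliar with the objection, here is a minimal sketch of the expected-value arithmetic that generates it (the numbers are invented purely for illustration):

$$
\mathbb{E}[\text{value}] \;=\; p \cdot V \;=\; 10^{-20} \times 10^{30} \text{ lives} \;=\; 10^{10} \text{ lives in expectation}
$$

A naive expected-value maximizer is thus compelled to act on an astronomically large payoff $V$ even at a vanishingly small probability $p$; the open question is whether and how such claims should be discounted.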

Would love to hear your thoughts on this video and what it could mean for the non-EAs who might be watching it.

Comments (21)

Hossenfelder's final statement on Longtermism

I watched the video. Starting at 14:45, she spends the final minute giving an appreciative take on longtermism (it makes sense to protect the long-term future, but you also have to protect present-day people and the environment in order to do so).

How the political hit job works

The previous ~15 minutes are a political hit job, using these techniques:

  1. Grab a couple of potentially offensive-sounding sound bites from old papers, then misrepresent and spin them to give a false impression of longtermist discourse (e.g. saying that longtermists consider present-day people to be "expendable").
  2. Repeat the nastiest-sounding criticisms from newspaper columnists.
  3. Zero in on associations with Silicon Valley billionaires.
  4. Pit longtermists and non-longtermist EAs against each other (e.g. focusing on Peter Singer's criticism of Toby Ord for complaining that too much money is spent on global health and not enough on longtermism).
  5. Claim that "real longtermists" should do things that nobody, including actual longtermists, does (e.g. asserting that longtermism insists we have as many babies as possible, so we should all diet, so we should all stop eating chips).
  6. Knock down weakman/strawman arguments for longtermism.

How I'd prefer EA/longtermism respond to such criticisms

There are critical arguments in this video worth taking seriously, as listed in the OP. Hossenfelder is far from the first person to notice or be uncomfortable with them.

I find it suspect if what prompts us to engage with an argument is that Sabine Hossenfelder yelled at us about it on her YouTube channel. That makes me think both that we don't actually care about the criticism (we're just feeling defensive) and that we're probably focusing on the wrong arguments (because hit-job critics like Hossenfelder are optimizing for making their targets look and feel bad, not for engaging with the substantive problems with their targets).

If we're interested in taking those arguments seriously, we should either make them ourselves, or cite sources that do so in a professional and collegial manner. I think this is basic to self-respect. I would not tolerate the kind of cruelty Hossenfelder showers on longtermism from friends, colleagues, or teachers, and I don't particularly want to receive it from the movements I look to for decision-making advice and information.

I've watched a few of Sabine Hossenfelder's videos in the past. She didn't previously strike me as a "hit-job critic" -- for example, I remember thinking this video about nuclear power was reasonable (not an area I have expertise in, though).

Your model here seems to be that Sabine set out to make a hit job on longtermism. I think a more likely sequence of events was something like:

  • Sabine supplements her academic income by making YouTube videos about popular science.

  • The more videos she makes, the more money she makes.

  • Longtermism has been in the news recently; she decides to make a video about it.

  • She reads some news coverage of longtermism that ends up shaping her thinking about longtermism quite a lot.

  • The video ends up being essentially a repetition of talking points from the news coverage.

I think it's incorrect to believe that Sabine knows everything about longtermism that you do, and is seeking to intentionally distort it. It seems more likely to me that she is just repeating what has become the popular narrative about longtermism by this point. (Note: I haven't been paying much attention to longtermism news coverage. This is just a guess.)

"Never attribute to malice that which can be adequately explained by neglect." The video did not strike me as especially "cruel", in sense of deliberately seeking to cause harm. "Uncharitable" or "dismissive" seems more like it.

Anyway, if the above story is true, my takeaways would be:

  • Before popularizing a subtle idea like longtermism, there should be a red-teaming process: thinking through how critics are likely to respond, and also how the meme might evolve when introduced to a broader audience. (Imagine the person you like least, then imagine them justifying their worst idea using longtermism. How would you prevent this?)

  • If it's worthwhile to popularize an idea like longtermism, it's worthwhile to do it right. Responding to critics doesn't actually take that much time. (80/20 rule: Responding to 20% of critics gets you 80% of the benefit.) A few people can be paid to watch for longtermism discussion using Google Alerts etc. and offer polite corrections if bad arguments are made. Polite corrections probably won't cause the person who made the bad argument to reverse their position, but they can be persuasive to onlookers. If no counterargument is made, some onlookers will assume that's because no counterargument can be made, and some of those onlookers could be people who also have a big social media platform. Standard EA advice to ignore most critics makes little sense to me.

Before popularizing a subtle idea like longtermism, there should be a red-teaming process: thinking through how critics are likely to respond, and also how the meme might evolve when introduced to a broader audience. (Imagine the person you like least, then imagine them justifying their worst idea using longtermism. How would you prevent this?)

To me, this sounds like PR, and I agree with Anna Salamon that PR is corrosive, reputation is not. I view myself here as defending longtermism's reputation, or honor. When somebody who's talking beyond their expertise besmirches the reputation of an idea, person, or group, it's right to push back directly against this behavior, not to try to avoid that outcome by modifying how you show up in public.

A few people can be paid to watch for longtermism discussion using Google Alerts etc. and offer polite corrections if bad arguments are made. Polite corrections probably won't cause the person who made the bad argument to reverse their position, but they can be persuasive to onlookers. If no counterargument is made, some onlookers will assume that's because no counterargument can be made, and some of those onlookers could be people who also have a big social media platform.

I'd be supportive of a well-thought-through experiment to try this out, though I am not sure how one would approach it or get feedback. In my own few experiences of politely responding to public figures making ill-founded criticisms, they have just ignored me. I expect that would be the result here too.

Remember that Sabine Hossenfelder is a theoretical physicist. She went through and read papers. She is an extremely intelligent person. I am sure she's smarter than me. I think it is far more likely that she understood the ideas and deliberately decided to distort them for her own political agenda, or maybe just for clicks, than that she misunderstood them. I really think that longtermism is an easier topic to grasp than "Collider signatures in the Planck regime." If she can publish the latter, I think she can grasp the former.

To me, this sounds like PR, and I agree with Anna Salamon that PR is corrosive, reputation is not.

I think any effort to popularize longtermism is in some sense a PR effort. If you're going to deliberately push a meme you should do it strategically. (Edit: To be clear, I'm not advocating for dishonesty.)

I think the "corrosiveness of PR" point applies more strongly to personal and organizational conduct than advocating for a new idea.

My own few experiences of trying to politely respond to public figures making ill-founded criticisms is that they just ignore me. I expect this would be the result.

Publicly admitting you're incorrect is disincentivized. Probably if someone finds your counterpoint persuasive, they will not say so, in order to save face. In any case, onlookers seem more important -- there are far more of them.

Also, if the counterpoint is published by a professional, they'll have a bit more of a platform, so the likelihood of them getting ignored will be a bit lower. (Edit: Clarification -- I'm advocating that you publish counterpoints specifically in places where people who saw the original are also likely to see the counterpoint. So e.g. if you have more Twitter followers, your reply to their tweet will be more visible.)

Remember that Sabine Hossenfelder is a theoretical physicist. She went through and read papers. She is an extremely intelligent person. I am sure she's smarter than me. I think it is far more likely that she understood the ideas and deliberately decided to distort them for her own political agenda, or maybe just for clicks, than that she misunderstood them. I really think that longtermism is an easier topic to grasp than "Collider signatures in the Planck regime." If she can publish the latter, I think she can grasp the former.

I'm not referring to the difficulty of grasping it so much as the amount of time that was put in. Also, framing effects are important. Maybe Sabine just skimmed the paper to verify that the claims made in the media were correct. Maybe she doesn't have much experience with moral philosophy discourse norms. ("You would kill baby Hitler? Stop advocating for infanticide!!")

I'm not sure what you think her agenda is. If she was focused on advancing an agenda, such as attempting a "hit job", would it make sense to include the bit at the end about how she really appreciates the longtermist focus on the prevention of existential risks so we have a long-term strategy for the next 10 billion years? My guess is she is not deliberately pushing an agenda, so much as fitting longtermism into an existing worldview without trying to steelman it (or adopting a frame from someone else who did this).

Publicly admitting you're incorrect is disincentivized. Probably if someone finds your counterpoint persuasive, they will not say so, in order to save face. In any case, onlookers seem more important -- there are far more of them.

Mostly, I've contacted authors via email. I never get responses. This doesn't really surprise me, since they don't know who I am, they stand to gain nothing by replying, and they might worry I'd use any reply they gave me to disparage them in public. The point stands, though, that it's really not easy to foster dialogue with a person who has already taken the step of disparaging you, your ideas, or your community in public.

I'm not referring to the difficulty of grasping it so much as the amount of time that was put in. Also, framing effects are important. Maybe Sabine just skimmed the paper to verify that the claims made in the media were correct. Maybe she doesn't have much experience with moral philosophy discourse norms. ("You would kill baby Hitler? Stop advocating for infanticide!!")

I'm not sure what you think her agenda is. If she was focused on advancing an agenda, such as attempting a "hit job", would it make sense to include the bit at the end about how she really appreciates the longtermist focus on the prevention of existential risks so we have a long-term strategy for the next 10 billion years? My guess is she is not deliberately pushing an agenda, so much as fitting longtermism into an existing worldview without trying to steelman it (or adopting a frame from someone else who did this).

Let's taboo "hit job", since it's adding heat rather than light, at least in this discussion between us. I do think it makes sense to acknowledge the common-sense (actual longtermist) viewpoint at the end in the context of making a political attack on longtermism. Hossenfelder knows that her audience is sympathetic to the view that we should care for the long-term future. That makes it difficult to outright dismiss longtermism the way mainstream political ideologies dismiss each other.

So she has to present an insane version of what we might call "no chips" longtermism, then argue against that, with the little caveat at the end. This is going to be just one of many examples coding longtermism as some sort of whacko right-wing hypernatalist fantasy: escape from the burning wreckage of Earth to Mars for the richest 0.1%.

Having watched the video, I just frankly find it hard to believe that anybody would watch it and not see it as a clear politically motivated attack/smear attempt on longtermism.

It's not so much a sense that you're seeing the young woman while I'm seeing the old lady.

This isn't meant to be disparaging, but it's more a sense that I'm seeing the rabbit, while you claim you're not colorblind and yet don't see the rabbit.

I'm truly confused both about how you can watch Hossenfelder's video and not see it as a politically motivated attack, and also about how you imagine, in practical terms, that longtermism could have avoided becoming a target for such attacks.

I'm truly confused both about how you can watch Hossenfelder's video and not see it as a politically motivated attack

Supposing it is a politically motivated attack, what do you think her motivation was? Why would she craftily seek to discredit longtermism in the way you describe? I think that's the biggest missing piece for me.

(I also think it's dangerous to mistake criticism for deliberate persecution.)

how you imagine, in practical terms, that longtermism could have avoided becoming a target for such attacks.

One of the most common ways to argue in moral philosophy is to make use of intuition pumps. For example: "Do you believe fighting global warming should be a top priority, even if it means less growth in developing countries and therefore more suffering in the near term? If so, how would you justify that?"

Can you say more about how you see intuition pumps as a potential way for longtermism to avoid political attacks? Seems to me we use them all the time.

I think EA and longtermism are both coming under attack now because they are currently visible/trendy competitors in the moral marketplace of ideas. I don't have a great explanation for why people do this, but it's a traditional human hobby. It just seems like a typical case of attacking a perceived outgroup, either because they seem like a legitimate threat to one's own influence or because you think your followers will enjoy the roast.

Can you say more about how you see intuition pumps as a potential way for longtermism to avoid political attacks? Seems to me we use them all the time.

The thought is to tailor the intuition pump for your audience, e.g. if your audience is left-wing, leverage moral intuitions they already have.

The thought is to tailor the intuition pump for your audience

I would expect this would make the problem worse, because these attacks come from people looking for stuff to quote, and if you are saying different things to different people they can quote the stuff you said in one context to people in another.

I guess I'm not sure at what point you transition from writing straightforward academic articles to writing politically-targeted ones. Hossenfelder said she skipped reading the more recent work (i.e. MacAskill's "Doing Good Better") in favor of looking at old papers published before longtermism/EA was in the news. So unless weird little nascent philosophical movements are couching their arguments in language appealing to every possible future political critic, years before those critics will deign to even read the paper, it doesn't seem like this strategy could have prevented Hossenfelder's criticism.

“Unlike effective altruists, longtermists don’t really care about famines or floods because those won’t lead to extinction”

I think this is an accurate characterization of the popular EV-maximizing, total-utilitarian longtermist worldview within the community. Note the "really", indicating that the statement is only approximately true. I would be less comfortable attributing this view to longtermists as individuals, to the community as a whole, or to weaker forms of longtermism, since longtermists aren't necessarily 100% bought into this strong a form of longtermism.

Also, even short-termist EAs don't really care about famines or floods, as evidenced by our lack of work on these problems: there are still more cost-effective (less neglected) things to work on.

Rather than prioritization explaining that we do care, prioritization explains why we don't really care.

Let's taboo the word "care". I expect the average longtermist thinks that deaths from famines and floods are about as bad as the average non-longtermist EA does. Problems do not become "less bad" simply because other problems exist.

Having different priorities, stemming from different beliefs about e.g. what things matter and how effectively we can address them, is orthogonal to relative evaluations of how bad any individual problem is.

They don't become less bad, but we pay less attention and devote fewer resources to them, which is a very plausible way of interpreting "caring less". On this interpretation, it's psychologically implausible that we can care as much about these other problems as others can, even if our abstract utility functions don't say they matter any less just because other things matter more.

I don't think the right response is to argue some definitional point. We should just own that we care less on a commonsense interpretation of the word, and explain why that's right to do.

I agree that the definitional point would be uninteresting, except that I think the commonsense interpretation bundles a bunch of connotations which are wrong (and negative). In context, people receiving this message will have systematically incorrect beliefs about longtermism and about those who use it as a framework for prioritization. This is plainly obvious if you go read pretty much any Twitter thread where people hearing about it for the first time (or otherwise introduced to it in an adversarial context) are debating the subject.

They don't become less bad, but we pay less attention and devote fewer resources to them, which is a very plausible way of interpreting "caring less".

One meaning of "caring" (let's call it Caring-1) is the kind of care a parent provides for their child. This is precisely the type of care you're talking about here. It implies a responsibility to nurture, protect, and feel for an individual person, place, or thing. Common sense is that we have a responsibility to care for a very limited number of others in this way, and to at least be cognizant enough to do no harm to a much wider circle of others.

"Caring" can also refer to one's receptivity to "chance encounters with other people's problems." Let's call this Caring-2.

  • If you had a golden opportunity to help out with a certain problem, would you ("Do you want a hand with that")?
  • Do you approve of the fact that somebody out there is working on a certain problem ("X is doing amazing work on this problem!")?
  • Do you feel and express sympathy for a certain problem when it is brought to your attention ("I'm so sorry")?
  • Do you acknowledge the reality of the suffering various problems cause, even if you don't personally work on that problem yourself ("that is a really serious issue")?
  • Will you acknowledge that the problem seems like a plausible choice for extending Caring-1, even if you don't personally choose to do so ("somebody should do something!")?

Nobody can provide Caring-1 to every issue. The difference between short-termists and longtermists is to what sorts of issues they extend or reject Caring-2.

  • Longtermists may reject or downplay Caring-2 for major present-day issues (famines, floods, etc), in favor of extending either Caring-1 or Caring-2 for far-future issues (astronomical waste).
  • Short-termists may reject or downplay Caring-2 for far-future issues (astronomical waste) in order to focus more on present-day issues (famines and floods).

Hossenfelder expresses around 14:45 that she approves of extending Caring-2 to both the short-term and long-term future. What bothers her is the idea that we should extend no Caring-1 or Caring-2 to the present day, as well as some of the more far-out ideas longtermist thinkers have explored (e.g. simulation arguments).

Of course, Hossenfelder, a theoretical physicist, is smart enough to make this distinction herself. The fact that she chooses not to, and couches her argument in such heated language, says to me that this is just another crude political hit-job.

even if our abstract utility functions don't say they matter any less just because other things matter more

In decision theory, a utility function can't say anything else: total caring is, roughly speaking, conserved.

The pseudo utility functions that a hedonic utilitarian projects onto others can introduce more caring for one thing without reducing their caring for other things, but they're irrelevant in this context. (And if you ask me, a preference utilitarian, they're not very relevant in the context of utilitarianism either, but never mind that.)

Hmm, although I think I get what you mean, I'm not sure how it could actually be true, given that (preference) utility functions are scale- and offset-invariant, so the extent of an agent's caring can only be described relative to the other things they care about?
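For reference, here is the standard von Neumann–Morgenstern fact being appealed to: a utility function is only defined up to a positive affine transformation, so

$$
U'(x) \;=\; aU(x) + b, \qquad a > 0
$$

represents exactly the same preferences as $U$. Only ratios of utility differences, such as $\frac{U(x) - U(y)}{U(y) - U(z)}$, carry meaning, so "how much an agent cares" about one outcome can only be expressed relative to how much it cares about others.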

Just flagging that a group here in Israel, led by Sella Nevo, has been working on flood forecasting for years (among other cool projects).

I'm concerned by people connecting longtermism to Elon Musk (who I think is becoming increasingly harmful and naive), and I would be curious how the EA community can deal with him.

Will MacAskill wrote a Twitter thread about agreements + disagreements with Elon after Elon recommended WWOTF and said "this is a close match for my philosophy."

Worth linking this video from Bentham's Bulldog, which critiques Sabine's video. It's nearly 90 minutes long and sometimes optimizes for making fun of Sabine's video rather than giving a fair response, but it does contain some good responses.
