Phil Torres has an article criticizing Longtermism. I'm posting here in the spirit of learning from serious criticism.  I'd love to hear others' reactions: https://www.currentaffairs.org/2021/07/the-dangerous-ideas-of-longtermism-and-existential-risk

Spelling out his biggest concern, he says "Even more chilling is that many people in the community believe that their mission to “protect” and “preserve” humanity’s 'longterm potential' is so important that they have little tolerance for dissenters." He asserts that "numerous people have come forward, both publicly and privately, over the past few years with stories of being intimidated, silenced, or 'canceled.'"  This doesn't match my experience.  I find the EA community loves debate and questioning assumptions.  Have others had this experience? Are there things we could do to improve as a community?

Another critique Torres makes comes down to Longtermism being intuitively bad. I don't agree with that, but I bet it is a convincing argument to many outside of EA. To a large number of people, Longtermism can sound crazy. Maybe this has implications for communications strategy. Torres gives examples of Longtermists minimizing global warming. A better framing for Longtermists to use could be something like "global warming is bad, but these other causes could be worse and are more neglected." I think many Longtermists, including Rob Wiblin of 80,000 Hours, already employ this framing. What do others think?

Here is the passage where Torres casts Longtermism as intuitively bad:

If this sounds appalling, it’s because it is appalling. By reducing morality to an abstract numbers game, and by declaring that what’s most important is fulfilling “our potential” by becoming simulated posthumans among the stars, longtermists not only trivialize past atrocities like WWII (and the Holocaust) but give themselves a “moral excuse” to dismiss or minimize comparable atrocities in the future.

Comments

He asserts that "numerous people have come forward, both publicly and privately, over the past few years with stories of being intimidated, silenced, or 'canceled.'"  This doesn't match my experience.

I also have not had this experience, though that doesn't mean it hasn't happened, and I'd want to take it seriously if it did.

However, Phil Torres has demonstrated that he isn't above bending the truth in service of his goals, so I'm inclined not to believe him. See previous discussion here. Example from the new article:

It’s not difficult to see how this way of thinking could have genocidally catastrophic consequences if political actors were to “[take] Bostrom’s argument to heart,” in Häggström’s words.

My understanding (sorry that the link is probably private) is that Torres is very aware that Häggström generally agrees with longtermism and presents the example as a way not to do longtermism, but that doesn't stop Torres from using it to argue that this is what longtermism implies and that therefore all longtermists are horrible.

I should note that even if this article had been written by someone else, and even in the absence of this example, I probably wouldn't have investigated the supposed intimidation, silencing, or canceling, because:

  1. It seems super unlikely that the people I know would intimidate / silence / cancel anyone.
  2. Claims of "lots of X has happened" without evidence tend to be exaggerated.
  3. Haters gonna hate; the hate should not be expected to correlate with truth.

But in this case I feel especially justified for not investigating.

Many thanks for this, Rohin. Indeed, your understanding is correct. Here is my own screenshot of my private announcement on this matter.

This is far from the first time that Phil Torres references my work in a way that is set up to give the misleading impression that I share his anti-longtermism view. He and I had extensive communication about this in 2020, but he showed no sympathy for my complaints. 

Thanks for sharing. It looks like this article is less of a good-faith effort than I had thought.

I feel like that guy's got a LOT of chutzpah to not-quite-say-outright-but-very-strongly-suggest that the Effective Altruism movement is a group of people who don't care about the Global South. :-P

More seriously, I think we're in a funny situation where maybe there are these tradeoffs in the abstract, but they don't seem to come up in practice.

Like in the abstract, the very best longtermist intervention could be terrible for people today. But in practice, I would argue that most if not all current longtermist cause areas (pandemic prevention, AI risk, preventing nuclear war, etc.) are plausibly a very good use of philanthropic effort even if you only care about people alive today (including children).

Or, in the abstract, AI risk and malaria are competing for philanthropic funds. But in practice, a lot of the same people seem to care about both, including many of the people that the article (selectively) quotes. …And meanwhile most people in the world care about neither.

I mean, there could still be an interesting article about how there are these theoretical tradeoffs between present and future generations. But it's misleading to name names and suggest that those people would gleefully make those tradeoffs, even if it involves torturing people alive today or whatever. Unless, of course, there's actual evidence that they would do that. (The other strong possibility is, if actually faced with those tradeoffs in real life, they would say, "Uh, well, I guess that's my stop, this is where I jump off the longtermist train!!").

Anyway, I found the article extremely misleading and annoying. For example, the author led off with a quote where Jaan Tallinn says directly that climate change might be an existential risk (via a runaway scenario), and then two paragraphs later the author is asking "why does Tallinn think that climate change isn't an existential risk?" Huh?? The article could have equally well said that Jaan Tallinn believes that climate change is "very plausibly an existential risk", and Jaan Tallinn is the co-founder of an organization that does climate change outreach among other things, and while climate change isn't a principal focus of current longtermist philanthropy, well, it's not like climate change is a principal focus of current cancer research philanthropy either! And anyway it does come up to a reasonable extent, with healthy discussions focusing in particular on whether there are especially tractable and neglected things to do.

So anyway, I found the article very misleading.

(I agree with Rohin that if people are being intimidated, silenced, or cancelled, then that would be a very bad thing.)

Speaking of chutzpah, I've never seen anything quite like this:

“We can’t have people posting anything that suggests that Giving What We Can [an organization founded by Ord] is bad,” as Jenkins recalls. These are just a few of several dozen stories that people have shared with me after I went public with some of my own unnerving experiences.

He needs to briefly explain what Giving What We Can (GWWC) is, because otherwise the sentence would be incomprehensible, but because he wants to paint people as evil genocidal racists who don't care about the poor, he can't explain what type of organization GWWC is, or what the pledge is.

I think you raise a key point about theory of change and observed practice.  

I think we're in a funny situation where maybe there are these tradeoffs in the abstract, but they don't seem to come up in practice.

This "funny situation" means that something is up with the theoretical model. If the tradeoffs exist in the theoretical model but don't seem to exist in practice, then:

  • Practice is not actually based on the explicit theory but is instead based on something else, or
  • The tradeoffs do in fact exist in practice but are not noticed or acknowledged.

Both of these would be foundational problems for a movement organized around rationality and evidence-based practice.

Hmm, I guess I wasn't being very careful. Insofar as "helping future humans" is a different thing than "helping living humans", it means that we could be in a situation where the interventions that are optimal for the former are very-sub-optimal (or even negative-value) for the latter. But it doesn't mean we must be in that situation, and in fact I think we're not.

I guess if you think: (1) finding good longtermist interventions is generally hard because predicting the far-future is hard, but (2) "preventing extinction (or AI s-risks) in the next 50 years" is an exception to that rule; (3) that category happens to be very beneficial for people alive today too; (4) it's not like we've exhausted every intervention in that category and we're scraping the bottom of the barrel for other things ... If you believe all those things, then in that case, it's not really surprising if we're in a situation where the tradeoffs are weak-to-nonexistent. Maybe I'm oversimplifying, but something like that I guess?

I suspect that if someone had an idea about an intervention that they thought was super great and cost effective for future generations and awful for people alive today, well they would probably post that idea on EA Forum just like anything else, and then people would have a lively debate about it. I mean, maybe there are such things... Just nothing springs to my mind.

To me, the question is "what are the logical conclusions that longtermism leads to?" The idea that as of today we have not exhausted every intervention available is less relevant when considering hundreds of thousands or millions of years.

I suspect that if someone had an idea about an intervention that they thought was super great and cost effective for future generations and awful for people alive today, well they would probably post that idea on EA Forum just like anything else, and then people would have a lively debate about it.

I agree. The debate would be whether to follow the moral reasoning of longtermism or not. Something that might be "awful for people alive today" can be completely in line with longtermism; that could well be the situation. Not supporting the intervention would constitute a break between theory and practice.

I think it is important to address the implications of this funny situation sooner rather than later. 


Phil Torres's tendency to misrepresent things aside, I think we need to take his article as an example of the severe criticism that longtermism, as currently framed, is liable to attract, and reflect on how we can present it differently. It's not hard to read this sentence on the first page of (EDIT: the original version of) "The Case for Strong Longtermism":

The idea, then, is that for the purposes of evaluating actions, we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1000) years, focussing primarily on the further-future effects. Short-run effects act as little more than tie-breakers.

and conclude, as Phil Torres does, that longtermism means we can justify causing present-day atrocities for a slight, let's say 0.1%, increase in the subjective probability of a valuable long-term future. Thinking rationally, atrocities do not improve the long-term future, and longtermists care a lot about stability. But with the framing given by "The Case for Strong Longtermism", there is a small risk, higher than it needs to be, that future longtermists could be persuaded that atrocities are justified, especially when subjective probabilities are so subjective. How can we reframe or redefine longtermism so that, firstly, we reduce the risk of longtermism being used to justify atrocities, and secondly (and I think more pressingly), we reduce the risk that longtermism is generally seen as something that justifies atrocities?

It seems like this framing of longtermism is a far greater reputational risk to EA than, say, how 80,000 Hours over-emphasized earning to give, which is something that 80,000 Hours apparently seriously regrets. I think "The Case for Strong Longtermism" should be revised to not say things like "we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1000) years", without detailing significant caveats. It's just a working paper—shouldn't be too hard for Greaves and MacAskill to revise. (EDIT: this has already happened, as Aleks_K has pointed out below.) If many more articles like this one by Phil Torres are written in other media in the near future, I would be very hesitant about using the term "longtermism". Phil Torres is someone who is sympathetic to effective altruism and to existential risk reduction, someone who believes "you ought to care equally about people no matter when they exist"; now imagine if the article were written by someone who isn't as sympathetic to EA.

(This really shouldn't affect my argument, but I do generally agree with longtermism.)

I think "The Case for Strong Longtermism" should be revised to not say things like "we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1000) years", without detailing significant caveats.

FYI, this has already happened. The version you are linking to is outdated, and the updated version here no longer contains this statement.

Seems like the forum is a good place to discuss this (rather than Torres' Twitter, which I suggest is a bad place). Heck, Torres is a member of the forum (I think) and I'd welcome his thoughts.

I think it would be tempting to respond to him directly on Twitter, but that would just give this article oxygen. I don't think we should reward a bad (in my opinion) article like this. Instead, I find Robert Wiblin's advice pretty good here.

"A large % of critiques of EA / longtermism seem driven by the author not being familiar with what they really are, or the best arguments in their favour. So rather than reply directly I find it better just to produce deeper & clearer materials explaining what I believe and why."

https://twitter.com/robertwiblin/status/1422213998527799307?s=20

So I guess my question is "how can we write materials which assuage or clarify these concerns such that if someone had read them they would not be convinced by Torres' article?"

Note that Torres was banned from the forum for a year following a previous discussion here where he repeatedly called another member a liar and implied that member should be fired from his job.

Posts that criticize Longtermism, whether posted directly by the writer or, as in this case, deliberately brought in, aren't just bad in content by a huge margin; they also seem to be intellectually dishonest.

Frankly, it suggests criticism of this topic attracts a malign personality.

I am worried that this apparent pattern will send a signal that shades internal and external behavior among EAs.

This is because I know some EAs who are senior but are skeptical of Longtermism.

I am not as skeptical, because it seems natural to value (huge?) future populations, and it seems the pandemic has given overwhelming evidence that x-risk and related projects are hugely underserved.

My concern is that Longtermism benefits from many rhetorical issues that many do not perceive and this is unhealthy.

I don’t really want to lay out the case, as I am worried I will join the others in creating a stinker of a post or that I am indulging in some personal streak of narcissism.

However, I am worried this pattern of really bad content doesn't help. I do not know what to do.
