Just stumbled upon this recent post and thought I would share it here for discussion. Feel free to also discuss on the substack itself if you have something worthwhile to share.

What do we, in 2023, ‘owe’ to future generations of humans? What about to future plants, animals, and ecosystems?

Rupert Read and Émile P. Torres dive deeply into these questions in their guest essay for us this week, and put forth a much-needed argument for why we must look more critically at dangerously seductive, radical forms of longtermism.

Aside from evoking the various unsettling aspects of longtermism, one of the most piercing elements of this piece is Read and Torres’ exposure of the paradox within this ideology: that ‘longtermism’ is in fact at odds with long-term thinking. Long-term thinking, as they define it, is an ‘ethical practice and commitment’; it requires deep reflection on the meaning of life; and it requires care for other humans, for future humans, and for our planet. They write, ‘it involves a recognition that there probably will be people long into the future, and that the quality of their lives and the options available to them depend to some nontrivial degree on our actions today.’

A carefully considered critique of radical ‘longtermism’ is therefore not a matter of throwing up one’s hands and ignoring the future of humanity. Drawing upon Hannah Arendt, Joseph Nye, David Graeber and David Wengrow, Read and Torres offer alternative pathways to the broadly utilitarian ideology of ‘longtermism’, rooted in a more temporally and ethically expansive sense of what it means to be human.

- Leigh Biddlecome, Visiting Editor & Curator, Perspectiva


Hey Alex, I wanted to respond earlier but I've been quite busy with work!

On your downvote total: this is another case where I'd really like to be able to differentiate between upvoting for visibility and a disagree vote to indicate I'm not a fan of the content. As such, I've removed my downvote.

I think a charitable reading of what the downvoters might be trying to signal (and I agree) is that Émile Torres is not a good-faith interlocutor, and in fact is not aiming for dialogue and truth but is trying to (in their eyes) save the world by destroying the EA movement, and thus that there actually isn't much to learn. There isn't much here, if anything, that hasn't been said before, hence the downvotes. YMMV of course, but I think a sentiment roughly along these lines might be responsible for the negative score.

I think there are some interesting pieces/threads in the article, but there's so much crap that gets in the way (I actually tried to respond to some bullet points, but in the end I think it would lead to more heat than light). I think the final thought experiment is interesting, especially why the authors think it's such a slam dunk, and the ways that they subtly try to influence the reader ('partying'/'trashing', etc. Imagine I said: compare 20 billion flourishing humans living over 100 centuries vs 5 billion humans over 100,000 years - which world is better?). I feel a better solution (with hindsight) would have been to just quote that part for discussion.

Final thought - I do applaud you for looking for critical articles on EA and trying to discuss the useful parts. It's a really good trait :)

  1. ^ Though I accept that some significant sub-portion of EA may well believe this.

  2. ^ Very quick answer ofc, lots of nuance to add, but I think this debate really underlies a lot of this tension between the two camps.

Wow, just downvotes without any critical engagement or justification… that’s not what I would have expected. I thought critical takes on longtermism would be treated as potentially helpful contributions to an open debate on an emerging concept that is still not very well understood?

I think the downvotes are coming from the fact that Émile P. Torres has been making similar-ish critiques of the concept of longtermism for a while now.  (Plus, in some cases, closer to bad-faith attacks against the EA movement, like I think at one point saying that various EA leaders were trying to promote white supremacism or something?)  Thus, people might feel both that this kind of critique is "old news" since it's been made before, and they furthermore might feel opposed to highlighting more op-eds by Torres.

Some previous Torres content which garnered more of the critical engagement that you are seeking:
- "The Case Against Longtermism", from three years ago was one of the earlier op-eds and sparked a pretty lively discussion; this is just one of several detailed Forum posts responding to the essay.
- Some more Forum discussion of "The Dangerous Ideas of 'Longtermism' and 'Existential Risk'", two years ago.

- By contrast, the more recent "Understanding 'longtermism': Why this suddenly influential philosophy is so toxic", from only one year ago, didn't get many responses - I think because people got the impression that Torres is still making similar points and, for their part, not giving much critical engagement to the various rebuttals/defenses from the people that Torres is criticizing.

Yeah, I mean I understand that people don't really like Torres and this style of writing (it is pretty aggressive), but still there are some interesting points in the post which I think deserve reflection and discussion. Just because "the other side" does not listen to the responses does not mean there is nothing to learn for oneself (or am I too naive in this belief?). So, I still think downvoting into oblivion is not the right move here.

Just to give an example, I think the end of the post is interesting to contemplate and cannot just be "dismissed" without at least some level of critical engagement.

> We end by noting a final, terminal irony about MacAskill’s book. The book is called ‘What We Owe the Future.’ But consider the following thought-experiment: in World A, 20 billion humans live all at once, trash everything, but party hard until the end. In World B, only 1 million exist per generation, but humanity persists, Earth-bound, into the future until a total of 10 billion people have existed. Assuming that a generation is 30 years, this means that people would exist in World B for another 300,000 years, which is roughly the same amount of time that Homo sapiens has so far existed, and 50 times longer than civilization has been around. Hence, twice as many people exist in World A than World B, but in World A, humanity’s future ends within a lifetime, while in World B, humanity’s future extends into the distant future. The question is: would World A be better or worse than World B?
>
> ‘Longtermists’ would surely argue that, other things being equal, World A is better, since it would contain more ‘happy’ people and, therefore, more total value. In contrast, we claim that World B would be better—this is, in our view, a more humane, a more beautiful, a more decent scenario than its alternative, which bears some alarming resemblances to the way we live right now: as Homo eversor, ‘man’ the endless destroyer. But MacAskill asks us to be impressed by the sheer weight of numbers. In a forced-choice situation, he would presumably pick World A over World B. Hence, the final irony is that ‘longtermists’ don’t actually care about the future after all. This is the ultimate irony, given the title of his book.
>
> Caring about the future is more than wanting to maximize the total human population. It’s about preserving, protecting, and sustaining the succession of cohorts that extends from the distant past, through the present, into the indefinite future, and under conditions that favor the flourishing of those who come after us—whether human or nonhuman. People aren’t the mere ‘vessels’ of ‘value.’ Rather, we are one small link in an indefinitely long chain of being and becoming, and it is our collective parenting of the future that reveals to us who we really are. Perhaps ‘Homo curans’ is a good name for what our species should aspire to: the ‘caring human,’ or ‘humans who are defined by our capacity to care, not just for each other right now, but those who will likely come to exist in the future.’
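
For what it's worth, the arithmetic in the quoted thought experiment does check out. Here's a quick, throwaway sanity check in Python (the figures and the 30-year generation length are taken directly from the passage above, not mine):

```python
# Figures taken directly from the quoted thought experiment.
world_a_population = 20_000_000_000        # World A: everyone alive at once
world_b_per_generation = 1_000_000         # World B: people alive per generation
world_b_total_population = 10_000_000_000  # World B: total people across all generations
years_per_generation = 30                  # the authors' assumption

generations_in_world_b = world_b_total_population // world_b_per_generation
years_in_world_b = generations_in_world_b * years_per_generation

print(f"World B lasts {generations_in_world_b:,} generations = {years_in_world_b:,} years")
# -> 10,000 generations = 300,000 years, matching the essay
print(f"World A contains {world_a_population // world_b_total_population}x as many people as World B")
# -> 2x, matching the essay
```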

Kind of a repetitive stream-of-consciousness response, but I found this both interesting as a philosophical idea and also annoying/cynical/bad-faith:

This is interesting but also, IMO, kind of a strawman -- what's being attacked is some very specific form of utilitarianism, whereas I think many/most "longtermists" are just interested in making sure that we get some kind of happy long-term future for humanity and are fuzzy about the details.  Torres says that "Longtermists would surely argue...", but I would like to see some real longtermists quoted as arguing this!!

Personally, I think that taking total-hedonic-utilitarianism 100% seriously is pretty dumb (if you keep doubling the number of happy people, eventually you get to such high numbers that it seems the moral value has to stop 2x-ing because you've got quadrillions of people living basically identical lives), but I still consider myself a longtermist, because I think society is underrating how bad it would be for a nuclear war or similar catastrophe to wreck civilization.

Personally I would also put some (although not overwhelming) weight on the continuity in World B on account of how it gives life more meaning (or at least it would mean that citizens of World B would be more similar to myself -- like me, they too would plan for the future and think of themselves as being part of a civilization that extends through time, rather than World A which seems like it might develop a weird "nothing matters" culture that I'd find alienating).  I think a lot of EAs would agree that something feels off about World A, although the extra 10 billion people is definitely a plus, and that overall it seems like an unsolved philosophical mystery whether it matters if your civilization is stretched out in time or not, or whether there is even an objective "right answer" to that question vs being a matter of purely personal taste.  At the end of the day I'm very uncertain as to whether world A vs B is better; population ethics is just a really tricky subject to think about.

So this is a good thought experiment!  But it seems pretty cynical to introduce this philosophical thought experiment and then:
1. say that your political opponents would obviously/unanimously endorse World A, when actually I think if you polled EAs you might get a pretty even split or they might favor World B.
2. say that this proves they "don't actually care about the future at all", despite the myriad real-world examples of EAs who are working hard to try to reduce long-term risks from climate change, pandemics, nuclear war, rogue AI, etc.

There is also maybe a bit of a sleight-of-hand in the fact that the total population in both scenarios is only 10-20 billion, which is much smaller than the total population of the best futures we could hope for.  This makes the 20-billion-people-all-at-once World A scenario read like an imminent end of the world (nuclear war in 2100, perhaps?), which naturally makes it feel very bad.

But the only-1-million-people-alive-at-a-time scenario is also bad; Torres just doesn't dwell on it!  Maybe I should write an op-ed saying that Torres would "surely argue" in favor of stretching out modern human civilization, so that instead of all 8 billion of us hanging out together, 99.99% of us are in cryosleep at any given time and only a million humans are alive at any given time.  I could write about how this proves that Torres "doesn't really care about community or cultural diversity at all", since such a small population would surely create much more of a monoculture than the present-day earth.  Think about all the human connections and experiences (for example, the existence of communities built around very niche/rare hobbies, or the ability to go to a show and appreciate the talents of an artist/performer/athlete who's "one-in-a-million", or the experience of being in a big bustling city like New York, population 10m) that would be permanently ruled out in Torres's scenario!  (How would we decide who's awake for each generation?  Would we go by ethnicity -- first all the Chinese people, then all the Italians, then all the Norwegians, and so forth?  Surely it would make more sense to make each generation a representative slice -- but then you'd destroy very small ethnic groups, like the extremely unique and low-population Hadza, by forcing only one member of the Hadza to be woken up every 8 generations!  Would Torres "surely argue" in favor of this atrocity?!)  But such attacks would be silly; it seems silly to attack people by treating their opinions on really abstract, unsolved philosophical questions as if they were concrete political agendas.  At the end of the day we're all on the same side, just trying to make sure that we don't have a nuclear war; the abstract philosophy isn't driving many actionable political disagreements.

Yeah, I totally agree with you. This writing style is kind of annoying/cynical/bad-faith. Still, it really does raise an interesting point, as you acknowledge. I just wish more of the EA community would be able to see both of these points, take the interesting point on board, and take the high road on the annoying/cynical/bad-faith aspect.

For me the key insight in this last section is that utilitarianism as generally understood does not have an appreciation of time at all; it just cares about sums of value. Thus, the title of the book is indeed pretty ironic, because the position presented in it does not really care about the "future" per se but about how to collect more containers of value. It just happens that we currently believe that many multitudes of those containers of value could theoretically still be brought into being in the future (i.e., the potential of humanity). Thus, it sketches an understanding of what is good in the world that is a little bit like Pac-Man, where the goal is to just go for as many dots as possible without being eaten. What does Pac-Man owe the future? I have never really thought about it this way before, but I think that's not a totally unfair characterization. It reminds me of a similar critique of utilitarianism by Martha Nussbaum in her recent book "Justice for Animals". So at least to me, this argument does seem to have some force and made me think about the relationship between utilitarianism and longtermism in a new light.

But I totally agree with you that we should all stop with this annoying/cynical/bad-faith style of arguing. We are all in this together. I think we can also all learn from each other. While I do fear that philosophy and worldviews can make actual differences and do significantly influence politics, this just makes it all the more important that we start talking WITH rather than fighting against each other. When I posted the link, I was hoping for the former, and it looks like we made some of that happen after all :)

Thanks for engaging critically and being open minded even in the face of difficult content!

P.S. I think there are like 1-2 more interesting critiques/points in the post: one relating to the expressed view on animals and one regarding the assumed perspective on growth. Any interested reader is encouraged to hunt for them if they feel like it (maybe life is a little bit like Pac-Man after all?).

You might enjoy "On the Survival of Humanity" (2017) by Johann Frick. Frick makes the same point there that you quote Torres as making—that total utilitarians care about the total number and quality of experiences but are indifferent to whether these experiences are simultaneous or extended across time. Torres has favorably cited Frick elsewhere, so I wouldn't be surprised if they were inspired by this article. You can download it here: https://oar.princeton.edu/bitstream/88435/pr1rn3068s/1/OnTheSurvivalOfHumanity.pdf

I didn't downvote, but this article definitely would have warranted a downvote if it had been posted directly to the forum; if you think there are a few redeeming sections you should probably highlight them directly rather than asking people to sort through it all.

Mhh, I kind of disagree with the sentiment and assignment of responsibility here.

This is a link post to a critical post on EA-related ideas. I was hoping it would spark some discussion of its merits. I get that some people may be tired of Torres, but is this reason enough to actively try to prevent such a discussion? I mean, nobody is forced to upvote, but downvoting (in particular below 0) does limit the traction this gets from other people. To me this feels like trying to bury voices one doesn’t want to hear, which may be helpful in the short run (less stress) but is probably not the best long term strategy (less understanding).

Also, while I could have picked some ideas and quotes out to elaborate on their potential relevance, I don’t think the fact that I didn’t do this deserves downvoting. A comment asking for a summary or my take on the post would seem like a much more fruitful and adequate response.

Anyway, I get that I am asking for a lot here in terms of civility and critical engagement and that’s maybe unrealistic to expect from an online forum. Going forward, I will adjust my expectations and try to provide more context and explanation when I am posting links to critical content.

> To me this feels like trying to bury voices one doesn’t want to hear, which may be helpful in the short run (less stress) but is probably not the best long term strategy (less understanding).

Time and attention are finite; I think a lot of people think they have spent a lot more time reading Torres and trying to give him the benefit of the doubt than they have given to almost anyone else, and a lot more than is deserved by the quality of the content.

I have nothing against that and think it’s a viable position to have if one has actually invested the time to reason through the challenges presented to a degree that they feel comfortable with. I only question whether this justifies downvoting because to some degree it keeps other people from forming their own opinions on the matter.

Maybe our difference in opinion stems from my perception that downvoting is a tool that should be wielded carefully and not used simply to signal disagreement. (I mean, there is a reason why we have two voting mechanisms for comments, after all.)
