I recently read Vaden Masrani’s post “A case against strong longtermism” for a book/journal club, and noted some reactions to the post as I went. I’m making this post to share slightly-neatened-up versions of those reactions.[1] I’ll split my specific reactions into separate comments, partly so it’s easier for people to reply to specific points.
Masrani’s post centres on critiquing The Case for Strong Longtermism, a paper by Greaves & MacAskill. I recommend reading that paper before reading this post or Masrani’s post. I think the paper is basically very good and very useful, though also flawed in a few ways; I wrote my thoughts on the paper here.
My overall thoughts on Masrani’s post are as follows:
- I think that criticism is very often valuable, and especially so for ideas that are promoted by prominent people and are influencing important decisions. Masrani’s post represents a critique of such an idea, so it’s in a category of things I appreciate and think we should generally be happy people are producing.
- However, my independent impression is that the critique was quite weak and that it involved multiple misunderstandings of the Greaves & MacAskill paper in particular, of longtermist ideas and efforts more generally, and of some other philosophical ideas.
- Relatedly, my independent impression is that Masrani’s post is probably more likely to cause confusion or misconceptions than to usefully advance people’s thinking and discussions.
- All that said, I do think that there are various plausible arguments against longtermism that warrant further discussion and research.
- Some are discussed in Greaves and MacAskill’s paper.
- One of the best such arguments (in my view) is discussed in Tarsney’s great paper “The epistemic challenge to longtermism”.
- See also “Criticism of effective altruist causes” and “What are the leading critiques of ‘longtermism’ and related concepts”.
(Given these views, I was also pretty tempted to call this A Case Against “A Case Against Strong Longtermism”, but I didn’t want to set off an infinitely recursive loop of increasingly long and snarky titles!)
(Masrani also engaged in the comments section of their original post, wrote some follow-up posts, and has discussed similar topics on a podcast they host with Ben Chugg. I read most of the comments section on the original post and listened to a 3-hour interview they had with Fin and Luca of the podcast Hear This Idea, and continued to be unimpressed by the critiques provided. But I haven’t read/listened to the other things.)
[1] This seemed better than just making all these comments on Masrani’s post, since I had a lot of comments and that post is from several months ago.
This post does not necessarily represent the views of any of my employers.
Basically, I agree that longtermist interventions could have these downside risks, but:
- I think this gets at part of what comes to mind when I hear objections like this.
- Another part is: I think we could say all of that with regard to literally any decision - we'd often be less uncertain, and it might be less reasonable to think the decision would be net negative or astronomically so, but I think it just comes in degrees, rather than applying strongly to some scenarios and not at all applying to others. One way to put this is that I think basically every decision meets the criteria for complex cluelessness (as I argued in the above-mentioned links: here and here).
But really I think that (partly for that reason) we should just ditch the term "complex cluelessness" entirely, and think in terms of things like credal resilience, downside risk, skeptical priors, model uncertainty, model combination and adjustment, the optimizer's curse, best practice for forecasting, and expected values given all that.
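To make that a bit more concrete, here's a toy sketch (with entirely made-up numbers and model names, not anything drawn from Greaves & MacAskill's paper or Masrani's post) of what it can look like to combine models and then adjust the resulting expected value:

```python
# Toy sketch: expected value of some intervention under model uncertainty.
# All numbers and "models" below are invented purely for illustration.

# Two competing models of the intervention's long-run value, with our
# credence in each model (i.e. a simple model combination).
models = {
    "optimistic_model": {"credence": 0.3, "expected_value": 1000},
    "skeptical_model": {"credence": 0.7, "expected_value": -5},
}

# Credence-weighted expected value across the models.
combined_ev = sum(m["credence"] * m["expected_value"] for m in models.values())

# Crude adjustment in the spirit of the optimizer's curse: shrink the
# estimate part-way toward a prior (here 0), reflecting that options
# selected for looking best tend to have overestimated values.
shrinkage = 0.5
prior_ev = 0.0
adjusted_ev = prior_ev + (1 - shrinkage) * (combined_ev - prior_ev)

print(f"Combined EV: {combined_ev:.1f}, adjusted EV: {adjusted_ev:.1f}")
```

The point is just that things like model combination and the optimizer's curse cash out as quantitative adjustments to an expected value, rather than as binary verdicts about whether we're "clueless".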
Here I acknowledge that I'm making some epistemological, empirical, decision-theoretic, and/or moral claims/assumptions that I'm aware various people who've thought about related topics would contest (including you and maybe Greaves, both of whom have clearly "done their homework"). I'm also aware that I haven't fully justified these stances here, but it seemed useful to gesture roughly at my conclusions and reasoning anyway.
I do think that these considerations mostly push against longtermism and in favour of neartermism. (Caveats include things like being very morally uncertain, such that e.g. reducing poverty or reducing factory farming could easily be bad, and so maybe the best thing is to maintain option value and maximise the chance of a long reflection. But this also reduces option value in some ways. And then one can counter that point, and so on.) But I think we should see this all as a bunch of competing quantitative factors, rather than as absolutes and binaries.
(Also, as noted elsewhere, I currently think longtermism - or further research on whether to be longtermist - comes out ahead of neartermism all things considered, but I'm unsure about that.)