
(Also posted to my substack The Ethical Economist: a blog covering Economics, Ethics and Effective Altruism.)

Philosopher and founder of the Happier Lives Institute, Michael Plant, recently wrote an open access review of Will MacAskill’s book What We Owe The Future.

Plant states that he found the case for longtermism put forward in the book “unpersuasive” and provides some reasons for this in the review. I didn’t find these reasons very convincing and am happy to remain a longtermist - this post explains why. My aim isn’t so much to defend What We Owe The Future as it is to defend longtermism.

Summary of Plant’s challenges to the book

Plant has four challenges to the book:

  1. The book is ambiguous on how seriously we should take longtermism (i.e. how much resource should go towards longtermist causes as opposed to, say, global poverty).
  2. The following premises aren’t as simple or uncontroversial as the book suggests:
    1. Future people count.
    2. We can make their lives go better.
  3. The book doesn’t do enough to demonstrate we can actually shape the long-term future as there aren’t clear examples of past successes of doing so.
  4. It is unclear whether longtermism, if true, would alter our priorities as we already have reason to worry about some ‘longtermist’ causes - for example risks from Artificial Intelligence.

#1 is a critique of the book and not longtermism per se, so I cover this briefly at the end of the post in an addendum. In short, this may be a valid criticism of the book - it has been a while since I read it, but in podcasts I recall MacAskill saying that how much resource we should devote to longtermism is an open question. MacAskill does say, however, that it is clear we should be devoting a lot more resources than we currently do.

#4 is a related critique to #1 - without specifying how seriously we should take longtermism we can’t really say how much it would alter our current priorities. I will leave this to the addendum as well.

#2 and #3 are where I want to respond in some detail.

Plant misrepresents MacAskill’s view on the implications of the ‘intuition of neutrality’ for longtermism

The intuition of neutrality is the view in population ethics that, roughly, adding a person to the population is in itself ethically neutral. If it is ethically neutral to add people, we shouldn’t be concerned by the fact that extinction would remove the possibility of future people.

Plant states in his summary of the book:

If [the intuition of neutrality] is correct, this would present a severe challenge to longtermism: we would be indifferent about bringing about those future (happy) generations.

By including this in the summary section Plant is (incorrectly) implying that this is a view that MacAskill espouses in the book. Quoting from the book itself, MacAskill says (emphasis mine):

If you endorse the intuition of neutrality, then ensuring that our future is good, while civilisation persists, might seem much more important than ensuring our future is long. You would still think that safeguarding civilisation is good because doing so reduces the risk of death for those alive today, and you might still put great weight on the loss of future artistic and scientific accomplishments that the end of civilisation would entail. But you wouldn’t regard the absence of future generations in itself as a moral loss.

MacAskill does not think that longtermism would be undermined if the intuition of neutrality is correct. MacAskill does think that it would undermine the importance of ensuring we don’t go extinct, but he thinks there are other longtermist approaches that improve the quality of the future assuming we don’t go extinct, and that these would remain important.

As I have previously explained, these approaches might include:

  • Mitigating climate change.
  • Institutional design.
  • Ensuring aligned AI.

Even if it is Plant’s view, he is wrong to state as fact (or imply) that MacAskill believes the intuition of neutrality being correct would “present a severe challenge to longtermism”. Like MacAskill, I disagree that it would.

I am unconvinced that “many” disagree that ‘future people count’

From Plant’s review:

Far future people might little count in practice, because they are largely hypothetical, and hypothetical people may not count. The premise ‘future people count’ is not straightforward.

The intuition of neutrality is based on the notion there is a morally relevant distinction between people who do exist and those who could exist. MacAskill may not be sympathetic to the intuition of neutrality, but many philosophers think, after serious reflection, it is approximately correct.

Plant raises the “intuition of neutrality” as the justification for doubting that hypothetical future people count. Plant says that “many philosophers think, after serious reflection, it [the intuition of neutrality] is approximately correct”. He doesn’t define what “many” means, but does give two examples of people. I am left having to take Plant at his word that “many” philosophers accept the intuition of neutrality, because he certainly hasn’t provided any evidence for that claim - one cannot reasonably consider two people to be “many”.

There is in fact evidence that people generally do not endorse the intuition of neutrality, viewing it as good to create new happy people and as bad to create new unhappy people. This is very unsurprising to me. For example, I think it would be difficult to find even a small number of people who would think it morally neutral to create a life that was certain to undergo immense agony for its entire existence.

What do philosophers think? Unfortunately I am unaware of a clear survey on the matter. Off the top of my head I am aware of Johann Frick, Jan Narveson and Melinda Roberts (after reading Plant’s review) who endorse the intuition of neutrality. I am aware of far more who reject it or on balance seem against it. These include: Will MacAskill, Hilary Greaves, John Broome, Derek Parfit, Peter Singer, Torbjörn Tännsjö, Jeff McMahan, Toby Ord, and probably each of the 28 people who publicly stated they don’t find the repugnant conclusion problematic (5 of whom I have already listed).

Of course, my being aware of more philosophers who are against rather than for the intuition of neutrality doesn’t prove anything. My point is that Plant certainly hasn’t achieved his goal of changing my impression that the intuition of neutrality is a pretty fringe view amongst both philosophers and the general public. To do that, he would have to provide some evidence.

Plant presents a very one-sided analysis of the non-identity problem

In arguing against MacAskill’s premise that “we can make [future people’s lives] go better”, Plant raises a (possibly) valid point:

Our actions today will change who exists later. If we enact some policy, then Angela will never exist, but Bob will. The result is that we cannot make the far future better for anyone in particular: it is not better for Angela not to exist, and it is not good for Bob to exist (intuitively, existence is never better for a person); this is the infamous non-identity problem. A direct implication of this, however, is that (3) seems false: we cannot make the lives of future people go better: all we can do is cause someone not to exist and someone else to exist instead. In one sense then, people far away in time are just like those far away in space: we are powerless to help them.

I’m fairly agnostic on most of what Plant is saying here. While it has been argued that it might be better for a person to exist than not, I am unclear on how many philosophers accept this and I am not sure where I personally land on this issue. Ultimately this is all a moot point for me because I am not really interested in making future lives go better. Instead I am interested in improving the future.

The impression Plant gives in his review is that improving the future is not possible because of the non-identity problem. This is a very one-sided view of the non-identity problem. As the Stanford Encyclopedia of Philosophy (SEP) page on the non-identity problem points out, one can also view the non-identity problem as a problem for the person-affecting intuition that “an act can be wrong only if that act makes things worse for, or (we can say) harms, some existing or future person”. Why? Because combining the person-affecting intuition with the fact that our actions change who will live in the future can lead to some very counterintuitive conclusions.

Let me give two examples:

  1. Emitting carbon dioxide in vast quantities is fine: emitting carbon dioxide in vast quantities would speed up climate change and make earth much hotter and much less pleasant to live on in the future. Emitting carbon dioxide however changes future identities, so doesn’t actually make things worse for anyone in particular. The conclusion? Emit carbon dioxide to your heart’s content.
  2. Enacting a policy that involves placing millions of landmines set to go off in 200 years in random locations on earth is fine: let’s assume future people don’t know about this policy. The landmines are going to cause a vast amount of suffering, killing people, leaving children parentless, and leaving people permanently incapacitated. Placing the landmines changes future identities though, so doesn’t actually make things worse for anyone in particular. Conclusion: go ahead, place the landmines, who cares?

The first example at least is very well known, and is powerful. Indeed, the SEP article states that rejecting the person-affecting intuition has been the most common response to the non-identity problem (which incidentally might also imply that most philosophers reject the ‘intuition of neutrality’).

Some philosophers have rejected the person-affecting intuition by adopting an ‘impersonal’ account of wrongdoing in which the identities of the affected individuals are not important. A common ‘impersonal’ account is ‘total utilitarianism’, which involves judging states of the world by adding up the total amount of wellbeing in each state (not caring who this wellbeing accrues to). Adopting total utilitarianism implies that both of the actions in the above examples (emitting carbon dioxide and placing landmines) are likely wrong because they will reduce future wellbeing. This is a very intuitive way of judging these policies.
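To make the total utilitarian comparison concrete, here is a minimal formalisation of the ranking just described (the notation is mine, not MacAskill’s or Plant’s). Let $w_i$ be the lifetime wellbeing of person $i$. A state of the world $A$ with population $P_A$ is then at least as good as a state $B$ with population $P_B$ whenever

$$\sum_{i \in P_A} w_i \;\ge\; \sum_{j \in P_B} w_j,$$

regardless of whether the individuals in $P_A$ and $P_B$ are the same people. On this account the carbon and landmine policies above come out as wrong simply because the totals in the worlds they produce are lower, even though the particular people in those worlds differ.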

With regards to total utilitarianism, Plant says that “many (still) regard it as a non-starter” due to Parfit’s “repugnant conclusion”. Again, no evidence is provided for the claim that “many” hold this view.

Plant might say there wasn’t space to cover these nuances but, honestly, these aren’t nuances - they are extremely important points that are core to the non-identity problem and our moral consideration of the far future. I won’t go so far as to label Plant dishonest for omitting these considerations but, given that Plant is a PhD philosopher, I find it puzzling that he could present such a one-sided analysis of the non-identity problem. Given this, I personally find the following passage from his review quite ironic:

Hence, it seems objectionable to describe the premises as simple and uncontroversial, especially when the readers are primarily non-philosophers who are liable to take MacAskill at his word. I appreciate MacAskill is trying to spare the reader the intricacies of population ethics. Yet, there is a difference between saying ‘This is incredibly complicated, but everyone ultimately agrees’ and ‘This is incredibly complicated, people disagree furiously, and my argument (may) rely on a controversial view I will only briefly defend.’

Plant dismisses being able to shape the far future much too easily

In his review Plant expresses skepticism that ending slavery was an example of influencing the long-run future because he doesn’t think it was “contingent”, i.e. he doubts that, had the abolitionists not done what they did, slavery might never have been abolished (or at least not for a very long time).

Plant does no more than simply state that he doubts the abolition of slavery was contingent. He doesn’t engage with the arguments that MacAskill puts forward for contingency in his book. I’m going to put that aside and assume that Plant is right on this point.

Even if Plant is right that the abolition of slavery was not contingent, his dismissal of the idea that we can shape the future seems much too strong. Firstly he says:

I would like to have seen MacAskill explore the extent to which the case for longtermism depends on being able to point to past successes. We should be sceptical that we will succeed if others have failed – unless we can identify what is different now.

There are in fact a few things that are different now:

  1. We actually have a concept of longtermism now: and we have a whole community of people trying to influence the far future. Before Parfit’s Reasons and Persons was published in 1984, the moral importance of influencing the far future wasn’t really on anyone’s radar. Longtermism wasn’t solidified as a concept until around four years ago. Actually having a community of people trying to influence the far future seems like a good reason to believe it is now more likely than it once was that we will succeed in doing so.
  2. We now have the technological means to destroy / near destroy humanity: making humanity go extinct / destroying humanity to an extent from which it would be unlikely to recover simply wasn’t possible in the past. Now we could do the following (none of which is far-fetched to imagine happening):
    1. Destroy / near destroy humanity due to nuclear war.
    2. Cause runaway climate change that could result in a slower long-run growth rate or permanently reduce the planet’s carrying capacity.
  3. Technological progress means we have some concerning developments on the horizon: there are some (more speculative) existential risks that could wipe out our future potential:
    1. Malicious actors could engineer pandemics that destroy / near destroy humanity.
    2. Artificial Intelligence (AI) could pose an existential risk (as many (yes “many”) believe, including some of those who were pivotal in the development of modern AI).

Plant goes on to doubt the claim that we are living at the ‘hinge of history’ i.e. that we are currently living at an especially pivotal, if not the most pivotal, period that will ever occur. For what it’s worth, I have some degree of sympathy for Plant’s view - overall I am unsure. However, it is not true that not being at the ‘hinge of history’ invalidates longtermism. As MacAskill’s previous work states, not being at the hinge implies that we should invest for a particularly pivotal time in the future e.g. by investing money, through movement-building, or through research. This is covered by the concept of ‘patient longtermism’ which has received some attention within the EA community.

Conclusion

Overall, I am left unmoved by Plant’s criticisms. He has omitted a lot of relevant details. I am aware that he wrote a book review rather than an attack on longtermism, which may explain some of his omissions. Having said that, his final decision to “sit out” on longtermism is not justified by the points he includes in his review. I wouldn’t want readers of his review to take an Oxford philosopher at his word when so much that is hugely important and relevant has been left unsaid.

Addendum - does longtermism mean revolution?

The above concludes my criticism of Plant’s review, but I wanted to opine on Plant’s point that MacAskill isn’t clear enough on how seriously we should take longtermism and whether longtermism would constitute a ‘revolution’. As I stated up front, I think it might be a fair criticism that MacAskill was too vague on how seriously we should take longtermism in terms of the amount of resources that should go towards longtermist causes (although I suspect MacAskill was intentionally vague for instrumental reasons).

My personal view is that, if we accept the premises of longtermism (as I do), we should take longtermism exceedingly seriously. After all, we have the whole future at stake. 

My view is that we would ideally entirely reorient society towards making the future go well. This would mean every single person (yes, everyone) thinking about how they can best improve the far future and acting on that basis. In practice this isn’t feasible, but I’m talking about the best-case scenario.

Would that make our society into some hellscape where we let people die on the street because we don’t actually care about present people? In short - no. Such a society would not be a stable one - there would be revolutions and chaos and we certainly wouldn’t be able to focus on improving the far future. A world where we all come together and work collaboratively to improve the future is a world in which we necessarily help as many people as possible work productively towards that goal - and that means very little poverty and suffering. Those with the ‘personal fit’ to help current people work productively towards that goal would do so - not everyone can work on technical AI alignment.

Does that mean that a longtermist society actually looks quite similar to the one we have now after all? No, I don’t think so. I think a longtermist society is one in which every person is taught longtermism at school. Every person chooses their career based on longtermist principles. Virtually the whole machine learning community would be working on the technical problem of AI alignment. Governments would have departments for reducing existential risk / for future generations. 100% of philosophers would be working on the question “how can we best improve the far future”. We would save a lot more than we do and mitigate climate change far more than we do. We might even have widespread surveillance to ensure we don’t destroy ourselves (to be clear I am personally unsure if this would be required/desirable). We would have everyone on earth working together to improve the far future instead of what we have now - countries working against each other to come out on top.

So my answer to the question ‘would the truth of longtermism spur a revolution?’ is a resounding YES.


Comments

Hello Jack, I'm honoured you've written a review of my review! Thanks also for giving me sight of this before you posted. I don't think I can give a quick satisfactory reply to this, and I don't plan to get into a long back and forth. So, I'll make a few points to provide some more context on what I wrote. [I wrote the remarks below based on the original draft I was sent. I haven't carefully reread the post above to check for differences, so there may be a mismatch if the post has been updated]

First, the piece you're referring to is a book review in an academic philosophy journal. I'm writing primarily for other philosophers who I can expect to have lots of background knowledge (which means I don't need to provide it myself).

Second, book reviews are, by design, very short. You're even discouraged from referencing things outside the text you're reviewing. The word limit was 1,500 words - I think my review may even be shorter than your review of my review! - so the aim is just to give a brief overview and make a few comments.

Third, the thrust of my article is that MacAskill makes a disquietingly polemical, one-sided case for longtermism. My objective was to point this out and deliberately give the other side so that, once readers have read both, they are, hopefully, left with a balanced view. I didn't seek to, and couldn't possibly hope to, give a balanced argument that refutes longtermism in a few pages. I merely explain why, in my opinion, the case for it in the book is unconvincing. Hence, I'd have lots of sympathy with your comments if I'd written a full-length article, or a whole book, challenging longtermism.

Fourth, I'm not sure why you think I've misrepresented MacAskill (do you mean 'misunderstood'?). In the part you quote, I am (I think?) making my own assessment, not stating MacAskill's view at all. What's more, I don't believe MacAskill and I disagree about the importance of the intuition of neutrality for longtermism. I only observe that accepting that intuition would weaken the case - I do not claim there is no case for longtermism if you accept it. Specifically, you quote MacAskill saying:

[if you endorse the intuition of neutrality] you wouldn’t regard the absence of future generations in itself as a moral loss.

But the cause du jour of longtermism is preventing existential risks in order that many future happy generations exist. If one accepts the intuition of neutrality that would reduce/remove the good of doing that. Hence, it does present a severe challenge to longtermism in practice - especially if you want to claim, as MacAskill does, that longtermism changes the priorities.

Finally, on whether 'many' philosophers are sympathetic to person-affecting views. In my experience of floating around seminar rooms, it seems to be a view held by a large minority of discussants (indeed, it seems far more popular than totalism). Further, it's taken as a default, or starting position, which is why other philosophers have strenuously argued against it; there is little need to argue against views that no one holds! I don't think we should assess philosophical truth 'by the numbers', i.e. polling people, rather than by arguments, particularly when those you poll aren't familiar with the arguments. (If we took such an approach, utilitarianism would be conclusively 'proved' false.) That said, off the top of my head, philosophers who have written sympathetically about person-affecting views include Bader, Narveson (two classic articles here and here), Roberts (especially here, but she's written on it a few times), Frick (here and in his thesis), Heyd, Boonin, Temkin (here and probably elsewhere). There are not 'many' philosophers in the world, and population ethics is a small field, so this is a non-trivial number of authors! For an overview of the non-identity problem in particular, see the SEP.

Fourth, I'm not sure why you think I've misrepresented MacAskill (do you mean 'misunderstood'?). In the part you quote, I am (I think?) making my own assessment, not stating MacAskill's view at all.

You say the following in the summary of the book section (bold part added by me):

If correct, this [the intuition of neutrality] would present a severe challenge to longtermism

By including it in the 'summary' section I think you implicitly present this as a view Will espoused in the book - and I don't agree that he did.

But the cause du jour of longtermism is preventing existential risks in order that many future happy generations exist. If one accepts the intuition of neutrality that would reduce/remove the good of doing that. Hence, it does present a severe challenge to longtermism in practice - especially if you want to claim, as MacAskill does, that longtermism changes the priorities.

Sure, people talk about avoiding extinction quite a bit, but that isn't the only reason to care about existential risk, as I explain in my post. For example, you can want to prevent existential risks that involve locking-in bad states of the world in which we continue to exist e.g. an authoritarian state such as China using powerful AI to control the world. 

One could say reducing x-risk from AI is the cause du jour of the longtermist community. The key point is that reducing x-risk from AI is still a valid priority (for longtermist reasons) if one accepts the intuition of neutrality.

Accepting the intuition of neutrality would involve some re-prioritization within the longtermist community - say, moving resources away from x-risks that are solely extinction risks (like biorisks?) and towards x-risks that involve more than extinction (like s-risks from misaligned AI or digital sentience). I simply don't think accepting the intuition of neutrality is a "severe" challenge for longtermism, and I think it is clear Will doesn't think so either (e.g. see this).

Thanks for this reply Michael! I'll make a few replies, and since you don't want to get into a long back and forth I'll understand if you don't reply further.

Firstly, the following is all very useful background so I appreciate these clarifications:

First, the piece you're referring to is a book review in an academic philosophy journal. I'm writing primarily for other philosophers who I can expect to have lots of background knowledge (which means I don't need to provide it myself).

Second, book reviews are, by design, very short. You're even discouraged from referencing things outside the text you're reviewing. The word limit was 1,500 words - I think my review may even be shorter than your review of my review! - so the aim is just to give a brief overview and make a few comments.

Third, the thrust of my article is that MacAskill makes a disquietingly polemical, one-sided case for longtermism. My objective was to point this out and deliberately give the other side so that, once readers have read both they are, hopefully, left with a balanced view.

In light of this I think the wording "Plant presents a very one-sided analysis of the non-identity problem" is an unfair criticism. I'm still happy I wrote that section because I wanted to defend longtermism from your attack, but I should have framed it differently.

I lean towards thinking the following is unfair.

Third, the thrust of my article is that MacAskill makes a disquietingly polemical, one-sided case for longtermism.

If one were just to read WWOTF they would come away with an understanding of:

  • The intuition of neutrality - what it is, the fact that some people hold it, the fact that if you accept it you shouldn't care about losing future generations.
  • The non-identity problem - what it is and why some see it as an argument against being able to improve the future.
  • The repugnant conclusion - what it is, how some find it repugnant and why it is an argument against total utilitarianism.

This is all Will explaining the 'other side'. Sure he's one-sided in the sense that he also explains why he disagrees with these arguments, but that seems fine to me. He's not writing a textbook. What would have been an issue is if he had, say, just explained total utilitarianism without also explaining the repugnant conclusion, the intuition of neutrality or the non-identity problem.

Regarding the "polemical" description. I'm not really sure what you're getting at. Merriam-Webster defines a polemic as "an aggressive controversialist". Do you think Will was aggressive? As I say he presents 'the other side' while also explaining why he disagrees with it. I'm not really seeing an issue here.

I think this is another point where you're missing context. It's kind of a quirk of academic language, but "polemical" is usually used in contrast to analytical in texts like these - meaning that the work in question is more argumentative/persuasive than analytical or explicative, which I honestly think is a very apt description of WWOTF.

OK. I think Will intended WWOTF to be a persuasive piece so I’m not sure if this is a valid criticism. He wasn’t writing a textbook.

I think this is confused. WWOTF is obviously both aiming to be persuasive and coming from a place of academic analytical philosophical rigour. Many philosophers write books that are both, e.g. Down Girl by Kate Manne or The Right to Sex by Amia Srinivasan. I don't think a purely persuasive book would have so many citations. 

Book reviews are meant to be informative, and critiques aren't always meant to be negative, so I don't know why you're framing it as an attack on WWOTF or MacAskill. Knowing the tone of a work is valuable information for someone reading a book review.

On a personal note, I'll say that I also agree with the "disquieting" portion of "disquietingly polemical" - I had the sense that WWOTF presented longtermism and caring about future generations as a kind of foregone conclusion and moral imperative rather than something to be curious about and think deeply on, but I prefer these kinds of books to be more proactive in very strongly establishing the opposing viewpoints, so it's probably more irksome to me than it would be to others. He wasn't writing a textbook and it's his prerogative to write something that's an outright manifesto if he so chooses, but that doesn't make pointing out the tone an invalid critique.

I'm not sure I have framed the review as an attack? I don't think it is. I have no problem with Michael writing the review, I just disagree with the points he made.

It was a while since I read the book in its entirety, but I will just leave a quote from the introduction which to me doesn't read as "disquietingly polemical" (bold emphasis mine):

For those who want to dig deeper into some of my claims, I have compiled extensive supplementary materials, including special reports I commissioned as background research, and made them available at whatweowethefuture.com. Despite the work done so far, I believe we have only scratched the surface of longtermism and its implications; there is much still to learn.

If I’m right, then we face a huge responsibility. Relative to everyone who could come after us, we are a tiny minority. Yet we hold the entire future in our hands. Everyday ethics rarely grapples with such a scale. We need to build a moral worldview that takes seriously what’s at stake.

The general tone of your comments + the line "I'm still happy I wrote that section because I wanted to defend longtermism from your attack" in one comment gives me the impression that you are, but I'm fully willing to accept that it's just the lack of emotive expressiveness in text. 

Yes, MacAskill does have these explicit lines at certain points (I'd argue that this is the bare minimum, but it's a problem I have with a large swathe of academic and particularly pop-philosophy texts, and as I said it's in some measure a matter of personal preference), but the overall tone of the text and the way he engages with counterarguments and positions still came off as polemical to me. I admittedly hold seminal texts - which WWOTF is obviously intended to be - up to particularly high standards in this regard, which I think is fair, but I completely understand if others disagree. To be clear, I think that this also weakens the argumentation overall rather than just being a lopsided defense or a matter of tone. I think the points raised here about the intuition of neutrality are a good example of this; a more robust engagement with the intuition of neutrality and its implications for longtermism could help specify longtermism and its different strains, making it less of an amorphous moral imperative to "think/care about future generations" and a more easily operationalized and intellectually/analytically robust moral philosophy, since it would create room for a deeper discussion of how longtermist approaches that prioritize the existence of future people differ from longtermist approaches that view the benefits for future people as secondary.

Ah ok I actually used the word “attack”. I probably shouldn’t have, I feel no animosity at all towards Michael. I love debating these topics and engaging with arguments. I wish he’d had more room to expand on his person-affecting leanings. In a sense he is “attacking” longtermism but in a way that I welcome and enjoy responding to.

I happen to think the level of attention Will gave to population ethics and the concepts of the non-identity problem, repugnant conclusion, and person-affecting intuition is fairly admirable for a book intended for a general non-philosophical audience. As I say, if you read the book you do understand why these three things can be seen as undermining longtermism. Saying up front that he has more material to engage with on his website seems great to me.

That said, off the top of my head, philosophers who have written sympathetically about person-affecting views include Bader, Narveson (two classic articles here and here), Roberts (especially here, but she's written on it a few times), Frick (here and in his thesis), Heyd, Boonin, Temkin (here and probably elsewhere). There are not 'many' philosophers in the world, and population ethics is a small field, so this is a non-trivial number of authors! For an overview of the non-identity problem in particular, see the SEP.

I agree we should be more swayed by arguments than numbers - I feel like it was you who played the numbers game first so I thought I'd play along a bit.

FYI I did reference that SEP article in my post and it says (emphasis mine):

Since the nonidentity problem became well-known through the work of Derek Parfit, James Woodward and Gregory Kavka in the early 1980s, most philosophers have accepted it as showing that at least one of the aforementioned intuitions must be false. Here, the most frequently identified culprit is intuition (1), that is, the person-based intuition itself.

Thanks for the reply and I think you make a lot of good arguments, I'm not sure where I sit on this issue!

I found your last paragraph a little disturbing, because even given the truth of longtermism, some of these ideas seem like they wouldn't necessarily serve the present or the future particularly well. Would 100 percent of philosophers working on the question of the far future really be the best way to improve the field, with other important areas of philosophy neglected? Widespread surveillance has already proven to be too unpalatable to most westerners and impractical even if it did prevent someone making a bioweapon. Personally I think even given longtermism, most people should be working on fixing existing issues (governance, suffering, climate change) as fixing things now will also help the future. Perhaps 100 to 1000x the current number of people working on longtermist causes would be ideal, but I think gearing the whole engine of the world towards longtermism might well be counterproductive.

"Virtually the whole machine learning community would be working on the technical problem of AI alignment. Governments would have departments for reducing existential risk / for future generations. 100% of philosophers would be working on the question “how can we best improve the far future”. We would save a lot more than we do and mitigate climate change far more than we do. We might even have widespread surveillance to ensure we don’t destroy ourselves (to be clear I am personally unsure if this would be required/desirable). We would have everyone on earth working together to improve the far future instead of what we have now - countries working against each other to come out on top."

Hey, thanks for your comment! To be honest my addendum is a bit speculative and I haven't thought about it a huge amount. I think I may have been a little extreme and that factoring in moral uncertainty would soften some of what I said.

Would 100 percent of philosophers working on the question of the far future really be the best way to improve the field, with other important areas of philosophy neglected?

When you say "improve the field" I'm not sure what you mean. Personally I don't think there is intrinsic value in philosophical progress, only instrumental value. It seems desirable for the philosophy field to reorient in a way that focuses on improving the world as much as possible, and that is likely to mean at least some fields entirely or nearly dying out (e.g. aesthetics? philosophy of religion?). I suspect a lot of fields would continue if we were to focus on improving the future though, as most of them have some useful role to play. The specific questions philosophers work on within those fields would change quite a bit though.

Widespread surveillance has already proven to be too unpalatable to most westerners and impractical even if it did prevent someone making a bioweapon.

I tried to express agnosticism on whether this would be desirable, and I am very sympathetic to arguments that it wouldn't be.

Personally I think even given longtermism, most people should be working on fixing existing issues (governance, suffering, climate change) as fixing things now will also help the future.

I did mention the importance of alleviating suffering and tackling climate change in my post. I'm not sure we disagree as much as you think we do. Governance is a bit vague, but many forms of governance work can easily be justified on longtermist grounds (as can climate change mitigation).

I think gearing the whole engine of the world towards longtermism might well be counterproductive

It is possible that "obsessing" about the far future is counterproductive. At that point we would be justified in obsessing less. However, we would be obsessing less on longtermist grounds.
