
In a recent essay for Aeon, Phil Torres criticises longtermism. This post is a series of remarks and responses to some parts of that essay. 

There are some sentiments in his essay that I agree with, but here I have focussed my remarks on some, but not all, of the parts where I disagree or where I think he has been inaccurate. One of my main points is that Torres's piece ends up conflating a particular brand of technological utopianism with longtermism itself, and that a large chunk of his criticism is really aimed at the former. I have some sympathy here, because arguably this techno-utopian line of thought has been over-emphasized by some early proponents of longtermism and by well-recognized names, but it ultimately fails to faithfully represent the broader idea of longtermism that has more recently emerged from it.

In writing this, I have not separately taken time to defend or promote longtermism in and of itself (and I in fact have my own reservations about it, which I make no effort to hide). Criticism is, of course, vital. And good-faith criticism will hopefully do more to develop longtermism than to discredit it. Successful critique of one specific ‘flavour’ or application of longtermism may in fact help to clarify what is good about the core idea and help us to find more desirable applications of it. Lastly, I have made no particular effort to ensure that one can comfortably read this post without already being at least somewhat familiar with Torres’s piece.

Since the Future of Humanity Institute (FHI) is mentioned explicitly in Torres's piece, it is worth me pointing out that I am currently employed by the FHI.

The present state of longtermism

After outlining the origins of longtermism, Torres writes that "It is difficult to overstate how influential longtermism has become." He goes on to say that "longtermists" have been changing the world with "extraordinary success". These seem like very big claims to me and I disagree. It is in fact straightforward to overstate the influence of longtermism, using claims such as: ‘Most people are familiar with the main arguments and ideas of longtermism and have changed the way they live their lives because of longtermism.’ Or: ‘World leaders of major economies are all dedicated to the idea of longtermism and would be extremely unlikely to support policies that went against longtermist principles.’ These statements are not absurdly superlative, yet they probably aren’t even true of much more well-established moral views like feminism, anti-racism or environmentalism, let alone the more fledgling idea of longtermism.

And what evidence is there that "longtermists" have been changing the world with extraordinary success (particularly relative to, say, capitalists, neoliberals, environmentalists, or feminists)? Torres cites the fact that Elon Musk gave $1.5m to the Future of Humanity Institute (FHI) via the Future of Life Institute and that Peter Thiel has given lots of money to the Machine Intelligence Research Institute (MIRI). For better or worse, $1.5m is a modest or even small amount of money by Musk’s standards (and indeed by global standards for the funding of research), and in 2013 Thiel’s donations to MIRI apparently added up to a similar kind of figure (“well over $1 million” according to HuffPost). Without some further convincing argument, it does not say a huge amount that a couple of billionaires gave a few million dollars to organizations that concern themselves with or are influenced by longtermism. And Torres's other examples, such as "funding essay contests and scholarships in an effort to draw young people into the community" and aiming to place longtermists in US policy-making positions, don't really speak to much in the way of actual success at changing the world either.

The introductory section of his essay culminates in the conjecture that longtermism could be 

...one of the most influential ideologies that few people outside of elite universities and Silicon Valley have ever heard about. (Torres) 

I am prepared to accept this claim, but it is a relatively weak one once it’s been unravelled. I don’t think Torres quite intends to backpedal already from the idea that longtermism has influence which is difficult to overstate, but, to be clear: in this claim we are considering only ideological positions that an extremely small proportion of people have heard of (i.e. those who have heard of them comprise at most the union of (a) a “few people”, (b) some academics at (not even all) elite universities, and (c) some (but not all) people in Silicon Valley). And then, among such relatively obscure ideological positions, longtermism is “one of’’ the most influential. My earlier comparisons to the influence and effectiveness of feminism, environmentalism, capitalism etc. are not relevant here.

As Torres moves on to a more detailed explanation of the core of longtermism, he claims that longtermism, “as proposed by Bostrom and Beckstead”, is not equivalent to a simple, easy-to-digest and difficult-to-disagree-with idea like ‘caring about the long term’ or ‘valuing the wellbeing of future generations’. Instead, Torres seems to argue that development from these innocent premises, via what he considers to be a flawed analogy “between individual persons and humanity as a whole”, leads to the more extreme-sounding “central dogma” of longtermism: that “nothing matters more, ethically speaking, than fulfilling our potential as a species of ‘Earth-originating intelligent life’.”

MacAskill’s 2019 post on the Effective Altruism (EA) Forum outlines the genesis of the word ‘longtermism’ and discusses possible definitions of it. He writes:

Up until recently, there was no name for the cluster of views that involved concern about ensuring the long-run future goes as well as possible. 

And:

In October 2017, I proposed the term ‘longtermism’... (MacAskill)

At the very least, this raises the question of why Torres decides to begin a more detailed explanation of the term via other, quite specific interpretations. While, like many other ‘-isms’, the exact definition of the word can be contentious (a point I will return to), MacAskill mentions the following candidate definitions in his article:

  • the view that the most important determinant of the value of our actions today is how those actions affect the very long-run future
  • an ethical view that is particularly concerned with ensuring long-run outcomes go well
  • a philosophy that is especially concerned with improving the long-term future. (Proposed by Ord)
  • the view that:
    1. Those who live at future times matter just as much, morally, as those who live today;
    2. Society currently privileges those who live today above those who will live in the future; and
    3. We should take action to rectify that, and help ensure the long-run future goes well.

If MacAskill can reasonably be said to have coined the term and to have so far been a central figure in the intellectual development of the idea (and I think that both of these things can be said of him), then why are these proposed definitions (which are much closer to the simple messages of ‘caring about the long term’ or ‘valuing the wellbeing of future generations’) absent from this part of Torres’s essay? One explanation is that there is a rather specific interpretation of longtermism that he wants in the crosshairs, and getting it there means tacitly omitting a broader discussion of the idea and of different approaches to it. The specific interpretation, which he comes to later, is a kind of technological utopianism, but before turning to it, we will consider the two big criticisms that appear in the next section of the essay:

  1. That longtermism encourages indifference to threats which are serious but not existential, such as climate change, poverty, and animal suffering.
  2. That longtermism encourages fanaticism or otherwise promotes the use of arguments in which ‘the end justifies the means’ to an extreme degree.

I am not without sympathy, and before we discuss each in turn, we will briefly explore a few general reasons why issues like this may tend to be raised at all. Fault may lie with the fact that there is too little clear communication of the underlying concepts of longtermism aimed at people outside of the communities in which the original ideas are gestating. On this point, I concede (as I will mention again later) that some of longtermism’s most vocal or visible advocates may be culprits: even occasional indulgence in emotive language and grand rhetoric can alienate more casually interested audiences and obfuscate the more logical side of longtermism’s underpinnings. Or fault may lie with the fact that there is too little high-quality criticism of longtermism. The discussions that good, good-faith criticism triggers are almost always productive and lead to increased clarity. The lack both of excellent explanations and defences of longtermism by its adherents, and of excellent criticism by its opponents, leaves space for misrepresentations and misunderstandings. (And in making this point, I do not assume that longtermism is correct or good; i.e. it would be perfectly plausible from my point of view for the excellent criticism to win out.)

A related explanation is that it may well be the case that longtermism really isn’t very well worked out: perhaps not merely insufficiently explained but poorly understood. For a few years now, Greaves and MacAskill have had their working paper The Case for Strong Longtermism publicly available. Strong longtermism is not the same as ‘longtermism’ as used in MacAskill's forum post, but, nevertheless, the fact that the main statements which Greaves and MacAskill attempt to rigorously defend have both changed significantly since 2019 does show that the ramifications of the idea are still very much being worked out. I do not wish to argue here that longtermism is intrinsically flawed (although I am open to such a possibility). I am rather pointing to the fact that it is under-developed, and that our (perhaps temporary) confusions about it and its consequences could reasonably be mistaken for, or assumed to be caused by, a more fundamental set of issues.

So can a good-faith engagement really leave one with the impression that longtermism is dangerous? And if so, can we put this down to a side-effect of the immaturity of the idea? If one wants to develop longtermism and the answer to both questions is ‘yes’, then there is a clear motivation to ‘do the work’: to clarify, cohere and disseminate to the extent that there is no longer space through which the inaccurate message that the idea is dangerous may slip.

 

Insouciance to other serious matters

 

We’ll now discuss the claim that longtermism is bad because it encourages us to underemphasize any threat or risk which is not so great that it may curtail humanity’s long-term potential. Torres cites Bostrom’s point that ‘a non-existential disaster causing the breakdown of global civilisation is, from the perspective of humanity as a whole, a potentially recoverable setback.’ ("Future of Humanity") And then he essentially asks: if one combines Bostrom’s point with the claim that what matters vastly more for humanity is achieving our potential in the very long run, does it not follow that a lot of very serious issues will be ignored, such as climate change, animal suffering, and global poverty?

Torres writes that: 

multimillionaire tech entrepreneur Jaan Tallinn... doesn’t believe that climate change poses an ‘existential risk’ to humanity because of his adherence to the longtermist ideology. 

To my mind, the belief that climate change does not pose an existential risk to humanity is separate from longtermism and could be said to come about via the following two things: 

  1. A definition of the term "existential risk"; and
  2. An empirical claim that climate change does not meet the criteria of the definition.

I think that Torres’s remark about Tallinn may seem damning because we have an intuitive idea that climate change is a huge and scary problem for humanity. In a colloquial sense, it is ‘existential’ in its proportions, but it is important for readers not acquainted with the nomenclature of these spaces to know that there are often specific definitions of things like ‘existential risk’ that go beyond scare-mongering or gesturing at the magnitude of a risk. Torres does not tell us which definition of existential risk Tallinn may be using. For example, Ord’s definition in The Precipice is essentially: a risk that threatens the destruction of the set of all possible futures that remain open to us (37). One of Bostrom’s earlier definitions is: a risk that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development ("Existential risk prevention", 15). Exactly how clear and useful these definitions are - and what assumptions they bake in - can be questioned, but Torres is not pursuing that point here.

It hardly needs to be said that one reason for attempting to give such a clear definition is so that we may test things against it: we can now hold up a candidate risk to humanity, like climate change, and try to answer: does it meet the criteria? Is this an existential risk? Many people conclude that it is not because they believe - to some extent - that a reasonable synthesis of the relevant research (scientific climatological work on temperature change, rising sea levels, and so on) would conclude that climate change does not meet the criteria. This line of thinking seems independent of longtermism.

It’s possible that Torres is trying to make a slightly different point about the potentially seductive and pernicious nature of believing in ‘longtermist ideology’ (but if that is the case, I feel that he falls short of being explicit about it). He could be suggesting that once one starts believing in longtermism, it necessarily clouds one’s judgement to the extent that, although climate change is an existential risk, one’s bias becomes so great that one is unwilling to admit this. I think it would be very difficult to make this argument; it is hard to imagine that those effects would be the fault of the ideas of longtermism per se. Another, slightly different and more plausible argument that I can imagine being developed further is that the adjacent communities (effective altruism, rationality, AI alignment, existential risk studies etc.) from which longtermism is currently drawing many of its adherents are inherently predisposed towards a certain contrariness, which may result in a bias towards certain claims, including “climate change is not an existential risk”.

What then of the correctness of Tallinn’s viewpoint? Does the empirical part of the argument hold up? What is this aforementioned synthesis of research? I think Torres is right to be skeptical and take aim at it:

Although Ord gives a nod to climate change, he also claims – based on a dubious methodology – that the chance of climate change causing an existential catastrophe is only ∼1 in 1,000, which is a whole two orders of magnitude lower than the probability of superintelligent machines destroying humanity this century, according to Ord. (Torres)

I agree that the methodology is dubious and I wouldn’t necessarily defend the numbers. I have fairly strong reservations about the use and interpretation of probabilities in arguments like these, and a recent paper by Cremer and Kemp ("Democratising Risk") has much more to say on this subject and on definitions of existential risk. So while I do think it is inaccurate to say that people believe climate change not to be an existential threat because of longtermism, I agree that one can, and should, be very skeptical about the methodology by which research about climate change, say, is synthesized and turned into a single ‘probability of existential catastrophe’. But although it is a cliché to say ‘more research is needed’, we must at least be fair to Ord: he explicitly raises the uncertainty in these estimates, in particular the fact that the ‘tail risks’, i.e. small risks of large temperature changes due to climate change, are still poorly understood (108-10).

Torres continues:

What’s really notable here is that the central concern isn’t the effect of the climate catastrophe on actual people around the world (remember, in the grand scheme, this would be, in Bostrom’s words, a ‘small misstep for mankind’) but the slim possibility that, as Ord puts it in The Precipice, this catastrophe ‘poses a risk of an unrecoverable collapse of civilisation or even the complete extinction of humanity’. Again, the harms caused to actual people (especially those in the Global South) might be significant in absolute terms, but when compared to the ‘vastness’ and ‘glory’ of our longterm potential in the cosmos, they hardly even register.

This may misrepresent the structure of Ord’s arguments. In his book, Ord first takes time to provide us with numerous different perspectives on, and arguments for, why we should care about existential risk. One such perspective is ‘from the present’, in which he does argue explicitly that existential risks are “obviously horrific from the most familiar moral standards” and that the moral case for preventing them is “tremendously important”, even when measured “just in terms of lives cut short” (42-3). Now, after the case for caring about existential risk has been made, the particular suffering which catastrophes may bring about is indeed no longer what his arguments need to focus on. But at that point, it is not a criticism per se to say that “the central concern is...unrecoverable collapse or complete extinction”. Even if it is not an easy subject to discuss, one surely has the right to concern oneself only with existential catastrophes, as a matter of academic inquiry?

However, having said all of this, it is clear to me that Bostrom and Ord don’t shy away from employing relatively lofty rhetoric (some of it ‘deontic’, i.e. "priority...should...be", "we mustn't", "first great task is..."), perhaps sometimes designed more to garner an emotional response than to convey the underlying arguments. A little bit of this is OK. After all, The Precipice is not a journal article; it’s a book designed for a comparatively wide audience. But the whole subject is an emotionally evocative one: particular matters can be delicate, controversial, and - as in any trolley-dilemma-esque situation - presumably one’s gut response to them can be sensitive to minor details and the specifics of the presentation. In their own ways, Torres, Bostrom and Ord have all played this game: Torres wants us to wince and balk at certain conclusions (one risk being that we do not adequately parse and take time to properly interrogate the actual arguments), whereas Ord wants us to be inspired by the vastness of humanity’s potential future, and perhaps - to give Torres some benefit of my doubt - one of the risks there is that we are so taken with the scale and beauty of such ideas that we make mistakes and overlook potential negative externalities that longtermism may bring with it.

Returning to the claim that "the longtermist ideology inclines its adherents to take an insouciant attitude towards climate change" (Torres), we might ask whether there is any literal sense in which longtermism implies that we should be less concerned about risks from climate change. I will first admit that what longtermism does ‘tell’ us to do on a practical level can be very unclear, and this is, essentially, one of the crises that longtermism may find itself stuck in. It is one thing to say that the most important aspect, ethically speaking, of an action that you might take is its effect on the very long-term future, but it is of course another thing to say: in the following, actual, real-life situation, given your present state of knowledge, you should do A rather than B. If we can never get anything truly, practically, decision-relevant out of the whole idea, it is doomed to remain an abstract philosophical concept: curious, but never reaching the impact and influence of the other -isms I have already dared to compare it to. I want to make it clear that that actually seems like a plausible way for things to pan out to me. But let’s remain charitable for the time being and recall the following line of thinking: longtermism can be understood as saying something important about which actions are best

  1. At the current margins; and
  2. On the scale of humanity as a whole, rather than being applicable to each individual separately.

With this in mind, one can interpret the Bostrom quotations used throughout these parts of Torres’s essay as supporting the following two claims. Firstly: should humanity allocate resources to deal with climate change? The answer is most clearly ‘yes’ if you also assume that all future trajectories do eventually lead to really excellent futures (even if only very far down the road). That is, if you think that climate change will indeed be just a blip on humanity’s long story and that we will almost certainly reach a truly excellent future eventually, then you may as well spend resources in the near or medium term to avoid climate change altogether; at the margins there would be a lot to gain. For humanity, a future trajectory which eventually ends up being truly excellent but also has a lot of climate-change-induced suffering along the way is obviously much worse than the same kind of future with no climate-change-induced suffering. Secondly: you need to be concerned when, in order to deal with climate change, you would have to use up resources that you actually need to save for the purpose of securing an excellent future down the road. That is, one scenario in which it is not worth dealing with climate change is one in which the constraints on your resources are such that you can do at most one of the following: deal with climate change in the medium term, or secure an excellent future in the long term.
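As a crude formalisation of these two claims (the sketch and its symbols are entirely my own, not anything Bostrom or Ord write down), suppose the value of a trajectory for humanity is

$$ V \;=\; p\,W \;-\; c, $$

where $p$ is the probability of eventually reaching a truly excellent future of value $W$, and $c$ measures the near-to-medium-term suffering (from climate change, say) incurred along the way. If $p$ is treated as fixed and close to 1, then reducing $c$ is unambiguously good; this is the first claim. The second claim corresponds to the case where spending resources to reduce $c$ also reduces $p$: since $W$ is assumed to be astronomically large, even a tiny loss in $p$ could swamp any feasible reduction in $c$. The trade only looks bad when such a coupling exists, and, as I argue below, we are in no position to know that it does.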

You may disagree with these arguments or believe that I should use a different, more detailed or more quantitative framework. In fact, whether or not they are even correct will turn out to be beside the point, so let us bear with them for now. The point is that arguments like this are too neat. There is often a tension between neat abstraction and practical relevance. Understood in (what I think is) their proper context, belonging more to the former category, these arguments can’t literally encourage us to ignore climate change. If we attempt to nudge them along to the second category - to make them practically relevant - then they might only suggest such inaction on climate change in the implausible scenario that we somehow ‘know’ that fighting climate change would deplete resources that we actually need to save for securing a much better long-term future. Let me say that again: at present, it seems to me almost ludicrous that we could possibly be in the position of knowing that some actual interventions to fight climate change would use up resources that we would otherwise need for the purpose of ensuring a great future way, way down the road. And therefore, while I enjoy reading and thinking about the abstract arguments, I feel I am in no danger - and nor should you be - of mistaking them for practical advice about what to do or not do about climate change.

The second of Torres’s criticisms concerns fanaticism.

 

Fanaticism

 

The second of Torres’s specific criticisms is the concern that any means can be justified in the pursuit of such “cosmically significant” moral ends as the well-being of an unthinkably large number of future humans (or future conscious beings).

In [Bostrom’s] words, even if there is ‘a mere 1 per cent chance’ of 10^54 people existing in the future, then ‘the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth 100 billion times as much as a billion human lives.’ Such fanaticism – a word that some longtermists embrace – has led a growing number of critics to worry about what might happen if political leaders in the real world were to take Bostrom’s view seriously. (Torres)
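For readers who want to see where these numbers come from, here is a rough reconstruction of the arithmetic (the reconstruction is mine, so treat it as a sketch of the reasoning rather than Bostrom’s own workings). ‘One billionth of one billionth of one percentage point’ is

$$ 10^{-9} \times 10^{-9} \times 10^{-2} \;=\; 10^{-20}, $$

and a 1 per cent chance of $10^{54}$ people corresponds to $0.01 \times 10^{54} = 10^{52}$ people in expectation, so the expected number of lives saved by such a reduction in existential risk is

$$ 10^{-20} \times 10^{52} \;=\; 10^{32} \;\geq\; \underbrace{10^{11} \times 10^{9}}_{\text{‘100 billion times a billion lives’}} \;=\; 10^{20}. $$

(On this reconstruction the expected value in fact exceeds Bostrom’s stated comparison by many orders of magnitude, so the quoted claim reads as a very conservative lower bound.) The substance of the argument is simply that multiplying a minuscule shift in probability by an astronomically large number of possible future lives still yields an enormous expected value.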

Here we immediately return to the previous theme. We ought to avoid a conflation between, on the one hand, worrying that people will ‘take seriously’ Bostrom's philosophical, mathematical argument and, on the other, doing something closer to what Olle Häggström cautions against: worrying that the same argument might be "recognised among politicians and decision-makers as a guide to policy worth taking literally" (emphasis mine). We should neither misconstrue nor attempt to use abstract philosophical arguments as a kind of rigorous justification for policy. We are allowed to be convinced of the philosophical argument and yet be extremely wary of using it to justify our actions. In my opinion, for the case outlined above to become a justification for action, one would need to be in a situation in which one somehow knew with essentially mathematical certainty that one’s action would reduce existential risk by some amount. Cremer and Kemp discuss a point like this in the context of Bostrom’s argument regarding the use of extremely high levels of surveillance to prevent existential threats, arguing (24) that Bostrom’s writing on this topic does feature "clear policy recommendations". I think that proponents of longtermism need to approach the idea that their philosophical longtermist thinking can result in clear policy recommendations with skepticism. Let’s return to Häggström's thought experiment (the quotation that appears in Torres’s essay is slightly longer):

"...Imagine a situation where the head of the CIA explains to the US president that they have credible evidence that somewhere in Germany, there is a lunatic who is working on a doomsday weapon and intends to use it to wipe out humanity, and that this lunatic has a one-in-a-million chance of succeeding. They have no further information on the identity or whereabouts of this lunatic. If the president has taken Bostrom’s argument to heart, and if he knows how to do the arithmetic, he may conclude that it is worthwhile conducting a full-scale nuclear assault on Germany to kill every single person within its borders."

To me at least, this speaks to the worries raised above: what if none of the complexities or caveats were explained? What if, by not correctly understanding the argument, the main thrust of it were incorrectly applied and transferred to a situation which is in fact not analogous? The head of the CIA is giving a personal, subjective credence of one-in-a-million (presumably based on the bogeyman of synthesizing quantitative and qualitative evidence). We should be skeptical of numbers that come about as the result of such processes (just as Torres is skeptical of Ord’s), and the head of the CIA should not equate his or her number with the kind of probability that Bostrom uses in his abstract, philosophical argument. The president, either having been misled in this way or themselves misunderstanding this informal use of ‘probability’, makes a further error: they conflate the definite removal of this particular threat by nuclear force with what Bostrom actually refers to, namely "reducing existential risk". This is a very important point: there are plenty of other possible threats in Häggström's scenario, and it is plausible - very likely, you might say - that a nuclear assault on Germany results in a much riskier, more dangerous world from the point of view of (say) global nuclear war (or many other risks which may cascade from this initial nuclear assault). So, ultimately, the nuclear strike could increase existential risk. Häggström says only that the president “may conclude” that it is worthwhile conducting the nuclear assault on Germany: via various small misunderstandings, it is imaginable that one might come to that conclusion, but longtermism itself does not imply or suggest, based only on the facts of the story we are presented with, that he ought to. These would be errors on the part of the CIA and the president who ordered the strike, not longtermist ethics per se.
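To make the conflation concrete, here is a toy formalisation (again my own, with made-up symbols). The president’s naive calculation treats the strike as buying a guaranteed net reduction in existential risk equal to the lunatic’s chance of success:

$$ \Delta_{\text{naive}} \;=\; -10^{-6}. $$

But the strike itself reshapes the world’s risk landscape, so the honest bookkeeping is

$$ \Delta_{\text{actual}} \;=\; -10^{-6} \;+\; \delta_{\text{war}}, $$

where $\delta_{\text{war}}$ is the increase in existential risk created by launching a full-scale nuclear assault (retaliation, escalation, the collapse of international order, and so on). If $\delta_{\text{war}} > 10^{-6}$, which seems all too plausible, then the strike increases existential risk, and even a fully committed Bostromian expected-value reasoner should oppose it.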

So, the criticism which I don’t feel that Torres is making explicitly, but which I do have some sympathy for, is that longtermism could be dangerous because it might be misconstrued in the ways outlined above. At the moment, longtermism may be misunderstood by policy-makers as some way of - in certain situations - giving a kind of clean, logical justification for actions that could end up being extreme.

Next, Torres claims that there are “additional, fundamental problems with this worldview that no one, to [his] knowledge, has previously noted in writing”. 

 

Remarks on ‘Longterm Potential’

 

The argument put forward by Torres relies on the idea that if one were to “unpack what longtermists mean by our ‘longterm potential’”, we would find there to be three main components: “transhumanism, space expansionism, and a moral view closely associated with what philosophers call ‘total utilitarianism’.” The first thing I contend is that it is inaccurate to suggest that there is a single notion of ‘our longterm potential’ which is agreed upon by “longtermists”.

On this point, I can scarcely do better than using the analogy of feminism as explained by Lara Rutherford-Morrison writing for Bustle in 2016:

Contrary to what some people seem to believe, the term “feminist” doesn’t represent a single, homogenous group with an agreed upon set of goals and beliefs. In fact, it’s perfectly OK for feminists to disagree about… nearly everything, really, because we are, in fact, not all the same. Feminists don’t have membership cards, they don’t elect a leader, and they don’t have a set agenda. They don’t all look the same, or have the same background, or share the same beliefs. They are not all women. The one thing that they agree upon is something very basic: That men and women should have equal rights and opportunities. To quote both Chimamanda Ngozi Adichie and Merriam-Webster, they support “the political, economic, and social equality of the sexes.” That’s it.

(And yes, this of course means that not all those who call themselves feminists would agree with Rutherford-Morrison). On a similar note, (FHI colleague) Fin Moorhouse writes:

Some 'isms' are precise enough to admit of a single, undisputed, definition. Others, like feminism or environmentalism, are broad enough to accommodate many (often partially conflicting) definitions. It's unlikely that any precise or detailed definition will end up sticking for 'longtermism', so the question is whether it should have some minimal definition, or none at all.

I think the point is made. A different and more specific remark about the way Torres uses ‘longterm potential’ is that he has drawn disproportionately from the early works of Bostrom and from Ord to try to say “what longtermists mean”. While it would be ridiculous to suggest that Bostrom has not heavily influenced longtermism, it is also worth pointing out that Bostrom does not even use the words ‘longtermist’ or ‘longtermism’ (as mentioned already, the term was coined by MacAskill some years after the papers of Bostrom from which Torres quotes numerous times). The words do appear in Ord’s book The Precipice, but that is a book about existential risk per se, not one with a general focus on longtermism: Ord writes that “One doesn’t have to approach existential risk from [the direction of longtermism] - there is already a strong moral case…” (46). This comes from the part of the book (ch. 2) in which Ord covers different arguments for caring about existential risk, many of which do not rely on anything that looks like Torres’s notion of longterm potential. So, longtermism is neither a direct extrapolation of Bostrom’s ideas, nor is it synonymous with a study of, or movement to reduce, existential risks. And relying predominantly on Bostrom and Ord - key names though they may be - to determine exactly how we should interpret longtermism does not, in my opinion, result in an accurate impression of the current debate.

Admittedly though, Moorhouse goes on to say that there is “clearly some common core to varieties of longtermism deserving of the name”. Would a common core necessarily contain transhumanism?

 

Transhumanism 

 

The connection with transhumanism is illustrated by Torres via appeals to Bostrom and Ord again:

As Bostrom put it in 2012, ‘the permanent foreclosure of any possibility of this kind of transformative change of human biological nature may itself constitute an existential catastrophe.’ Similarly, Ord asserts that ‘forever preserving humanity as it is now may also squander our legacy, relinquishing the greater part of our potential.’

Firstly, it seems to me worth remarking that both statements leave plenty of room for transhumanism not being a necessary aspect of longtermism: “...may …constitute an existential catastrophe” and “...may…squander our legacy”. And secondly, one must understand their comments in the context of their own definitions of existential catastrophe; once those definitions have been made, it ought not to be controversial per se to say that something does or does not fit the definition. As Cremer and Kemp point out, Bostrom explains in Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards that

Because of the way we have defined existential risks, a failure to develop technological civilization would imply that we had fallen victims of an existential disaster. […] Without technology, our chances of avoiding existential risk would therefore be nil.

The emphasis is mine. Also, that paper is now 20 years old. That may not be a huge amount of time, but considering the sheer youth of the subject (once again, the term ‘longtermism’ is around five years old) and the current rate of growth of interest in it, it helps to complete the picture that this part of Torres’s piece may be aimed only at a particular interpretation of longtermism, based on a particular version of ‘longterm potential’ arrived at via a straight extrapolation of Bostrom’s earlier writings. I am willing to guess that many people with a casual interest in longtermism today would find such an interpretation to be an unpalatable operationalization of longtermism. However, present-day longtermism explicitly claims that the overarching idea is compatible with a much wider range of views, and neglecting this fact seems unjustified. MacAskill writes:

the idea behind ‘longtermism’ is that it is compatible with any empirical view about the best way of improving the long-run future

The emphasis is mine. And Greaves and MacAskill write that so-called strong longtermism “…is at least fairly robust to variations in plausible axiological assumptions” (i.e. different assumptions about how to assign value to things now and in the future), although they say that “we leave the investigation of other possible variations for future research”. Moorhouse writes:


 …you might also care intrinsically about art and beauty, the reach of justice, or the pursuit of truth and knowledge for their own sakes… all of these things could flourish and multiply beyond levels they have ever reached before….

And

…longtermism is not claiming to know what humanity's future, or even its most likely future, will be. What matters is that the future could potentially be enormously large, and full of things that matter.

It is clear, when one looks only a little more widely, that neither transhumanism nor any other particular view on what matters, or on what humanity must or will do in the future, is a necessary part of longtermism, and that people interested in longtermism have gone out of their way to make this point explicitly.

 

Total Utilitarianism

 

Of the three components of Torres’s notion of ‘our longterm potential’, it is total utilitarianism which probably has the strongest claim to being an essential part of longtermism, but I don’t necessarily agree that it is one. Torres rejects the idea that some longtermists are not utilitarians, saying that the idea’s proximity to the effective altruism movement betrays the fact that longtermism is just utilitarianism “repackaged” (a description he simultaneously applies to effective altruism). In MacAskill and Pummer’s entry about effective altruism in the International Encyclopedia of Ethics (an essay from which Torres also quotes elsewhere), they write:

Many take effective altruism to be synonymous with utilitarianism, the normative theory according to which an act is right if and only if it produces no less well-being than any available act... This is a category mistake. Effective altruism is not utilitarianism, nor is it any other normative theory or claim. Instead, effective altruism is the project of using evidence and reason to try to find out how to do the most good, and on this basis trying to do the most good. (MacAskill and Pummer)

Claiming that effective altruism is ‘just’ a ‘repackaged’ utilitarianism and leaving it at that does not seem to engage with this point and may indeed commit this category error. And, like effective altruism, longtermism is not a normative claim either. I could say something like: longtermism is the study of improving the long-term future, from the philosophical and ethical foundations through to the project of actually doing so. The point is that a description like this would be comfortable supporting debates between different axiological perspectives and normative theories. For example, you might want to promote the virtues of natural beauty and diversity, and I might be a committed transhumanist, but it is possible that we both agree that what matters most is how our present actions influence the very long future, and we may both agree on the basic idea that there is potentially vastly more of what we value in the future (although in practice, of course, many would agree that in the present we are reducing Earth’s natural beauty and diversity).

To illustrate the full reach of this point as an abstract argument, consider the silly example that the states of the world I value most are those in which the number of marmite jars that exist is exactly one. Suppose that I see it as an ethical imperative to bring about such states of the world, and that I am indifferent towards anything else that exists. This would be a ridiculous set of views, but I can probably still agree that what matters most about my actions is how they affect whether or not a single jar of marmite exists for the vast future ahead of me: ensuring that there is only one jar today or next week would be great, but ensuring that there were only one jar for millennia after I die would probably be much better. Personally, I feel that this distinction, while obviously very silly, comes much closer to distinguishing and illustrating what longtermist thought is than getting caught up in whether or not being a 1-marmite-jar-ist is a longtermist position.

Having said all of this, Torres’s claim that the EA movement is “deeply utilitarian… in practice” (emphasis mine) does point to something that I recognise. This impels me to say again that longtermism is probably in want of more criticism and more diversity of opinion. I am not saying this under the assumption that increased scrutiny will vindicate the supporters of longtermism and/or its independence from utilitarianism. I currently see it as plausible that it develops to the point at which such a multitude of views are supported, with such an appreciation of unavoidable uncertainties and ignorance about the long-term future, that longtermism ends up having few or no applications or implications we can agree on. I also see it as plausible that a more thorough investigation of the ethical requirements of the claims of longtermism ends up showing that some form of utilitarianism is essential (i.e. not necessarily a strict total utilitarianism). I don't know the exact extent to which (for example) MacAskill, Ord, Bostrom or others believe that to be the case already.
 

Longtermism per se

 

The rest of Torres’s argument might be summarized as follows:

  1. We are duty-bound to try to ‘realise’ our long-term potential (transhumanism, space expansionism, viewed through the lens of total utilitarianism).
  2. This means that we must
    1. Increase economic activity;
    2. Control and exploit nature;
    3. Make ever more powerful technologies.
  3. And this in turn leads to:
    1. A “Baconian, capitalist” pursuit of value;
    2. Destruction of our natural environment, lost biodiversity and natural beauty;
    3. Increased risks from advanced technologies.
  4. In conclusion, longtermism leads us to destroy the natural world and to accumulate huge risks via advanced technology. We have given ourselves a bad and dangerous future via what we thought was an attempt to find a good one.

What I have tried to illustrate is that it is not longtermism per se that Torres has in the crosshairs. I have not developed any serious disagreement with the logic of points 1-4 above, but the notion of 'long-term potential' that Torres claims longtermists use suggests that what is really under attack is a certain brand of technological utopianism. Over the short history of longtermism, and for better or worse, it has been associated with technological utopianism. I have, however, also tried to illustrate along the way that a cleaner conception of longtermism, not tied to these other ideas, exists and has been explicitly put forward. I imagine that there is quite a healthy overlap between people who are interested in this more modern, cleaner conception of longtermism and people who share or have sympathy with the criticisms that Torres makes (which I think are primarily criticisms of technological utopianism).

So while for Torres, longtermism is yet to separate itself from the study of existential risks and necessarily contains the essence of technological utopianism, it is interesting to note that for Cremer and Kemp the relationships are different: they describe (strong) longtermism as a constituent part of the techno-utopian approach to the study of existential risk (1). I am more inclined to agree with this permutation. Either way, it seems clear that as we develop these ideas, although many research directions will draw upon more than one of these themes, we will benefit from working hard to ensure that they are clearly delineated.

 

 

Works Cited

Bostrom, Nick. "Astronomical Waste: The Opportunity Cost of Delayed Technological Development." Utilitas, 15.3, 2003, pp. 308-314.

---. "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards." Journal of Evolution and Technology, 9, 2002.

---. "Existential Risk Prevention as Global Priority." Global Policy, 4.1, 2013, pp. 15-31.

---. "The Future of Humanity." New Waves in Philosophy of Technology, Palgrave Macmillan, London, 2009, pp. 186-215.

Cremer, Carla Zoe, and Luke Kemp. "Democratising Risk: In Search of a Methodology to Study Existential Risk." Available at SSRN, 3995225, 2021.

Greaves, Hilary, and Will MacAskill. "The Case for Strong Longtermism." GPI Working Paper, No. 5, 2021.

MacAskill, Will. "'Longtermism'." Effective Altruism Forum, 25 Jul. 2019, https://forum.effectivealtruism.org/posts/qZyshHCNkjs3TvSem/longtermism

Moorhouse, Fin. "Introduction to Longtermism." EffectiveAltruism.org, 5 Nov. 2021, https://www.effectivealtruism.org/articles/longtermism

Ord, Toby. The Precipice: Existential Risk and the Future of Humanity. Hachette Books, 2020.

Parfit, Derek. Reasons and Persons. Oxford University Press, 1984.

Pummer, Theron, and William MacAskill. "Effective Altruism." International Encyclopedia of Ethics, John Wiley & Sons Ltd, 2020.

Rutherford-Morrison, Lara. "12 Things It's OK for Feminists to Disagree About." Bustle, 14 Jun. 2016, https://www.bustle.com/articles/166121-12-things-its-ok-for-feminists-to-disagree-about

Torres, Phil. "Against Longtermism." Aeon, 19 Oct. 2021, https://aeon.co/essays/why-longtermism-is-the-worlds-most-dangerous-secular-credo

Tufnell, Nicholas. "A Singular Sort of Cult." HuffPost, 15 Aug. 2013, https://www.huffingtonpost.co.uk/nicholas-tufnell/a-singular-sort-of-cult_b_3446089.html

 


 


 
