
This piece is a response to two recent essays by Phil Torres which are critical of longtermism. It does not presume that the reader is familiar with longtermism, and is thus not directed towards regular Forum readers, who will likely already be familiar with this material.

 

Introduction 

Recently, Phil Torres wrote two essays critical of longtermism. For the sake of brevity, this piece does not summarize them, but assumes the reader has read them. My view is that Torres misrepresents longtermism. So, in this essay I introduce longtermism, and then explain and respond to the main criticisms Torres offers of it.

This is a long piece, so I encourage you to skip to the sections which most interest you; the section headings below give the basic structure.

What is longtermism? 

Longtermism is the view that positively influencing the long-term future is a key moral priority of our time. It’s based on the ideas that future people have moral worth, that there could be very large numbers of future people, and that what we do today can affect how well or poorly their lives go [1].

Humanity might last for a very long time. A typical species’ lifespan would mean there are hundreds of thousands of years ahead of us [2], and the Earth will remain habitable for hundreds of millions of years [3]. If history were a novel, we may be living on its very first page [4]. Beyond this mind-bending scope, we can imagine — at least vaguely — the characters that might populate it: billions and billions of people who will feel the sun on their skin, fall in love, laugh at a joke, and experience all the other joys that life has to offer. Yet our society pays relatively little attention to how our actions might affect people in the future.

Concern for future generations is not a new idea. Environmentalists have advocated for the interests of future people for many decades. Concern for future generations is enshrined in the Iroquois Nation’s constitution. John Adams, the second U.S. president, believed American institutions might last for thousands of years [5], while Ben Franklin bequeathed money to American cities under the provision that it could only be used centuries later.

That being said, there are several distinctive aspects of recent longtermist research and thinking, including the sheer timescales under consideration, the particular global problems that have been highlighted, and the attention paid to the immense potential value of the future. Those engaged in longtermist research often look for events that will impact not just centuries, but potentially the whole future of civilization — which might amount to millions or even billions of years. As for the global problems, a particular focus has been on existential risks: risks that threaten the destruction of humanity’s long-term potential [6]. Risks highlighted by longtermist researchers include those from advanced artificial intelligence, engineered pathogens, nuclear war, extreme climate change, global totalitarianism, and others. If you care about the wellbeing of future generations, and take the long term seriously, then it is of crucial importance to mitigate these or similarly threatening risks. Finally, recent longtermist thinking is distinct in its consideration of the magnitude of value that could exist, and the harm that could occur if we fail to protect it. For example, existential risks could bring about the extinction of humanity or all life on earth, the unrecovered collapse of civilization, or the permanent, global establishment of a harmful ideology or unjust institutional structure.
 

Criticisms of longtermism

Much of Torres’s criticism misses the mark because he does not accurately explain what longtermism is, and fails to capture the heterogeneity of longtermist thought. He does sometimes gesture at important issues that require further discussion and reflection among longtermists, but because he often misrepresents longtermist positions, he ultimately adds more heat than light to those issues.

I do not mean to deter criticism in general. I have read critical pieces which helped refine and sharpen my own understanding of what longtermism should be aiming for, but I think it is also important to respond to criticism — particularly to the elements which seem off-base. 

One housekeeping note — this piece largely focuses on criticisms from the Aeon essay, as it is more comprehensive. I have tried to note when I am answering a point that is solely in the Current Affairs piece. 

Beware of Missing Context 

If this is what longtermism is, why does it seem otherwise in Torres’s articles? One answer is selective quotation.

For example, Torres quotes Bostrom saying that “priority number one, two, three and four should … be to reduce existential risk”. But he omits the crucial qualifier at the beginning of the sentence: “[f]or standard utilitarians.” Bostrom is exploring what follows from a particular ethical view, not endorsing that view himself — indeed, Bostrom is not even a consequentialist [7]. Much the same can be said for Greaves and MacAskill’s paper “The Case for Strong Longtermism,” where they work through the implications of variations on “total utilitarianism” before discussing what follows if this assumption is relaxed. Torres cuts away the framing assumptions around these quotations, which are critical to understanding in what contexts these conclusions actually apply.

More generally, it should be borne in mind that Torres quotes from academic philosophy papers and then evaluates the quoted statements as if they were direct advice for everyday actions or policy. It should not be surprising that this produces strange results — nor is it how we treat other philosophical works, otherwise we would spend a lot of energy worrying about letting philosophy professors get too close to trolleys.

In another instance, Torres quotes Bostrom’s paper “The Future of Humanity” to show how longtermism makes one uncaring towards non-existential catastrophes. In a section where Bostrom is distinguishing between catastrophes that kill all humans or permanently limit our potential and catastrophes that do not have permanent effects on humanity’s development, Torres highlights the fact that Bostrom calls this latter group of events “a potentially recoverable setback: a giant massacre for man, a small misstep for mankind.” Torres does not mention the very next line, where Bostrom writes, “[a]n existential catastrophe is therefore qualitatively distinct from a ‘mere’ collapse of global civilization, although in terms of our moral and prudential attitudes perhaps we should simply view both as unimaginably bad outcomes.” Bostrom distinguishes between the concepts of an existential catastrophe and the collapse of civilization, and immediately suggests that we should regard both as unimaginably bad. The non-existential catastrophe does not shrink in importance from the perspective of longtermism. Rather, the existential catastrophe looms even larger — both outcomes remain so bad as to strain the imagination.

A particularly egregious example of selective quotation is when Torres quotes three sentences from Nick Beckstead’s PhD thesis, where Beckstead claims it is plausible that saving a life in a rich country is potentially more instrumentally important — because of its impacts on future generations — than saving a life in a poor country. In his Current Affairs piece, Torres claims that these lines could be used to show that longtermism supports white supremacy. All Torres uses to support this claim are three sentences from a 198-page thesis. He states, before offering the lines, that “[Toby] Ord enthusiastically praises [the thesis] as one of the most important contributions to the longtermist literature,” without noting that Ord might be praising any of the other 197 pages. Notably, the rest of the thesis does not deal with obligations to those in rich or poor countries, but argues that “from a global perspective, what matters most (in expectation) is that we do what is best (in expectation) for the general trajectory along which our descendants develop over the coming millions, billions, and trillions of years.” It is primarily a forceful moral argument for the value of the long-term future and the actions we can take to protect it.

Torres also fails to place the quotation in the relevant context about its author. Nick Beckstead was among the first members of Giving What We Can, a movement whose members have donated over $240 million to effective charities, primarily in lower-income countries. Beckstead joined GWWC in 2010, when it was focused solely on global poverty, founded the first GWWC group in the US, donated thousands of dollars to global poverty interventions as a graduate student making about $20,000 per year, and served on the organization’s board. All of this was happening while he wrote the dissertation.

But what about that specific quote? In it, Beckstead is looking at which interventions are likely to save the lives of future people — who are, inherently, a powerless and voiceless group. When he says that he might favor saving a life in a wealthy country, he is not saying this because he believes that person is intrinsically more valuable. As a utilitarian-leaning philosopher, Beckstead holds that these lives have the same intrinsic value. He then raises another consideration about the second-order effects of saving different lives: the person in the wealthy country might be better placed to prevent future catastrophes or invent critical technology that will improve the lives of our many descendants. Additionally, in the actual line Torres quotes, Beckstead writes that this conclusion only holds “all else being equal,” and we know all else is not equal — donations go further in lower-income countries, and interventions there are comparatively neglected, which is why many prominent longtermists, such as Beckstead, have focused on donating to causes in low-income countries. The quote is part of a philosophical exploration that raises some complex issues, but it doesn’t have clear practical consequences. It certainly does not mean that in practice longtermists support saving the lives of those in rich countries rather than poor ones.

In general, throughout the two Torres pieces, one should be wary of taking any particularly surprising quotation about longtermism at face value, given how frequently the quotes are stripped of important context. Reading the complete pieces they come from will show this. It seems that Torres goes in aiming to prove longtermism is dangerous and misguided, and is willing to shape the quotes he finds to this end, rather than give a more balanced and carefully argued view of the philosophy.

Now I would like to go through the various criticisms that Torres raises about longtermism and answer them in greater depth. 
 

Climate change 

Torres is critical of longtermism’s treatment of climate change. He claims that longtermists do not call climate change an existential risk, and he conflates this with not caring about climate change at all. There are several questions to disentangle here:

  • A values question: do longtermists care about climate change or think it is worth working to mitigate?
  • An empirical question: will climate change increase the risk of the full extinction of humanity or an unrecoverable collapse of civilization?
  • And a terminological question: based on the answers to the two questions above, should we call climate change an existential risk?

The answer to the first question is straightforward: longtermists do care about climate change. There are researchers at longtermist organizations who study climate change, there are active debates among longtermists over how best to use donations to mitigate it, and longtermists have helped contribute millions to climate change charities. There is active discussion about nuclear power and about how to ensure that, if geoengineering is done, it is done safely and responsibly. These are not the hallmarks of a community that does not care about climate change. It is fair to say that longtermists direct fewer resources towards climate change than towards other causes like biosecurity or AI safety, but this has to do with how many resources are already being directed towards climate change relative to those other issues, which is discussed further below.

There is disagreement among longtermists on the empirical question about whether, and the degree to which, climate change increases the risk of the full extinction of humanity or an unrecoverable collapse of civilization. Some think climate change is unlikely to cause either outcome. Some think it is plausible that it could. Open questions include the extent to which climate change:

  • Exacerbates the risk of war between great powers [8]
  • Slows down technological progress [9]
  • Inhibits civilisational recovery after a collapse
  • Could trigger an extreme feedback effect (such as the burn-off of stratocumulus clouds, leading to 8 degrees of warming over the course of a year [10]).

Both groups agree that climate change will have horrible effects that are worth working to prevent.

Finally, on the terminological question: for longtermists who do not think climate change will cause the full extinction of humanity or an unrecoverable collapse of civilization, it makes sense that they do not call it an existential risk, given the definition of existential risk. We have terms to designate different types of events: if someone calls one horrible event a genocide and another a murder, this does not imply that they are fine with murders. Longtermists still think climate change is very bad, and are strongly in favour of climate change mitigation. 

Torres repeatedly suggests that longtermists are callous for not calling climate change an existential risk, but he never argues that climate change actually is one. In the Aeon piece, he refers to it as a “dire threat” and says that climate change will “caus[e] island nations to disappear, trigge[r] mass migrations and kil[l] millions of people.” Longtermists would agree with these descriptions — and would certainly think these are horrible outcomes worth preventing. What Torres does not argue is that climate change will cause the “premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development” [11].

There are many reasons for longtermists to care about climate change: the near-term suffering it will cause, its long-term effects [12], and the fact that it will worsen other existential threats. Additionally, many climate activists who have never heard the word “longtermism” are motivated by concern for future people — for example, by appealing to the effects of climate change on the lives of their children and grandchildren.

Potential 

Torres never clearly defines longtermism. Instead he writes, “The initial thing to notice is that longtermism, as proposed by Bostrom and Beckstead, is not equivalent to ‘caring about the long term’ or ‘valuing the wellbeing of future generations’. It goes way beyond this.” What Torres takes issue with is that longtermism is not concerned with avoiding human extinction only because of the suffering involved in all humans being annihilated, but also holds that there is some further harm in the loss of “potential.”

Torres misses something here and continues to do so throughout the rest of the piece — potential is not some abstract notion, it refers to the billions of people [13] who now do not get to exist. Imagine if everyone on earth discovered they were sterile. There would obviously be suffering from the fact that many living people want to have children and now realize they cannot, but there would also be some additional badness from the fact that no one would be around to experience the good things about living. We might be glad that no one is around to experience suffering. But there would also be no one around to witness the beauty of nature, to laugh at a joke, to listen to music, to look at a painting, and to have all the other worthwhile experiences in life. This seems like a tragic outcome [14].

Torres keeps his discussion of potential abstract and mocks the grand language that longtermists use to describe our “extremely long and prosperous future,” but he never makes the connection that “potential” implies actual experiencing beings. Extinction forecloses the lives of billions and billions. Yes, that does seem terrible. 

As with longtermism, Torres does not offer a clear definition of existential risk. Existential risks are those risks which threaten the destruction of humanity’s long-term potential. Some longtermists prefer to focus only on risks which could cause extinction, because extinction is a particularly crisp example of this destruction. But there are fairly intuitive ways to see how other outcomes might also destroy our potential: imagine a future where humanity does not go extinct, but instead falls into a global totalitarian regime maintained by technological surveillance so effective that its residents can never break free. That seems like a far worse future than one where humanity is free to govern itself in the way it likes. This is one example of “locking in” a negative future for humanity, but not the only one. Working to prevent extinction, or outcomes where humanity is left in a permanently worse position, is a sensible and valuable pursuit.

Torres also quips that longtermists have coined a “scary-sounding term” for catastrophes that could bring about these outcomes: “an existential risk.” It seems deeply inappropriate to think that extinction or permanent harm to humanity should be met with anything other than a “scary-sounding” term.

By leaving “potential” abstract, Torres conceals the fact that this refers to the billions of sentient beings who will not get to experience the world if an existential catastrophe occurs. When it is clear that this is what potential refers to, then it becomes much more obvious why longtermism places the importance it does on preventing existential catastrophes.

Non-existential catastrophes, prioritization, and difficult tradeoffs 

Limited time and resources force us to make painful and difficult tradeoffs in what we work on and, more crucially, in who we help. This is an awful situation to be in, but one we cannot escape. We all know the sensation of opening the newspaper or the newsfeed, feeling overwhelmed by the amount of suffering and injustice in the world, and not knowing how to begin alleviating it. It certainly can seem callous to work on reducing the risk of an existential catastrophe when so many people are suffering in the present — just as some may feel it is callous to work on climate change mitigation when so many are starving, or sick, or homeless. The climate activist might answer that they work on what they do so that fewer people are starving or sick or homeless in the future as climate change worsens, just as the existential risk reduction advocate might answer that an engineered pandemic or nuclear war would similarly cause concrete suffering and is thus worth preventing. I do not mean to minimize the difficult choice of prioritizing who to help. This is one of the hardest things that caring people have to do.

Torres is critical of how longtermism handles this difficult tradeoff. He indicates that longtermism says that if problems are not existential risks, we “ought not to worry much about them.” Longtermism does encourage people to focus attention on certain issues rather than others, but that is not at all the same as saying one should no longer “worry” about other catastrophes, and it is certainly not how longtermists actually think. The people working to prevent existential risks are deeply motivated by the desire for people to live good lives. They hope to prevent suffering on a massive scale, such as by preventing nuclear war or a gruesome pandemic, but that does not mean they are immune to noticing or caring about suffering on other scales. On the contrary, it is precisely this sensitivity to all kinds of everyday suffering that often motivates a special worry about the possibility of much larger-scale disasters.

So while longtermists do worry and deeply care about all kinds of suffering, many longtermist researchers do encourage people to work on existential risks, as opposed to risks that play out at smaller scales. There are several factors contributing to this recommendation. 

First, longtermism works from the view that we should be impartial in our moral care. People are equally morally valuable regardless of characteristics like race, gender, or ethnicity — and, critically, regardless of when they are born. Someone has no less moral worth because they are born ten, a hundred, or a thousand years in the future.

Next, many longtermist researchers draw on the evaluative factors of importance, neglectedness, and tractability to help choose what to work on. These come from the effective altruism movement more broadly, but also motivate longtermist project selection. (For more on the relationship between longtermism and effective altruism, see here.) Importance is about the impact of a project, usually meaning how many lives are saved or improved. Neglectedness looks at how many people and resources are already being devoted to a problem — if this number is lower than what is going to some other project, adding more may have a higher marginal impact. Finally, tractability looks at how easy it is to make progress on a project. If something is very important and neglected, but impossible to change, it would not make sense to work on it.
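To make these factors concrete, here is a rough sketch of one common formalization — the “scale, solvability, neglectedness” decomposition popularized by 80,000 Hours. Nothing in the essays under discussion commits to these exact terms; it is just one way the three factors are sometimes combined:

$$
\underbrace{\frac{\text{good done}}{\text{extra resources}}}_{\text{marginal impact}}
= \underbrace{\frac{\text{good done}}{\%\text{ of problem solved}}}_{\text{importance}}
\times \underbrace{\frac{\%\text{ of problem solved}}{\%\text{ increase in resources}}}_{\text{tractability}}
\times \underbrace{\frac{\%\text{ increase in resources}}{\text{extra resources}}}_{\text{neglectedness}}
$$

The fractions telescope, so the product estimates the good done per extra unit of resources. Because the factors multiply, a problem that is enormous but already crowded can score lower on the margin than a smaller problem that almost no one is working on — which is the shape of the argument made below.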

Preventing existential risks scores high on importance because it affects the entirety of humanity’s future. To see the importance of avoiding extinction, imagine a nuclear war that kills 99% of people [15]. This would be a horrific tragedy involving unspeakable suffering. And imagine the world 50 years later, then 500, then several thousand. First there might be deep and crushing grief and mourning, then back-breaking struggle to survive, regrow, and rebuild, then possibly, if these few survivors persevere through unimaginable challenges, once again a world filled with people — people living, learning, loving, laughing, playing music, and witnessing the beauty of nature. Now imagine a nuclear war that kills 100% of people. There is the same horrible suffering during the event. But 50, 500, 1,000 years later? A barren earth. No one to experience its joys and pains, no one to experience anything at all, ever again.

Torres seems to resist the process of comparing the “importance” of preventing different catastrophes. If you take on a moral view that does not allow for comparisons of badness — say you are faced with preventing a murder, a genocide, or the extinction of humanity, and your moral view gives you no way to choose between these — then the way that longtermism prioritizes does not and cannot make sense to you. But this is no weakness of longtermism.

Next, longtermist work seems deeply neglected. There are very few people working on existential risk mitigation [16], which means that each additional person causes a relatively large proportional increase in the amount of work being done. This connects back to how longtermists interact with climate change. Climate change is less neglected than risks like nuclear war or biosecurity. For example, there are currently no major philanthropic funders of nuclear security, whereas climate change receives $5-9 billion from philanthropists every year, and hundreds of billions from governments and the private sector. That means people are more likely to already be doing the most essential climate change work than the most essential work in those other areas.

It is important to recognize what question longtermists are answering: “how can we do the most good on the margin?”, not “where should all of society’s resources go?” Longtermists have limited resources and so prioritize things like AI and biosecurity, and that is easily confused with the view that climate change should not get any money at all. I think almost all longtermists would agree that society should spend much more on climate change than it does right now. It can make sense both to be glad that a lot of work on mitigating climate change is happening and to think that the additional resources we are able to direct are better used for potential big threats that are getting less attention at the moment.

What longtermism is directing people to work on should be taken in the context of what work is already being done: the world is currently spending close to nothing on protecting and improving the long-run future. As such, longtermists can and do disagree about how much of such spending would be ideal, while all agree that it makes sense to spend and do far more. For the last couple of years, roughly $200 million has been spent yearly on longtermist cause areas, and about $20 billion has so far been committed by philanthropists engaged with longtermist ideas (these posts give a good overview of the funding situation). For comparison, the world spends ~$60 billion on ice cream each year. Longtermism is shifting efforts at the margin of this situation — increasing the amount of money going towards longtermism, not redirecting all global resources away from their current purposes. 

Finally, tractability: there seem to be feasible ways to reduce existential risks, which means working on them can actually deliver progress. Some examples for reducing biorisks include advocacy and policy work to prevent “gain-of-function” research, where pathogens are made more deadly or more infectious; work to improve our ability to rapidly develop flexible vaccines that can be applied to a novel disease; and work to build an early detection capability which more systematically and proactively tests for new pathogens. These projects are feasible and seem likely to reduce biorisks. There are many more such projects, both for biorisks and other types of risks.

I do not want to make it seem as if all longtermist work is trading off against nearer term or more certain benefits. There is some longtermist work which also helps with near-term problems, like work to improve institutional decision making. Likewise, some work to reduce existential risks also helps prevent non-existential catastrophes, like work to prevent pandemics or improve food security during catastrophes. Many longtermists interested in preventing biological existential risks worked on COVID, and projects that are likely to prevent the next COVID [17]. Some also argue that even purely focusing on current generations, the risk of an existential catastrophe in the next few decades is high enough that it makes sense to work on reducing these risks for the sake of people who are alive now. 

Longtermism tells people to work on existential risks because they seem very important, neglected, and tractable. It does not say that these are the only important things in the world. And it does seem odd to accuse those who work to prevent one kind of suffering of being callous for not working on another — one struggles to imagine Torres telling someone who is campaigning against genocide that they are being heartless for not focusing on homicides in their community. Deciding where to devote our limited time and resources is a painful and difficult decision. Longtermism is one framework for helping us decide. I do not pretend that making this choice is easy or pleasant, and I wish there were no constraints on our ability to work on every important problem at once.

Limiting Conditions 

A concern related to longtermism’s handling of non-existential catastrophes is whether longtermism could be used to justify committing harms, perhaps even serious ones, if doing so helped prevent or lower the chance of an existential catastrophe.

Of course, someone who doesn’t actually subscribe to longtermism could simply use its ideas as cover in a clearly disingenuous way — for example, an autocrat could claim he is acting to benefit the long-term future when really he is just refusing to take into account the needs of his current subjects for his own gain. This is not a feature unique to longtermism, and it should not count strongly against it, since so many ideologies can be misused this way.

A related criticism is that longtermism actually suggests we do things which would ultimately harm humanity’s long-term future, and that it is therefore self-defeating. The better conclusion here is not that longtermism is self-defeating, but that it simply doesn’t suggest doing these things when there are better options. To the extent that longtermism seems to suggest we do things which would be bad by its own lights, this is likely just a sign that these criticisms apply to an overly simplistic version of longtermism — not that they undercut the essence of the view.

But what about the more worrying case of someone who has carefully considered and understood longtermism, and who believes that its conclusions instruct them to do something harmful? Ideologies — particularly utopian ideologies — have led to some of the gravest atrocities in recent history. It is an open question whether longtermism is more susceptible to this than other ideologies.

It does not seem — either in practice or in theory — that longtermists are using or will use their philosophy to justify harm. 

There is one argument to be made just from the character and dispositions of longtermists: Those who are interested in longtermism often got into this field because they wanted to reduce suffering on the largest scale they could. These are not individuals who take causing harm lightly. These people are working to prevent nuclear war, bioweapons attacks, and the misuse of powerful new technologies — clearly they are attentive to the myriad ways that humans are vulnerable to injury and death, and are working continuously to reduce them. Many longtermists donate regularly to altruistic causes. A significant portion of longtermists are vegetarian or vegan out of concern for the welfare of animals, revealing a desire to reduce suffering regardless of species. It seems that in practice — if you look at what longtermists are doing — they are unusually careful not to cause harm. 

But what about future longtermists? Are there structural or philosophical features of longtermism that prevent it from being used to justify harm?

Torres ignores that this is a concern longtermists themselves have raised, discussed, and written on. There are multiple pieces on why you cannot take expected value too seriously — including one from Bostrom himself, which might temper Torres’s accusation that Bostrom naively and dangerously used expected value to justify insane actions; Bostrom was among the earliest in the longtermist world to counsel against simply using expected value to guide action. There are multiple papers written on the problem of fanaticism (while Torres says this is a term “some” longtermists embrace, he cites one single-authored academic philosophy paper, and neglects to mention the multiple papers which point to fanaticism as a serious concern) [18]. The websites of multiple major longtermist organizations feature pieces on why effective altruists and longtermists should not cause harm even in pursuit of laudable goals. There is also a strong emphasis on moral uncertainty, which tempers any tendency towards extreme actions that are not robustly good by the lights of multiple views.
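To see what “fanaticism” refers to here, a toy calculation helps (the numbers are mine, chosen only to make the structure visible; they do not come from Torres or the papers cited above). Naive expected-value reasoning multiplies probabilities by payoffs:

$$
\mathbb{E}[\text{value}] = \underbrace{10^{-20}}_{\text{probability of success}} \times \underbrace{10^{40}\ \text{lives}}_{\text{payoff if successful}} = 10^{20}\ \text{lives}
$$

A naive expected-value maximizer would therefore rank this nearly hopeless long shot above saving a billion lives ($10^9$) with certainty. The fanaticism literature treats verdicts like this as a serious problem to be resolved, not a license to act — which is exactly why longtermists have written so much about when expected-value calculations should not be taken at face value.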

Torres almost makes the connection that this is a live part of longtermist discussion when he includes a thoughtful quote from Olle Haggstrom, who worries about a potential lack of limiting conditions in longtermism. Instead of recognizing that the existence of this quote reveals something healthy and promising about longtermism’s ability to address these concerns, Torres only remarks that despite this quote Haggstrom “perplexingly... tends otherwise to speak favourably of longtermism.” It should be noted that Haggstrom resents the way this quote has been cherry-picked: by showing only his (reasonable) criticism, Torres makes it seem as if Haggstrom rejects longtermism when in fact he supports it.

Longtermism is not all-encompassing and dogmatic. Figuring out what matters in the world and how we should act are some of the hardest and most important problems we face. By highlighting the importance of mitigating existential risks, longtermists do not pretend to have solved the question of what to do — they have proposed one plausible answer (out of potentially many more) that they are continuously scrutinizing, questioning, challenging, and improving.

Nor is longtermism wedded to a particular view of ethics. In fact, two key longtermist researchers, Toby Ord and William MacAskill, wrote a book about how to take ‘moral uncertainty’ seriously. There are many different moral frameworks that can support longtermism [19] — not just utilitarianism, as some have argued. Longtermists also place value on finding conclusions that are supported by multiple views. Having spent time in this community of researchers, I’ve been surprised by how pluralistic longtermism is. There are longtermist leftists and libertarians, Christians and Jews, utilitarians and deontologists [20]. Because longtermism recognizes the difficulty of its project and the grave importance of getting it right, it is open to criticism: critics are invited to speak in prominent venues, and longtermists frequently critique their own views, in the belief that criticism is stimulating and healthy and improves the ideas that make up longtermism.

Does potential inherently include transhumanism, space expansionism, and total utilitarianism?

Torres claims that one can “unpack what longtermists mean by our ‘long-term potential’” into “three main components: transhumanism, space expansionism, and a moral view closely associated with what philosophers call ‘total utilitarianism’.” However, there is wide disagreement among longtermists about what exactly potential involves. 

Potential could mean many things 

As I said before, when longtermists speak about potential, they mean first and foremost all the sentient beings who could exist in the future and experience the world. There is reason to believe that future beings could have even better lives than present beings: there is a general trend of humans getting healthier and wealthier, living longer, and becoming more literate. These trends could continue far into the future, meaning that if some catastrophe wipes out humanity and prevents all future generations from existing, those beings won’t get to experience those good things.

But if we don’t go extinct, what exactly do we want? There is much more agreement in the longtermist world about what we want to avoid than about what we want to obtain. And that’s ok! It seems appropriate to recognize that we do not know exactly what future generations will want, and to focus on giving them self-determination. That’s why we want to avoid extinction, but also various kinds of “lock-in” where decisions are made that are hard for future people to reverse (like the surveillance-enabled totalitarianism example above).

Most longtermists are not utopians: they very deliberately do not have some well-described grand future in mind. Many longtermists do believe that the world can be far, far better than it is today. But it could be better in so many ways, including ways we’re only dimly aware of. This means we want to keep our options open, and not be so presumptuous as to declare, or overly constrain, what the future should look like. Some have even suggested the need for a long period of reflection where humanity works out its values before it engages in any more ambitious or irreversible projects.
 

Some longtermists do support space expansionism, transhumanism, and total utilitarianism — here’s why 

Torres argues that potential is a much more laden concept than I have laid out above, one that does not just denote the experiencing beings who could exist, but also specific facts about their existence. Some longtermists do think our potential will only be realized if we become transhuman and spacefaring. I want to explain why they might think this. 

First, it is important to note that transhumanism refers to an eclectic mixture of ideas and is, in my experience, a fringe view within longtermism; for any particular ‘transhumanist’ claim or goal, there is a good chance that most longtermists have never heard of it, let alone agree with it. But I will still offer a basic argument for why some longtermists do see transhumanism as part of fulfilling our potential. There are many ways we can improve our existence: over time, humans have invented medicines and health treatments that reduce physical suffering and extend our lifespan, and these seem to be very good developments. Transhumanism, in its most basic form, is just an extension of this: can we be healthier, live longer, and be more empowered in our physical forms? Those who think becoming transhuman is part of fulfilling our potential are focusing on the quality aspect of future lives: they want future generations not just to exist, but to exist with even better experiences than we have.

Torres mentions the connection between transhumanism and eugenics to suggest that those who support transhumanism are similarly sullied. Transhumanism has been connected with dark practices. So has modern medicine. It would seem wrong to argue that anyone who supported medical trials was morally suspect because some medical trials have been conducted on prisoners of war or done in other unethical ways. Obviously, conducting medical trials like that is wrong, just as it would be wrong to use transhumanism the way eugenicists used it. But going back to the basic premise — there are ways to improve the human condition, and the transhumanist project focuses on finding them — it seems plausible that this could be part of fulfilling humanity’s potential.

Space expansionism is the next element that Torres views as an inherent component of what longtermists mean by potential. Again, it is not inherently part of the longtermist definition of potential, but there is a reason why some might include it: the future could be home to far more people than are currently alive, perhaps spread much further across space than today — just as we far outnumber our forager ancestors and are spread far more widely than they were. There are other potential benefits from space expansionism too: resources brought back from space could improve the lives of those living on earth; settlements on more planets could protect against certain existential risks; and settlements on new planets could allow for a greater diversity of ways of living, with residents able to try out new political or economic systems. Ultimately all of these come back to more people living worthwhile lives. But we simply don’t need to decide today whether a spacefaring future would be best — what matters is that future people have the opportunity to decide.

Utilitarianism and the Total View

This leads right into the discussion about population ethics. This is a thorny and complex topic that is likely too esoteric to lay out here. Torres uses a thought experiment about comparing sizes of imaginary populations to try to discredit a view within population ethics called the “total view.” The example does not seem to represent a choice that we will ever actually face in the real world. Philosophers have debated population ethics for decades, and it is generally agreed that all views have highly counterintuitive conclusions. It is therefore flawed to say "look, this view implies something that seems weird, therefore it must be wrong,” since this would imply that no view is correct. The total view does not seem to face more counterintuitive implications than other views. 

Torres moves on from the total view to discussing utilitarianism more broadly (which need not be combined with the total view). Torres is wrong when he claims longtermism is “utilitarianism repackaged”: one does not have to be a total utilitarian, or any kind of utilitarian, to be a longtermist [21]. I am not. Nor is Bostrom [22]. Nor are many others. That being said, there are a significant number of utilitarian longtermists, and Torres does not do this ethical view justice with his description.

Utilitarianism aims to improve the lives of all sentient beings, giving equal moral consideration to the wellbeing of all individuals. Utilitarianism’s focus on impartiality about whom to help is radical in a world where most people feel tight allegiances to those near them, such as their own race or nationality. Many people find a philosophy that instructs one to impartially consider the value of all beings — that someone in a distant country could be worth as much as you or someone near and dear to you — sensible and compelling. It seems natural to extend this to future generations. Utilitarianism can also point to its track record: Torres gives short shrift to a philosophy that was far ahead of the curve on topics like abolitionism, animal rights, and women’s rights.

Utilitarianism comes in many varieties, so it is hard to speak for all of them, but what Torres seems to miss is that utilitarianism is not some abstract aim, but is ultimately grounded in the welfare of conscious beings, because someone must be around to be having the positive experiences that this view values. In that respect, utilitarianism is humanistic. It underlines that there is no abstract sense of value outside of the lives of living, experiencing beings.

Specific blueprints or inspiration? 

Torres primarily uses two pieces to justify his very specific vision of what longtermism aims at: one of the final chapters of Toby Ord’s The Precipice and Bostrom’s Letter from Utopia. These pieces are meant to provide two speculative, inspirational pictures of what the future might hold, not to lay out precise guidelines for what realizing our potential involves. They are not meant to predict the future or instruct it. Torres misses the huge emphasis within Ord’s work on “the long reflection” [23], essentially some window of time in which humanity can, as a whole, reflect on what it wants to be and to achieve. Obviously, the long reflection is idealized and may never happen for a range of reasons, but the fact that Ord presents it as desirable reveals something key about his (and many other longtermists’) view of humanity’s potential: we do not know exactly what realizing it looks like; future humanity has to work that out for itself. Torres pulls out two quotes from Bostrom and Ord to try to prove that they view transhumanism as inherently part of realizing humanity’s potential, but the quotes don’t say that. Instead, they say that what has to be avoided is permanently taking away this choice from future humanity. Bostrom wants to avoid “permanent foreclosure of any possibility of this kind of transformative change” and Ord wants to avoid “forever preserving humanity as it is now.” Both focus on the “permanent” and the “forever” — they want to avoid lock-in, which is actually radically empowering to future generations: not forcing them to become transhuman, but fighting to preserve their choice to become what they want.

Torres concludes his argument on potential as transhumanism, space expansionism, and total utilitarianism by saying “[t]hat is what our ‘vast and glorious’ potential consists of: massive numbers of technologically enhanced digital posthumans inside huge computer simulations spread throughout our future light cone.” Yes, some longtermists might support these as elements of fulfilling our potential. Others might view “fulfilling our potential” as involving a flourishing earth-based humanity that stays embodied but lives out the next billion years on a more peaceful, prosperous, equal, and healthy planet. Some might reject the idea of becoming “posthuman” through enhancement, some might reject a highly technological future, some might reject the idea that existing in a virtual or simulated environment could be as good as existing in the real one. These are real questions that future generations will need to work out. And what is most clear is that Torres is wrong to present this as the sole or consensus view on what longtermism aims for — longtermism aims to avoid extinction or lock-in, and to give future generations the chance to work out what they want.  

Technological Development 

As Torres concludes his Aeon piece, he turns towards longtermism’s relationship to technological development. He claims it is “self-defeating,” essentially that longtermism’s support for advanced technological development will bring about existential risks. Torres writes in his concluding paragraph that “technology is far more likely to cause our extinction before this distant future event than to save us from it.” This is a claim that many longtermists would actually agree with — as demonstrated by their recognition that the largest existential risks are anthropogenic in nature, particularly from advanced technologies like AI and biotechnology [24]. 

What Torres misses in this section is that longtermists are not acting alone in the world. However acutely aware of technology’s risks they are, longtermists cannot unilaterally decide to stop technological progress, despite Torres implying that they can. “Steering” technological progress is instead a core strategy of longtermism. Longtermist researchers typically do not advocate for the wholesale pause or reversal of technological progress because, short of a disaster, that seems deeply implausible — as mentioned above, longtermism pays attention to the “tractability” of various problems and strategies. Given the choice, many longtermists would likely slow technological development if they could.

Also, advocating for a pause or reversal would likely lose longtermists the opportunity to do something which is possible — directing technological development in ways that are better and safer than it would have gone otherwise. Longtermist researchers frequently work with the safety teams at leading AI labs like DeepMind and OpenAI. Longtermism has originated research on “differential technological development”: how to develop safe and defensive technologies faster than offensive and dangerous ones, how to slow or speed the development of various technologies, and what order technologies should arrive in. In the biosecurity realm, longtermist researchers are working to improve lab safety and to prevent “gain-of-function” research in biology. These are the hallmarks of a philosophy and movement that take the risks of technology very seriously, and are working urgently to mitigate them.

Longtermists also see the benefits of technology for mitigating certain other risks and for generally improving standards of living. Take the fight against climate change: developing better technology for clean energy is a core tool in our arsenal.
 

Conclusion

Torres opens his Aeon piece by listing risks like pandemics, nuclear war, nanotechnology, geoengineering, and artificial intelligence. He believes that fears about extinction are based on “robust scientific conclusions.” He seems to think extinction would be very bad and he believes “you should care about the long term.” But he claims, vehemently, that he is not a longtermist. I would argue that Torres is a longtermist. He pays attention to the value of the future and he connects reaching it to overcoming certain large-scale risks. That being said, I don’t care what Torres calls himself. Longtermism is not an identity and certainly not an ideology — it is a shared project animated by concern for the long-run future, which can and should contain many conflicting viewpoints.

What is important is that we work to set the world on a positive trajectory and to reduce existential risks, both to protect the present generation from harm and to ensure that there will be future generations living worthwhile lives. We should aim to leave a better world for our descendants stretching far, far into the future. That future might be embodied and limited to this planet. It might be populated by barely recognizable beings scattered throughout the galaxy. I think that Torres and I can agree that that is for future generations to decide. Let’s ensure they have the chance to.

……………..
 

Although this piece is long, it used to be much longer. If there is some point I failed to address, please reach out, since I may already have written something on it. For example, I can share sections on longtermism’s relationship to 1) Nature, 2) Surveillance and Preemptive War, and 3) Seeking Influence, which didn’t make it into the final draft in an attempt to be concise.

 

End Notes 

[1]  Will MacAskill, "What We Owe the Future."

[2]  Barnosky et al. 2011

[3] Wolf & Toon 2015

[4]  Will MacAskill, “What We Owe the Future.”

[5]  From John Adams’s Preface to his A Defence of the Constitutions of Government of the United States. In: The Works of John Adams, Second President of the United States: with a Life of the Author, Notes and Illustrations, by his Grandson Charles Francis Adams (Boston: Little, Brown and Co., 1856). 10 volumes. Vol. 4. P. 298 https://oll.libertyfund.org/title/adams-the-works-of-john-adams-vol-4#Adams_1431-04_948

[6] Toby Ord, “The Precipice.”

[7] https://www.nickbostrom.com/ethics/infinite.html, https://www.nickbostrom.com/papers/pascal.pdf , https://www.nickbostrom.com/ethics/dignity-enhancement.pdf

[8]  E.g. Hsiang, Solomon M., Marshall Burke, and Edward Miguel. "Quantifying the influence of climate on human conflict." Science 341.6151 (2013).

[9]  Dell, Melissa, Benjamin F. Jones, and Benjamin A. Olken. Climate change and economic growth: Evidence from the last half century. No. w14132. National Bureau of Economic Research, 2008.

[10]   “The breakup of the stratocumulus clouds is more rapid than it would be in nature because of the unrealistically small thermal inertia of the underlying slab ocean” Tapio Schneider, Colleen M. Kaul, and Kyle G. Pressel, ‘Possible Climate Transitions from Breakup of Stratocumulus Decks under Greenhouse Warming’, Nature Geoscience 12, no. 3 (March 2019): 163–67

[11] https://www.existential-risk.org/concept.html#:~:text=As%20noted%2C%20an%20existential%20risk,the%20entire%20future%20of%20humankind.

[12]  CO2 stays in the atmosphere for hundreds of thousands of years! That certainly seems to qualify as long-term effects that we should be wary of saddling our descendants with.

[13]  While I use “people” here, we should also consider animals. Any sentient being seems worth our moral consideration. 

[14]  Some other examples which might make this intuitive: If we learned that an asteroid was coming in 200 years to destroy us, should we ignore that because the people involved are merely potential? When we store nuclear waste, should we only worry about storing it safely for several generations, or take the additional resources to store it until it is no longer dangerous?

[15]  This thought experiment is taken from Derek Parfit, Reasons and Persons. https://wmpeople.wm.edu/asset/index/cvance/videos

[16]  If we look purely at EAs, we get a number in the several thousands, although there are likely more non-EAs also working on these problems. https://forum.effectivealtruism.org/posts/zQRHAFKGWcXXicYMo/ea-survey-2019-series-how-many-people-are-there-in-the-ea

[17] Some examples: https://www.fhi.ox.ac.uk/the-effectiveness-and-perceived-burden-of-nonpharmaceutical-interventions-against-covid-19-transmission-a-modelling-study-with-41-countries/ , https://www.nature.com/articles/d41586-021-02111-7

[18]  Not a paper, but a quote from someone seen as a leading longtermist researcher highlighting fanaticism as a problem https://twitter.com/anderssandberg/status/1452561591304605698

[19]  Toby Ord, The Precipice, pages 65-81, and “The Case for Strong Longtermism”.

[20]  https://link.springer.com/article/10.1007/s42048-018-0002-3, also https://plato.stanford.edu/entries/justice-intergenerational/#CurrInteJust

[21]   See: The Precipice, pages 65-81, The Case for Strong Longtermism, https://globalprioritiesinstitute.org/wp-content/uploads/Stefan-Riedener_Existential-risks-from-a-Thomist-Christian-perspective.pdf

[22]  https://www.nickbostrom.com/ethics/infinite.html , https://www.nickbostrom.com/papers/pascal.pdf , https://www.nickbostrom.com/ethics/dignity-enhancement.pdf

[23]  Toby Ord, "The Precipice," 297-298.

[24] https://forum.effectivealtruism.org/tag/anthropogenic-existential-risk and The Precipice 

Comments (31)

Hey! I've only skimmed through this piece, but I'd like to recommend that you adapt it for a general audience and submit it to an online publication, especially Current Affairs or Aeon (the websites on which Torres's pieces were published). This would be beneficial for two reasons:

  1. It would get in front of non-EAs who might read the Torres piece, since they're likely to read websites for broad audiences like Aeon but unlikely to read a post on the EA Forum.
  2. Works published in established venues with editorial control over their content are more likely to be treated as reliable sources on Wikipedia than self-published sources like the EA Forum. This means that the Torres essays are more likely to be cited in a Wikipedia article (for example, the Aeon one is already cited in the longtermism article) than the many rebuttals to them written by thinkers in the EA community.

Not sure how much weight to give this, but perhaps it would be better to have a straightforwardly pro-longtermism piece in one of these outlets, rather than a response to Torres. If edited for Aeon or Current Affairs as a response piece, this would need to offer detailed exposition of Torres's arguments, and might just result in getting more people to read the original.

I don't know if either outlet publishes a "letter to the editor" style post. If they did, that might be a better short format which would mostly reach readers of Torres's article, rather than a full article which would likely just expand the reach of the original. 

Let me know if I misunderstood something or am reading your post uncharitably, but to me this really looks like an attempt at hiding away opinions perceived as harmful. I find this line of thinking extremely worrying.

EA should never attempt to hide criticism of itself. I am very much a longtermist and did not think highly of Torres article, but if people read it and think poorly of longtermism then that's fine.

Thinking that hiding criticism can be justifiable because of the enormous stakes, is the exact logic Torres is criticising in the first place!

Framing my proposal as "hiding criticism" is perhaps unduly emotive here. I think that it makes sense to be careful and purposive about what types of content you broadcast to a wider audience which is unlikely to do further research or read particularly critically. I agree with Aaron's comment further down the page where he says that the effect of Torres's piece is to make people feel "icky" about longtermism. Therefore to achieve the ends which I take as implicit in evelynciara's comment (counteract some of the effects of Torres's article and produce a piece of work which could be referenced on wikipedia), I think it makes more sense to just aim to write a fairer piece about longtermism, than to draw more attention to Torres's piece. I'm all for criticism of longtermism and I think such an article would be incomplete without including some, I just don't think Torres's piece offers usable criticism. 

Makes sense, I agree with that sentiment.

But if his text is so bad, why should anyone feel "icky" about longtermism because of it? Although I'm by no means a stranger to longtermism (I'm here!), I'm really not too much into EA, and I'm not a philosopher nor have I ever studied it, so my theoretical knowledge of the topic is limited — and when I read Torres' texts it is clear to me that they don't really hold up.

When I'm interested in a topic where I'm not really qualified to know whether what I read or hear holds up or is one-sided, I tend to search for criticisms of it to check. What I've read from Torres, or linked by him, about longtermism actually makes me think that it seems to be difficult to fairly criticise longtermism.

I think reading Torres' texts may well turn people away if they don't really know much else about the topic, but "getting more people to read the original [Torres' paper]" after having read a good piece shouldn't be a problem.

And coming back to my starting question: if a person who has good information sources feels "icky" about a topic because of a bad piece of information, maybe it is okay that they are not too involved in the topic, no?

Commenting from five months into the future, when this is topically relevant:

I disagree. I read Torres' arguments as not merely flawed, but as attempts to link longtermism to the far right in US culture wars. In such environments people are inclined to be uncharitable, and to spread the word to others who will also be uncharitable. With enough bad press it's possible to get a Common Knowledge effect, where even people who are inclined to be open-minded are worried about being seen doing so. That could be bad for recruiting, funding, cooperative endeavors, & mental health.

Now, there are only so many overpoliticized social media bubbles capable of such a wide effect, and they don't find new targets every day. So the chances of EA becoming a political bogeyman are low, even if Torres is actively attempting this. But I think bringing up his specific insinuations to a new audience invites more of this risk than is worth it.

It was a long time ago now, but I don't remember having the feeling that he linked longtermism to the far right in that text. I don't know about other places.

Thanks for writing this. I think the main points Torres gets wrong are 

a) his insinuation that longtermists do or would actually support serious active harms (even on a large scale) to prevent extinction, rather than just entertain the possibility theoretically, and 

b) the characterization of longtermists as pro-risky tech. Longtermists would generally prefer such tech to develop more slowly, to have more time to work on safety (even if they want a given technology developed eventually, they want it developed safely).


Some comments/feedback on specific points:


Torres misses something here and continues to do so throughout the rest of the piece — potential is not some abstract notion; it refers to the billions of people [13] who now do not get to exist. Imagine if everyone on earth discovered they were sterile. There would obviously be suffering from the fact that many living people wanted to have children and now realize they cannot have them, but there would also be some additional badness from the fact that no one would be around to experience the good things about living.

I don't agree that this loss of future people involves additional badness at all, and I would suggest rephrasing so as not to present this as if it were definitely the case rather than just your own view. There are also longtermists with person-affecting or otherwise asymmetric views, like negative utilitarianism.


The example does not seem to represent a choice that we will ever actually face in the real world. Philosophers have debated population ethics for decades, and it is generally agreed that all views have highly counterintuitive conclusions. It is therefore flawed to say "look, this view implies something that seems weird, therefore it must be wrong," since this would imply that no view is correct. The total view does not seem to face more counterintuitive implications than other views.

For what it's worth, different people find different things counterintuitive. Some people don't find the repugnant conclusion counterintuitive at all.
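For readers unfamiliar with the repugnant conclusion, here is a rough numerical sketch of the tradeoff at issue under the total view; the population sizes and welfare levels below are purely illustrative assumptions of mine, not figures from Torres or the original post:

```latex
% Under the total view, the value of an outcome is the sum of everyone's welfare.
% Population A: 10^{10} people, each at very high welfare (100, on some scale).
% Population Z: 10^{15} people, each with lives barely worth living (0.01).
\[
V(A) = 10^{10} \cdot 100 = 10^{12},
\qquad
V(Z) = 10^{15} \cdot 0.01 = 10^{13},
\qquad
V(Z) > V(A).
\]
% So the total view ranks the huge, barely-worth-living population Z above A,
% which many (though, as noted above, not all) find counterintuitive.
```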


Torres gives short shrift to a philosophy that was far ahead of the curve on topics like abolitionism, animal rights, and women’s rights

I think it's the impartiality of utilitarianism, not the total view, that's responsible for these things. Torres is explicit in one of the articles that he's referring to total utilitarianism. That being said, these early utilitarians were total utilitarians, AFAIK.

The total view could also imply the logic of the larder (it's good to farm and kill animals, as long as they're happy) or favour restrictions on contraception and abortion, at least compared to other utilitarian or consequentialist views (variable value views, critical level utilitarianism, person-affecting views, negative utilitarianism).
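To make the contrast between these axiologies concrete, here is a minimal formal sketch; the notation and the critical-level parameter c are mine, purely for illustration:

```latex
% Let P be the population in an outcome and w_i the lifetime welfare of person i.
% Total view: adding any life with w_i > 0 makes the outcome better.
\[
V_{\text{total}} = \sum_{i \in P} w_i
\]
% Critical-level view: adding a life adds value only if w_i exceeds some
% threshold c > 0; lives with 0 < w_i < c make the outcome worse.
\[
V_{\text{critical}} = \sum_{i \in P} (w_i - c), \qquad c > 0
\]
% (Strict) person-affecting views: adding a person is in itself neutral;
% only effects on people who exist either way count. This is why the views
% can diverge on questions like contraception and the logic of the larder.
```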


Torres moves on from the total view to discussing utilitarianism more broadly (which does not have to be combined with the total view). Torres is wrong when he claims longtermism is “utilitarianism repackaged.” One does not have to be a total utilitarian, or any kind of utilitarian, to be a longtermist [21]. Torres argues that all longtermists are utilitarians, which is simply false. I am not. Nor is Bostrom [22]. Nor are many others. That being said, there are a significant number of utilitarian longtermists, and Torres does not do this ethical view justice with his description.

I think he is using "utilitarianism repackaged" informally here, to highlight what EA (not just longtermism) and utilitarianism have in common, based on the linked article. What they have in common doesn't imply the bad things he claims longtermism does in the article (nor does total utilitarianism necessarily, in practice), which I think is the best response here. He doesn't literally mean that all longtermists must be utilitarians. He wrote "the EA movement is deeply utilitarian, at least in practice". He also doesn't argue "that all longtermists are utilitarians", and you've done exactly what he anticipated, which is to insist that some longtermists aren't utilitarians. This is what Torres wrote:

This leads to the third component: total utilitarianism, which I will refer to as ‘utilitarianism’ for short. Although some longtermists insist that they aren’t utilitarians, we should right away note that this is mostly a smoke-and-mirrors act to deflect criticisms that longtermism – and, more generally, the effective altruism (EA) movement from which it emerged – is nothing more than utilitarianism repackaged. The fact is that the EA movement is deeply utilitarian, at least in practice, and indeed, before it decided upon a name, the movement’s early members, including Ord, seriously considered calling it the ‘effective utilitarian community’.

To be clear, he is factually incorrect about that claim. I never seriously considered calling it that.

One of the major points of effective altruism in my mind was that it isn't only utilitarians who should care about doing more good rather than less, and not only consequentialists either. All theories that agree saving 10 lives is substantially more important than saving 1 life should care about effectiveness in our moral actions and could benefit from quantifying such things. I thought it was a great shame that effectiveness was usually only discussed re utilitarianism and I wanted to change that.

Hi! Thank you so much for this article. I have only skimmed it, but it appears substantive, interesting, and carefully done. 

Please don't take my question the wrong way, but may I ask what the motivation is for writing this article? Naively, this looks very detailed (a 43-minute read according to the EAF, and you mention that you had to cut some sections) and possibly the most expansive public piece of research/communication you've done in effective altruism to date. While I applaud and actively encourage critiques of effective altruism and related concepts, as well as responses to them, my own independent impression is that the Torres pieces were somewhat sloppily argued. And while I have no direct representative evidence like survey results, my best guess based on public off-hand remarks and online private communications is that most other longtermist researchers broadly agree with me. So I'm interested in your reasoning for prioritizing this article over addressing other critiques, generating your own critiques of longtermism, finding other ways to summarize/introduce longtermism, or other ways to spend researcher time and effort. 

I want to reiterate a general feeling of support and appreciation of someone a) taking critiques seriously and b) being willing to exhaustively analyze topics. I do think those are commendable attributes, and my brief skim of your article suggests that your responses are well-done.

I note the rider says it's not directed at regular forum users/people necessarily familiar with longtermism. 

The Torres critiques are getting attention in non-longtermist contexts, especially with people not very familiar with the source material being critiqued. I expect to find myself linking to this post regularly when discussing with academic colleagues who have come across the Torres critiques; several sections (the "missing context/selective quotations" section in particular) effectively demonstrate places where the critiques do not represent the source material entirely fairly.

I totally understand your concerns. FWIW as a former group organizer, as the Torres pieces were coming out, I had a lot of members express serious concerns about longtermism as a result of the articles and ask for my thoughts about them, so I appreciate having something to point them to that (in my opinion) summarizes the counterpoints well.


Hello Linch! Sean and Marisa capture the reasons well. I have had several people outside EA/LT ask about the Torres essays, and I didn't have a great response to point them to, so this response is written for them. I also posted it here in case others have a similar use for it. 

Thanks for the response, from you and others! I think I had a large illusion of transparency about how obviously wrong Torres' critiques are to common-sense reason and morality. Naively, I'd have thought that they'd come across as clearly dumb to target audiences the way (e.g.) the 2013 Charity Navigator critique of EA did. But if you and others think that many people who could potentially do useful work in EA (e.g., promising members of local groups, or academic collaborators at Cambridge) would otherwise have read Torres' article and been persuaded, then I agree that pointing out the obvious ways in which he misrepresents longtermism makes sense and is a good use of time!

I still vaguely have the gut feeling of "don't feed the energy creatures": it's unwise to dedicate a lot of time to exhaustively engaging with someone arguing in bad faith. So my first pass is that 1-2k words spent on quickly dissecting the biggest misrepresentations should be enough. But I think this feeling isn't very data- or reason-driven, and I don't have a principled policy for how applicable that feeling is in this case.

I don't think people being "persuaded" by Torres is the primary concern — rather, I think Torres could make people feel vaguely icky or concerned about longtermism, even if they still basically "believe in it", in a way that makes them less likely to get fully engaged / more likely to bounce off toward other EA topics. Even if those other topics are also very valuable, it seems good to have a reaction piece like this to counter the "ick" reactions and give people an easy way to see the full set of relevant arguments before they bounce.

For what it's worth, I think the basic critique of total utilitarianism, that 'it's just obviously more important to save a life than to bring a new one into existence', is actually very strong. I think insofar as longtermist folk don't see that, it's probably a) because it's so obvious that they are bored with it now and b) because Torres's tone is so obnoxious and plausibly motivated by personal animosity. But neither of those is a good reason to reject the objection!

First, longtermism is not committed to total utilitarianism.

Second, population ethics is notoriously difficult, and all views have extremely counterintuitive implications. To assess the plausibility of total utilitarianism (to which longtermism is not committed), you need to do the hard work of engaging with the relevant literature and arguments. Epithets like "genocidal" and "white supremacist" are not a good substitute for that engagement. [EDIT: I hope it was clear that by "you", I didn't mean "you, Dr Mathers".]

If you think you have valid objections to longtermism, I would be interested in reading about them. But I'd encourage you to write a separate post or "shortform" comment, rather than continuing the discussion here, unless they are directly related to the content of the articles to which Avital was responding. 

First, longtermism is not committed to total utilitarianism.

I think this is not a very good way to dismiss the objection, given the views actual longtermists hold and how longtermism looks in practice today (a point Torres makes). I expect that most longtermists prioritize reducing extinction risks, and the most popular defences I'm aware of in the community appeal to lost potential: the terminal value from those who would otherwise exist, whether or not it's aggregated linearly as in the total view. If someone prioritizes reducing extinction risk primarily because of the deaths in an extinction event, then they aren't doing it primarily because of a longtermist view; they just happen to share a priority. That pretty much leaves, as the remaining longtermist defences of extinction risk reduction, a) our descendants' potential to help others (e.g. cosmic rescue missions), and b) replacing other populations who would be worse off. But then it's not obvious that reducing extinction risks is the best way to accomplish these things, especially without doing more harm than good overall, given the possibility of s-risks, incidental or agential (especially via conflict).

The critique 'it's just obviously more important to save a life than to bring a new one into existence' applies to extinction risk-focused longtermism pretty generally, I think, with some exceptions. Of course, the critique doesn't apply to all longtermist views, all extinction risk-focused views, or even necessarily the views of longtermists who happen to focus on reducing extinction risk (or work that happens to reduce extinction risk).


Second, population ethics is notoriously difficult, and all views have extremely counterintuitive implications. To assess the plausibility of total utilitarianism (to which longtermism is not committed), you need to do the hard work of engaging with the relevant literature and arguments. Epithets like "genocidal" and "white supremacist" are not a good substitute for that engagement. [EDIT: I hope it was clear that by "you", I didn't mean "you, Dr Mathers".]

This is fair, although Torres did in fact engage with the literature a little; he did so only to support his criticism of longtermism and total utilitarianism, and he didn't engage with criticisms of other views, so it's not at all a fair representation of the debate.


If you think you have valid objections to longtermism, I would be interested in reading about them. But I'd encourage you to write a separate post or "shortform" comment, rather than continuing the discussion here, unless they are directly related to the content of the articles to which Avital was responding.

I think his comment is directly related to the content of the articles and the OP here, which discuss total utilitarianism, and the critique he's raising is one of the main critiques in one of Torres' pieces. I think this is a good place for this kind of discussion, although a separate post might be good, too, to get into the weeds.

[I made some edits to make my comment clearer.]

I think this is not a very good way to dismiss the objection, given the views actual longtermists hold and how longtermism looks in practice today (a point Torres makes).

I wouldn't characterise my observation that longtermism isn't committed to total utilitarianism as dismissing the objection. I was simply pointing out something that I believe is both true and important, especially in the context of a thread prompted by a series of articles in which the author assumes such a commitment. The remainder of my comment explained why the objection was weak even ignoring this consideration.

Here are two nontrivial ways in which you may end up accepting longtermism even if you reject the total view. First, if you are a "wide" person-affecting theorist, and you think it's possible to make a nonrandom difference to the welfare of future sentient beings, whom you expect to exist for a sufficiently long time regardless of your actions. (Note that this is true for suffering-focused views as well as for hedonistic views, which is another reason for being clear about the lack of a necessary connection between longtermism and total utilitarianism, since utilitarianism is hedonistic in its canonical form.) Second, if you subscribe to a theory of normative uncertainty on which the reasons provided by the total view end up dominating your all-things-considered normative requirements, even if you assign significant credence to views other than the total view.
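As a toy illustration of that second route (the credences and stakes below are assumptions of mine, using a simple "maximize expected choiceworthiness" rule, which itself presupposes that the views' units are intertheoretically comparable):

```latex
% Suppose you assign credence 0.1 to the total view and 0.9 to a
% person-affecting view. If the total view values preventing extinction at
% roughly 10^{15} units (future lives included), while the person-affecting
% view values it at roughly 10^{6} units (present lives saved only), then:
\[
\mathbb{E}[V] = 0.1 \cdot 10^{15} + 0.9 \cdot 10^{6} \approx 10^{14}.
\]
% The total view's stakes dominate the expectation even at low credence,
% which is the sense in which its reasons can "end up dominating".
```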

Separately, the sociological fact (if it is a fact) that most people who defend longtermism are total utilitarians seems largely irrelevant for assessing the plausibility of longtermism: this depends on the strength of the arguments for that view.

This is fair, although Torres did in fact engage with the literature a little; he did so only to support his criticism of longtermism and total utilitarianism, and he didn't engage with criticisms of other views, so it's not at all a fair representation of the debate.

Yeah, by "engage with the literature" I meant doing so in a way that does reasonable justice to it. A climate change skeptic does not "engage with the literature", in the relevant sense, by cherry-picking a few studies in climate science here and there.

I think his comment is directly related to the content of the articles and the OP here, which discuss total utilitarianism, and the critique he's raising is one of the main critiques in one of Torres' pieces. I think this is a good place for this kind of discussion, although a separate post might be good, too, to get into the weeds.

I suggested using a separate thread because I expect that any criticism of longtermism posted here would be met with a certain degree of unwarranted hostility, as it may be associated with the articles to which Avital was responding. Although I am myself a longtermist, I would like to see good criticisms of it, discussed in a calm, nonadversarial manner, and I think this is less likely to happen in this thread.

a series of articles in which the author assumes such a commitment.


As I mentioned in a top-level comment on this post, I don't think this is actually true. He never claims so outright. The Current Affairs piece doesn't use the word "utilitarian" at all, and just refers to totalist arguments made for longtermism, which are some of the most common ones. His wording from the Aeon piece (note the phrase "closely associated") also suggests otherwise:

To understand the argument, let’s first unpack what longtermists mean by our ‘longterm potential’, an expression that I have so far used without defining. We can analyse this concept into three main components: transhumanism, space expansionism, and a moral view closely associated with what philosophers call ‘total utilitarianism’.

I don't think he would have written "closely associated" if he thought longtermism and longtermists were necessarily committed to total utilitarianism.

This leads to the third component: total utilitarianism, which I will refer to as ‘utilitarianism’ for short. Although some longtermists insist that they aren’t utilitarians, we should right away note that this is mostly a smoke-and-mirrors act to deflect criticisms that longtermism – and, more generally, the effective altruism (EA) movement from which it emerged – is nothing more than utilitarianism repackaged. The fact is that the EA movement is deeply utilitarian, at least in practice, and indeed, before it decided upon a name, the movement’s early members, including Ord, seriously considered calling it the ‘effective utilitarian community’.

The "utilitarianism repackaged" article explicitly distinguishes EA and utilitarianism, but points out what they share, and argues that criticisms of EA based on criticisms of utilitarianism are therefore fair because of what they share. Similarly, Dr. David Mathers never actually claimed longtermism is committed total utilitarian, he only extended a critique of total utilitarianism to longtermism, which responds to one of the main arguments made for longtermism.

Longtermism is also not just the ethical view that some of the primary determinants of what we should do are the consequences on the far future (or similar). It's defended in certain ways (often totalist arguments), it has an associated community and practice, and identifying as a longtermist means associating with those, too, and possibly promoting them. The community and practice are shaped largely by totalist (or similar) views. Extending critiques of total utilitarianism to longtermism seems fair to me, even if they don't generalize to all longtermist views.

As I mentioned in a top-level comment on this post, I don't think this is actually true. He never claims so outright.

In one of the articles, he claims that longtermism can be "analys[ed]" (i.e. logically entails) "a moral view closely associated with what philosophers call 'total utilitarianism'." And in his reply to Avital, he writes that "an integral component" of the type of longtermism that he criticized in that article is "total impersonalist utilitarianism". So it looks like the only role the "closely" qualifier plays is to note that the type of total utilitarianism to which he believes longtermism is committed is impersonalist in nature. But the claim is false: longtermism is not committed to total impersonalist utilitarianism, even if one restricts the scope of "longtermism" to the view Torres criticizes in the article, which includes the form of longtermism embraced by MacAskill and Greaves. (I also note that in other writings he drops the qualifier altogether.)

Dr. David Mathers never actually claimed longtermism is committed to total utilitarianism; he only extended a critique of total utilitarianism to longtermism, which responds to one of the main arguments made for longtermism

I agree (and never claimed otherwise). 

Extending critiques of total utilitarianism to longtermism seems fair to me, even if they don't generalize to all longtermist views.

I'm not sure what exactly you mean by "extending". If you mean something like, "many longtermist folk accept longtermism because they accept total utilitarianism, so raising objections to total utilitarianism in the context of discussions about longtermism can persuade these people to abandon longtermism", then I agree, but only insofar as those who raise the objections are clear that they are directly objecting to total utilitarianism. Otherwise, this is apt to create the false impression that the objections apply to longtermism as such. In my reply to David, I noted that longtermism is not committed to total utilitarianism precisely to correct for that potential misimpression.

Ok, I don't find this particularly useful to discuss further, but I think your interpretations of his words are pretty uncharitable here. He could have been clearer/more explicit, and this could prevent misinterpretation, including by the wider audience of people reading his essays.

EDIT: Having read more of his post on LW, it does often seem like either he thinks longtermists are committed to assigning positive value to the creation of new people, or that this is just the kind of longtermism he takes issue with, and it's not always clear which, although I would still lean towards the second interpretation, given everything he wrote.

In one of the articles, he claims that longtermism can be "analys[ed]" (i.e. logically entails) "a moral view closely associated with what philosophers call 'total utilitarianism'."

This seems overly literal, and conflicts with other things he wrote (which I've quoted previously, and also in the new post on LW).

" And in his reply to Avital, he writes that "an integral component" of the type of longtermism that he criticized in that article is "total impersonalist utilitarianism".

He wrote:

As for the qualifier, I later make the case that an integral component of the sort of longtermism that arises from Bostrom (et al.)’s view is the deeply alienating moral theory of total impersonalist utilitarianism.

That means he's criticizing a specific sort of longtermism, not the minimal abstract longtermist view, so this does not mean he's claiming longtermism is committed to total utilitarianism. He also wrote:

Second, it does not matter much whether Bostrom is a consequentialist; I am, once again, criticizing the positions articulated by Bostrom and others, and these positions have important similarities with forms of consequentialism like total impersonalist utilitarianism.

Again, if he thought longtermism was literally committed to consequentialism or total utilitarianism, he would have said so here, rather than speaking about specific positions and merely pointing out similarities.

He also wrote:

Indeed, I would refer to myself as a "longtermist," but not the sort that could provide reasons to nuke Germany (as in the excellent example given by Olle Häggström), reasons based on claims made by, e.g., Bostrom.

Given that he seems to have person-affecting views, this means he does not think longtermism is committed to totalism/impersonalism or similar views.


So it looks like the only role the "closely" qualifier plays is to note that the type of total utilitarianism to which he believes longtermism is committed is impersonalist in nature.

Total utilitarianism is already impersonalist, as I understand it. So to read "moral view closely associated with what philosophers call 'total utilitarianism'" as meaning "total impersonalist utilitarianism", I think you have to assume he didn't realize (or didn't think) that total utilitarianism and total impersonalist utilitarianism are the same view. My guess is that he only added "impersonalist" to emphasize the fact that the theory is impersonalist.


Phil Torres wrote a response to my piece, and as he is not currently on the forum, I offered to post a link to it. Here it is: https://www.lesswrong.com/posts/7WxH7fAq76Mvkx5YC/a-harmful-idea I am not endorsing it, but I think it is important to give people a chance to respond! If you are curious what he thought of this piece, I encourage you to read it. 

The link isn't working for me.

A wonderful piece, Avital. I am eager to write a full response after finals subside. But until then, just wanted to say that it is a fantastic piece. It gave context and responses to several major claims that I have been seeking more clarity on for quite a while. Without a doubt, it is where I'll be sending folks who are curious about defenses of longtermism from now on. Brava! 


Thank you, Coleman! Looking forward to reading it. 
