The word "longtermism" was coined in 2017 and discussed here on the Effective Altruism Forum at least as early as 2018.[1] In the intervening eight years, a few books on longtermism have been written, many papers have been published, and countless forum posts, blog posts, tweets, and podcasts have discussed the topic.

Why haven’t we seen a promising longtermist intervention yet? For clarity, longtermist interventions should meet the following criteria:

  • Promising: the intervention seems like a good idea and has strong evidence and reasoning to support it
  • Novel: it’s a new idea, proposed after the term "longtermism" was coined in 2017, and first put forward by someone associated with longtermism in explicit connection to the term "longtermism"
  • Actionable: it’s something people could realistically do now or soon
  • Genuinely longtermist: it’s something that we wouldn’t want to do anyway based on neartermist concerns

In my view, the strongest arguments pertaining to the moral value of far future lives are arguments about existential risk. However, the philosopher Nick Bostrom’s first paper on existential risk, highlighting the moral value of the far future, was published in 2002, which is 15 years before the term "longtermism" was coined. The philosopher Derek Parfit discussed the moral value of far future lives in the context of human extinction in his 1984 book Reasons and Persons.[2] So, the origin of these ideas goes back much further than 2017. Moreover, existential risk and global catastrophic risk have developed into a small field of study of their own, and were well-known topics in effective altruism before 2017. For this reason, I don’t see interventions related to existential risk (or global catastrophic risk) as novel longtermist interventions.

Many of the non-existential risk-related interventions I’ve heard about are things people have been doing in some form for a very long time. General appeals to long-term thinking, as wise as they might be, do not present a novel idea. The philosophers Will MacAskill and Toby Ord coined the term "longtermism" while working at Oxford University, which is believed to be at least 929 years old. I’ve always thought it was ironic, therefore, to present long-term thinking as novel. ("You think you just fell out of a coconut tree?")

I have seen that (at least some) longtermists acknowledge this. In What We Owe the Future, MacAskill discusses the Haudenosaunee (or Iroquois) Seventh Generation philosophy, which enjoins leaders to consider the effects of their decisions on the subsequent seven generations. MacAskill also acknowledges the Long Now Foundation, a California non-profit created in 1996 that encourages people to think about the next 10,000 years. While 10,000 years is not the usual timespan people think about, some form of long-term thinking is an ancient part of humanity.

Two proposed longtermist interventions are promoting economic growth and trying to make moral progress. These are not novel; people have been doing both for a long time. Whether these ideas are actionable is unclear, since so much effort is already allocated toward these goals. It’s also unclear whether they are genuinely longtermist. The benefits of economic growth and moral progress start paying off within one’s own lifetime, and seem to be sufficient motivation to pursue them to nearly the maximum extent.

Other projects like space exploration — besides not being a novel idea — might be promising and genuinely longtermist, but not actionable in the near term. The optimal strategy with regard to space exploration, if we’re thinking about the very long-term future, is probably procrastination. The cost of delaying a major increase in spending on space exploration for at least a few more decades, or even for the next century, is small in the grand scheme of things. There is Bostrom’s astronomical waste argument, sure — every moment we delay interstellar expansion means we can reach fewer stars in the fullness of time — but Bostrom himself, and virtually everyone else, has argued that doing well over the next century or so, and securing a path to a good future, is more important than rushing to expand into space as fast as possible. Right now, we have problems like global poverty, factory farming, pandemics, asteroids, and large volcanoes to worry about. If everything goes right, in a hundred years we’ll be in a much better position to invest much more in space travel.

Another proposal is patient philanthropy, the idea that longtermists should set up foundations that invest donations in a stock market index fund for a century or more. The idea is to allow the wealth to compound and accumulate. There are various arguments against patient philanthropy. Patient philanthropy mathematically blows up within 500 years because the wealth concentrated in the foundations grows to a politically (and morally) unacceptable level, i.e., somewhere between 40% and 100% of all of society’s wealth. Some people define longtermism as being concerned with outcomes 1,000 years in the future or more, so an intervention that can’t continue for even 500 years maybe shouldn’t count as longtermist. It’s also unclear if this should count as an intervention in its own right. Patient philanthropy doesn’t say what the money should actually be used for; it just says that the money should be put aside so it can grow and be used later, with the decision about what to use it for and when to use it deferred indefinitely.[3]
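
To make the compounding point concrete, here is a minimal illustrative sketch. The 7% real fund return, 3% economy-wide growth rate, and 0.01% starting share are numbers I’ve assumed purely for illustration, not figures from the patient philanthropy literature; the point is only that any fund whose return outpaces overall growth eventually dominates total wealth.

```python
# Rough sketch: a patient-philanthropy fund's share of total societal wealth,
# assuming its return beats economy-wide growth. All numbers are illustrative.

fund_return = 0.07      # assumed real annual return on the fund
economy_growth = 0.03   # assumed real annual growth of total societal wealth
share = 0.0001          # assumed starting share: 0.01% of all wealth

for year in range(1, 501):
    # The fund's share grows by the ratio of its growth to the economy's.
    share *= (1 + fund_return) / (1 + economy_growth)
    if share >= 1.0:
        print(f"The fund would own 100% of all wealth by year {year}.")
        break
else:
    print(f"Fund share after 500 years: {share:.1%}")
```

With these assumed numbers, the fund’s share crosses 100% around year 242, which is the sense in which the scheme "blows up" well before the 500-year mark.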

The rationale for patient philanthropy is that the money can be used to respond to future emergencies or for exceptionally good opportunities for giving. However, it isn’t clear why patient philanthropy would be the best way to make that funding available. We saw in 2020 the huge amount of resources that societies can quickly mobilize to respond to emergencies. Normal foundations that regularly disburse funds are often already on the lookout for good opportunities; we should expect that, barring catastrophe, foundations like these will exist in the future. The promise of patient philanthropy is, therefore, dubious.

This is the pattern I keep seeing. Every proposed longtermist intervention I’ve been able to find so far fails to meet at least one of the four criteria listed above (and often more than one). This wouldn’t be so bad if not for the way longtermism has been presented and promoted. We have been told that longtermism is a bracing new idea of great moral importance, one in light of which the effective altruist movement, philanthropy, and possibly much else besides should change course. I think it’s a wonderful thing to generate creative or provocative ideas, but the declaration that an idea is morally and practically important should not get ahead of producing some solid advice, motivated by the new idea, that is novel and actionable.

Occasionally, I’ll see someone in the wider world mention longtermism as a radical, unsettling idea. It typically seems like they’ve confused longtermism with another idea, like transhumanism. (In fairness, I’ve also seen people within the effective altruism community conflate these ideas.) As I see it, the problem with longtermism is not that it’s radical and unsettling, but that it’s boring, disappointing, overly philosophical, and insufficiently practical. If longtermism is such a radical, important idea, why haven’t we seen a promising longtermist intervention yet?

  1. ^

    Edited on December 18, 2025 at 7:10 PM Eastern to add:

    Correction, sorry. I originally said the term longtermism was first used on the EA Forum in 2017. The post I was thinking of was actually from 2019 and said the term had been coined in 2017. I changed this sentence to reflect the correct information.

    Apologies for the error.

  2. ^

    Nick Bostrom cites Derek Parfit's argument in Reasons and Persons in his 2013 TEDxOxford talk on existential risk.

  3. ^

    If I put money in a bank account now and earmark it for "longtermist interventions", does that, in itself — me putting the money in a bank account — count as a longtermist intervention? Or do I need to come up with a more concrete idea first?

Comments

I don't think longtermism necessarily needs new priorities to be valuable if it offers a better perspective on existing ones (although I don't think it does this well either). 

Understanding what the far future might need is very difficult. If you'd asked someone 1000 years ago what they should focus on to benefit us, you'd get answers largely irrelevant to our needs today.[1] If you asked someone a little over 100 years ago, their ideas might seem more intelligible, and one guy was even perceptive enough to imagine nuclear weapons, although his optimism about what became known as mutually assured destruction setting the world free looks very wrong now. And people 100 years ago who did boring things focused on the current world did more for us than people dreaming of post-work utopias.

To that extent, the focus on x-risk seems quite reasonable: still existing is something we can actually reasonably believe will be valued by humans in a million years' time.[2] Of course, there are also over 8 billion reasons alive today to try to avoid human extinction (and most non-longtermists consider at least as far as their children), but longtermism makes arguments for it being more important than we think. This logically leads to a willingness to allocate more money to x-risk causes, and to consider more unconventional and highly unlikely approaches to x-risk. This is a consideration, but in practice I'm not sure that it leads to better outcomes: some of the approaches to x-risk seeking funding make directionally different assumptions about whether more or less AGI is crucial to survival: they can't both be right, and the 'very long shot' proposals that only start to make sense if we introduce fantastically large numbers of humans to the benefit side of the equation look suspiciously like Pascal's muggings.[3]

Plus, people making longtermist arguments typically seem to attach fairly high probabilities, in their own estimations, to stuff like AGI that they're working on, which if true would make their work entirely justifiable even focusing only on humans living today.

 

(A moot point, but I'd have also thought that although the word 'longtermist' wasn't coined until much later, Bostrom, and to a lesser extent Parfit, fit the description of longtermist philosophy. Of course, they also weren't the first people to write about x-risk.)

  1. ^

    I suspect the main answers would be to do with religious prophecies or strengthening their no-longer-extant empire/state

  2. ^

    Notwithstanding fringe possibilities, like the possibility that humans in a million years might be better off not existing, or that, for impartial total utilitarians, humanity might be displacing something capable of experiencing much higher aggregate welfare.

  3. ^

    Not just superficially, in that someone is asking us to suspend scepticism by invoking a huge reward, but also in that the huge rewards themselves make sense only if you believe very specific claims about x-risk over the long-term future being highly concentrated in the present (very large numbers of future humans in expectation, or x-risk being nontrivial for any extended period of time, might seem like superficially uncontroversial possibilities, but they're actually strongly in conflict with each other).

One of the more excellent comments I've ever read on the EA Forum. Perceptive and nimbly expressed. Thank you.

people 100 years ago that did boring things focused on the current world did more for us than people dreaming of post-work utopias.

Very well said!

To that extent, the focus on x-risk seems quite reasonable: still existing is something we actually can reasonably believe will be valued by humans in a million years time

I totally agree. To be clear, I support mitigation of existential risks, global catastrophic risks, and all sorts of low-probability, high-impact risks, including those on the scale of another pandemic like covid-19 or large volcanic eruptions. I love NASA's NEO Surveyor.

I think, though, we may need to draw a line between acts of nature — asteroids, volcanoes, and natural pandemics — and acts of humankind — nuclear war, bioterror, etc.

The difference between nature and humankind is that nature does not respond to what we do. Asteroids don't try to foil our defenses. In a sense, viruses "try" to beat our vaccines and so on, but that's already baked into our long-standing idea of what viruses are, and it isn't the same thing as what humans do when we're in an adversarial relationship with them.

I certainly think we should still try our absolute best to protect humanity against acts of humankind like nuclear war and bioterror. But it's much harder, if not outright impossible, to get good statistical evidence for the probability of events that depend on what humans decide to do, using all their intelligence and creativity, as opposed to a natural phenomenon like an asteroid or a virus (or a volcano). We might need to draw a line between nature and humankind and say that rigorous cost-effectiveness estimates on the other side of the line may not be possible, and at the very least are much more speculative and uncertain.

I don't think that's an argument against doing a lot about them, but it's an important point nonetheless.

With AI, the uncertainty that exists with nuclear war and bioterror is cranked up to 11. We're talking about fundamentally new technologies based on, most likely, new science yet to be discovered, and even new theoretical concepts in the science yet to be developed, if not an outright new theoretical paradigm. This is quite different from bombs that already exist and have been exploded before. With bioterror, we already know natural viruses are quite dangerous (e.g. just releasing natural smallpox could be bad), and I believe there have been proofs-of-concept of techniques bioterrorists could use. So, it's more speculative, but not all that speculative.

Imagine this idea: someday in the hazy future, we invent the first AGI. Little do we know, this AGI is perfectly "friendly", aligned, safe, benevolent, wise, nonviolent, and so on. It will be a wonderful ally to humanity, like Data from Star Trek or Samantha from Her. Until... we decide to apply the alignment techniques we've been developing to it. Oh no, what a mistake! Our alignment techniques actually do the opposite of what we wanted, and turn a friendly, aligned, safe, benevolent AI into an unfriendly, misaligned, unsafe, rogue, dangerous AI. We caused the very disaster we were trying to prevent!

How likely is such a scenario? There's no way to know. We simply have no idea, and we have no way of finding out.

This helps illustrate, I hope, one of the problems (out of multiple distinct problems) with precautionary arguments about AGI, particularly if back-of-the-envelope cost-effectiveness calculations are used to justify spending on precautionary research. There is no completely agnostic way to reduce risk. You have to make certain technical and scientific assumptions to justify funding AI alignment research. And how well-thought-out, or well-studied, or well-scrutinized are those assumptions?

Why on earth would you set 2017 as a cutoff? Language changes, there is nothing wrong with a word being coined for a concept, and then applied to uses of the concept that predate the word. That is usually how it goes. So I think your exclusion of existential risk is just wrong. The various interventions for existential risks, of which there are many, are the answer to your question.

If you're saying that longtermism is not a novel idea, then I think we might agree.

Everything is relative to expectations. I tried to make that clear in the post, but let me try again. I think if something is pitched as a new idea, then it should be a new idea. If it's not a new idea, that should be made more clear. The kind of talk and activity I've observed around "longtermism" is incongruent with the notion that it's an idea that's at least decades and quite possibly many centuries old, about which much, if not most, if not all, the low-hanging fruit has already been plucked — if not in practice, then at least in research.

For instance, if you held that notion, you would probably not think the amount of resources — time, attention, money, etc. — that was reallocated around "longtermism" roughly in the 2017-2025 period was justified, nor would the rhetoric around "longtermism" be justified.

You can find places where Will MacAskill says that longtermism is not a new idea, and references things like Nick Bostrom's previous work, the Long Now Foundation, and the Seventh Generation philosophy. That's all fine and good. But What We Owe The Future and MacAskill's discussions of it, like on the 80,000 Hours Podcast, don't come across to me as a recapitulation of a decades-old or centuries-old idea. I also don't think the effective altruism community's energy around "longtermism" would have been what it's been if they genuinely saw longtermism as non-novel.

For example, MacAskill defines longtermism as "the idea that positively influencing the long-term future is a key moral priority of our time." Why our time? Why not also the time of the founders of Oxford University 929 years ago or whenever it was? Sure, there's the time of perils argument, but, objections to the time of perils argument aside, why would a time of perils-esque argument also apply to all the non-existential risk-related things like economic growth, making moral progress, and so on?

I'm not especially familiar with the history - I came to EA after the term "longtermism" was coined so that's just always been the vocabulary for me. But you seem to be equating an idea being chronologically old with it already being well studied and explored and the low hanging fruit having been picked. You seem to think that old -> not neglected. And that does not follow. I don't know how old the idea of longtermism is. I don't particularly care. It is certainly older than the word. But it does seem to be pretty much completely neglected outside EA, as well as important and, at least with regard to x-risks, tractable. That makes it an important EA cause area.

Wow, this makes me feel old, haha! (Feeling old feels much better than I thought it would. It's good to be alive.)

There was a lot of scholarship on existential risks and global catastrophic risks going back to the 2000s. There was Nick Bostrom and the Future of Humanity Institute at Oxford, the Global Catastrophic Risks Conference (e.g. I love this talk from the 2008 conference), the Global Catastrophic Risks anthology published in 2008, and so on. So, existential risk/global catastrophic risk was an idea about which there had already been a lot of study even going back about a decade before the coining of "longtermism". Imagine my disappointment when I hear about this hot new idea called longtermism — I love hot new ideas! — and it just turns out to be rewarmed existential risk.

I agree that it might be perfectly fine to re-brand old, good ideas, and give them a fresh coat of paint. Sure, go for it. But I'm just asking for a little truth in advertising here.

Yes. One of the Four Focus Areas of Effective Altruism (2013) was "The Long-Term Future" and "Far future-focused EAs" are on the map of Bay Area memespace (2013). This social and ideological cluster has existed long before this exact name was coined to refer to it.

The only intervention discussed in relation to the far future at that first link is existential risk mitigation, which indeed has been a topic discussed within the EA community for a long time. My point is that if such discussions were happening as early as 2013 and, indeed, even earlier than that, and even before effective altruism existed, then that part of longtermism is not a new idea. (And none of the longtermist interventions that have been proposed other than those relating to existential risk are novel, realistic, important, and genuinely motivated by longtermism.) Whether people care if longtermism is a new idea or not is, I guess, another matter.

MacAskill:

Up until recently, there was no name for the cluster of views that involved concern about ensuring the long-run future goes as well as possible. The most common language to refer to this cluster of views was just to say something like ‘people interested in x-risk reduction’. There are a few reasons why this terminology isn’t ideal [...]

For these reasons, and with Toby Ord’s in-progress book on existential risk providing urgency, Toby and Joe Carlsmith started leading discussions about whether there were better terms to use. In October 2017, I proposed the term ‘longtermism’, with the following definition:

I think my basic reaction here is that longtermism is importantly correct about the central goal of EA if there are longtermist interventions that are actionable, promising, and genuinely longtermist in the weak sense of "better than any other causes because of long-term effects", even if there are zero examples of LT interventions that meet the "novelty" criterion or that lack significant near-term benefits.

Firstly, I'd distinguish here between longtermism as a research program and longtermism as a position about what causes should be prioritized right now by people doing direct work. At most, criticisms about novelty seem relevant to evaluating the research program and to deciding whether to fund more research into longtermism itself. I feel like they should be mostly irrelevant to people actually doing cause prioritization over direct interventions.

Why? I don't see why longtermism wouldn't count as an important insight for cause prioritization if it were the case that thinking in longtermist terms didn't turn up any new interventions that weren't already known to be good, but did change the rankings of interventions so that I changed my mind about which interventions were best. That seems to be roughly what longtermists themselves think is the situation with regard to longtermism. It's not that there is zero reason to do X-risk reduction type interventions even if LT is false, since they do benefit current people. But the case for those interventions being many times better than other things you can do for current people and animals rests on, or at least is massively strengthened by, Parfit-style arguments about how there could be many happy future people.

So the practical point of longtermism isn't necessarily to produce novel interventions, but also to help us prioritize better among the interventions we already knew about. Of course, the idea of Parfit-style arguments being correct in theory is older than using them to prioritize between interventions, but so what? Why does that affect whether or not it is a good idea to use them to prioritize between interventions now? The most relevant question for what EA should fund isn't "is longtermist philosophy post-2017 simultaneously impressively original and of practical import" but "should we prioritize X-risk because of Parfit-style arguments about the number of happy people there could be in the future." If the answer to the latter question is "yes", we've agreed EAs should do what longtermists want in terms of direct work on causes, which is at least as important as how impressed we should or shouldn't be with the longtermists as researchers. At most the latter is relevant to "should we fund more research into longtermism itself", which is important, but not as central as what first-order interventions we should fund.
To put the point slightly differently, suppose I think the following: 

1) Based on Bostrom- and Parfit-style arguments (and don't forget John Broome's case for the goodness of making happy people, which I think was at least as influential on Will and Toby), the highest-value thing to do is some form of X-risk reduction, say biorisk reduction for concreteness.

2) If it weren't for the fact that there could exist vast numbers of happy people in the far future, the benefits on the margin to current and near-future people of global development work would be higher than those of biorisk reduction, and global development should be funded by EA instead, although biorisk reduction would still have significant near-term benefits, and society as a whole should have more than zero people working on it.

Well, then I am a longtermist, pretty clearly, and it has made a difference to what I prioritize. If I am correct about 1), then it has made a good difference to what I prioritize, and if I am wrong about it, it might not have done. But how novel 1) was when said in 2018, or what other insights LT produced as a research program, is just completely irrelevant to whether I am right to change cause prioritization based on 1) and 2).

None of this is to say 1), or its equivalent about some other purported X-risk, is true. But I don't think you've said anything here that should bother someone who thinks it is.

Whether society ends up spending more money on asteroid defense or, possibly, more money on monitoring large volcanoes is orders of magnitude more important than whether people in the EA community (or outside of it) understand the intellectual lineage of these ideas and how novel or non-novel they are. I don't know if that's exactly what you were saying, but I'm happy to concede that point anyway.

To be clear, NASA's NEO Surveyor mission is one of the things I'm most excited about in the world. It makes me feel so happy thinking about it. And exposure to Bostrom's arguments from the early 2000s to the early 2010s is a major part of what convinced me that we, as a society, were underrating low-probability, high-impact risks. (The Canadian journalist Dan Gardner's book Risk also helped convince me of that, as did other people I'm probably forgetting right now.)

Even so, I still think it's important to point out ideas are not novel or not that novel if they aren't, for all the sorts of reasons you would normally give to sweat the small stuff, and not let something slide that, on its own, seems like an error or a bit of a problem, just because it might plausibly benefit the world in some way. It's a slippery slope, for one...

I may not have made this clear enough in the post, but I completely agree that if, for example, asteroid defense is not a novel idea, but a novel idea, X, tells you that you should spend 2x more money on asteroid defense, then spending 2x more on asteroid defense counts as a novel X-ist intervention. That's an important point, I'm glad you made it, and I probably wasn't clear enough about it.

However, I am making the case that all the compelling arguments to do anything differently, including spend more on asteroid defense, or re-prioritize different interventions, were already made long before "longtermism" was coined.

If you want to argue that "longtermism" was a successful re-branding of "existential risk", with some mistakes thrown in, I'm happy to concede that. (Or at least say that I don't care strongly enough to argue against that.) But then I would ask: is everyone aware it's just a re-branding? Is there truth in advertising here?

Great question

Based on conversations I've had, I believe the focus in EA on longtermism has been off-putting for a lot of people and has probably cost a lot of support and donations for other EA causes.

Was it all a terrible waste?

Quick question - Is AI safety work considered a "long-termist" intervention? I know it has both short term and long term potential benefits, but what do people working on it generally see it as?

I suppose if you are generally pretty doomer, it wouldn't meet your 4th criteria. "Genuinely longtermist: it’s something that we wouldn’t want to do anyway based on neartermist concerns"

Also, one would hope that it wouldn't be too long before @Forethought has cranked out one or two, as I think finding these is a big part of why they exist...

I mentioned that you often see journalists or other people not intimately acquainted with effective altruism conflate ideas like longtermism and transhumanism (or related ideas about futuristic technologies). This is a forgivable mistake because people in effective altruism often conflate them too.

If you think superhuman AGI is 90% likely within 30 years, or whatever, then obviously that will impact everyone alive on Earth today who is lucky (or unlucky) enough to live until it arrives, plus all the children who will be born between now and then. Longtermists might think the moral value of the far future makes this even more important. But, in practice, it seems like people who aren't longtermists who also think superhuman AGI is 90% likely within 30 years are equally concerned about the AI thing. So, is that really longtermist?

Something can be a promising X intervention even if it's something that had been thought of before in connection with another purpose.

For example, GLP-1 agonists are promising obesity interventions. When we discovered they were very effective at weight loss, this was an important intellectual contribution to the world. It gave fat people a new reason to take the drugs. This is true even though GLP-1s were already an approved medical intervention for a different purpose (diabetes).

Even beyond this, I think Nick's Astronomical Waste argument is Longtermist. So in that sense it is a novel Longtermist idea, even if it predates the term 'Longtermism'.

I agree that the scholarship of Bostrom and others starting in the 2000s on existential risk and global catastrophic risk, particularly taking into account the moral value of the far future, does seem novel, and does also seem actionable and important, in that it might, for example, make us re-do a back-of-the-envelope calculation on the expected value of money spent on asteroid defense and motivate us to spend 2x more (or something like that).

As someone who was paying attention to this scholarship long before anyone was talking about "longtermism", I was pretty disappointed when I found out "longtermism" was just a recapitulation of that older scholarship, plus a grab bag of other stuff that was really unconvincing, or stuff that societies had already been doing for generations, or stuff that just didn't make sense.

My biggest takeaway from the comments so far is that many/most of the commenters don't care whether longtermism is a novel idea, or at least care about that much less than I do. I never really thought about that before — I never really thought that would be the response.

I guess it's fine to not care about that. The novelty (or lack thereof) of longtermism matters to me because it sure seems like a lot of people in EA have been talking and acting like it's a novel idea. I care about "truth in advertising" even as I also care about whether something is a good idea or not.

I think the existential risk/global catastrophic risk work that longtermism-under-the-name-"longtermism" builds on is overall good and important, and most likely quite actionable (e.g. detect those asteroids, NASA!), even though there may be major errors in it, as well as other flaws and problems, such as a lot of quite weird and farfetched stuff in Nick Bostrom's work in particular. (I think the idea in Bostrom's original 2002 paper that the universe is a simulation that might get shut down is roughly on par with supernatural ideas about the apocalypse like the Rapture or Ragnarök. I think it's strange to find it in what's trying to be serious scholarship, and it makes that scholarship less serious.)

The fundamental point about risk is quite simple and intuitive: 1) humans are biased toward ignoring low-probability events that could have huge consequences, and 2) when thinking about such events, including those that could end the world, we should think not just about the people alive today and the world as it is now, but about the consequences for the world for the rest of time and all future generations.

That's a nearly perfect argument! It's also something you can explain in under a minute to anyone, and they'll intuitively get it, and probably agree or at least be sympathetic. 

As I recall, when NASA did a survey of, or public consultation with, the American public, the public's desire for NASA to work on asteroid defense was overwhelmingly higher than NASA expected. I think this is good evidence that the general public finds arguments of this form persuasive and intuitive. And I believe when NASA learned that the public cared so much, that led NASA to prioritize asteroid defense much more than it had previously.

I don't have any data on this right now (I could look it up), but to the extent that people — especially outside the U.S. — haven't turned covid-19 into a politically polarized, partisan issue and haven't bought into conspiracy theories or pseudoscience/non-credible science, I imagine that, among people who thought the virus was real, the threat was real, and the alarmed response was appropriate, there would be strong support for pandemic preparedness. This isn't rocket science — or, with asteroid defense, it literally is, but understanding why we want to launch the rockets isn't rocket science.

I think the fact that the term didn't add anything new is very bad because it came with a great cost. When you create a new set of jargon for an old idea you look naive and self-important. The EA community could have simply used framing that people already agreed with, instead they created a new term and field that we had to sell people on.

Discussions of "the loss of potential human lives in our own galactic supercluster is at least ~1046 per century of delayed colonization" were elaborate and off-putting, when their only conclusions were the same old obvious idea that we should prevent pandemics, nuclear war and SkyNet (The idea of humans not becoming extinct goes back at least to discussions of nuclear apocalypse in the 40s, Terminator came out in 1984).

I agree with your first paragraph (and I think we probably agree on a lot!), but in your second paragraph, you link to a Nick Bostrom paper from 2003, which is 14 years before the term "longtermism" was coined.

I think, independently from anything to do with the term "longtermism", there is plenty you could criticize in Bostrom's work, such as being overly complicated or outlandish, despite there being a core of truth in there somewhere.

But that's a point about Bostrom's work that long predates the term "longtermism", not a point about whether coining and promoting that term was a good idea or not.

I agree that trying to make moral progress has near-term benefits but, particularly in some areas like animal welfare, progress can feel dishearteningly slow. The accumulated benefit from 1000 years of tiny steps forward in terms of moral progress could be pretty huge, but perhaps it won't ever feel massively significant within any one person's lifetime. That makes it feel longtermist, though I accept that it feels quite vague to be considered an actionable longtermist intervention.

I hope that moral progress on animal rights/animal welfare will take much less than 1,000 years to achieve a transformative change, but I empathize with your disheartened feeling about how slow progress has been. Something taking centuries to happen is slow by human (or animal) standards but relatively fast within the timescales that longtermism often thinks about.
