Cross-posted from Cold Button Issues.

Sometimes philosophers make bold, sweeping claims for other philosophers and modest, palatable claims for the general public. Consider Peter Singer’s philosophical writing, which includes endorsing infanticide in some situations, versus his more popular writing, where he makes hard-to-dispute claims like “[l]iving a minimally acceptable ethical life involves using a substantial part of our spare resources to make the world a better place.”

Will MacAskill and Hilary Greaves wrote a paper arguing for strong longtermism, “the view that impact on the far future is the most important feature of our actions today.” Then MacAskill wrote a New York Times best-selling book that argued that caring about the future is somewhat morally important.

MacAskill didn’t need to water down his claims to convince me. In theory, I’m fully on board with longtermism. There are probably tons of future people who matter just as much as we do, so let’s prioritize them. Hurray! Despite being willing to endorse the philosophy of longtermism, I think building a movement around longtermism, or taking actions for the sake of the longterm future, is likely to backfire.

Some friends of mine in the effective altruism movement have said they would be excited about the shift to longtermism if there were successful past examples of longtermist movements. 

But I think past examples of longtermism are easy to find- it’s just hard to find successful examples.

When GiveWell was relatively young and not as influential as it is today, it commissioned work on the history of philanthropy to answer questions like when ambitious philanthropists succeeded, when they failed, and what effective altruists could learn from the past. I think repeating such a process for longtermism- taking even a quick look at past efforts to prioritize the longterm future- casts doubt on longtermist efforts.

Benjamin Franklin, Failed Longtermist

If George Washington was the Captain America of the Founding Fathers, Benjamin Franklin was Iron Man. The fun one, the cool one, the guy who invented the lightning rod. He’s probably the closest thing America has to a Leonardo da Vinci. He signed the Declaration of Independence, ran the post office, was the ambassador to France, and kept on inventing things.

He also tried to be a longtermist. When he died, he left a bequest to the cities of Boston and Philadelphia that was to accrue interest for the next 200 years before the cities could access the whole principal. As Will MacAskill recounts, the amount grew to $5 million and $2 million respectively. The money mostly went to fund a private college.

Because of this he’s sometimes favorably cited as a successful example of how people can intentionally try to help the longterm future and succeed. The problem is, of all the things that Franklin did that shaped the future, his intentional future-oriented bequest was basically a rounding error. No disrespect to the Benjamin Franklin Institute of Technology which benefited from his generosity, but that’s not what made Benjamin Franklin important to the world.

What else could he have spent this money on? Taking better care of his health so he lived longer, supporting relatives to start families (yielding hundreds of additional Franklin descendants over the years), running a few more experiments…. He could have thrown another party! This might sound like I’m joking, but a big part of an ambassador’s job is to be a charming bon vivant and a great party host. An even stronger friendship between the United States and France would surely have been more consequential than founding this private college.

There’s no contradiction between spending money now to benefit the future and being a longtermist. Still, even a man as brilliant as Benjamin Franklin totally flopped when he tried to go longtermist.


Perpetual Foundations

Because the proponents of longtermism within effective altruism are unusually intelligent, well-educated, and articulate, it sometimes seems like effective altruists are the first to think of every single good idea in philanthropy. Sometimes that’s true- I think 80,000 Hours, for example, is truly unique.

But dedicating philanthropic resources to the betterment of the future is common, even prosaic and routine. The institution of the “perpetual foundation” would not usually be described as a product of philosophical consequentialism or longtermist thinking. Yet the core concept of a perpetual foundation- a charitable trust that donates little enough of its capital each year that the principal stays the same or grows- is essentially a commitment to treat people in each year from here to eternity as equally deserving of philanthropic resources. Truly impressive moral impartiality!

As far as I know, longtermists have not cited perpetual foundations as a precedent. One reason might be that the typical perpetual foundation is not very adept with philosophical rhetoric. But the structure is very longtermist. 

Perpetual foundations have supported hugely impressive things. Both the Ford and Rockefeller Foundations funded the Green Revolution, but both foundations were relatively young at the time.

One danger of perpetual foundations is that they might end up pursuing goals antithetical to what their creators wanted. The classic example here is the Ford Foundation, founded by successful capitalists within the Ford family, which eventually took a sharp left turn, causing Ford family members at the time to distance themselves from the foundation. Whether the leftward shift was good (a happy accident or a sign of moral progress) or bad (a betrayal of its founders) isn’t that relevant here. From the perspective of its creators, it backfired after only a few short decades.

Perpetual foundations have also fallen out of favor in the philanthropic press and in the world of institutional philanthropy more broadly over the last few decades; instead, spend-down or sunsetting foundations have gained favor. The Gates Foundation has pledged to exhaust its resources within 20 years of the deaths of Bill and Melinda Gates. Atlantic Philanthropies lists Cari Tuna and Dustin Moskovitz, Jack Ma, and Mark Zuckerberg as donors who are also avoiding perpetual foundations.

A powerful critique of charitable foundations is that they’re uniquely unaccountable- not answering to the market like corporations or to voters like politicians. Longterm-oriented philanthropy seems like it would be even less accountable. If a malaria charity fails to achieve anything, people with malaria can try to complain. But if a longterm-oriented charity fails to achieve anything, the only people who could complain would be time travelers!

Communist Revolutions

Communist revolutionaries fought for a utopia. Many Communists suffered social ostracism, torture, and death across many continents in the hopes of bringing about a positive long-term future. Many Communist thinkers endorsed a substantial and bloody transition period between capitalism and a true classless society that they claimed would be pretty awesome. This seems risky: when costs are upfront and really extreme, your predictions of longterm benefits have to be pretty accurate for you to end up ahead in terms of human well-being.

I judge Communism as a total failure that plausibly killed nearly 100 million people, and as an example of failed longtermism. Some people view various Communist regimes and revolutions as good overall, and dispute claims about alleged Communist crimes. If that’s you, you might view Communism as a hopeful precedent for longtermism.



In What We Owe the Future, MacAskill wrote: 

“To illustrate, suppose that a highly educated person in the year 1500 tried to make the longterm future go well… many issues wouldn’t occur to them. The ideas that the earth’s habitable life span could be a billion years and that the universe could be so utterly enormous, yet almost entirely uninhabited, would not have been on the table.”

I have to say the time aspect of this statement is almost certainly false. The poor, benighted 1500er would likely be a devout Christian or Muslim or Hindu or some other religious adherent who believed in the immortal soul. Next to secular effective altruists who hope humanity survives merely until the heat death of the universe, they were the real longtermists, and they would scoff at unambitious hopes of merely surviving a few billion years.

Religion is probably the best prospect for showing that longtermist movements are a good idea. Many religions present themselves as eternal truths (inherently longterm), incorporate claims about an eternal afterlife (also super-longterm), and inspire people to make huge sacrifices to ensure the longterm survival of their faith, such as the martyrs or the missionaries who set off on dangerous missions to spread the word.

Along the way, most major religions have inspired major altruistic works and charitable institutions- although some would argue they inspire comparable harm. And if Christianity or Islam or some other religion is true, then people who worked on spreading the true faith may have done infinite good, easily outweighing the finite failures of the other categories.

I think it’s indisputable that there are several religions that for centuries or millennia have shifted human values in their direction, shaped the decisions of billions, and created and destroyed powerful institutions and states.

If you’re religious, then maybe you should be relatively bullish on longtermism. Of course, the specifics of your religion might endorse longtermist actions very different from those typically supported by effective altruists- spreading the Gospel, starting a religious order, intercessory prayer, etc. But if you’re not religious, or not eager to claim that the fledgling effective altruist longtermist movement is essentially a religion, then it might look like most movements or institutions that aim explicitly at longtermist ends are failures. This new longtermist movement also seems to lack many of the features that have made religions so successful: hope for the afterlife, a clear set of moral values, community, and a psychological framework for life that’s relevant across society.

The Longtermist Paradox

One way to reject my argument would be to find lots of examples of longtermist movements that actually worked. Another would be to argue that longtermism is so unique and unprecedented that we have little to learn from past failures.

For instance, you could claim that MacAskill and Ord and Beckstead and Bostrom and Greaves and so forth are smarter and better-intentioned than the proponents of most past longtermist movements. That seems plausible to me! And I do think the world would probably be better overall if those philosophers could guide a few more billion dollars toward dealing with biorisk, the safe development of artificial intelligence, or other concrete actions they’ve endorsed.

But I don’t view the effective altruist version of longtermism as particularly unique or unprecedented. I think the dismal record of (secular) longtermism speaks for itself.

Hedonism is often criticized on the grounds that the pursuit of pleasure fails to produce it. I think the same holds true for longtermism. The great men and women who made their descendants better off were actively working to make their peers, or maybe their children and grandchildren, better off- not their distant descendants. The ones who aimed at the distant future mostly failed. The longtermist label seems mostly unneeded and unhelpful- and I’m far from the first to think so.

We can salvage many of the specific concerns of longtermism- climate change, nuclear proliferation, biorisk, artificial intelligence- by appealing to how they threaten our generation and the next few. I’d like to keep humans from going extinct but I don’t want the effective altruism movement to fall into the same reference class as so many disappointing or harmful past movements.

The best we can do, I think, is to see human history as a ladder extending into a long and hopefully bright future. We should do our best to make it through the next few rungs, passing on the world in as good a condition as we can manage. After that, it’s up to our descendants to tackle the next few rungs. Looking too far ahead might cause our grip to slip.

Since I’ve conceded that many of the specific concerns of longtermists are warranted while criticizing their unfortunate conceptual focus, perhaps I have made a similar error by writing about a relatively meta and conceptual issue instead of focusing on my current day job, which, ironically, is in a field prioritized by longtermists. But as more time and money explicitly revolve around longtermism- a movement with many failed predecessors and few if any hopeful examples- I worry that effective altruism is going down the same road.



28 comments

This argument has some force but I don't think it should be overstated.

Re perpetual foundations: Every mention of perpetual foundations I can recall has opened with the Franklin example, among other historical parallels, so I don't think its advocates could be accused of being unaware that the idea has been attempted!

It's true at least one past example didn't pan out. But cost-benefit analysis of perpetual foundations builds in an annual risk of misappropriation or failure. In fact such analyses typically expect 90%+ of such foundations to achieve next to nothing, maybe even 99%+. Like business start-ups, the argument is that the 1 in 100 that succeeds will succeed big and pay for all the failures.

So seeing failed past examples is entirely consistent with the arguments for them and the conclusion that they are a good idea.

Re communist revolutions: Many groups have tried to change how society is organised or governed hoping that it will produce a better world. Almost all the past examples of such movements I can think of expected benefits to come fairly soon — within a generation or two at most — and though advocates for such changes usually hoped the benefits will be long-lasting, benefits to be derived millions of years in the future is hardly what motivated their participants.

Many, especially the most violent ones, have been disastrous, like the various communist revolutions you refer to. Others of course have been by and large positive, such as people advocating for broadening the franchise, ending slavery, weakening the power of monarchies or phasing out non-self-governing territories. Many also died for those causes. Advocates for those ideas hoped to benefit both current and future generations in much the same way as did communists, and on the whole I think human governance has improved as a result of all these efforts in aggregate over the last thousand years.

On balance I don't think communists are a closer match to longtermism than many other incremental and radical political movements (culturally it's probably the opposite). And if you broaden the range of reform movements considered then it's hard to know whether they've been a success or failure.

(Or, as they say, it's still too soon to tell whether the French revolution was a good idea!)

Re religions: Religions are a possible analogy for longtermism but while very numerous I think the similarities are not enough to make them an especially compelling reference class.

What's most distinctive about religions isn't that they're focused on the very long term. In fact many of them are millennialist, or see the universe as fundamentally cyclical, or are focused on reaching an unchanging Platonic realm in which time is a meaningless concept.

For comparison one could also argue from 'almost all religions proposed are wrong' (necessarily so because they are numerous and contradictory) to 'almost all broad worldviews or opinions are wrong, including the pro-incrementalist one you present in this blog post'. I don't find that a very strong rebuttal to your view for the same reason I don't find it a strong rebuttal of longtermism.

Re the longtermist paradox: I agree 99%+ of improvements to the world have been driven by people trying to improve things in a more immediate way than longtermists typically do. But using the same definition I also think 99.9%+ of all human effort has gone towards such things.

We need to divide the impact by the inputs to see how cost-effective those actions are. Even if longtermism is far more cost-effective than non-longtermism, we should expect non-longtermism to dominate total impact because it's wildly, wildly larger in scale.

Through history so few people have done things that would only be particularly recommended by longtermism that we just don't know yet whether it will pan out in practice. The distribution of impact across projects should be expected to be very fat tailed, so even after the fact it will require a huge sample to empirically assess in a statistically sound way. Sometimes life just sucks that way!

Advocates for those ideas hoped to benefit both current and future generations in much the same way as did communists, and on the whole I think human governance has improved as a result of all these efforts in aggregate over the last thousand years.

That sounds doubtful to me. Hegelian ideas about the nature of history are an important part of communism, and part of how communist thinkers thought their actions would have effects on the far future.

Just because you observe that people who argued for ending slavery had positive longterm effects doesn't imply that those longterm effects were central to how those people thought about the issue. 

Your response on perpetual foundations seems like a case of surprising and suspicious convergence.

At some base-rate we cannot just keep saying "oh, another failed past example of perpetual foundation. But that is ok, this is actually consistent with the conclusion that they are still a good idea."

The dis-analogy here with business startups is that we actually have clear evidence that some business startups do drastically succeed to make up for all the failures. Granted, this could just be because there haven't been enough perpetual foundations for us to finally hit on the great result that will make up for the failures.

So, although I agree your response on perpetual foundations is warranted, it still makes me raise my eyebrow.

Thanks for writing this. I think there are actually some pretty compelling examples of people/movements being quite successful at helping future generations (while partly trying to do so):

  • Some sources suggest that Lincoln had long-term motivations for permanently abolishing slavery, saying, "The abolition of slavery by constitutional provision settles the fate, for all coming time, not only of the millions now in bondage, but of unborn millions to come--a measure of such importance that these two votes must be procured." Looking back now, abolition still looks like a great move for future generations.
    • I don't know how accurate those sources are, but at least a U.S. constitutional amendment is structured to have very long-lasting impacts, given the extreme difficulty of undoing it.
  • The U.S. constitution appears to have been partly aiming to create a long-lasting democracy, citing "our posterity" in its preamble. It seems to have largely worked.
  • Proponents of measures to avoid nuclear war and reduce nuclear weapons testing often cited future generations as one motivation. (For example, in a famous speech he gave before launching U.S.-Soviet cooperation on limiting nuclear testing and nonproliferation, Kennedy appealed to the importance of "not merely peace in our time but peace for all time" and to "the right of future generations to a healthy existence.") These efforts have been quite successful; we've had about 77 years with no wartime use of nuclear weapons, nuclear testing has plummeted, and far fewer states than once feared now have nuclear weapons.

[Edited to add] All this looks to me like a mixed (and maybe fairly good overall) track record, not a terrible one. (Though a deeper problem is that we can't justifiably draw almost any conclusions about base rates from these or the post's examples, since we've made no serious efforts to find a representative sample of historical longtermist efforts.)

Thanks for the counterexamples!

I'm trying to think of a way to get a fair example: Coding party manifestos by attention to long-term future and trying to rate their success in office? I'm really unsure.

And worth noting that Ben Franklin was involved in the constitution, so at least some of his longtermist time seems to have been well spent.

This is a cool post. Though, I wonder if there's switching between longtermism as a theory of what matters vs the idea you should try to act over long timescales (as with a 200yr foundation).

You could be a longtermist in terms of what you think is of moral value, but believe the best way to benefit the future (instrumentally) is to 'make it to the next rung'. Indeed this seems like what Toby, Will etc. basically think.

Maybe then the relevant reference class is more something like 'people motivated to help future generations but who did that by solving certain problems of the day', which seems a very broad and maybe successful reference class - eg encompassing many scientists, activists etc.

PS: shouldn't the environmentalist, climate change and anti-nuclear movements be part of your reference class?

I was thinking the reference class was something like "people explicitly orienting their actions for the benefit of  far future generations." 

I was trying to be more specific than every good deed that also benefits the future. I didn't want to include things like "this vaccine will save our children (and future generations)" or "we will win this war against our evil enemy (and also for our children's sake)". 

What seems new about longtermism to me is not the belief that good things will have positive consequences in the long run- "classic" EA and bednet funders think that- but that decisions should be made specifically with the future as the primary end in mind. That's what seems to distinguish Ord and MacAskill from other EAs and altruists in general.

I agree with your suggested examples in the end- and some other commenters suggested some other movements to consider- so I want to revisit this topic with a better collection of examples. I don't want to be unfair to longtermists despite my skepticism of their emphasis.

Anti-nuclear advocates frequently talk about the long time that certain isotopes need to decay.

Stewart Brand, who came out of the environmentalist movement, founded the Long Now Foundation, and there are plenty of people in that field who think similarly to him.


One obvious point not mentioned in your replies is that most EA longtermists also believe, with high probability, in human-level AI coming over the next century or two.

And superhuman AI can in theory be:

  • radically more authoritarian than anything in the past
  • radically more capable, for instance it may solve most of the socialist calculation problem
  • much better at moral philosophy than we are, finding out "better" values and intended desires not just what we reveal surface-level
  • radically more value-stable than human lineages or institutions, with well-defined values or value-drift processes lasting for up to billions of years.

There's only so much we can learn from the past when dealing with upside and downside of this unique event.

A key point about Ben Franklin is that his longtermist efforts were for the benefit of the future, whereas EA-style longtermist causes like AI risk and biosecurity are about ensuring there actually is a future. 

I think as long as there are x-risks that we can plausibly influence there will be people carrying the torch for longtermism in one form or another. 

If a malaria charity fails to achieve anything, people with malaria can try to complain. But if a longterm-oriented charity fails to achieve anything, the only people who could complain would be time travelers!

I appreciate this concern, but as I explained in this comment here, I think this is not a very strong argument against longtermism. To briefly summarize the ideas I explain in more detail there:

  1. Many people are already not held very accountable in the near term, despite what we might hope.
  2. Near-term interventions can also prove to be relatively unimportant from a long-term lens.
  3. You can definitely be held accountable or feel guilty if it becomes apparent in the near-term that your arguments/proposals will actually be bad in the long-term.
  4. Due to the massive expected value, longtermism can probably still just bite the bullet here even if you mostly dismiss the previous points.

Will's book mentions USA Founding Fathers as an example. Success? Seems good so far. But too early to tell, perhaps.

Religion is probably the best prospect for showing that longtermist movements are a good idea.

Hmm. Maybe. I could imagine Tom Holland making a case for this.

I've not looked into this much [1], but my guess is that a bunch of religious leaders have in fact thought pretty long term and quite deeply about what matters and what kinds of guidance humanity most needs (e.g. Saint Paul).

Even more speculative: perhaps one of the main weaknesses of most contemporary religious leaders is not knowing much about Bostromian / FHI-ish stuff on technology.

  1. Mainly just watched The Young Pope which is outstanding and many people should try, especially season 2 but don't skip season 1. ↩︎

The ones who aimed at the distant future mostly failed. The longtermist label seems mostly unneeded and unhelpful- and I’m far from the first to think so.


Firstly, in my mind you're saying something akin to: we shouldn't advertise longtermism because it hasn't worked in the past. Yet this is a claim about the tractability of the philosophy, and not necessarily about the idea that future people matter.

Don't confuse the philosophy with its implementation: longtermism matters, but the implementation method is still up for debate.

But I don’t view the effective altruist version of longtermism as particularly unique or unprecedented. I think the dismal record of (secular) longtermism speaks for itself.

Secondly, I think you're using the wrong outside view. 

There is a problem with using historical precedents: you assume that conditions similar to those in the EA community existed in the other communities.

An example of this is HPMOR, and how unpredictable the success of this fan fiction would have been if you had looked at the average Harry Potter fan fiction beforehand. The underlying outside view is different because the underlying causal thinking is different.

As Nassim Nicholas Taleb would say, you're trying to predict a black swan, an unprecedented event in the history of humanity.

What is it that makes longtermism different? 

There is a fundamental difference in understanding of the world's causal models in the EA community. There is no outside view for longtermism as its causal mechanisms are too different from existing reference classes.

To make a final analogy, it is useless to predict gasoline prices for an electric car, just like it is useless to predict the success of the longtermist movement from previous ones.

(Good post, though, interesting investigation, and I tend to agree that we should just say holy shit, x-risk instead)

There is a fundamental difference in understanding of the world's causal models in the EA community. There is no outside view for longtermism as its causal mechanisms are too different from existing reference classes.

What do you mean by this? 

Essentially that the epistemics of EA are better than in previous longtermist movements. EA's frameworks are a lot more advanced, with things such as thinking about the tractability of a problem, not Goodharting on a metric, forecasting calibration, RCTs... and so on- techniques that other movements didn't have.

Whether or not AI risk is tractable is in doubt. Eliezer argued that it's likely not tractable but that we should still invest in it. The longtermist arguments about the value of the far future suggest that even if there's only a 0.1% chance that AI risk is tractable, we should still fund it as the most important cause.

Related: Hero Licensing (the title of the first section is "Outperforming the outside view"). 

Thank you! I was looking for this one but couldn't find it

A short objection:

One danger of perpetual foundations is that they might end up pursuing goals antithetical to what their creators wanted. The classic example here is the Ford Foundation, founded by successful capitalists within the Ford family, which eventually took a sharp left turn...

I don't think this is a problem. Yes, I'm biased because I'm a leftist- but I also think values change with time, and we needn't worry about our children or grandchildren having different values from ours, as theirs might be better. Looking over history, it looks like they probably will be.

As long as we care about "making sure our money benefits the entire future", the only value we need to lock in somehow is that one itself.

ColdButtonIssues makes some good points about religion as a major traditional focus of longtermist imagination and morality in most human societies. I agree that EA should pay closer attention to the intellectual/moral history of religious conceptions of the afterlife & reincarnation -- at least as cautionary tales about how runaway consequentialist reasoning can go astray. (eg 'it's worth burning this person at the stake if there's even a 1% chance that the excruciation will make them recant their apostasy to save their immortal soul, which could enjoy heaven for >10 quadrillion times as long as they're being burned....').

I would add that political conservatism has also been more longtermist, traditionally, than most revolutionary movements such as communism (which may give lip service to future generations, but is often more interested in immediate vengeance against the local bourgeoisie). This seems especially true of the more family-values, pronatalist versions of conservatism that focus on multi-generational/dynastic thinking. 

This also seems true of conservative political philosophers who tend to ask questions like:

'How will this new policy really affect our grand-kids and great-grand-kids?'

'Is it really worth giving up this tradition that's worked for dozens of generations (that's 'Lindy'), to try something new and untested?'

'Will this new social/technical/governance innovation really prove stable for multiple generations against perturbation, exploitation, propaganda, mission drift, regulatory capture, security exploits, savvy adversaries, and future folly?'

'Is this proposed strategy even a Nash equilibrium in the game of life, given all the easily-anticipated counter-strategies?'

Implicit in a lot of conservative/traditionalist political philosophy is a concern for how stable certain social arrangements will be, in the long term, against future adversaries, activists, virtue-signalers, foreign powers, misguided do-gooders, government powers, self-interested elites, etc. 

In other words, there's a focus on which strategies are evolutionarily stable equilibria in long-term iterated games. In that regard, I see a strong overlap between conservative political philosophy and some current EA longtermist concerns such as the game theory involved in AI alignment, or the geopolitical governance issues around nuclear war and bioweapons development.

  1. It seems like if the issue is Benjamin Franklin's endowment wasn't used well enough, maybe he should have thought more about setting conditions on how it would be used most helpfully. That seems like a useful data point that can be used to do better the next time someone tries to improve the future rather than that it's too difficult to even try. 
  2. If you are going to say communist revolution is an example of a longtermist movement because some people involved cared about the future, don't you have to say the same for democratic revolutions? Or any revolution?
  3. Also, is caring about the future really enough to meaningfully equate movements with vastly different ideas about how to improve the world? 
  4. If it does, then does focus on the present connect all movements meaningfully? I think communists and democrats and basically every movement that failed or succeeded to some extent was concerned with the effects of problems like the distribution of power on the world today, but I'm not sure that means focusing on today is bad and it didn't seem to lock them into the same outcomes as each other.  

I do think Communism was on average a more longtermist movement than democratic revolutions. Maybe the typical revolutionary in all revolutions had similar goals, but Marx and many of his followers had a vision for how history was supposed to play out, and envisioned an intermediate form of society, between the revolution and an eventual classless society.

In contrast, a lot of democratic revolutions were more like "King George bad." I don't think the American founding fathers were utopian in the same sense as a lot of Marxists.

You don't think the Russian revolution was like "Tsar Nicholas bad"?

I mean, "liberty and justice for all" sounds like a pretty strong vision of the future to me. 

I guess I'd like to see more evidence that 1) there were significant differences in caring about the future between movements and 2) how these differences contributed to movement failures concretely. 

If I had to guess, I'd hypothesize that there's something else that is the main factor(s), like social dominance orientation of leaders and the presence or absence of group mechanisms to resist that or channel it in less destructive ways. 

is caring about the future really enough to meaningfully equate movements with vastly different ideas about how to improve the world?

Given that longtermism is literally defined as a focus on improving the long-term future, I think yes? You can come up with many vastly different ways to improve the long-term future, but we should think of the category as "all movements to improve the long term future" and not "all movements to improve the long term future focusing on AI and bio risk and value lock in".

Let me rephrase: is focus on improving the long-term future enough to equate movements with vastly different ideas about how to improve the world, such that if one of those ideas turns out poorly, all ideas that similarly focus on the long-term future are just as risky or tainted by association?