The phrase "long-termism" is occupying an increasing share of EA community "branding". For example, the Long-Term Future Fund, the FTX Future Fund ("we support ambitious projects to improve humanity's long-term prospects"), and the impending launch of What We Owe The Future ("making the case for long-termism").
Will MacAskill describes long-termism as the view that positively influencing the long-term future is a key moral priority of our time.
I think this is an interesting philosophy, but I worry that in practical and branding situations it rarely adds value, and might subtract it.
In The Very Short Run, We're All Dead
AI alignment is a central example of a supposedly long-termist cause.
But Ajeya Cotra's Biological Anchors report estimates a 10% chance of transformative AI by 2031, and a 50% chance by 2052. Others (e.g. Eliezer Yudkowsky) think it might happen even sooner.
Let me rephrase this in a deliberately inflammatory way: if you're under ~50, unaligned AI might kill you and everyone you know. Not your great-great-(...)-great-grandchildren in the year 30,000 AD. Not even your children. You and everyone you know. As a pitch to get people to care about something, this is a pretty strong one.
But right now, a lot of EA discussion about this goes through an argument that starts with "did you know you might want to assign your descendants in the year 30,000 AD exactly equal moral value to yourself? Did you know that maybe you should care about their problems exactly as much as you care about global warming and other problems happening today?"
Regardless of whether these statements are true, or whether you could eventually convince someone of them, they're not the most efficient way to make people concerned about something which will also, in the short term, kill them and everyone they know.
The same argument applies to other long-termist priorities, like biosecurity and nuclear weapons. Well-known ideas like "the hinge of history", "the most important century" and "the precipice" all point to the idea that existential risk is concentrated in the relatively near future - probably before 2100.
The average biosecurity project funded by the Long-Term Future Fund or the FTX Future Fund is aimed at preventing pandemics in the next 10 to 30 years. The average nuclear containment project is aimed at preventing nuclear wars in the next 10 to 30 years. One reason all of these projects are good is that they will prevent humanity from being wiped out, leading to a flourishing long-term future. But another reason they're good is that if there's a pandemic or nuclear war 10 to 30 years from now, it might kill you and everyone you know.
Does Long-Termism Ever Come Up With Different Conclusions Than Thoughtful Short-Termism?
I think yes, but only rarely, and in ways that seldom affect real practice.
Long-termism might be more willing to fund Progress Studies type projects that increase the rate of GDP growth by 0.01% per year in a way that compounds over many centuries. "Value change" type work - gradually shifting civilizational values to those more in line with human flourishing - might fall into this category too.
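As a toy illustration of how a growth effect that small compounds over long horizons (the time horizons here are arbitrary choices of mine, not figures from the text):

```python
# Toy illustration: how an extra 0.01 percentage points of annual GDP growth compounds.
extra_growth = 0.0001  # +0.01% per year

for years in (100, 1_000, 10_000):
    multiplier = (1 + extra_growth) ** years
    print(f"After {years:>6,} years, GDP is {multiplier:.2f}x what it would otherwise be")
# After    100 years, GDP is 1.01x what it would otherwise be
# After  1,000 years, GDP is 1.11x what it would otherwise be
# After 10,000 years, GDP is 2.72x what it would otherwise be
```

The effect is negligible on a human timescale and only becomes large over millennia, which is why it mainly appeals to explicitly long-termist reasoning.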
In practice I rarely see long-termists working on these except when they have shorter-term effects. I think there's a sense that in the next 100 years, we'll either get a negative technological singularity which will end civilization, or a positive technological singularity which will solve all of our problems - or at least profoundly change the way we think about things like "GDP growth". Most long-termists I see are trying to shape the progress and values landscape up until that singularity, in the hopes of affecting which way the singularity goes - which puts them on the same page as thoughtful short-termists planning for the next 100 years.
Long-termists might also rate x-risks differently from suffering alleviation. For example, suppose you could choose between saving 1 billion people from poverty (with certainty), or preventing a nuclear war that killed all 10 billion people (with probability 1%), and we assume that poverty is 10% as bad as death. A short-termist might be indifferent between these two causes, but a long-termist would consider the war prevention much more important, since they're thinking of all the future generations who would never be born if humanity was wiped out.
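To spell out the arithmetic behind that indifference, here is a minimal sketch using only the numbers from the hypothetical above (the 10%-as-bad-as-death weighting is the stated assumption, not a real-world estimate):

```python
# Back-of-the-envelope comparison, using the numbers from the hypothetical above.
POVERTY_WEIGHT = 0.1  # assume poverty is 10% as bad as death

# Option A: save 1 billion people from poverty, with certainty
option_a = 1.0 * 1_000_000_000 * POVERTY_WEIGHT   # 100 million death-equivalents

# Option B: a 1% chance of preventing a nuclear war that kills all 10 billion people
option_b = 0.01 * 10_000_000_000                  # 100 million death-equivalents

print(option_a, option_b)  # equal in expectation, so the short-termist is indifferent
```

The long-termist breaks the tie by adding, on option B's side, the value of every future generation that would never exist if humanity were wiped out.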
In practice, I think there's almost never an option to save 1 billion people from poverty with certainty. When I said that there was, that was a hack I had to put in there to make the math work out so that the short-termist would come to a different conclusion from the long-termist. A one-in-a-million chance of preventing apocalypse is worth about 7,000 lives in expectation, and saving that many lives costs roughly $30 million through GiveWell-style charities. But I don't think long-termists are actually asking for $30 million to make the apocalypse 0.0001% less likely - both because we can't reliably calculate numbers that low, and because if you had $30 million you could probably do much better than 0.0001%. So I'm skeptical that problems like this are likely to come up in real life.
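For anyone wondering where those figures come from, here is a rough reconstruction; the world-population figure and the cost-per-life figure are assumptions I've chosen to match the numbers in the paragraph, not values taken directly from GiveWell:

```python
# Rough reconstruction of the 7,000-lives and $30 million figures above.
world_population = 7_000_000_000      # assumed ~7 billion people alive
p_prevent_apocalypse = 1 / 1_000_000  # a one-in-a-million chance

expected_lives_saved = world_population * p_prevent_apocalypse  # about 7,000 lives in expectation
cost_per_life = 4_500                 # assumed GiveWell-style cost per life saved, in dollars
equivalent_budget = expected_lives_saved * cost_per_life        # about $31.5 million

print(expected_lives_saved, equivalent_budget)
```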
When people allocate money to causes other than existential risk, I think it's more often as a sort of moral parliament maneuver, rather than because they calculated it out and the other cause is better in a way that would change if we considered the long-term future.
"Long-termism" vs. "existential risk"
Philosophers shouldn't be constrained by PR considerations. If they're actually long-termist, and that's what's motivating them, they should say so.
But when I'm talking to non-philosophers, I prefer an "existential risk" framework to a "long-termism" framework. The existential risk framework immediately identifies a compelling problem (you and everyone you know might die) without asking your listener to accept controversial philosophical assumptions. It forestalls attacks about how it's non-empathetic or politically incorrect not to prioritize various classes of people who are suffering now. And it focuses objections on the areas that are most important to clear up (is there really a high chance we're all going to die soon?) and not on tangential premises (are we sure that we know how our actions will affect the year 30,000 AD?).
I'm interested in hearing whether other people have different reasons for preferring the "long-termism" framework that I'm missing.
TL;DR: Some people think the future is really bad and don't value it. You need something besides x-risk to engage them, like a competent and coordinated movement to improve the future. Without this, x-risk and other EA work might be meaningless too. The explanation below has an intuitive or experiential quality, not a numerical one. I don't know if this is actually longtermism.
Many people don't consider future generations valuable because they have a pessimistic view of human society. I think this is justifiable.
Then, if you think society will remain in its current state, it's reasonable that you might not want to preserve it. If you only ever think about one or two generations into the future, like I think most people do, it's hard to see the possibility of change. So I think this "negative" mentality is self-reinforcing; these people are stuck.
To these people, the idea of x-risk doesn't make sense, not because the dangers aren't real but because there isn't anything worth preserving. Giant numbers like 10^30 are especially unconvincing to them, because they seem silly and, if anything, they feel we owe the future a small society.
I think the above is an incredibly mainstream view. Many people with talent, perception and resources might hold it.
The alternative to the mindset above is to see a long future that has possibilities. That there is a substantial chance things could be a lot better. And that it is viable to actually try to influence that future.
I think the three sentences above seem "simple", but for this outlook to substantially enter someone's worldview, all three ideas need to land together at the same time. Because of this, it's non-obvious and unconvincing.
I think one reason the idea of (or a movement for) influencing the future is valuable is that most people don't know anyone who is seriously trying. It takes a huge amount of coordination and resources to do this, and it's bizarre to attempt it on your own or with a small group of people.
I think everyone, deep down, wants to be optimistic about the future and humanity. But they don't take any action or spend time thinking about it.
With an actual strong movement that seems competent, it is possible to convince people that there can be enough focus and investment to viably improve the future. It is this assessment of viability that produces a mental shift toward optimism and engagement.
So this is the value of presenting the long-term future in some way.
To be clear, in making this shift, people are being drawn in by competence. Competence involves "rational" thinking, planning and calculation, and all sorts of probabilities and numbers.
But for these people, despite what is commonly presented, I'm not sure that focusing on numbers, using Bayes, and so on plays any role in this presentation. If someone told me they changed their worldview because they ran the numbers, I would be suspicious. Even now, most of the time, I am skeptical when I see huge numbers or intricate calculations.
Instead, this is a mindset or worldview that is intuitive. As a way of seeing this: the text quoted here ("Good ideas change the world, or could possibly save it...") seems convincing, but it doesn't use any calculations. I think this sort of thinking is how most people actually change their views about complex topics.
To have this particular change in view, I think you still need to hold further beliefs that might be weird or unusual.
I have no idea if the above is longtermism at all. It seems sort of weak, and it seems like it would only compel me to act on my particular beliefs.
It would be sort of surprising if many people held the particular viewpoint described in this comment.
This viewpoint does have the benefit that you could ask questions to interrogate these beliefs (people couldn't just say there are "10^42 people" or something).