People who do not fully take on board all the tenets or conclusions of longtermism are often called "neartermists".
But to me this seems a bit negative and inaccurate.
As Alexander Berger said on the 80,000 Hours podcast:

> I think the philosophical position that it's better to help people sooner rather than later does not seem to have very many defenders.
"Non-longtermists" have various reasons to want to give some of their resources to help people and animals today or in the near future. A short list might include:
- Non-total-utilitarian population ethics
  - E.g., person-affecting views[2], combined with the empirical calculation that 'the most good we can do for people/animals sure to exist is likely to be right now'
- Moral uncertainty about the above
- A sense of special obligation to help contemporaneous people
- Deep empirical uncertainty about the ability to help people in the future (or even prevent extinction) effectively[3]
It seems to me a generally bad practice to take the positive part of the phrase a movement or philosophy uses to describe itself, and then negate that to describe people outside the movement.
E.g.,
- Pro-choice/Anti-choice, Pro-life/Anti-life
- "Black lives matter"/"Black lives don't matter"
- "Men's rights"/"Men don't have rights" (or "anti-men's-rights")
In case 1, each movement has a name for itself, and we usually use this as the label. On the other hand, "not pro-choice" or "anti-abortion" might be more accurate.
In case 2, "Blue lives matter" is often taken as the opposition to "Black lives matter", but this goes in a different direction. I think many or most people would be better described as "non-BLM", implying that they don't take on board all the tenets and approaches of the movement, not that they literally disagree with the statement.
In case 3, the opposition is a bit silly. I think it's obvious that we should call people who are not in the men's rights movement (MRM) and don't agree with it simply 'not-MRM'.
Similarly "not longtermist" or some variation on this makes sense to me.[4]
I don't agree with all of Berger's points, though; to me, doubts about total utilitarian population ethics are one of the main reasons to be uncertain about longtermism. ↩︎
Alternatively: a welfare function of both average utility and population size N ↩︎
FWIW, personally, I think it is pretty obvious that there are some things we can do to reduce extinction risks. ↩︎
Berger suggested 'evident impact' or 'global health and wellbeing'. But these don't really work as a shorthand to describe people holding this view. They also seem a bit too specific: e.g., I might focus on other near-term causes and risks that don't fit well into GH&W; present animal welfare, for instance, gets left out of this. 'Evident impact' is also too narrow: that's only one of the reasons I might not be a full longtermist, and I could also be focusing on near-term interventions that aim at less-hard-to-measure systemic change. ↩︎
The name should convey that the solution is not intended to perpetuate into the very long term, and may serve either only the very short term (e.g., a specific time in the life of one generation, or the entire life of an individual) or the individuals who occur in the foreseeable future ('medium term'). This reasoning also implies that we would need three terms.
Solidarity solutions do not address causes but improve the situations of those negatively affected by events or systems. Examples include feeding refugees or regularly providing deworming pills to affected persons. Lasting solutions address causes and improve systems in a way that is still alterable. Examples include conflict resolution (a peace agreement) or the prevention of human interactions with the environment that can cause worm infections (e.g., roads, parks, mechanized farm work, river water quality testing, and bans on swimming in risky areas). Locked solutions are practically[1] unalterable: for example, an AI system that automatically allocates a place and nutrition for any actor, or the eradication of worms. These can combine solidarity aspects (e.g., an AI that settles refugees) and lasting changes (no worm infections in the foreseeable future).
Still, these names are intended to denote the intent of a solution rather than its impact. For example, a solidarity solution of providing deworming pills can enable income increases, allowing future generations to pay for deworming drugs out of productivity gains, and thus becomes lasting. It may be challenging to think of a solidarity solution that is in fact a locked one: for instance, if someone eradicates worms, then that addresses the cause, so it is not a solidarity solution (solutions should be classified objectively). A program intended to last but not to be locked in can nevertheless become practically unalterable, for example a peace agreement that is later digitized by AI governance. So, intent can be 'one step below' the impact, but not two steps below. By definition, a solution classified at one of the three levels cannot also be classified at a level below or above it.
From this writing, it is apparent that all three types of solutions (solidarity, lasting, and locked) are possible. I would further argue that it may be challenging to implement malevolent lasting and locked solutions in the present world, because problems compel solving. Benevolent solutions may be easier to make lasting and locked, because no one would intend to alter them. Of course, this allows for desired dystopias, which one should especially check for, as well as for lasting and locked solutions that are suboptimal for those who are not considered (since they need not participate). So one should always keep checking for more entities to consider, and build this checking into any lasting and locked solutions.
Locked solutions could in principle be altered, but that would be unrealistic (who would want to have no place to stay when an alternative is possible, or worms that cause schistosomiasis?). ↩︎