People who do not fully take on board all the tenets or conclusions of Longtermism are often called "Neartermists".
But to me this seems a bit negative and inaccurate.
As Alexander Berger said on the 80,000 Hours podcast:
> I think the philosophical position that it’s better to help people sooner rather than later does not seem to have very many defenders.
"Non-longtermists" have various reasons to want to give some of their resources to help people and animals today or in the near future. A short list might include
- non-total-utilitarian population ethics
  - e.g., person-affecting views[2], combined with the empirical judgment that 'the most good we can do for people/animals sure to exist is likely to be right now'
- moral uncertainty about the above
- a sense of special obligation to help contemporaneous people
- deep empirical uncertainty about our ability to effectively help people in the future (or even prevent extinction)[3]
It seems to me generally bad practice to take the positive part of a phrase that a movement or philosophy uses to describe itself, and then negate it to describe people outside the movement.
E.g.,
1. Pro-choice/Anti-choice, Pro-life/Anti-life
2. "Black lives matter"/"Black lives don't matter"
3. "Men's rights"/"Men don't have rights" (or "anti-men's-rights")
In case 1, each movement has a name for itself, and we usually use this as a label. On the other hand, "not pro-choice" or "anti-abortion" might be more accurate.
In case 2, "Blue lives matter" is often taken as the opposition to "Black lives matter", but it goes in a different direction. I think many or most people would be better described as "non-BLM", implying that they don't take on board all the tenets and approaches of the movement, not that they literally disagree with the statement.
In case 3, the opposition is a bit silly. I think it's obvious that we should simply call people who are not in the men's rights movement, and don't agree with it, 'not-MRM'.
Similarly "not longtermist" or some variation on this makes sense to me.[4]
I don't agree with all of Berger's points, though; to me, doubts about total utilitarian population ethics are one of the main reasons to be uncertain about longtermism. ↩︎
Alt: a welfare function that depends on both average utility and the number of people, N.
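As a rough illustrative sketch (my own, with $\alpha$ as a made-up parameter, not something from any source): value average utility scaled by population size with diminishing returns,

$$W = \bar{u} \cdot N^{\alpha}, \qquad 0 < \alpha < 1,$$

where $\bar{u}$ is average utility and $N$ is the number of people; $\alpha \to 1$ recovers total utilitarianism, while $\alpha \to 0$ approaches pure average utilitarianism. ↩︎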
FWIW, personally, I think it is pretty obvious that there are some things we can do to reduce extinction risks. ↩︎
Berger suggested 'evident impact' or 'global health and wellbeing'. But these don't really work as shorthand for describing people who hold this view. They also seem a bit too specific: e.g., I might focus on other near-term causes and risks that don't fit well into GH&W; present animal welfare, perhaps, gets left out of this. 'Evident impact' is also too narrow: that is only one of the reasons I might not be fully longtermist, and I could also be focusing on near-term interventions that aim at harder-to-measure systemic change. ↩︎
My statement above (not a 'definition', right?) is that:
> If you are not a total utilitarian, you don't value "creating more lives" ... at least not without some diminishing returns in your valuation. ... Perhaps you value reducing suffering or increasing happiness for people, now and in the future, who will definitely or very likely exist...
>
> then it is not clear that "[A] reducing extinction risk is better than anything else we can do" ...
>
> because there is also a strong case that, if the world is getting better, then helping people and animals right now is the most cost-effective option.
Without the cost of 'extinction ruling out a number of future people that is, in expectation, many orders of magnitude larger than the present population', there is not a clear case that [A] preventing extinction must be the best use of our resources.
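To make the orders-of-magnitude point concrete, here is a back-of-envelope sketch with purely illustrative numbers (the values of $\delta$ and $N$ are my assumptions, not figures from any source): if reducing extinction risk by $\delta = 10^{-6}$ preserves an expected $N = 10^{16}$ future lives, then

$$\mathbb{E}[\text{lives saved}] \approx \delta \times N = 10^{-6} \times 10^{16} = 10^{10},$$

which dwarfs the roughly $8 \times 10^{9}$ people alive today. Drop the huge-$N$ term, as a person-affecting view does, and the comparison loses that many-orders-of-magnitude advantage.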
Now, suppose I were a total population utilitarian. Then there may be a strong case for [A]. But still maybe not; this seems to depend on empirical claims.
To me, 'reducing extinction risks' seemed fairly obviously tractable, but on second thought, I can imagine cases in which even this would be doubtful. Maybe, e.g., reducing the risk of nuclear war in the next 100 years actually has little impact on extinction risk, because extinction is so likely anyway?!
Another important claim seems to be that there is a real likelihood of expansion beyond Earth to other planets, solar systems, etc. Yet another is that 'digital beings can have positively valenced existences'.