People who do not fully take on board all the tenets or conclusions of longtermism are often called "neartermists".

But to me this seems a bit negative and inaccurate.

As Alexander Berger said on the 80,000 Hours podcast:

I think the philosophical position that it's better to help people sooner rather than later does not seem to have very many defenders.[1]

"Non-longtermists" have various reasons to want to give some of their resources to help people and animals today or in the near future. A short list might include

  • Non-total-utilitarian population ethics
    • E.g., person-affecting views[2] and the empirical calculation that 'the most good we can do for people/animals sure to exist is likely to be right now'
  • Moral uncertainty about the above
  • A sense of special obligation to help contemporaneous people
  • Deep empirical uncertainty about the ability to help people in the future (or even prevent extinction) effectively[3]

It seems to me a generally bad practice to take the positive part of the phrase a movement or philosophy uses to describe itself, and then negate that to describe people outside the movement.

E.g.,

  1. Pro-choice/Anti-choice, Pro-life/Anti-life
  2. "Black lives matter"/"Black lives don't matter"
  3. "Men's rights"/"Men don't have rights" (or "anti-men's-rights")

In case 1, each movement has a name for itself, and we usually use this as a label. On the other hand, "not pro-choice" or "anti-abortion" might be more accurate.

In case 2, "Blue lives matter" is often taken as the opposition to "Black lives matter", but this goes in a different direction. I think many/most people would be better described as "non-BLM", implying they don't take on board all the tenets and approaches of the movement, not that they literally disagree with the statement.

In case 3, the opposition is a bit silly. I think it's obvious that we should call people who are not in the men's rights movement (MRM) and don't agree with it simply 'not-MRM'.

Similarly "not longtermist" or some variation on this makes sense to me.[4]


  1. I don't agree with all of Berger's points though; to me, doubts about total utilitarian population ethics are one of the main reasons to be uncertain about longtermism. ↩︎

  2. Alternatively: a welfare function of both average utility and population size N ↩︎

  3. FWIW, personally, I think it is pretty obvious that there are some things we can do to reduce extinction risks. ↩︎

  4. Berger suggested 'evident impact' or 'global health and wellbeing'. But these don't really work as a shorthand to describe people holding this view. They also seem a bit too specific: e.g., I might focus on other near-term causes and risks that don't fit well into GH&W; perhaps present animal welfare gets left out of this. 'Evident impact' is also too narrow: that's only one of the reasons I might not be a full-LT-ist, and I could also be focusing on near-term interventions that aim at less-hard-to-measure systemic change. ↩︎

Comments (44)

I think it's especially confusing when longtermists working on AI risk think there is a non-negligible chance total doom may befall us in 15 years or less, whereas so-called neartermists working on deworming or charter cities are seeking payoffs that only get realized on a 20-50 year time horizon.

True; empirically there is a lot of crossover in 'which risks and causes we should care about funding'. In the other direction, pandemic prevention seems to serve both masters.

But, for clarification, I think

  1. the reason the "longtermists working on AI risk" care about the total doom in 15 years is that it could cause extinction and thereby preclude the possibility of a trillion-happy-sentient-beings in the long term. Not because it will be bad for people alive today.

  2. "deworming or charter cities are seeking payoffs that only get realized on a 20-50 year time horizon" ... that is only long-term in common parlance right? It's not long-term for EAs. LT-ists would general not prioritize this.

the reason the "longtermists working on AI risk" care about the total doom in 15 years is that it could cause extinction and thereby preclude the possibility of a trillion-happy-sentient-beings in the long term. Not because it will be bad for people alive today.

As a personal example, I work on AI risk and care a lot about harm to people alive today! I can't speak for the rest of the field, but I think the argument for working on AI risk goes through if you just care about people alive today and hold beliefs which are common in the field: see this post I wrote on the topic, and a post by Scott Alexander on the same theme.

Everyone dying in 15 years certainly sounds like it would be bad for people alive today!

But yeah, it's more about the stakes (and duration of stakes) than the "amount of time to effect".

I don't think a good name for this exists, and I don't think we need one. It's usually better to talk about the specific cause areas than to try and lump all of them together as not-longtermism.

As you mention, there are lots of different reasons one might choose not to identify as a longtermist, including both moral and practical considerations.

But more importantly, I just don't think that longtermist vs not-longtermist is sufficiently important to justify grouping all the other causes into one group.

Trying to find a word for all the clusters other than longtermism  is like trying to find a word that describes all cats that aren't black, but isn't "not-black cats".

One way of thinking about these EA schools of thought is as clusters of causes in a multi-dimensional space. One of the dimensions along which these causes vary is longtermism vs. not-longtermism. But there are many other dimensions, including  animal-focused vs. people-focused, high-certainty vs low-certainty, etc. Not-longtermist causes all vary along these dimensions, too. Finding a simple label for a category that includes animal welfare, poverty alleviation, metascience, YIMBYism, mental health, and community building is going to be weird and hard.

"Not-longtermism" would just be everything outside of some small circle in this space. Not a natural category.

It's because there are so many other dimensions that we can end up with people working on AI safety and people working on chicken welfare in the same movement. I think that's cool. I really like that the EA space has enough dimensions that a really diverse set of causes can all count as EA. Focusing so much on the longtermism vs. not-longtermism dimension under-emphasizes this.

I weakly disagree. When a belief is ubiquitous enough, as longtermism arguably is in the EA movement, it can be quite helpful to have a term that describes its negation: cf 'atheist', 'moral antirealist' (or 'amoralist'), 'anarchist' etc. I don't think such words have the effect of under-emphasising those views - if anything, I'd say they give them more weight.

Agree and I don’t think it’s a natural category. I just don’t want it to be called “neartermism”

I agree that the term, whether neartermist or not-longtermist, does not describe a natural category. But I think the latter does a better job at communicating that. The way I hear it, "not-longtermist" sounds like "not that part of idea-space", whereas neartermist sounds like an actual view people may hold that relates to how we should prioritise the nearterm versus the longterm. So I think your point actually supports one of David's alternative suggested terms.

And though you say you don't think we need a term for it at all, the fact that the term "neartermist" has caught on suggests otherwise. If it wasn't helpful, people wouldn't use it. However, perhaps you didn't just mean that we didn't need one, but that we shouldn't use one at all. I'd disagree with that too, because it seems to me reasonable in many cases to want to distinguish longtermism from other worldviews EAs often have (e.g., it seems fair to me to say that Open Philanthropy's internal structure is divided on longtermist/not-longtermist lines).

Also, cool image!

"Not longtermist" doesn't seem great to me. It implies being longtermist is the default EA position. I'd say I'm a longtermist, but I don't think we should normalise longtermism as the default EA position. This could be harmful for growth of the movement.

Maybe as Berger says "Global Health and Wellbeing" is the best term.

FWIW my intuition is that if you have a name for a thing, it means the opposite of that is the default. If there's a special term for "longtermist", that means people are not longtermists by default (which I think is basically true—most people are not longtermists, and longtermism is kind of a weird position (although I do happen to agree with it)). Sort of like how EAs are called EAs, but there's no word for people who aren't EAs, because being not-EA is the default.

Yeah I think that’s true if you only have the term “longtermist”. If you have both “longtermist” and “non-longtermist” I’m not so sure.

Maybe we just say "not longtermist" rather than trying to make "non-longtermist" a label?

Either way, I think we can agree to get rid of 'neartermist'.

As a soft counterpoint, I usually find "definition by exclusion" in other areas to be weirder when the more "natural" position is presented as a standalone in an implicit binary, as opposed to adding a "non" in front of it.

Sorry if that's confusing. Here are some examples:


"academia vs industry" vs "academia vs non-academia"
"Jews vs Gentiles" vs "Jews vs non-Jews"

"Christians vs pagans" vs "Christians vs non-Christians"

"nerds vs Muggles" vs "nerds vs non-nerds"
"military vs civilians" vs "military vs non-military"
 

But I'm not sure which way this is going.
By 'the more natural position' do you mean the majority?

"Christians vs pagans" vs "Christians vs non-Christians"

Here, are we assuming a society where Christians are the majority? But in any case, "non-Christians" obviously need not be pagans.

I don't see how it necessarily implies that. Maybe "long-termist-EA" and "non-long-termist-EA"?

Global Health and Wellbeing is not too bad (even considering my quibbles with this). The "wellbeing" part could encompass animal welfare ... and even encompass avoiding global disasters 'for the sake of the people who would suffer', rather than 'for the sake of extinction and potential future large civilizations' etc.

(Added): But I guess GH&W may be too broad in the other direction. Don't LT-ists also prioritize global well-being?

Agree that "neartermism" is lousy. I like "global health and wellbeing" or just "global wellbeing" as improvements.

Possibly better: what I find myself saying is "I focus on doing good now" or "helping people now". I agree it's imperfect in that it's not a single word, but it does have a temporal component to it, unlike "global health and wellbeing".

It still seems like prefixing with "not" runs into defining based on disagreement, where I would guess people who lean that way would rather be named for what they're prioritizing than for what they aren't. I came up with a few (probably bad) ideas in that vein:

  • Immediatists (apparently not a made up word according to Merriam-Webster)
  • Contemporary altruists
  • Effective immediately

I'm relatively new so take my opinion with a big grain of salt. Maybe "not longtermist" is fine with most.

I see your point, but I don't think the non-LTists/neartermists actually do identify as a group along those lines (I may be wrong here). So for me, just "non-LTist EA" seems the right descriptor.

Although "Global Health and Wellbeing" (or maybe just "Global Wellbeing") seem pretty decent.

I could see "Non-LTist EA" as the term to use for precision, and then also identify people by the cause, approach, or moral philosophy they care most about.

To me 'contemporary altruists' suggests people who are alive today and altruistic, in contradistinction to historical altruists in the past, e.g. Katharine McCormick or John D. MacArthur. 

That's a good point, I agree. None of my suggestions really fit very well, it's hard to think of a descriptive name that could be easily used conversationally.

They're good attempts though - I think this is just a tricky needle to thread

I think the problem stems from Will MacAskill's original definition baking in assumptions about tractability [bold added]:

An alternative minimal definition, suggested by Hilary Greaves (though the precise wording is my own), is that we could define longtermism as the view that the (intrinsic) value of an outcome is the same no matter what time it occurs.  This rules out views on which we should discount the future or that we should ignore the long-run indirect effects of our actions, but would not rule out views on which it’s just empirically intractable to try to improve the long-term future. Part of the idea is that this definition would open the way to a debate about the relevant empirical issues, in particular on the tractability of affecting the long run. [...]

In my view, this definition would be too broad. I think the distinctive idea that we should be trying to capture is the idea of trying to promote good long-term outcomes. I see the term 'longtermism' creating value if it results in more people taking action to help ensure that the long-run future goes well.

An alternative minimal definition, suggested by Hilary Greaves (though the precise wording is my own), is that we could define longtermism as the view that the (intrinsic) value of an outcome is the same no matter what time it occurs.

By that token I expect that we nearly all would identify as longtermists. (And maybe you agree, as you say you find the term too broad).

But in the absence of a total utilitarian view, we don't have a very solid empirical case that 'the value of your action depends mostly on its effect on the long-term future (probably through reducing extinction risk)'.

This rules out views on which we should discount the future or that we should ignore the long-run indirect effects of our actions,

To be possibly redundant, I think no one is advocating that sort of discounting

but would not rule out views on which it’s just empirically intractable to try to improve the long-term future. Part of the idea is that this definition would open the way to a debate about the relevant empirical issues, in particular on the tractability of affecting the long run. [...]

Semi-agree, but I think more rides on whether you accept 'total utilitarianism' as a population ethic. It seems fairly clear (to me at least) that there are things we can do that are likely to reduce extinction risk. However, if I don't put a high value on 'creating happy lives' (or 'creating happy digital beings' if we are going full avant garde) I might find it more effective to work to improve the lives of people and animals today (or those likely to exist in the near future).

In my view, this definition would be too broad. I think the distinctive idea that we should be trying to capture is the idea of trying to promote good long-term outcomes. I see the term 'longtermism' creating value if it results in more people taking action to help ensure that the long-run future goes well.

But there are tradeoffs, and I think these are likely to be consequential in important cases. In particular: 'should additional funding go to reducing pandemic and AI risk, or towards alleviating poverty or lobbying for animal welfare improvements?'

(And maybe you agree, as you say you find the term too broad).

To be clear, I'm quoting MacAskill.

However, if I don't put a high value on 'creating happy lives' (or 'creating happy digital beings' if we are going full avant garde) I might find it more effective to work to improve the lives of people and animals today (or those likely to exist in the near future).

Do you see preventing extinction as equivalent to 'creating happy lives'? I guess if you hold the person-affecting view, then extinction is bad because it kills the current population, but the fact that it prevents the existence of future generations is not seen as bad.

I see 'extinction' as doing a few things people might value, with different ethics and beliefs:

  1. Killing the current generation and maybe causing them to suffer/lose something. All ethics probably see this as bad.

  2. Preventing the creation of more lives, possibly many more. So, preventing extinction is 'creating more lives'.

Happy lives? We can't be sure, but maybe the issue of happiness vs suffering should be put in a different discussion?

Assuming the lives not-extincted ergo created are happy, the total utilitarian would value this part, and that's where they see most of the value, dominating all other concerns.

A person-affecting-views-er would not see any value to this part, I guess.

Someone else who has a concave function of happy lives and the number of lives might also value this, but perhaps not so much that it dominates all other concerns (e.g., about present humanity); see the rough sketch below.

  3. Wiping out "humanity and our culture"; people may also see this as bad for non-utilitarian reasons.
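One rough way to write down the three readings of point 2 above is to ask how much the extra future lives add to the welfare function, taking ū as an assumed average well-being per future life and N as the number of extra lives at stake (purely illustrative notation, not anyone's official formulation):

$$\Delta V_{\text{total}} \approx \bar{u}\,N, \qquad \Delta V_{\text{person-affecting}} = 0, \qquad \Delta V_{\text{concave}} \approx \bar{u}\,N^{\alpha}, \quad 0 < \alpha < 1.$$

On the first, the future-lives term grows linearly in N and tends to dominate; on the second, it contributes nothing; on the third, extra lives count, but with diminishing returns, so they need not swamp concerns about the present generation.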

But in the absence of a total utilitarian view, we don't have a very solid empirical case that 'the value of your action depends mostly on its effect on the long-term future (probably through reducing extinction risk)'.

I think this definition just assumes longtermist interventions are tractable, instead of proving it.

My statement above (not a 'definition', right?) is that

If you are not a total utilitarian, you don't value "creating more lives" ... at least not without some diminishing returns in your value. ... perhaps you value reducing suffering or increasing happiness for people, now and in future, that will definitely or very likely exist...

then it is not clear that "[A] reducing extinction risk is better than anything else we can do" ...

because there's also a strong case that, if the world is getting better, then helping people and animals right now is the most cost-effective solution.

Without the 'extinction rules out an expected number of future people many orders of magnitude larger' cost, there is not a clear case that [A] preventing extinction risk must be the best use of our resources.

Now, suppose I were a total population utilitarian. Then there may be a strong case for [A]. But still maybe not; this seems to depend on empirical claims.

To me 'reducing extinction risks' seemed fairly obviously tractable, but on second thought, I can imagine some cases in which even this would be doubtful. Maybe, e.g., reducing risks of nuclear war in the next 100 years actually has little impact on extinction risks, as extinction is so likely anyways?!

Another important claim seems to be that there is a real likelihood of expansion past the earth into other planets/solar systems etc. Yet another is that 'digital beings can have positive valenced existences'.

My statement above (not a 'definition', right?)

I'm referring to this common definition of longtermism:

'the value of your action depends mostly on its effect on the long-term future'

Got it. I’m not sure that this “common definition of longtermism” would or should be widely accepted by longtermists, upon reflection. As you suggest it is a claim about an in-principle measurable outcome (‘value … mostly depends … VMDLT’). It is not a core belief or value.

The truth value of VMDLT depends on a combination of empirical things (e.g., the potential to affect the long-term future, the likely positive nature of the future, …) and moral value things (especially total utilitarianism).[1]

What I find slightly strange about this definition of longtermism in an EA context is that it presumes one does the careful analysis with “good epistemics” and then gets to the VMDLT conclusion. But if that is the case then how can we define “longtermist thinking” or “longtermist ideas”?

By an off-the-cuff analogy, suppose we were all trying to evaluate the merit of boosting nuclear energy as a source of power. We stated and defended our set of overlapping core beliefs, consulted similar data and evidence, and came up with estimates and simulations. Our estimates of the net benefit of nuclear spread out across a wide range, sometimes close to 0, sometimes negative, sometimes positive, sometimes very positive.

Would it then make sense to call the people who found it to be very positive “nuclear-ists”? What about those who found it to be just a bit better than 0 in expectation? Should all these people be thought of as a coherent movement and thought group? Should they meet and coalesce around the fact that their results found that Nuclear>0 ?


  1. But I think there is not a unique path to getting there; I suspect a range of combinations of empirical and moral beliefs could get you to VMDLT… or not ↩︎

Would it then make sense to call the people who found it to be very positive “nuclear-ists”? What about those who found it to be just a bit better than 0 in expectation? Should all these people be thought of as a coherent movement and thought group? Should they meet and coalesce around the fact that their results found that Nuclear>0 ?

Yes, I agree. I think longtermism is a step backwards from the original EA framework of importance/tractability/crowdedness, where we allocate resources to the interventions with the highest expected value. If those happen to be aimed at future generations, great. But we're going to have a portfolio of interventions, and the 'best' intervention (which optimally receives the marginal funding dollar) will change as increased funding decreases marginal returns.

My statement above (not a 'definition', right?) is that

If you are not a total utilitarian, you don't value "creating more lives" ... at least not without some diminishing returns in your value. ... perhaps you value reducing suffering or increasing happiness for people, now and in future, that will definitely or very likely exist...

then it is not clear that "[A] reducing extinction risk is better than anything else we can do" ...

because there's also a strong case that, if the world is getting better, then helping people and animals right now is the most cost-effective solution.

Without the 'extinction rules out an expected number of future people many orders of magnitude larger' cost, there is not a clear case that [A] preventing extinction risk must be the best use of our resources.

Now, suppose I were a total population utilitarian. Then there may be a strong case for [A]. But still maybe not; this seems to depend on empirical claims.

To me 'reducing extinction risks' seemed fairly obviously tractable, but on second thought, I can imagine some cases in which even this would be doubtful. Maybe, e.g., reducing risks of nuclear war in the next 100 years actually has little impact on extinction risks, as extinction is so likely anyways?!

Another important claim seems to be that there is a real likelihood of expansion past the earth into other planets/solar systems etc. Yet another is that 'digital beings can have positive valenced existences'.

But tractability is not a binary (tractable vs. intractable). Rather, we should expect tractability to vary continuously, especially as low-hanging fruit is picked (i.e., we get diminishing marginal returns as funding is directed to an intervention). So the 'best cause' will change over time as funding changes.

I generally see this as broad global development (encompassing anything related to improving the world rather than preventing extinction; some causes/interventions do both of these).

I think neartermist is completely fine. I have no negative associations with the term, and suspect the only reason it sounds negative is because longtermism is predominant in the EA Community.

I don't think it's negative either, although, as has been pointed out, many interpret it as meaning that one has a high discount rate, which can be misleading.

Is it possible to have a name related to discount rates? Please correct me if I am wrong, but I guess all "neartermists" have a high discount rate right?

I believe the majority of "neartermist" EAs don't have a high discount rate. They usually prioritise near-term effects because they don't think we can tractably influence the far future (i.e. cannot improve the far future in expectation). You might find the 80,000 Hours podcast episode with Alexander Berger interesting.

EDIT: neartermists may also be concerned by longtermist fanatical thinking or may be driven by a certain population axiology e.g. person-affecting view. In the EA movement though high discount rates are virtually unheard of.

I agree with JackM.

As somewhat of an aside, I think one might only justify discount rates over well-being as an indirect and probably inadequate proxy for something else, such as:

  • a belief that 'intervention $'s go less far to help people in the future because we don't know much about how to help them'
  • a belief that the future may not exist, and if it's lower probability, it enters less into our welfare function

There is very little direct justification of the claim that 'people in the future' or 'welfare in the future' itself matters less.
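A minimal way to put this formally, assuming a simple discounted welfare sum (the notation here is just illustrative):

$$W = \sum_{t=0}^{T} \beta^{t}\, u_t, \qquad 0 < \beta \le 1,$$

where u_t is aggregate well-being at time t. On the readings above, a β below 1 stands in for things like the probability that the future exists at all, or our diminishing ability to help it effectively, rather than a claim that u_t itself counts for less.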

From reading this and other comments, I think we should rename longtermists to be "Temporal radicalists". The rest of the community can be "Temporal moderates" or even be "Temporal conservatives" (aka "neratermists") if they are so inclined. I attempt to explain why below.

It looks like there is some agreement that long-termism is a fairly radical idea.

Many (but not all) of the so-called "neartermists" are simply not that radical, and that is the reason why they perceive their moniker to be problematic. One side is radical, and many on the other side are just not that radical while still believing in the fundamental idea.

By "radical", I mean believing in one of the extreme ends of the argument. The rest of the community is not on the other extreme end which is what "neartermism" seems to imply. It looks like many of those not identifying as "longermists" are simply on neither of the extreme ends but somewhere in the spectrum between "longtermists" and "neartermists". I understand now that many who are currently termed "Neartermists" would be willing to make expectation value bets on the future even with fairly low discount rates. From the link to the Berger episode that JackM provided (thanks for that BTW!):

"It’s tied to a sense of not wanting to go all in on everything. So maybe being very happy making expected value bets in the same way that longtermists are very happy making expected value bets, but somehow wanting to pull back from that bet a little bit sooner than I think that the typical longtermist might."

So to overcome the naming issue, we must have a way to recognize that there are extreme ends in this argument and also a middle ground. With this in mind, I would rename the current "longtermists" as "Temporal radicalists", while addressing the diversity of opinions in the rest of the community with two different labels, "Temporal moderates" and "Temporal conservatives" (the latter being a synonym for "neartermists"). You can even call yourself a 'Temporal moderate leaning towards conservatism' to communicate your position with even more nuance.

PS: Sorry for too many edits. I wanted to write it down before forgetting it and later realized I had not communicated properly.

The name should capture the idea that the solution is not intended to perpetuate into the very long term, and may serve either only the very short term (e.g., a specific time in the life of one generation, or the entire life of an individual) or the individuals who will exist in the foreseeable future ('medium term'). This reasoning also implies that we would need 3 terms.

  • Solidarity
  • Lasting
  • Locked

Solidarity solutions do not address causes but improve the situations of those negatively affected by occurrences or systems. Examples include feeding refugees or regularly providing deworming pills to affected persons. Lasting solutions address causes and improve systems in a way that is still alterable. Examples include conflict resolution (a peace agreement) or prevention of the human-environment interactions which can cause worm infections (e.g., roads, parks, mechanized farm work, river water quality testing, and bans on swimming in risky areas). Locked solutions are practically[1] unalterable, for example an AI system that automatically allocates a place and nutrition for any actor, or the eradication of worms. These can combine solidarity aspects (e.g., AI that settles refugees) and lasting changes (no worm infections in the foreseeable future).

Still, these names are meant to denote the intent of the solution rather than its impact. For example, a solidarity solution of providing deworming pills can enable income increases and enable generations to pay for deworming drugs through productivity gains, and thus becomes lasting. It may be challenging to think of a solidarity solution that is in fact a locked one. For instance, if someone eradicates worms, then that addresses the cause, so it is not a solidarity solution - the solution should be classified objectively. A program intended to last but not to be locked in can become practically unalterable, for example a peace agreement which is later digitized by AI governance. So, intent can be 'one step below' the impact but not two steps below. By definition, solutions that can be classified at one of the three levels cannot be classified at any level below or above.

From this writing, it is apparent that all three kinds of solution (solidarity, lasting, and locked) are possible. I would further argue that it may be challenging to implement malevolent lasting and locked solutions in the present world, because problems compel solving. Benevolent solutions may be easier to make lasting and locked, because no one would intend to alter them. Of course, this allows for desired dystopias, which one should especially check for, as well as for lasting and locked solutions that are suboptimal for those who are not considered (no need to participate), so one should always keep checking for more entities to consider and build this consideration into any lasting and locked solutions.

  1. Locked solutions could be altered, but that would be unrealistic (who would want no place to stay when the alternative is possible, or worms that cause schistosomiasis?). ↩︎

I agree the name is non-ideal, and doesn't quite capture differences. A better term may be conventionalists versus non-conventionalists (or to make the two sides stand for something positive, conventionalists versus longtermists).

Conventionalists focus on cause areas like global poverty reduction, animal welfare, governance reforms, improving institutional decision making, and other things which have (to some extent) been done before.

Non-conventionalists focus on cause areas like global catastrophic risk prevention, s-risk prevention, improving our understanding of psychological valence, and other things which have mostly not been done before, or at least have been done comparatively fewer times.

These terms may also be terrible. Many before have tried to prevent the end of the world (see: Petrov), and prevent s-risks (see: effort against Nazism and Communism). Similarly, it's harder to draw a clear value difference or epistemic difference between these two divisions. One obvious choice is to say the conventionalists place less trust in inside-view reasoning, but the case that any particular (say) charter city trying out a brand new organizational structure will be highly beneficial seems to rely on far more inside-view reasoning (economic theory for instance) than the case that AGI is imminent (simply perform a basic extrapolation on graphs of progress or compute in the field).

I agree that conventionalists versus non-conventionalists may be a thing, but I don't think this captures what people are talking about when they talk about being a longtermist or not a longtermist. This seems like a different axis.

"To define is to limit."
—Oscar Wilde

Let's be agnostic on near/long-termist definitions and try to do good.