
TL;DR I propose tweaking the longtermism pitch to focus on how it builds on an already-common practice of making decisions with concern for future people. 

Thanks to Max Clarke, Ben Wylie-van Eerd, and Tyrone Barugh for feedback. All views are my own. 

Epistemic status: This post is a recommendation based on what I feel when I read articles introducing longtermism to a general audience. I know some people feel similarly and others feel differently. I suspect this is because of differing norms in philosophy arguments vs. wider media. I am reasonably confident (75%) that the changes I propose would make longtermism seem more approachable for someone who has never heard of EA or longtermism before, and hasn’t studied philosophy. On the other hand, I believe the changes might decrease the perceived integrity of the argument to readers who seek ideas that are built from first principles, or who have studied philosophy. 

Intro: 

When I read intro articles on longtermism, there’s often something about the tone of the argument that bugs me. I believe there is a missed opportunity to connect with a wider range of readers. It’s also a missed opportunity to build connections and a sense of kinship with others who (at least partially) agree with longtermist values. 

I make two main arguments in this post: 

  1. Longtermism could be (and usually is) introduced as “caring about future people” x “the future could be extremely long and big.”
  2. “Caring about future people” is already quite a common concept and it feels condescending to imply otherwise.

Finally, I suggest one way the “intro to longtermism” pitch could be adapted to connect with a slightly wider audience. 

How I interpret longtermism:

In his recent New York Times article, William MacAskill introduces longtermism as 

(1) "The idea that positively influencing the long-term future is a key moral priority of our time.”

My working definition of longtermism for this post is MacAskill’s definition, plus the following two premises that are used to derive it:

(2) People who live in the future are as morally relevant as people who are alive today.

(3) Humanity is extremely young when considered in comparison to the potential timeline of humanity.

What is and isn't new about longtermism:

It seems (to me) that when people introduce longtermism, all three of these concepts are pitched as ‘new’ to the reader and are explained from the ground up. Instead, the author could choose to pitch only specific parts as new to the reader. I say this because, in a range of non-EA communities in my life, point (2) is actually quite commonly accepted. For example, in public discourse on climate change, people talk about taking action to improve the lives of people one or two hundred years from now[1]. In many indigenous communities, decisions are made with explicit consideration of how they will affect descendants.

When I read something that implies point (2) is new to the audience, it feels gently condescending towards these other communities. It gives the impression that the author is claiming this as a novel idea of their own, without realizing that many others also use this idea to inform their decisions. It makes me wonder how far and wide the author listens to others, and therefore whether their theories have sprung from a widely informed worldview, or from an ivory tower. 

This isn't a strong, angry reaction; it's just a quiet question in the back of my mind that I dismiss pretty quickly. Whenever I get this impression, I assume that the author doesn't intend to make this claim at all. I’m not claiming that my impression is accurate; I’m merely trying to describe why this particular way of talking about longtermism makes me feel slightly uncomfortable. Some of my friends have said they don't get this impression at all; others have said they disengage with academics precisely because they get this impression from conventions in academia. I worry that this type of over-explaining in non-academic contexts might turn some people off from engaging with the actual ideas of longtermism.

Quick aside: I do believe that longtermism is genuinely making a new argument, by asking people to consider “future people are morally relevant” in combination with “humanity is extremely young compared to our potential length of existence.” I just wince slightly when it seems that “we should care about future people” is being presented as a novel idea. 

I’ve been mulling over how I might phrase it differently. I propose being clearer about which ideas the author is presenting as new, and more explicit about the ways longtermism is similar to existing, common philosophies.

What might this look like? 

I propose a slight change to the longtermism intro pitch to acknowledge that caring for future people is a common concept across many spheres of humanity. This would position longtermism as a collaboration with other philosophies, rather than a new and distinct philosophy. 

For example, in the following excerpt (taken from MacAskill’s NY Times article mentioned above), the middle paragraph uses a small scenario to introduce the reader to the idea that future people count. My uncharitable reading is that the author thinks “future people count” is a new idea to the reader that should be obvious if the reader took the time to think about it.

But some simple ideas exerted a persistent force on my mind: Future people count. There could be a lot of them. And we can make their lives better. To help others as much as possible, we must think about the long-term impact of our actions.

The idea that future people count is common sense. Suppose that I drop a glass bottle while hiking. If I don’t clean it up, a child might cut herself on the shards. Does it matter when the child will cut herself — a week, or a decade, or a century from now? No. Harm is harm, whenever it occurs.

Future people, after all, are people. They will exist. They will have hopes and joys and pains and regrets, just like the rest of us. They just don’t exist yet.

The same argument could instead be presented as follows:

But some simple ideas exerted a persistent force on my mind: Future people count. There could be a lot of them. And we can make their lives better. To help others as much as possible, we must think about the long-term impact of our actions.

We already do this, of course. We often act out of care for our descendants, or the next generation. We try to leave them a world that's better than the one we have today. When we discuss climate change, we frame it with questions like "What will this mean for people born a hundred years from now?" The seventh generation principle and similar concepts from other indigenous cultures ask that decisions consider how people in the long-term future will be impacted.

Longtermism combines this empathy for future people with a consideration of the potentially thousands, and hopefully millions, of years that humanity may continue to exist. All these people will have hopes and joys and pains and regrets, just like the rest of us. The actions we collectively take today could dramatically shape the world in which they live, and the opportunities they have to lead a fulfilling life.

To me, this revised introduction makes it clear that longtermism is a new concept that builds on concepts the reader might already be familiar with. I would anticipate that it comes across as warmer and more approachable to anyone who has a reaction similar to the one I described above.

Thoughts? 

  1. ^

    I know this isn't what’s typically considered “long term” in longtermism, but it is reasonably long term in the context of public discourse.


Comments:

I'm broadly on board with the points made here, but I would prefer to frame this as an addition to the pitch playbook, not a tweak to "the pitch".

Different people do need to hear different things. Some people probably do have the intuition that we should care about future people, and would react negatively to something like MacAskill's bottle example. But personally, I find that lots of people do react to longtermism with something like "why worry about the future when there are so many problems now?", and I think the bottle example might be a helpful intuition pump for those people.

The more I think about EA pitches the more I wonder if anyone has just done focus group testing or something...

I like the example in the original about the glass bottle, since it's something concrete that people can envision, have likely seen before, and can relate to. 

I also think it's a good idea to connect caring about the future to existing worldviews since many of them do emphasize it and this could make it seem even more relevant and worth acting on to people. 

Unfortunately, caring about even one generation into the future, let alone several thousand or more, is already a big shift in Western Culture. If you can get Westerners to care about one generation in the future, you've mostly won. Also, I'll give some people credit for thinking about 100-200 years in the future, but how many truly think like this? That's the question. There's a large inferential gap that needs to be bridged here.

Yeah, there may be quite a gap between "what people say they consider valuable when making decisions" and "what people seem to value, based on the decisions they make."

The intention of this post is more around a change to communication style that may make the reader more open to the message when they first hear it.

My intuition, based on common psychology and communications principles, is that Westerners would still be more amenable to "we already say we value this" than "we are extremely poor at this," even if the "we" is a wider societal "we" that doesn't end up including the individual reader.

Perhaps what I undervalued in the bottle example is that (as in Peter's comment) it gives a concrete image for people to start bridging that gap, as you say. It's not about getting the reader into a state where they are amenable to the general message; it's about convincing the reader that they can care about the far future.

It seems to me that there's another aspect to longtermism, an explicit formulation of future lives as having measurable importance. 

Longtermists seem to think that maximizing the number of future people is a moral activity, that is, that the more people there are, the greater the altruism of the outcome, all other things equal (that the people have happy lives, etc).

Longtermism allows a comparison between the number of present lives and future lives. There are plausible contexts, to do with provision of resources, that force a compromise between altruism toward present lives and altruism toward future lives (say, 200 years in advance). 

Therefore, longtermist EA plans can emphasize the welfare of a larger number of nonexistent people over the welfare of a smaller number of existent people (including babies in the womb).

I don't believe that a nonexistent person who is not considered certain to exist in the future (preconception, before the meeting of sperm and ovum) has a moral status measurable against that of a present person, who can plausibly be expected to exist in the future. Notice the kinds of absurd plans that such a belief supports, if you believe that the future allows for a potentially vast number of future people.

For example, in a situation where humanity faces existential risk, the expectation that there will be many future people is no longer certain. In that context, longtermism devolves into a plan to ensure that the future will contain future people, and a lot of them, probably at the expense of people who are voluntarily sterile (such as myself), too old to have children, or children not yet close to reproductive age.

I see this taking shape in longtermist goals to ensure that far-off and unlikely futures occur (for example, that we become a space-faring people numbering in the trillions, or that we develop technology that supports the lives of digital people) at the expense of actions with more probable outcomes (for example, alleviating global poverty). In context, these goals seem harmless and a matter of personal interest; for example, plenty of people and resources still work to alleviate global poverty despite longtermist efforts drawing resources away from that cause. However, the moral framework justifying these goals, built on expected utility calculations not bound by common sense, becomes harmful when humanity faces a genuine existential crisis and resource constraints force real compromises.

NOTE: I made some light edits to this for clarity some 13 hours after the original post; unfortunately I cannot improve it much more, sorry.
