I've noticed that the EA community has been aggressively promoting longtermism and longtermist causes:

  • The huge book tour around What We Owe the Future, which promotes longtermism itself
  • There was a recent post claiming that 80k's messaging is discouraging to non-longtermists, although the author has since deleted it (Benjamin Hilton's response is preserved here). The post observed that 80k lists x-risk-related causes as "recommended" causes, while neartermist causes like global poverty and factory farming are only "sometimes recommended". Further, in 2021, 80k put together a podcast feed called Effective Altruism: An Introduction, which many commenters complained was too skewed towards longtermist causes.

I used to think that longtermism is compatible with a wide range of worldviews, as these pages (1, 2) claim, so I was puzzled as to why so many people who engage with longtermism could be uncomfortable with it. Sure, it's a counterintuitive worldview, but it also flows from such basic principles. But I'm starting to question this: longtermism is very sensitive to the rate of pure time preference, and recently some philosophers have started to argue that a nonzero rate of pure time preference can be justified (see section 3, "Beyond Neutrality", here).
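
To get a feel for that sensitivity, here's a toy calculation (the discount rates are made-up numbers for illustration, not anyone's actual model): under a pure time preference of delta per year, a person living t years from now gets weight (1 + delta)^-t, so even a seemingly small delta nearly wipes out the far future.

```python
# Toy sensitivity check (illustrative discount rates only): how much weight
# does a person living t years from now get under a pure time preference of
# delta per year?

def weight(delta, years):
    """Discount factor applied to someone living `years` from now."""
    return (1 + delta) ** -years

for delta in (0.0, 0.001, 0.01):
    print(
        f"delta = {delta:.3f}: "
        f"weight at 100 yrs = {weight(delta, 100):.3f}, "
        f"at 1,000 yrs = {weight(delta, 1000):.2e}, "
        f"at 10,000 yrs = {weight(delta, 10000):.2e}"
    )

# delta = 0 counts every future person fully; at delta = 1% per year, someone
# 1,000 years out counts for roughly 5e-5 of a present person, so the case for
# prioritizing the far future largely collapses.
```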

By contrast, x-risk as a cause area has support from a broader range of moral worldviews.

Maybe it's better to take a two-pronged approach:

  • Promote x-risk reduction as a cause area that most people can agree on; and
  • Promote longtermism as a novel idea in moral philosophy that some people might want to adopt, but be open about its limitations and acknowledge that our audiences might be uncomfortable with it and have valid reasons not to accept it.

Comments (17)

I still feel it is compatible with a large range of moral views. I remember having a conversation at an EAG a few years ago with:

  1. a global health EA
  2. me, someone who had been trying to be a cause-neutral utilitarian focused on animal welfare before finding EA, and who was becoming more interested in longtermist causes after being educated on them
  3. a longtime EA who appeared to have climbed the ramp over the years (in parallel with the EA movement) from global poverty, to animal welfare, to longtermist causes

And the convo went a little like this:

GH-EA: It's pretty hard for me to take longtermism seriously... helping people who don't exist yet... I don't really feel like I'm "helping" or could be
LT-EA: Yeah it is hard. It was hard for me too and sometimes it's still hard although it makes sense. I try to keep myself motivated anyway though [assume he was talking about warm fuzzies vs utilons, working on a good team, etc]
ME-EA: Oh really? I think I don't find it hard at all. In animal welfare I never imagined I was helping present animals. I always realized we weren't going to free farm animals, but prevent farmed animals from being born and living lives of suffering. I never thought I was helping anyone to exist, but I was changing the future to be better. 
All 3 of us:  Huh. [Note: the LT-EA still seemed to find this more interesting than the GH-EA]

I've thought about that convo a lot since then and feel it offers a clue... like some people come into the movement primed in some way, but others don't. Or the EA movement itself, or the experience of spending time in the movement, primes some people in some way but not others. But I can't really put my finger on it, and I assume there are plenty of ways to be primed vs. not. And it's hard to tell who is who, or where they or you will end up.

That said it's kinda moot because now I believe (it sounds like you do too) that focusing on "longtermist causes" is probably also the best way to help nearterm people. I do want people to expand their moral circle to future sentient beings (of all kinds), but the most important thing is that the work gets done. So I don't talk that much about longtermism anymore. I don't think it's been very catalyzing to action, and I want neartermists and even self-interested people involved too.

And on the other side of the coin: let's not act like longtermism is so rare... I mean, it's basically why you've got parents saying "my altruism is raising my kid", and why we've got the global push around climate change as a cause of human extinction. Most people already believe in longtermism without ever having heard the word. It makes sense evolutionarily that we'd care about the future of our own species, I think. Tradeoffs between then-present health and wealth vs. future health and wealth were made many times in our evolutionary path, and those who prioritized [more descendants in future] got [more descendants in future] lol. There may be some people who kinda missed that code, but I think it's rare, and we should feel pretty confident it's rare.

Fwiw, my impression is that within the EA community, endorsing a nonzero pure time preference is a minority reason why people don't agree with longtermism. Doubts about tractability seem much more important.

Outside the EA community, I think Torresian arguments about fanaticism dominate.

What are "Torresian arguments about fanaticism" if you don't mind? >.>

Namely, that longtermism is bad because it advocates for drastic actions (such as setting up a global police state) based on small probabilities of large payoffs.
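
A toy expected-value calculation (with numbers invented purely for illustration) shows the shape of the worry: a minuscule probability of an astronomically large payoff can swamp a sure thing in naive expected value.

```python
# Toy numbers, invented for illustration: a tiny probability of an
# astronomically large payoff still dominates naive expected value.
p_success = 1e-10           # chance the drastic intervention works
future_lives = 1e30         # hypothetical lives in a vast long-run future
lives_saved_now = 1e6       # a sure-thing neartermist alternative

ev_drastic = p_success * future_lives   # 1e20 expected lives
ev_sure = 1.0 * lives_saved_now         # 1e6 expected lives

print(f"EV of drastic long-shot: {ev_drastic:.1e}")
print(f"EV of sure thing:        {ev_sure:.1e}")
# Naive expected value favors the long shot by 14 orders of magnitude,
# which is the intuition the fanaticism objection pushes back on.
```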

I'm very very skeptical of longtermism in a practical sense, but not for the reason described here.

It's like watching people try to come up with rules about flight safety before powered flight, with some people arguing about lifting gasses, others worrying about the muscle strains that will occur when everyone has to turn the hand cranks that power the aircraft, and yet others concerned about the potential for an aircraft to accidentally fly too close to the sun. Even if one person started thinking about a real risk (for example, landing too hard), how would they even come up with a solution without knowing what a plane looks like or what a control surface is?

I think most people who are skeptical of longtermism have reasoning somewhat similar to mine.

I agree. I think 80k et al. pushing longtermist philosophy hard was a mistake. It clearly turns some people off, and it seems most actual longtermist projects (e.g. around pandemics or AI) are justifiable without any longtermist baggage.

Recently I learned I wasn't a longtermist because I don't really buy our ability to forecast more than about 5 years in the future. So sure, I'd like to help future people but I've got no idea how, short of us not killing ourselves in the next 5 years. Supposedly this makes me not actually a longtermist???

You probably agree with me that (a) we can't know whether it will rain on 2/10/2050 and (b) we can be pretty sure that there will be a solar eclipse on 7/22/2028. You are actively participating in a prediction market, so you seem to believe in some ability to forecast the future better than a magic 8 ball. 

Where do you think the limits are to what kinds of things we can make useful predictions about, and how confident those predictions can be?

Yeah I guess the limits come down to the actions of others. I can't see any human affecting the eclipse, but I can see humans negating or misdirecting any actions I take toward the future. And the more humans there are, the more likely that becomes. Like, the more people there are, the more likely any little trickle of a river you try to send into the future via society... gets stepped on, blocked, or muddied somehow.

I think that makes you a longtermist though... having read What We Owe the Future anyway, unless I missed something:

I think a longtermist would say that the effects on future moral patients should dominate our moral calculus due to their vast number, not necessarily that they can do so right now. But we should keep an eye out for ways to impact the long-run future positively, and take such a chance if we ever see it. Some people think they see the chance now, so they are taking it.* Maybe you will never see something plausible-to-you within your lifetime. But that doesn't mean a chance will never occur.

For example, if we could run amazing simulations to test long-run outcomes, I think a longtermist, if they believed in the tech, would want to give the best-predicted long-term action a go (like, the best sum of experiences had over time, summed at the end), using neutral moral weights for beings living today vs. 3 years from now vs. 1000 years from now. Of course there will be wider ranges and larger confidence intervals, but you'd factor those in too when deciding which option to take. By contrast, a neartermist would add extra moral weight to the consequences and experiences of beings existing in the near term, on top of the differing ranges and confidence intervals, which are just sensible to use for both neartermists and longtermists.

*I'll note that some extinction risks seem to me to have a low enough chance of happening that they may not be worth working on unless you do add neutrally-weighted future generations into the calculus. But it depends on the moral discount rate you'd use: if your expected rate of population growth is large (assuming everything goes well) and your moral discount rate is gradual enough, you might still end up preferring a "longtermist" intervention, even though you might not really be a true "longtermist" philosophically, because you are still claiming that future generations are morally worth less (regardless of confidence); focusing on the long term just passed your bar anyway because of the scale.
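
Here's a rough sketch of that footnote's arithmetic (the growth and discount rates are invented for illustration): if expected population growth outpaces the moral discount rate, the discounted total of future people still grows with the time horizon, so a long-run focus can pass the bar on scale alone.

```python
# Toy numbers for illustration: with population growth faster than the moral
# discount rate, discounted "population-years" keep growing, so a long-run
# intervention can still win on scale despite nonzero discounting.

def discounted_population_years(horizon, pop_growth, discount_rate, initial_pop=1.0):
    """Sum of discounted population over each year of the horizon."""
    return sum(
        initial_pop * ((1 + pop_growth) / (1 + discount_rate)) ** t
        for t in range(horizon)
    )

short = discounted_population_years(horizon=100, pop_growth=0.005, discount_rate=0.001)
long_ = discounted_population_years(horizon=10_000, pop_growth=0.005, discount_rate=0.001)

print(f"Discounted population-years over 100 years:    {short:,.0f}")
print(f"Discounted population-years over 10,000 years: {long_:,.0f}")
# The 10,000-year total dwarfs the 100-year total, which is the footnote's
# point: a gradual discount rate need not rule out "longtermist" priorities.
```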

Longtermism includes the claim that improving the future is tractable. I think that was probably a mistake, and it should just be a claim about values.

Daaaang, yeah, that seems wrong to me too, unfortunately. I can imagine we have passed the threshold of coordination and technology where changing the long-run future somewhat predictably is now a tractable cause (and I can also imagine we haven't), but I'm pretty sure there were many years in history where it would have been impossible to predict past 5 years ahead. It seems to me that a philosophical position (which longtermism claims to be) should have been able to exist then too. But if it included tractability, that position was likely impossible to hold (correctly) at some moments in history, or in many single-actor thought experiments. But I'm no philosopher.

I am unsure if that makes you not a longtermist. My understanding of the basic longtermist claim is as follows: 

  1. Future individuals matter as much as those in the present
  2. We should do something to positively shape their lives

I think this has little to do with forecasting accuracy, although if we were really good at that, it would help us better outline the something in the second point. I think the major part of it is about how we assign moral value to those who will exist in the future.

I think he's saying that an important third element is that "We can do something to positively shape their lives".

Link to Benjamin Hilton's response doesn't work for me.

Looks like the EA Forum changed my GreaterWrong link to an EA Forum one; try this link: https://ea.greaterwrong.com/posts/yBPyByccETmHmaByn/could-80-000-hours-messaging-be-discouraging-for-people-not/comment/foLibAjhHvtebYbk7
