[Cross-posted from the 80,000 Hours blog]
I find working on longtermist causes to be — emotionally speaking — hard: There are so many terrible problems in the world right now. How can we turn away from the suffering happening all around us in order to prioritise something as abstract as helping make the long-run future go well?
A lot of people who aim to put longtermist ideas into practice seem to struggle with this, including many of the people I’ve worked with over the years. And I myself am no exception — the pull of suffering happening now is hard to escape. For this reason, I wanted to share a few thoughts on how I approach this challenge, and how I maintain the motivation to work on speculative interventions despite finding that difficult in many ways.
This issue is one aspect of a broader issue in EA: figuring out how to motivate ourselves to do important work even when it doesn’t feel emotionally compelling. It’s useful to have a clear understanding of our emotions in order to distinguish between feelings and beliefs we endorse and those that we wouldn’t — on reflection — want to act on.
What I’ve found hard
First, I don’t want to claim that everyone finds it difficult to work on longtermist causes for the same reasons that I do, or in the same ways. I’d also like to be clear that I’m not speaking for 80,000 Hours as an organisation.
My struggles with the work I’m not doing tend to centre around the humans suffering from preventable diseases in poor countries. That’s largely to do with what I initially worked on when I came across effective altruism. For other people, it’s more salient that they aren’t actively working to prevent the barbarity of some factory farming practices. I’m not going to talk about all of the ways in which people might find it hard to focus on the long-run future — for the purposes of this article, I’m going to focus specifically on my own experience.
I feel a strong pull to help people now
A large part of the suffering in the world today simply shouldn’t exist. People are suffering and dying for want of cheap preventative measures and cures. Diseases that rich countries have managed to totally eradicate still plague millions around the world. There’s strong evidence for the efficacy of cheap interventions like insecticide-treated anti-malaria bed nets. Yet many of us in rich countries are well off financially, and spend a significant proportion of our income on non-necessity goods and services. In the face of this absurd and preventable inequity, it feels very difficult to believe that I shouldn’t be doing anything to ameliorate it.
Likewise, it often feels hard to believe that I shouldn’t be helping people geographically close to me — such as homeless people in my town, or people who are being illegitimately incarcerated in my country. It’s hard to deal with there being visible and preventable suffering that I’m not doing anything to combat.
For me, putting off helping people alive today in favour of helping those in the future is even harder than putting off helping those in my country in favour of those on the other side of the world. This is in part due to the sense that if we don’t take actions to improve the future, there are others coming after us who can. By contrast, if we don’t take action to help today’s global poor, those coming after us cannot step in and take our place. The lives we fail to save this year are certain to be lost and grieved for.
Another reason this is challenging is that wealth seems to be sharply increasing over time. This means that we have every reason to believe that people in the future will be far richer than people today, and it would seem to follow that people in the future don’t need our help as much as those in the present. There is no analogue in the case of helping people far away geographically.
The arguments for longtermism aren’t emotionally compelling to me
The reasons we have for improving the lives of those currently alive are emotionally gripping. That’s in part because these are clearly important duties weighing on us, whose force can be vitiated only by some even stronger duty. By comparison, the case for focusing on the longer term feels far more speculative, and relies on careful weighing of complex arguments.
Below I sketch out how I see the arguments for longtermism, and why — despite being convinced of them intellectually — they don’t diminish my sense that we should be alleviating present suffering instead. I’d like to note that this isn’t intended to be a rigorous statement of why we should focus on longtermist causes (which 80,000 Hours has written about elsewhere).
The future of sentient beings is potentially unimaginably large. That means if we have only a very small chance of affecting it in a lasting and positive way, taking that chance is worth it.
One way in which we could affect the long run is by preventing the extinction of all life. The fact that present people could potentially wipe out everyone to come means it isn’t true that the people who come after us will have the chance to improve the future if we don’t. It also makes irrelevant the fact that people in the future could be richer than us.
There may also be ways that the value in the future can be irreversibly curtailed due to the lock-in of a totalitarian regime rather than an extinction event. That suggests future people may well exist, but be very badly off without our intervention.
These terrible outcomes do seem possible to me. They seem to be the kinds of risks we should be investigating, to figure out whether we can reduce them. And in fact there are many reasons to think that society is usually bad at handling these types of risks: Businesses have incentives to make money in the short run, politicians want to get re-elected in the next couple of years, and individuals tend to be bad at planning (even for their own futures!).
The arguments above make sense to me and I believe them. I believe I ought to prioritise working on improving the long-run future.
Despite this, the arguments still feel speculative. And even if they’re right, there’s no guarantee that I’ll actually have any impact by e.g. improving the representation of future generations in our legislation, or by increasing the body of good global priorities research — let alone by simply trying to do either one of those. I just have to place a bet on being able to make a big positive difference, even though I know it might not work. That makes choosing to do these things — rather than e.g. donate to bednet distribution — feel uncomfortably like gambling with the lives of others.
How I handle that difficulty
Given these problems, it sometimes feels hard to be motivated to do what I think I ought to. One thing I’m heartened by is that working on the long run feels hard in precisely the way I think we should expect effective altruism to feel hard: The more salient a particular problem is — and the more compelling working on it seems — the more we should expect it to already have people tackling it. So I should expect working on the most pressing problems not to feel as intuitively urgent and important as working on some other problems. If it did, it would be less neglected.
What makes the most difference in my motivation day to day is being part of a team I deeply respect and care about. My drive to make those around me happy and to not let down my colleagues makes it easy to work hard. They don’t necessarily need to share my values — if I were earning to give, and needed to do my job well in order to maintain (and increase!) my income, I expect it would very much help me to have colleagues who cared about working to a high standard and the success of the company. In order to avoid letting them down, I imagine I’d be motivated to work hard and do my part.
Another thing which makes a significant difference to my motivation is continuing to think and talk about arguments around what causes and interventions are most pressing. One way I do this is to articulate intuitive worries I have that I’m not working on the right thing as they come up, and debate them with people who have similar values to me. Doing that helps me to get a sense for which of my views feel intuitive but I don’t ultimately believe, and which I actually endorse and can defend.
I also try to keep reading and engaging with arguments that indicate that I should work on other problems. It’s particularly important to keep questioning and fleshing out counterintuitive beliefs, because you can’t rely on your gut to tell you when you’re getting carried away (it already thinks you’re off course!).
That said, it would be disorienting and demotivating to be continuously questioning your direction or work. An important time to do this might be when you're about to embark on a new project, or change direction significantly. (Although I also quite enjoy keeping track of interesting new arguments as they come up, for example on the EA Forum.)
For me, it has also been helpful to make concrete commitments to do what’s most effective. I’m a member of Giving What We Can, which means I’ve pledged to give 10% of my income to the organisations I believe can most effectively improve the world. I actually tend to donate a bit on top of my pledge each year — some to an animal welfare organisation to offset eating meat, and some to a global development organisation (typically the Against Malaria Foundation) because I hate the idea of not doing anything to reduce global poverty. But I always give my 10% to the organisations that I think on balance will do the most good in expectation, because I promised I would.
A technique I have more mixed feelings about is making the harms or lack of benefits in the future feel more concrete. For example, I might imagine that humanity is extinguished in a man-made pandemic as a result of reckless biowarfare, and that the accessible universe then remains empty of intelligent life for eons. Thinking about examples like this gives my intuitions something to latch onto, and reminds me that future harms will be no less real to those experiencing them than present harms.
One of my reservations with this approach is that because there are so many possible terrible outcomes for the world, it seems potentially misleading to latch on to any specific one. Doing so might affect your actions in ways you didn’t intend. One possible way to avoid that might be to try to picture a concrete positive outcome: Set your sights on a world of flourishing beings spread across the universe. Personally, I tend to find that less motivating, in part because I think that as we’re currently constituted, living beings have a far greater capacity for pain than pleasure.
With all the above techniques, I think it really helps to have others around you who are thinking in similar ways — you can share concrete suggestions about what works, and feel the relief of knowing you’re not the only one finding things hard. Being part of the effective altruism community makes a big difference for me in these ways, whether that’s online (for example, the EA Forum) or in person (I’ve been lucky enough to usually live somewhere with a thriving local EA group).
When I’m really struggling to do the right thing, I come back to the fact that with all the uncertainty around longtermism, there is one thing I’m sure about: I care about people in the future, just like I care about people now. I would send a bednet to protect a baby, even if the baby wasn’t yet conceived, and I would train a paediatrician now for the benefit of children for decades to come.
There are so many possible people in the future who have no ability at all to advocate for themselves. Society as it stands is essentially entirely ignoring them. I can’t see those people in pictures, and I have no idea which things will actually afflict them, or if they’ll ever get to live. But I can use my career to try to make things better for them, in expectation. And I believe that’s what I should do.
This work is licensed under a Creative Commons Attribution 4.0 International License.
Nice piece! Though this does not work for all longtermist interventions, some find it motivating that AGI safety, alternative foods, and interventions for losing electricity/industry (and probably other interventions) likely save lives in the present generation more cost-effectively than GiveWell top charities. This book argues that doing more to mitigate catastrophes can be justified by concerns of the present generation.
I do find it plausible that those interventions (or other longtermism/x-risk/GCR-related interventions) save lives in the present generation more cost-effectively than GiveWell top charities do.
In addition, a substantial chunk of my 2020 donations went to ALLFED (though this was motivated by longtermist rather than near-termist considerations).
That said, I think the following posts provide useful arguments for being skeptical about claims like the claim that those interventions save lives in the present generation more cost-effectively than GiveWell top charities do:
(I'm not necessarily saying I disagree with your claim; I just think it seems useful for these or similar links to accompany that claim.)
I'm also strongly sympathetic to longtermism. But I don't know that the dilemma needs to be framed as an either/or. Many of the endeavors I've personally gotten behind—bringing new reversible male contraceptives to market and fundamentally improving elections—impact the short-to-mid-term future as well as the long-term future.
That's because these interventions can have a positive impact now, and their staying power carries that impact into the future. That contrasts with interventions that deal with consumables, or models where you have to keep adding the same large inputs to sustain future good.
Of course, these interventions aren't the only ones. We can think of others, such as charities like SENS, which fits the category because any technology developed doesn't go away and creates benefits into the far future. The Good Food Institute has many of these features as well, because it focuses on technology that can permanently affect the market yet can also help people and animals now.
Of course, one may argue that these types of interventions are somewhat redundant, given that they could happen anyway. Even if that's the case, speeding up their timeline helps many people (or other sentient beings) who otherwise may not have been helped at all, or who wouldn't have been helped to the same extent, given how far off the intervention would otherwise have been.
Perhaps this is a way to have your cake and eat it too: focus on interventions that affect both the near term and the future, rather than just the future. This way, you're also much more likely to see the interventions blossom, or at least see their buds begin to form, during your lifetime. And getting to see at least part of the excitement firsthand is a nice bonus.
[This comment just responds to one part of what you said, not the whole thing]
I do think that that can be valuable, but I personally expect that changing "where civilization ends up" is much more important than changing how fast we get there. To expand on my thinking on this, here's a section from a post I drafted last year but keep not quite getting around to polishing + publishing:
---
Beckstead writes that our actions might, instead of or in addition to “slightly or significantly alter[ing] the world's development trajectory”, speed up development:
Technically, I think that increases in the pace of development are trajectory changes. At the least, they would change the steepness of one part of the curve. We can illustrate this with the following graph, where actions aimed at speeding up development would be intended to increase the likelihood of the green trajectory relative to the navy one:
This seems to be the sort of picture Benjamin Todd has in mind when he writes:
However, I think speeding up development could also affect “where we end up”, for two reasons.
Firstly, if it makes us spread to the stars earlier and faster, this may increase the amount of resources we can ultimately use. We can illustrate this with the following graph, where again actions aimed at speeding up development would be intended to increase the likelihood of the green trajectory relative to the navy one:
Secondly, more generally, speeding up development could affect which trajectory we’re likely to take. For example, faster economic growth might decrease existential risk by reducing international tensions, or increase it by allowing us less time to prepare for and adjust to each new risky technology. Arguably, this might be best thought of as a way in which speeding up development could, as a side effect, affect other types of trajectory change.
---
(The draft post was meant to be just "A typology of strategies for influencing the future", rather than an argument for one strategy over another, so I just tried to clarify possibilities and lay out possible arguments. If I was instead explaining my own views, I'd give more space to arguments along the lines of Todd's.)
I know it's a struggle to balance polishing and publishing — I find that balance challenging myself. But I'd love to read your post when you have it fully up. I think a lot of us are curious about the interaction between longtermism, immediacy, and philanthropic investment.
I figured some people might be interested in whether the orientation toward longtermism that Michelle describes above is common at EA orgs, so I wanted to mention that almost everything in this post could also be describing my personal experience. (I'm the director of strategy at 80,000 Hours.)
I appreciate how this essay acknowledges a missing mood that I have often felt about longtermist interventions. It makes it hard for me to consider shifting donations from global poverty to longtermism, because that feels like taking something away from people who need it right now. I don't always find longtermist arguments persuasive, but hearing that you struggle with that feeling and prefer long-term interventions anyway makes it easier to consider them.
I find it quite hard to talk about longtermism. It often feels like I'm getting into a conspiracy theory group, especially since most of my friends are hardcore leftists who believe global inequality, climate change and factory farming are by far the most pressing problems. Telling a person like that that I consider AI-related catastrophes more pressing than people dying today from preventable diseases makes me feel like a complete as***le and a conspiracy theorist. On the other hand, I think longtermists are right, and I want to tell people that and share my beliefs and decisions. Besides that, nice piece.
Thanks for sharing your thoughts. I deeply empathize with the pull to help people now, while rationally agreeing with longtermist arguments. One insight that really stood out to me was:
This intuitively makes sense to me, particularly within the framing of neglectedness. However, I do wonder whether the opposite could sometimes be true: whether the sense of something feeling hard may be our intuition signalling that the work doesn't align with our personal ethics (for whatever reason).
I have personally struggled with this. I am trying to give my intuition more weight relative to pure rational reasoning, as I have found that I can often learn a lot about my moral intuitions by stepping back and trying to unpack the discomfort I feel.
Your comment makes me think of two posts which I imagine you might find interesting:
"How can we turn away from the suffering happening all around us in order to prioritise something as abstract as helping make the long-run future go well?" This is the reality I face daily in my kind of job: having to support people in current desperate situations while at the same time strategising for the future, in a world of increasingly reduced resources.
Thank you so much for writing and sharing this, Michelle. It is super nice to have it highlighted that EA is super freaking hard sometimes, even for people right in the thick of it surrounded by like-minded people!
Personally, I kinda struggle with the fact that I don't always agree all the way on some of the longtermism stuff, and sometimes I feel just so darn confused that everyone else seems to be on board and do it so easily. But these techniques, and the reminder that there is a whole community of people to reach out to, all with such a spectrum of beliefs and values and difficulties, are exactly what I needed to read today!
In the year or so since this article was posted, I've updated more towards prioritising the far future in many ways, while I'm still figuring out how it affects my actions. I find myself resonating with your post and the excellent comments (I only wish there were more).
My reasons for finding this hard are somewhat different from yours, but it helps to see another perspective. This quote encapsulates what I see in this post and some of what I feel:
I think this is a pretty common challenge in charity in general. In 2019 I had a panel discussion with some very senior people in global health, and one of the topics was making long-term and potentially speculative investments in health, such as pandemic preparedness. One of them responded that such investments were really important, but that we have to "Invest in the things that are killing them today, not just what might kill them some day". I interpret that as saying that these longer-term investments are very important, but that it is very hard to justify — both to yourself and to the affected people — ignoring their present suffering in order to alleviate some potential suffering in the future.
So I think this challenge is something that most people in charity struggle with to varying degrees. Going for the direct help is often the more appealing solution, and it is the correct move in many situations. But as COVID-19 unfortunately showed us, it's often better to take the less motivating long-term perspective.