
Written by Ben Todd and crossposted from the 80,000 Hours blog.

One of the parts of effective altruism I've found most intellectually interesting recently is ‘patient longtermism’.

This is a school of thinking that takes longtermism seriously, but combines that with the idea that we’re not facing an unusually urgent threat to the future, or another urgent opportunity to have a long-term impact. (We may still be facing threats to the future, but the idea is that they’re not more pressing today than the threats we’ll face down the line.)

Broadly, patient longtermists argue that instead of focusing on reducing specific existential risks or working on AI alignment and so on today, we should expect that the crucial moment for longtermists to act lies in the future, and our main task today should be to prepare for that time.

It’s not a new idea – Benjamin Franklin was arguably a patient longtermist, and Robin Hanson was writing about it by 2011 – but there has been some interesting recent research.

Three of the most prominent arguments relevant to patient longtermism so far have been made by three researchers in Oxford, who have now all been featured on our podcast (though these guests don’t all necessarily endorse patient longtermism overall):

  1. The argument, made by Will MacAskill, that we’re not living at the most influential time ever (i.e. a rejection of the ‘hinge of history hypothesis’), written up here and discussed on our podcast.

  2. The argument that we should focus on saving and growing our resources to spend in the future rather than acting now, which Phil Trammell has written up in a much more developed and quantitative way than previous efforts; his analysis comes down more on the side of patience. You can see the paper or hear our podcast with him. (A rough numerical sketch of the basic intuition appears after this list.)

  3. Arguments pushing back against the Bostrom-Yudkowsky view of AI by Ben Garfinkel. You can see a collection of Ben’s writings here and our interview with him. The Bostrom-Yudkowsky view is the most prominent argument that AI is not only a top priority, but that it is urgent to address in the next few decades. That makes it, in practice, a common ‘urgent longtermist’ argument. (Though Ben still thinks we should expand the field of AI safety.)
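To get a feel for the quantitative intuition behind the patience argument, here is a minimal back-of-envelope sketch. It is only an illustration, not Phil Trammell's actual model: the donation size, rate of return, opportunity-decay rate and time horizon below are all made-up assumptions.

```python
# Illustrative back-of-envelope comparison, not Phil Trammell's model.
# All numbers are made-up assumptions for the sake of the example.

def future_value(amount: float, annual_return: float, years: int) -> float:
    """Value of an invested pot after compounding for `years` years."""
    return amount * (1 + annual_return) ** years

donation = 10_000          # dollars available today (assumed)
annual_return = 0.05       # assumed real rate of return on investments
opportunity_decay = 0.02   # assumed annual rate at which the best giving
                           # opportunities become less cost-effective
years = 50

# Impact of giving now, in arbitrary "impact units" (1 unit per dollar today).
impact_now = donation * 1.0

# Impact of investing and giving in `years` years: the pot compounds,
# but each dollar then buys somewhat less impact because the
# lowest-hanging fruit has already been picked.
impact_later = (future_value(donation, annual_return, years)
                * (1 - opportunity_decay) ** years)

print(f"Give now:   {impact_now:,.0f} impact units")
print(f"Give later: {impact_later:,.0f} impact units")
# With these made-up numbers, waiting wins: roughly, patience pays off
# whenever the investment return outpaces the rate at which the best
# opportunities dry up.
```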

Taking a patient longtermist view would imply that the most pressing career and donation opportunities involve the following:

  • Global priorities research – identifying the issues that will matter most in the future and improving our effectiveness at dealing with them.

  • Building a long-lasting and steadily growing movement that will tackle these issues in the future. This could be the effective altruism movement, but people might also look to build movements around other key issues (e.g. a movement for the political representation of future generations).

  • Saving money that future longtermists can use, as Phil Trammell discusses. There is now an attempt to set up a fund to make this easier.

  • Investing in any career capital that will allow you to achieve more of any of the above priorities over the course of your career.

The three researchers I list above are still unsure how seriously to take patient longtermism overall, and everyone who takes patient longtermism seriously still thinks we should spend some of our resources today on whichever object-level issues seem most pressing for longtermists. They usually converge on AI safety and other efforts to reduce existential risks or risk factors. The difference is that patient longtermists think we should spend much less today than urgent longtermists do.

Indeed, most people are not purely patient or purely urgent longtermists – rather, they put some credence in both schools of thinking, and where they land is a matter of degree. Everyone agrees that the ideal longtermist portfolio would include some of each perspective.

All this said, I’m excited to see more research done into the arguments for patient longtermism and what they might imply in practical terms.

If you'd like to see the alternative take — that the present day is an especially important time — you could read The Precipice: Existential Risk and the Future of Humanity by Toby Ord, who works at the University of Oxford alongside the three researchers mentioned above.


Comments

I really like this kind of post from 80,000 Hours: a quick update on their general worldview. Patient philanthropy isn’t something I know much about, but this article makes me take it seriously and I’ll probably read what they recommend.

Another benefit of shorter updates might be sounding less conclusive and more thinking-out-loud. Comprehensive, thesis-driven articles might give readers the false impression that 80K is extremely confident in a particular belief, even when the article tries to accurately state the level of confidence. It’s hard to predict how messages will spread organically over time, but frequently releasing smaller updates might highlight that 80K’s thinking is uncertain and always changing. (Of course, the opposite could be true.)

Thanks, this kind of description of your reaction is really useful for helping calibrate what work people find most helpful and therefore what to prioritise in future.

This is a very nice explanation, Ben.

For the record, while I'm perhaps the most prominent voice in EA for our time being one of the most influential there will ever be, I'm also very sympathetic to this approach. For instance, my claim is that this key time period has already been going on for 75 years and can't last more than a small number of centuries. This is quite compatible with more important times being 100 years away, and with the arguments that investing for long periods like that could provide a large increase in the expected impact of the resources (even if the time at which they were spent was not more influential). And of course, I might be wrong about the importance of this time. So I am excited to see more work exploring patient longtermism.

Arguments pushing back against the Bostrom-Yudkowsky view of AI by Ben Garfinkel.

I don't know to what extent this is dependent on the fact that researchers like me argue for alignment by default, but I want to note that, at least on my understanding, my views do not argue for patient longtermism. (Though I have not read e.g. Phil Trammell's paper.)

As the post notes, it's a spectrum: I would not argue that Open Phil should spend a billion dollars on AI safety this year, but I would probably not argue for Open Phil to take fewer opportunities than they currently do, nor would I recommend that individuals save their money rather than donate to x-risk orgs.

Totally frivolous question: why chairs?

Just a visual metaphor for most centuries being boring and only a few standing out as uniquely influential.

I've never found Will's objections to the hinge of history argument persuasive. Convincing me that there was greater potential impact in past times than I thought, i.e. that it would have been very influential to prevent the rise of Christianity, shouldn't make me disbelieve the arguments that AI or bio risks are likely to lead to catastrophe in the next few decades if we don't do anything about them. But maybe I just need to reread the argument.

I think you probably need to read the argument again (but so do I, so apologies if I get anything wrong here). Will has two main arguments against thinking that we are currently at the hinge of history (HoH):

1. It would be an extraordinary coincidence if right now were the HoH. In other words, our prior probability for that possibility should be low, so we need pretty extraordinary evidence to believe that we are at the HoH (and we don't have such evidence).

2. Hinginess has generally increased over time as we become more knowledgeable and powerful and hold better values. We should probably expect this trend to continue, so it seems most likely that the HoH is in the future.

My understanding is that (critical) feedback on his ideas has mainly come from challenging point 1 – many in the EA movement don't think we need to set such a low prior on HoH, and they think the evidence that we are at the HoH is strong enough.

Thanks, that was useful. I didn't realise that his argument involved 1 and 2 together, and not just 1 by itself. That said, if the hinge of history was at some point in the past, that doesn't affect our decisions, since we can't invest in the past. And perhaps it's a less extraordinary coincidence that the forward-looking hinge of history (where we restrict attention to the period from now until the end of humanity) could be now, especially if in the average case we don't expect history to go on much longer.

You might also like to listen to the podcast episode and have a look at the comments on the original post, which cover quite a few objections to Will's argument.

For what it's worth, I don't think Will ever suggests the hinge was in the past (though I might be wrong). His idea that hinginess generally increases over time probably implies that he doesn't think the hinge was in the past. He does mention, though, that thinking about the past is useful for getting a sense of the overall distribution of hinginess over time, which then allows us to compare the present to the future.

Also, I just want to add that Will isn’t implying we shouldn’t do anything about x-risks, just that we may want to diversify by putting more resources into “buck-passing” strategies that allow more influential decision-makers in the future to be as effective as possible.
