
Over the last two years I have been researching and advising numerous government officials on how to do long-term thinking well. Now you are probably not going to be shocked to hear me say it but: making long-term decisions is difficult. Most institutions don’t do it very well and the feedback loops to tell us what works are, as you would expect, long. Yet, that said, it is neither a new challenge nor an uncommon challenge. Many groups of people have faced this problem before and developed tools, strategies and approaches that seem to be working. I have been trying to pull all of this together to paint a rough picture of what best practice in long-term thinking looks like, and advise governments accordingly.

To do that work well I did of course engage in some depth with relevant academic work, including the research on longtermism both from within academia and on this forum. And lo and behold it seemed to me like the space was divided into two distinct camps: longtermist theorists and long-term practitioners. The theorists wonder why policy makers do not listen to them and the policy practitioners wonder why academics are not producing work relevant to them. As a practitioner, it seemed that on some days I would say something that was obvious to me and a researcher would be excited by how novel and useful it was, yet on other days I just could not understand the things longtermist researchers were doing and why they mattered. This post is an attempt to bridge this divide.
 

The post is in three parts: 

Section A is descriptive. I invite you to look around my world, at the politicians, policy makers and risk planners who think long-term. I draw examples from fields as diverse as defence, forestry, tech policy and global development, looking for common threads and patterns that give us some idea of how we should be making our long-term plans and decisions. My hope is both to be informative about current best practice in long-term planning and to give a sense of where I am coming from as a practitioner thinking about the long-term.

Section B is applied. There are of course differences between how a UK government policy official will think about the long-term, and how longtermists might think about the long-term. I take some of the ideas described in Section A and try applying them to some longtermist questions. I don’t have all the answers but I hope to suggest areas for future research and exploration. 

Section C is constructive. I reflect on how my experience as a practitioner of longtermism shapes my view of the academic longtermist community. I then make some recommendations about how longtermists can better produce useful practical research.

 

 

Section A: Welcome to my world, let me show you around 

Imagine that you are a politician or policy maker. You believe that the future matters a lot and that preventing existential risk is important, but you are uncertain about how best to achieve long-term goals. So as a starting point you look for existing examples of good long-term planning and long-term decision making.

At first good examples of long-term policy thinking can be hard to spot. Political incentives push policy-makers towards short-term plans [1] or towards making long-term decisions primarily based on ideology [2]. There are however places with seemingly good long-term policy making to learn from, especially a step away from the most politicised topics. And if we look across enough institutions we start to get a picture of a best practice approach to long-term thinking.

Now I don’t want to claim that current best practice represents the only way to do long-term thinking. But I do think it makes sense to set common sense (or best practice) as a prior and not to differ too much from it without a good reason. (I discuss this more in the conclusion of this section.)

Below I run through some of the key best practices. Please note:

  • I err on the side of explaining simple things rather than assuming my reader already knows this content (so if you know most of this feel free to skim or skip section A).
  • The practices I chose to highlight are all used in long-term plans across many different timescales so should be applicable even if looking very far ahead. I try to draw specific examples that look 10+ years into the future, or are non-time-bound.
  • Most of my examples are from UK policy. Policy works differently in different countries. I think much of this will apply globally and in non-policy contexts, but do be cautious when generalising to other countries and contexts.
  • This is not perfect. In particular I have perhaps underexplored some of the tools used in the intelligence community, the military and counter-terrorism units.

 

1. What long-term planning best practice looks like

Setting a long-term vision

The first step towards long-term policy planning is setting out a vision of what policy should achieve in the long-term. That this is a good way to begin should be fairly obvious – setting direction is generally considered a key first step for any decision making process, long-term or otherwise. Normally this is qualitative, although there are a few cases of quantified long-term goals, like "let's keep the world below 1.5°C of warming".

Despite the importance of this step it is useful to flag it explicitly as it is sometimes not done. For example, when I have talked to civil servants about why they don't do long-term work, they have expressed frustration: in many situations Ministers have not asked for it and have not set out long-term goals, so what is there to work towards?

One key thing to think about for long-term planning is that you do want your long-term vision to be broadly agreed by those who will use it and to not change too much over time. For policy institutions that need to plan across multiple domains and political changes this means as much as possible building support across party boundaries and ensuring broad public buy-in to the vision. This may require broad public consultations to build a vision that maps out a consensus view of the whole population. For smaller institutions this may require using consensus decision making tools among those most involved or affected by the institution.

Examples


Plans that increase in detail as they decrease in timeframe

Before diving into details of long-term planning I think it is helpful to describe what long-term plans look like.

Long-term plans almost always have the following pattern. They start with a long-term vision, often time independent. There is then a long-term high-level target or set of targets, maybe 10-30 years ahead. There are then sets of medium-term targets a few years ahead that go into more detail. Finally there are the implementation plans for achieving those targets. If well managed there will also be oversight, accountability and check-in mechanisms to make sure the whole process is working.
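To make the shape of this pattern concrete, here is a minimal sketch of the structure in code (the field names, types and example values are entirely my own illustration, not any official template); Example 1 below fills the pattern in properly:

```python
from dataclasses import dataclass

# Purely illustrative encoding of the staggered-plan pattern.
# Field names and example values are my own, not an official template.

@dataclass
class StaggeredPlan:
    vision: str                          # long-term, often time-independent
    long_term_targets: dict[int, str]    # high level, ~10-30 years ahead
    medium_term_targets: dict[int, str]  # more detail, a few years ahead
    implementation_plans: list[str]      # concrete near-term actions
    oversight: str                       # accountability / check-in mechanism

# Roughly how Example 1 below fills this in:
emissions = StaggeredPlan(
    vision="A net zero UK economy",
    long_term_targets={2050: "net zero emissions (Climate Change Act)"},
    medium_term_targets={2035: "Carbon Budget recommended by the CCC"},
    implementation_plans=["policies to deliver the current Carbon Budget"],
    oversight="Climate Change Committee progress reports",
)
```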

Let me illustrate with some examples.  

Example 1 – emissions. The UK has a high-level long-term target to reach net zero emissions by 2050, set out in the Climate Change Act 2008 (and amended in 2019) – a ~40 year plan. The Climate Change Committee then recommends medium-term emissions targets towards reaching the long-term target (called Carbon Budgets), which look 5-15 years ahead. The government then works out what policies can be implemented to achieve those medium-term targets and what short-term actions need to happen.

Example 2 – forestry. Forestry England has a long-term, non-time-bound aim that “The nation’s forests are superb forests. They are resilient, sustainable and highly valued; providing a wide range of benefits such as carbon storage, wildlife habitats, natural spaces for enjoyment and green resources such as timber.” Plans are made for individual forests that work towards this. Some of these plans briefly look 30 or 50 years ahead but mostly they are 10 year plans (here is a nice example). Local staff in each forest will then make shorter plans towards achieving the 10 year goals.

Example 3 – foreign policy. The government’s “long-term strategic” 2021 Integrated Review of defence and foreign policy sets out a “long-term” vision of the UK in 2030 (9 years ahead) and priority medium-term actions to achieve by 2025 (4 years ahead), and then departments will take the short-term steps to deliver on this.


(Examples 1 and 2 are significantly more long-term than most government planning. Example 3 is more typical. Looking ahead 9 years might feel more medium- than long-term but it is perhaps a reasonable length of time in this case given how rapidly technology and international relations evolve, and the fact that there will likely be UK parliament elections in 2024 and 2029. Also, 9 years is still long-term compared to most government plans – consider that the UK’s risk planning looks ahead only 2 years.)

This kind of staggered approach to planning is incredibly common. It is very rare to find long-term plans that do not do this.
 

One key common feature to note is that there appears to be a roughly 30 year limit on setting long-term targets, even for high-level targets. According to “The Good Ancestor” it is very difficult to find business or government organisations that make plans longer than this. Longer-timeline data does motivate decisions, e.g. the IPCC’s climate change predictions to 2100 (and occasionally longer than that) feed into the decisions to focus on climate change, but the plans themselves rarely try to set targets beyond 30 years.

I would also note that agreeing the medium-level goals likely requires some combination of research, taking stock of progress towards the high-level goals and consensus building among key decision makers.
 

Adaptive planning

The most common kind of plan that doesn't quite match this pattern is the adaptive plan. Adaptive plans are those that build feedback loops and flexibility into the plan itself. The process is described here as: “plans are designed from the outset to be altered over time in response to how the future actually unfolds. In this way, modifications are planned for, rather than taking place in an ad hoc manner.”

Making adaptive plans work may require ongoing monitoring of situations, awareness of the triggers that might cause your plans to change, and designing actions that maintain option value and flexibility. Adaptive plans are not very common but they may look further ahead.
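As a toy illustration of the core mechanic (entirely my own framing, with invented indicators, thresholds and actions): pre-agreed triggers map to pre-agreed adjustments, so that changes are planned for rather than ad hoc:

```python
# Toy sketch of adaptive-plan trigger logic. Indicators, thresholds
# and actions are invented for illustration.

triggers = [
    # (indicator to monitor, threshold, pre-agreed adjustment)
    ("sea_level_rise_cm", 20, "raise downstream flood defences"),
    ("sea_level_rise_cm", 60, "begin planning a replacement barrier"),
]

def review(observations: dict[str, float]) -> list[str]:
    """Return the pre-agreed adjustments whose triggers have fired."""
    return [
        action
        for indicator, threshold, action in triggers
        if observations.get(indicator, 0.0) >= threshold
    ]

# Ongoing monitoring feeds observations into regular plan reviews.
print(review({"sea_level_rise_cm": 25}))  # ['raise downstream flood defences']
```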

Examples

  • The Thames Estuary 2100 is the Environment Agency’s “planning to manage tidal flood risk in the Thames Estuary until the year 2100”. It focuses on ensuring optionality and being adaptive to changing levels of risk.
  • The Puerto Rico Economic and Disaster Recovery Plan Decision Support Tool is designed to manage future risk by supporting decision makers to understand the trade-offs and to make good decisions in a variety of risk recovery scenarios.
  • From what I understand of the UK’s counter-terrorism strategy, it relies heavily on continuous monitoring by intelligence services and taking rapid actions to mitigate potential risks as they arise. In this regard it is highly adaptive (although I am not sure it would be considered long-term).

 

2. Understand the future – use foresight and futures tools

To make good long-term plans and to support future-focused decision making there are a host of futures tools that practitioners use. These include: horizon scanning, the Delphi method, discount rates, scenario planning, red-teaming, prediction markets, reference class forecasting, Superforecasting and many more.

I am not going to introduce every single futures tool here but I will run through at a high level what the various tools are doing. There are a variety of different ways to explain futures work and there are better sources than me: see for example this NESTA report, the UK government futures toolkit or the Society for Decision Making Under Deep Uncertainty.

We can break down the tools used into three main categories:
 

Exploring

Mapping out a broad range of possible futures to ensure that we are asking the right questions, not neglecting key possibilities and able to design options that are robust to a range of future scenarios. Tools include: tabletop exercises, exploratory modelling, brainstorming, red-teaming, trend analysis, scenario planning. (Examples of UK good practice include the work of the Emergency Planning College which regularly runs emergency exercises for civil servants and the in-depth Global Strategic Trends report.)
 

Forecasting

Making predictions or likelihood estimates of possible futures. Tools in this space, such as superforecasting, reference class forecasting and trend analysis, are testable against real-world outcomes, making them a powerful way of supporting decision making. Tools such as prediction markets, the Delphi method and expert elicitation allow forecasts from different individuals with different expertise to be combined into a single prediction. (Examples of good practice include the Office for Budget Responsibility’s regular economic and fiscal forecasts, which are made public and later evaluated for accuracy against real-world data, or the work of the Good Judgment Project.)
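As a minimal sketch of that combination step (my own toy example; none of the programmes above necessarily aggregate this way), here are two simple ways of pooling individual probability forecasts:

```python
import math

def mean_pool(probs: list[float]) -> float:
    """Pool forecasts as a simple linear average of probabilities."""
    return sum(probs) / len(probs)

def geo_mean_odds_pool(probs: list[float]) -> float:
    """Pool forecasts via the geometric mean of their odds."""
    odds = [p / (1 - p) for p in probs]
    pooled_odds = math.prod(odds) ** (1 / len(odds))
    return pooled_odds / (1 + pooled_odds)

forecasts = [0.10, 0.25, 0.40]  # hypothetical estimates from three experts
print(mean_pool(forecasts))           # 0.25
print(geo_mean_odds_pool(forecasts))  # ~0.23
```

The choice of pooling rule is itself debated; these two are just the simplest options.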
 

Visualising

Analysts and forecasters are in many cases not the final decision makers and they will often need to communicate their images of future worlds to others in a way that supports good decisions. This can be achieved by senior decision makers participating in futures exercises, by scenarios that describe the most decision-relevant futures (a nice example here), or by constructing a believable, inspiring shared vision that allows large teams to work together towards the same goal (see examples above in section A.1).

All these tools should work together. For example the UK government’s risk management approach involves horizon scanning exercises to explore potential new risks, forecasting in the form of estimations of the likelihood and impact of each identified risk and then descriptions of reasonable worst case scenarios to support policy decisions on each risk. (Unfortunately in practice the UK risk register does not do any of these steps particularly well.)

 

3. Make long-term decisions – ensure they are robust

i. Quantitative assessment

The standard way that policy decision makers make decisions about the future is to use quantitative assessments (such as expected value calculations) to compare current and future options, or to compare a variety of future options. In policy analysis this often looks like converting the current and future costs and benefits of various options into a Net Present Value (NPV) or a Net Present Social Value (NPSV). For example see the Regulatory Impact Assessment Calculator (the calculator is useful but, unsurprisingly for a government tool, it cuts off any consideration of effects after 10 years).

This relies on applying a discount rate (UK gov discount rate = 3.5%). Discount rates account for future benefits being less desirable than present benefits due to:

  • Pure time preference – i.e. the future being intrinsically lower value (UK gov rate = 0.5%)
  • Economic growth – as the future will be richer, passing resources forward reduces their relative value (UK gov rate = 2%) [3]
  • Catastrophic risks (or windfalls) that could negate future impact (UK gov rate = 1%)
  • Project-specific risks (or windfalls) that could negate future impact (project dependent; may vary from 0-20%)

Longtermists would almost universally reject the pure time preference part of the discount rate. When considering extinction risks the economic growth part also drops out. The risk parts remain and as such discount rates could be a useful tool for longtermist decision making. [4]
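To make the arithmetic concrete, here is a minimal sketch of how the composed rate feeds into an NPV calculation, and how much difference it makes to drop the pure time preference and growth components (the cash-flow figure is invented for illustration):

```python
def npv(cash_flows: list[tuple[int, float]], rate: float) -> float:
    """Discount a list of (year, value) pairs back to year zero."""
    return sum(value / (1 + rate) ** year for year, value in cash_flows)

# Standard UK Green Book rate: 0.5% pure time preference
# + 2% growth + 1% catastrophic risk = 3.5%.
standard_rate = 0.005 + 0.02 + 0.01

# A longtermist appraisal of extinction risk might keep only the risk
# component (pure time preference rejected, growth drops out).
risk_only_rate = 0.01

benefit = [(50, 1_000_000)]  # an invented benefit arriving in 50 years

print(round(npv(benefit, standard_rate)))   # ~179,000
print(round(npv(benefit, risk_only_rate)))  # ~608,000
```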

In low stakes situations or when there are sufficiently good forecasts and low uncertainty, a simple quantitative assessment might be sufficient. But to make a good long-term decision, especially over greater timescales, a few more tools are needed.

 

ii. Robustness

Decisions about the future are often decisions about situations of high uncertainty. The overarching principle I see applied to making good decisions about an uncertain future is to ensure that those decisions are robust to a broad range of types of evidence, to a broad range of assumptions and to various future scenarios. 

I use the term “robust” here to mean: has a low chance of failure across a variety of domains or ways of thinking. In practice this may require taking a satisficing approach (discussed more below).

(I struggle with the terminology here, but “robustness” appears to be the most common appropriate term. Other closely related concepts you might be familiar with include many weak arguments, cluster thinking (Holden), Decision Making under Deep Uncertainty, fox rather than hedgehog (Tetlock), and avoiding brittle arguments (Christiano).)
 

Robust use of evidence (cluster thinking) [5]

When making a decision about the future (or about anything with high uncertainty), decision makers should ensure that they are drawing on a broad range of decision making tools. I think this is best explained by Joey here or by Holden here. The more different decision tools you can use, and the greater the extent to which they converge, the more certain one can be in a decision. This could mean using quantitative assessment(s), expert consensus, strategic analysis, common sense and so on. Decision makers should also be aware that more bespoke decision tools outperform general decision tools [6]; for example, for managing risk you could use a vulnerability assessment approach, which is designed for understanding risks.
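A toy sketch of what this can look like in practice (my own construction, with invented scores): score each option under several independent tools and place more confidence where the tools converge:

```python
# Toy sketch of cluster thinking: score options under several independent
# decision tools and put more weight where the tools converge.
# All scores are invented.

scores = {
    # option: scores (0-10) from [quantitative model, expert view, common sense]
    "option_A": [9, 3, 4],
    "option_B": [7, 7, 6],
}

for option, vals in scores.items():
    mean = sum(vals) / len(vals)
    disagreement = max(vals) - min(vals)  # crude proxy for divergence
    print(option, round(mean, 1), disagreement)

# option_B scores better on average AND the tools converge on it,
# so a cluster thinker can be considerably more confident in it.
```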
 

Robust to various future scenarios (e.g. scenario based planning)

These tools require identifying a range of possible future scenarios. These might be the scenarios where current assumptions about the future are wrong, the most likely scenarios, the scenarios suggested by considering critical uncertainties, high-risk scenarios, or something else. Decision makers then build a detailed understanding of each of the important envisaged future scenarios. They then aim to make a plan that should work (or not fail horribly) in all the identified future scenarios.
 

Robust to key assumptions (e.g. assumption based planning)

These tools require trying to identify the critical assumptions underlying the decision that might be wrong. Decision makers then try to design a plan that would still work if any of those assumptions were incorrect. This reduces the chance that the plan will fail.

As mentioned, this may look like taking a satisficing rather than a maximising approach. So rather than taking the option with the highest expected value, robust decision makers take the option that is satisfactorily good across a range of future scenarios or assumptions. This helps ensure that the plan is robust to a range of possible points of failure (the theoretical case for this is set out in this GPI paper), that it is more likely to work in the case of unknown unknowns, and that it is more resilient to poor decision making (similar to how engineers design a bridge to withstand many times the forces it is expected to face, to adjust for uncertainty and overconfidence).
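A toy sketch of the contrast (my own construction, with invented payoffs): an expected-value maximiser picks the option with the best probability-weighted payoff, while a robust satisficer only accepts options that clear a minimum threshold in every identified scenario:

```python
# Toy contrast between EV maximising and robust satisficing.
# Payoffs of two options under three invented future scenarios.

payoffs = {
    "option_A": [100, 90, -50],  # great in most futures, fails badly in one
    "option_B": [40, 35, 30],    # decent everywhere
}
scenario_probs = [0.4, 0.4, 0.2]
threshold = 0  # minimum acceptable outcome in any scenario

def expected_value(values: list[float]) -> float:
    return sum(p * v for p, v in zip(scenario_probs, values))

# The EV maximiser prefers option_A (EV 66 vs 36)...
best_ev = max(payoffs, key=lambda o: expected_value(payoffs[o]))

# ...but the robust satisficer rejects it, because option_A fails
# badly in one scenario while option_B is acceptable in all of them.
robust = [o for o, vals in payoffs.items() if min(vals) >= threshold]

print(best_ev)  # option_A
print(robust)   # ['option_B']
```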


Examples

These tools are used but their outputs are often not published, so it is harder to point to examples. Robust use of evidence is more a way of thinking than a specific tool, and assumption based planning is not common in the UK policy spaces I am familiar with. Furthermore, the disconnect between analyst and policy maker means there are sometimes challenges in getting the outputs to feed into final decision making. Some examples are:

  • The UK government’s food security scenario planning.
  • This think tank piece is a nice clear example of scenario development.
  • At the UK Treasury I worked on scenarios about the long-run future of the tax system, thinking about how tax incomes and tax bases would change with time.

 

Conclusion

So there we have it. Long-term thinking best practice distilled into a few pages, as best I can.

As I set out above I do think it makes sense to set best practice as a prior and not to differ too much from it without a good reason. I have looked a bit at the strength of the empirical and theoretical evidence for some of the ideas above – and there does seem to be a decent theoretical case and some empirical evidence for these tools. (Unfortunately I am doing this writing in my spare time so don’t currently have capacity to look at this in the depth I would like). 

As such, I roughly think that the tools and approaches mapped out here do represent the best approaches for making long-term decisions.

For example, consider the point that in practice most long-term target and vision setting seems to cap out at 30 years. To me this is evidence that, given the future is so uncertain, the best you can do in most contexts is to plan to be in a good position in ~30 years to face whatever challenges come beyond those 30 years. Even where there are good incentives to care about the long-term (such as in forestry and climate change), actual plans tend to focus on the next few decades and aim to leave future actors in a good position to manage things beyond that timeframe.

One criticism I have received is that “EA longtermists think thousands / millions / billions of years into the future and that these ways of thinking are unlikely to be the best on those timescales”. I strongly disagree. It is not practical to make billion year plans. (Try it – it is basically impossible – or at least your billion year plan is going to be practically indistinguishable from a million year plan or 1000 year plan). Longtermists are going to have to make plans along the lines of: let’s minimise the chance we fall into a bad attractor state and maximise the chance we fall into a good attractor state within the length of time that we can reasonably influence, which is 10-100s of years. And these are some of the best tools for planning on that timeframe.

Maybe you disagree and think existing practice can teach us very little about how to make decisions. But agree or disagree or something in between, let's move on. I have shown you around my world and the tools, approaches and ideas that make up the best examples of long-term thinking I can find. The question to explore now is what this might tell us about how to approach longtermism.


 

Section B. In which I set out ways I would be excited to see these ideas applied

I thought it could be helpful to show how the ideas in Section A could be applied by researchers, academics and others in the longtermist space. In this section I pull out 5 relevant challenges that I have come across reading longtermist research. I suggest ways that researchers or practitioners might want to use the ideas from section A to help solve the challenges (with details in the Appendix). For all of them I have a weakly held view that my suggested approach would be the best way to tackle the challenge at hand.

 

1. Identifying intermediate goals (try staggered long-term planning, it shouldn’t be too difficult)

Challenge: Not having clear long-term plans is a challenge across the longtermist space for anyone wanting to take action to make the world better. The staggered long-term planning approach would suggest setting “intermediate goals” and drawing out concrete actions that move towards those goals. Yet even knowing what goals to set is a problem – and potentially a key bottleneck for longtermist funders (see here), policy makers and entrepreneurs.

Suggestion: The ideas in Section A Part 1 set out how consensus building and broad ideation tools could be used to set a vision for a good future in 20-30 years and to design a range of 10 year targets that move the world towards that vision. This should be fairly achievable – the consensus-based staggered planning approach is extremely common and can be run from the level of a small organisation [7] to the level of an entire country, and my personal experience of this kind of work makes me optimistic it can deliver useful results. [8]

 

1.b. Designing policy asks or entrepreneurial ideas (aim for the intermediate goals).

Once intermediate goals have been set and loosely agreed on, it becomes easier to develop policy (or other plans). The problem goes from the broad question of what is good policy from a longtermist perspective to the narrower question of what policy moves the world towards the specified intermediate goals. This could also help give longtermist policy folk a stronger sense of the value and priority of various policy asks.

 

2. Putting patient longtermism into practice (use long-term planning tools such as discount rates)

Challenge: Patient longtermism involves passing resources to future generations so that they can have an impact at the point in time when it is most needed. This requires finding ways, and making plans, to pass resources to the future.

Suggestion: The various long-term planning tools set out above could be used by patient longtermists to work out the best way to pass resources forward and how far forward resources could in practice be passed. To provide a concrete example, a simple discount rate model could be used to consider how far forward funds could be directed. A discount rate of 0.7% would suggest a fund could be passed forward about 300 years (details in Appendix).
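I won't try to reproduce the Appendix's reasoning here, but a hedged back-of-the-envelope version of the headline figure (my own reconstruction, which may differ from the Appendix) is that compounding a 0.7% annual discount leaves roughly 12% of a fund's value after 300 years:

```python
# Back-of-the-envelope check on the "~300 years" figure.
# My own reconstruction; the Appendix may reason differently.

rate = 0.007  # 0.7% annual discount

def retained_fraction(years: int, rate: float) -> float:
    """Fraction of a fund's discounted value remaining after `years`."""
    return (1 - rate) ** years

print(retained_fraction(300, rate))  # ~0.12

# Equivalently, discounted value halves roughly every
# ln(2) / 0.007 ≈ 99 years, so 300 years is about three halvings.
```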

 

3. Improve understanding of future risks (use professional risk management tools)

Challenge: It is useful to have a good understanding of what existential risks humanity faces. Currently most of the work done in this space involves explicit risk probability estimates. To date estimates of existential risks are rare, highly divergent and almost never look at timelines shorter than 80-100 years (see here).

Suggestion: There are other approaches that practitioners use for understanding risks. Current best practice in risk management is to focus on vulnerability assessments and tools that explore the range of scenarios that could lead to a disaster, rather than forecasting specific risk scenarios. I expect a global catastrophic risks vulnerability assessment could provide useful insight and align longtermist risk concerns with other work in this space. Estimates of existential risk likelihood on timescales of 25 years or less would also be really useful for policy work. Potentially an ongoing monitoring / adaptive planning approach to global catastrophic risks (similar to how terrorist risks are managed) could be valuable too.

 

4. Solving the problem of cluelessness (hint: it’s the same thing as deep uncertainty) 

Challenge: Longtermist philosophers sometimes worry about the problem of cluelessness, i.e. the challenge of making decisions given radical uncertainty about the long-term effects of our actions.

Suggestion: From a practitioner's perspective “cluelessness” is essentially identical to “deep uncertainty” (or “Knightian uncertainty” or “extreme model uncertainty”, and similar to “wicked problems”). As such, deep uncertainty decision tools, such as scenario planning, robust satisficing, adaptive planning tools and others, provide a way to manage the problem of cluelessness for practical purposes. Philosophers could look into these tools: how well they work, what the theoretical case for them is and how they tie into core philosophical principles. This could lead to an evidence-based approach that can be applied in practice to address the problem of cluelessness. To some degree this is happening and philosophers are recognising the value of this kind of analysis; for example this recent GPI paper looks at robust satisficing and questions why this has not happened to date (saying “there has been remarkably little philosophical discussion of robust satisficing as a candidate decision norm, given its popularity among those at the coalface”).

 

5. Making the case for strong longtermism (by applying a broader range of decisions tools)

Challenge: In The Case for Strong Longtermism, Will and Hilary say that one of the weakest points in the case for strong longtermism is “the question of whether an expected value (EV) approach to uncertainty is too ‘fanatical’ in this context”, and that this “would benefit from further research.”

Suggestion: The tools in Section A could be used to apply a non-EV approach to uncertainty to assess strong longtermism, including a satisficing approach and a many-weak-arguments approach. Demonstrating the outcomes of these approaches to the issue could help resolve this key uncertainty and help make (or break) the case for strong longtermism.

 

Is this happening?

In some cases this work is happening already. GPI is looking into deep uncertainty and Will and Hilary continue to strengthen the case for strong longtermism. 

In some cases this work may be happening, but is behind closed doors, such as work on patient philanthropy and maybe some work on intermediate goals. 

And some of this work is just not happening such as work on other ways of thinking about existential risks and perhaps much of the work on setting intermediate goals.

 

 

Section C: My story and some thoughts and suggestions

This section risks being critical. I do think constructive criticisms can be useful, however I want to be cautious: the longtermist community is large and diverse, and I only see a small policy-focused and UK-focused part of it. My aim is less criticism and more exploration and bridge building, and hopefully this section can be read in that way. I want to understand the different approaches and improve the dialogue between researchers and practitioners.

 

On longtermist research 

Whilst working in longtermist policy I took some time (especially during summer 2020) to dive deep into the longtermist academic and research work. I was fairly surprised by what I found. There is a lot of longtermist research happening yet, other than forecasting work, very little of it matched my expectations or my ideas of long-term planning. There was minimal consensus building, vision and goal setting, staggered planning, exploration of different futures (<100 years), development of scenarios, identification of critical assumptions, adaptive planning and so on.

A lot of the research seemed tangential to all of this. To give one example: there was plenty of work on the very long-run, 1000+ years, future (see here, here, here, here, here and here) – my view was (and still is) that although such futurecasting type work can provide some insights to decision makers, it will rarely be decision-relevant for anyone wanting to narrow down policy goals or set organisational strategies and so on [9]. (Everything I have read about long-term planning cautions against basing decisions on highly speculative considerations, and much of this work felt highly speculative.) There is also some very good longtermist policy development research happening, but it is generally a bit piecemeal and not yet being done in a strategic way that links policy suggestions to long-term goals.

To me, coming from the policy world and looking across into the research world, it felt like there was a gap between the methods and tools used by practitioners who plan for the long-term and the methods and tools used by longtermist researchers. (My initial response in summer last year was fairly critical of longtermists; this post is more exploratory.)

 

Why the difference in approach?

So why was this work I came across so different from what I expected? Here are a few possibilities that cross my mind:

  1. Different goals. These researchers were not trying to produce long-term plans. They had different adjacent aims. Evidence for this: GPI’s goal is not to produce immediately useful research but to promote EA-type research within academia. [10]
  2. Academic research is distinct from practitioners' research. Academic work is slower and more thorough. The academics are still at the early stage of researching in-depth how to think about the long-term. Evidence for this: GPI is looking at whether robust satisficing is a good tool, rather than diving into using the tool.
  3. Not yet a priority – as the community is young. The longtermist community is still taking its baby steps and there are very few non-research longtermists changing things like policy, and so long-term planning work has not yet been a priority.
  4. Not yet a priority – as there is a lot of time. For patient longtermists there might be no hurry to do this work; maybe it is fine if it takes a century to figure out if longtermism is true, a century to figure out what tools to use, and so on.
  5. I am wrong. My view of how to do long-term thinking based on current best practice is flawed. Evidence for this: it is plausible, as set out here, that it is not the case that EAs overuse forecasting, but that practitioners underuse forecasting.
  6. Longtermist researchers are wrong, or at least foolishly neglecting a useful avenue of exploration. Evidence for this: people are now pointing to intermediate goal setting as a key bottleneck (e.g. here) – it would have been good if longtermist researchers had been trying to do some work on this.
  7. This work was all private. Due to publishing norms, concerns about information hazards or other factors, any such useful work of this type was being kept private. Evidence for this: Luke from OpenPhil suggests this might be happening (see here).
  8. I was looking in the wrong places, or have been unlucky with the papers I sampled. (I obviously cannot give evidence of articles I have not found but hopefully folks can share such things in the comments).

I expect there is at least some truth in all of the above (and leave it to the reader to draw their own conclusions). As such I expect there are some areas for improvement in how the community uses long-term planning tools going forward (explored below).

Although trying to avoid criticisms, I do think it is worth flagging that in my experience I have found the longtermist academic community to be fairly unsympathetic to suggestions that there could be useful things to learn from the long-term thinking tools used by practitioners. Suggestions I have made for practically focused research were rejected as not tractable [11] and I know a few other cases of longtermist academics dismissing the value of practical research. For example, one academic at a longtermist organisation told me they would like to do more looking at existing practice but that senior staff tell them not to. I am not sure I yet understand the justifications for this work being dismissed or blocked.

 

Some potential areas for improvement

As someone working on longtermism and policy here are the things I would like to see that I think would help me, and hopefully others, to have more of an impact in the world:
 

1. More work in line with current best practices in long-term thinking (as set out in Sections A and B). 

Whatever the reasons for not focusing on this kind of work to date, I believe that going forward longtermist long-term planning type work is crucially important and tractable and it would be great if more of this was done. If academics continue to dismiss or block this kind of work I would love to hear why.
 

2. Transparency about long-term planning.

If it is the case that lots of this work happens behind closed doors but is not published, then that seems like a shame, especially if the reasons for not publishing are weak. Traditionally, academic institutions do not publish their strategies and theories of change, but there is no good reason for this that I can see, and publishing them is common practice among other non-profit organisations, so this could be encouraged by longtermist research organisations. Furthermore, I think there is a general trend towards (EA) organisations publishing less after they have built good credibility or funding streams, which seems problematic.
 

3. Longtermists could aim to have shorter feedback loops (e.g. more support for practical and policy work)  

One key thing that I stress when talking about EA to new people is the importance of knowing whether what you are doing is working – it is easy to think you are doing good but not be achieving anything, or even to be doing harm. One key feature of good long-term planning under high uncertainty is building in feedback loops so you know that what you are doing is useful.

Longtermists may never have full feedback loops – it is hard to know if an action prevents an existential catastrophe. But partial feedback loops can be improved. For the community as a whole this could mean doing more projects that are not research. For researchers, more dialogue with and feedback from practitioners, entrepreneurs, policy makers and so on could be useful. For funders this could mean funding things with shorter feedback loops, like policy work – currently most funders are extremely reluctant to fund any practical or policy related longtermist work (like developing and then advocating for longtermist policy proposals) [12].

 

 

Conclusion

There is a distinctive flavour to the way good long-term decisions are made by institutions. I have tried to give you a summary of it as well as I can. My hope is that longtermist researchers and others take from this an understanding of current long-term thinking best practice and can apply that to solve some challenges the community faces.[13] I am optimistic – I think this kind of work is beginning to happen [14] and that we will see more movement in this direction over the next few years.


 

Thanks for feedback goes to Jack Malde, Rumtin S and Gabriella O

 

 

Appendix – additional notes for section B

The Appendix is in the Google Doc version of this post here.

 

 

Notes and thoughts

Readers beware. This is mostly a dumping ground for some half-baked opinions that didn’t make the cut for the main body of the text.

[1] UK Ministers expect to be in power for 1.5 to 2 years and so have few reasons to think longer than that. Often government analysis will just stop beyond a certain length of time – the UK Treasury will estimate the effects of a policy for 5 years and ignore effects beyond that. Even when decision makers want to work long-term there are non-obvious idiosyncratic factors that limit institutions' abilities to do so – long-term thinking in the UK government is limited by problems with knowledge management software, staff turnover and departmental siloisation.

[2] In practice much of senior policy makers’ long-term planning is based on political ideology. (I think in most contexts we can consider ideology as being a decision making heuristic.) For example, Boris Johnson believes ideologically in promoting global freedom (see quote p7) and as a result he wants more UK aid money to go to Ukraine to help minimise the long-term risk of Russia undermining Western democracies. He makes long-term decisions to achieve this goal, such as combining the foreign office and development department (the merger itself will take years to complete, and any effects on global stability may not be seen for decades).

[3] This is not the same as accounting for inflation. This is adjusted for separately from the discount rate.

[4] For more details on discount rates (or if this is confusing) I recommend looking at the Treasury Green Book section A6.

[5] This paragraph is more prescriptive and less descriptive. It is notable to me that this is based less on experience of policy making and more on sources about good decision making, both from within the EA community and elsewhere. Perhaps an area where policy makers can still be learning from EAs and others.

[6] I believe the case for this is in Judgment in Managerial Decision Making by Max Bazerman, but I don't have a copy to hand.

[7] It matches how charity strategy works on a Vision, Mission & Theory of Change model. 

[8] For some topics the result might be that in some areas more research is needed and we should hold off from policy work, etc.

[9] Ok, honestly, can someone explain this to me? Genuine question! Why do we need to know what is going to happen in 1000+ years (or even 100+ years)? Surely once we know that the future might be quite big, that is enough for any practical decision making? It feels to me that anything about how the long-run future is going (also about possible solutions to the Fermi paradox, like here and here) is just too speculative to be of use to anyone? But there must be some good reasons. Please feel free to explain in the comments or PM me.

[10] To clarify in response to critical feedback – I think longtermist policy practitioners and longtermist researchers both have the same end goal to ensure the far future goes well, but that organisations’ intermediate goals will differ from this. 

[11] My story ends in the following way. I spent a few months trying to find a place to do useful longtermist policy cause prioritisation research. Yet more and more I got the impression that such work was not wanted or seen as useful, until I was eventually hired by a predominantly neartermist EA organisation that was interested in throwing me at some neartermist policy cause prioritisation research. I have ended up doing that instead. I know this is just one anecdote but I do think it would be a shame if the longtermist community gradually loses folk who are practically minded because it keeps telling them such work is not useful.

[12] Currently most EA funders are (rightly or wrongly) extremely averse to funding anything in the longtermist policy space (staff at multiple EA organisations have said to me that this is the case). This can be seen by looking at fund payouts; for example the Long-Term Future Fund says it will support “policy analysis, advocacy” but based on recent payout reports it appears not to be currently funding such work (I believe this is not due to any lack of policy project funding requests). I expect funders could do a better job of funding, or at least of explaining their approach to funding, policy or other practical longtermist work.

[13] I think it is worth being honest that making this post feel fair and balanced was an effort and that this post does not really capture how I feel. I do feel confused by the value of much of the work I have seen done by longtermist research organisations. It bothers me that there seem to be few folk in this community using the tools that I would guess are best for thinking long-term (except forecasting). And I do feel frustrated when it seems like there is an active disinterest in any of this – in having expertise in risk management, long-term planners or long-term plans – and I cannot work out why. I am genuinely hopeful and do see more of this happening, but so slowly. I don't know how valid all these emotional responses are and considered exploring them in depth in this post, but it is already long and I have other things to do, so instead you got this cheery little footnote. [Minor edits]

[14] Some positives include: Will MacAskill defining an action relevant “restricted hinge of history hypothesis”, David and Andreas researching “deep uncertainty”, policy folks like Rumtin and Gabby looking to set out intermediate goals or unpick the blockages to longtermist policy work. Also here is a list I made of all the longtermist research I have found particularly useful. I do think the community is moving in a positive trajectory.

[Will post a novelty postcard to the person who spots the most typos.] 

Comments

Thanks for writing this Sam. I think a large part of the disconnect you are perceiving can be explained as follows:

The longtermist community are primarily using an unusual (or unusually explicit) concern about what happens over the long term as a way of doing cause prioritisation. e.g. it picks out issues like existential risk as being especially important, and then with some more empirical information, one can pick out particular risks or risk factors to work on. The idea is that these are far more important areas to work on than the typical area, so you already have a big win from this style of thinking. In contrast, you seem to be looking at something that could perhaps be called 'long-term thinking', which takes any area of policy and tries to work out ways to better achieve its longterm goals using longterm plans. 

These are quite different approaches to having impact. A useful analogy would be the difference between using cost-effectiveness as a tool for selecting a top cause or intervention to work on, vs using it to work out the most cost-effective way to do what you are already doing. I think a lot of the advantages of both cost-effectiveness and longtermist thinking are had in this first step of its contribution to cause-prioritisation, rather than to improving the area you are already working on.

That said, there are certainly cases of overlap. For example, while one could use longtermist cause-prioritisation to select nuclear disarmament as an area and then focus on the short term goal of re-establishing the INF treaty, which lapsed under Trump, one could also aim higher, for the best ways to completely eliminate nuclear weapons over the rest of the century, which would require longterm planning. I expect that longtermists could benefit from advances in longterm planning more than the average person, but it is not always required in order to get large gains from a longtermist approach.

Hi Toby,

Thank you for the thoughts and for engaging. I think this is a really good point. I mostly agree.

To check what you are saying: it seems like the idea here is that there are different aims of altruistic research. In fact we could imagine a spectrum, something like:

Ethics – Worldview – Cause – Intervention – Charity – Plan

At the top end, for people thinking about ethics etc, traditional longtermist ways of thinking are best and at the lower end for someone thinking about plans etc, long-term planning tools are the best.

I think this is roughly correct. 

My hope is to provide a set of long-term planning tools that people might find useful, not to rubbish the old tools.

That said, reading between the lines a bit, it feels like there is still some disconnect about the usefulness and importance of different kinds of research. I go into each of these a little bit below.

 

On the usefulness of different ways of thinking 

You said:

A useful analogy  would be the difference between using cost-effectiveness as a tool for selecting a top cause or intervention to work on, vs using it to work out the most cost-effective way to do what you are already doing.

I am speculating a bit (so correct me if I am wrong) but I get the impression from that analogy that you would see the best tools to use a bit like this

Ethics – Worldview – Cause – Intervention – Charity – Plan
|<-------- Traditional longtermist ways of thinking -------->|<-- Long-term planning -->|

(Diagram is an over-simplification as both ways of thinking will be good across the spectrum so the cut-off would be vague, but this chart is the best I can do on this forum.)
 

However I would see it a bit more like this:

Ethics – Worldview – Cause – Intervention – Charity – Plan
|<-- Trad. longtermism -->|<-------- Long-term planning -------->|

 

And the analogy I would use would be something like:

A useful analogy would be the difference between using philosophical "would you rather?" thought experiments as a tool for selecting an ethical view, vs using thought experiments to work out the most important causes to work on.


Deciding which ways of thinking are best suited to different intellectual challenges is a huge debate. I could give views but I am not sure we are going to solve it here. And it makes sense that we each prefer to rely on the ways of thinking that we are experienced in using.

One of my aims of writing this post is to give feedback to researchers, as a practitioner about what kind of work I find useful. Basically trying to shorten the feedback loop as much as I can to help guide future research.

So what I can do is provide my own experiences. I do have to make decisions on a semi-regular basis about causes and interventions to focus on (do we engage politicians about AI or bio or improving institutions, etc.). And in making these high-level decisions there is some good research and some less useful research (of the type I discuss in my post), and my general view is that more work like scenarios, short-term estimates of x-risk, vulnerability assessments, etc. would be particularly useful to me in making even very high-level cause decisions.

Maybe that is useful for you or others (I hope so).

 

On the value of different research

Perhaps we also have different views on what work is valuable. I guess I already think that the future matters, and so see less value in more work on whether longtermism is true and more value in work on what risks we face now and how we can address them.

You said:

[Long term planning] is not always required in order to get large gains

Let's flesh out what we mean by "gains".

  • If gains at philosophy / ethics / deciding if longtermism is true, then yes this would apply.
  • If gains at actually reducing the chance of an existential catastrophe (other than in cases where the solution is predominantly technical) then I don't think this would be true.

I expect we agree on that. So maybe the question is less about the best way of thinking about the world and more about what the aims of additional research should be? Should we be pushing resources towards more philosophy or towards more actionable plans to affect the long-term and/or reduce risks?

(Also worth considering the extent to which demonstrating practical actionable plans is useful for the philosophy, either for learning how to deal with uncertainty or for making the case that the long-term is a thing people can act on).

Toby has articulated what I was thinking quite well.

I also think this diagram is useful in highlighting the core disagreement:

However I would see it a bit more like this:

Ethics – Worldview – Cause – Intervention – Charity – Plan
|<-- Trad. longtermism -->|<-------- Long-term planning -------->|

Personally I'm surprised to see long-term planning stretching so far to the left. Can you expand on how long-term planning helps with worldview and cause choices?

Worldview: I presume this is something like the contention that reducing existential risk should be our overriding concern? If so I don't really understand how long-term planning tools help us get there. Longtermists got here essentially through academic papers like this one that relies on EV reasoning, and the contention that existential risk reduction is neglected and tractable.

Cause: I presume this would be identifying the most pressing existential risks? Maybe your tools (e.g. vulnerability assessments) would help here but I don't really know enough to comment. Longtermists essentially got to the importance of misaligned AI for example through writings like Bostrom's Superintelligence which I would say to some extent relies on thought experiments. Remember existential risks are different to global catastrophic risks, with the former unprecedented – so we may have to think outside the box a bit to identify them. I'm still unsure if established tools are genuinely up to the task (although they may be) – do you think we might get radically different answers on the most pressing existential risks if we use established tools as opposed to traditional longtermist thinking?

EDIT: long-term planning extending into worldview may be a glitch as it does on my laptop but not on my phone...

Hi Jack, some semi-unstructured thoughts for you  on this topic. And as the old quote goes "sorry, if I had more time, I would have written you a shorter letter":

  • What are we aiming to do here with this discussion? It seems like we are trying to work out what the best thinking tools are for the various kinds of questions an altruist might want answered. And we are deciding between two somewhat distinct, also overlapping, and also not well defined 'ways of thinking', whilst still acknowledging that both are going to be at least somewhat useful across the spectrum and that everyone will have a preference for the tools they are used to using (and speaking at least for myself I have limited expertise in judging the value of academic type work) !!! .... Well sure why not. Let's do that. Happy to throw some uninformed opinions out into the void and see where it goes ....
  • How should or could we even make such a judgement call? I believe we have evidence from domains such as global health and policy design that if you are not evaluating and testing you are likely not having the impact you expect. I don't see why this would not apply to research. In my humble view, if you wanted to know what research was most valuable you would want monitoring and evaluation and iteration. Clear theories of change and justification for research topics. Research could be judged by evidence of real world impact, feedback from users of the research, prediction markets, expert views, etc. Space for more exploratory research would be carved out. This would all be done transparently and maybe there would be competing research organisations* that faced independent evaluation. And so forth. And then you wouldn't need to guess at what tools were useful, you'd find out.
  • But easier said than done? I think people would push back, say they know they don't need evaluating, creativity will be stifled, evaluation is difficult, we wouldn't agree on how to evaluate, that's not how academia works, we already know we are great, etc. They might be right. I have mostly looked at EA research orgs as a potential donor so this is at least the direction I would want things to move in – but I am sure there are other perspectives. And either way I don't know if anyone is making a push to move EA research organisations in that direction.
  • So what can be done? I guess one thing would be if the people who use research to make decisions would give feedback to researchers about what research they find more useful and less useful; that could be a step in the right direction, as I have done somewhat here. And my feedback from the policy change side of the fence is that the research that looks more like the long-term planning tools (that I am used to using) is more useful. I have found it more useful to date and expect it would be more useful going forward. And I would like to see more of it. I don't think that feedback from me is sufficient to answer the question, it is a partial answer at best!! There are (I am sure) at least some other folk in the world that use FHI/GPI/EA research who will hopefully have different views.

So with that lengthy caveat about partial answers / plea for more monitoring and evaluation out of the way, you asked for specific examples:

  • Long-term planning and worldview. I think the basic idea that the long-term future matters is probably true. I think how tractable this is and how much it should affect our decisions is less clear. How do we make decisions about influencing the world given uncertainty/cluelessness? How tractable is it to have that influence? Are attractor states the only way to have that long-run influence? Are stopping risks the only easy to reach attractor states? I think the long-term planning tools give us at least one way to approach these kinds of questions (as I set out above in Section B. 4.-5.): design plans to influence the future, ensure their robustness, then put them into action and see how they work, and so on. Honestly I doubt you would get radically different answers (although I also doubt other ways of approaching this question would lead to radically different answers either; I am just quite uncertain how tractable more worldview research work is).
  • Long-term planning and cause choice. This seems obvious to me. If you want to know, as per your example, what risks to care about – then mapping out future paths and scenarios the world might take, doing estimates of risks on 5, 10, 25, 50, 100 year timescales, explicitly evaluating your assumptions, doing forecasting, using risk management tools, identifying the signs to watch out for that would warrant a change in priorities, and so on – all seems to be super useful.
    Also I think there might be a misunderstanding but the whole point of all the tools listed above is that they are for use in situations where you are dealing with the "unprecedented" and the unknown black swans. If something is easy to predict then you can just predict it and do a simple expected value calculation and you are done (and EA folk are already good at that). 
    Overall I doubt you would get drastically different answers about which risks matter, although I would expect a greater focus on building a world that is robust to other "unforeseen anthropogenic risks", not just AI and bio. I also think that in the specific situations people find themselves in, say a meeting with a politician or writing a strategy, they would hopefully have a better sense of which risks to focus on.
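To make the timescale-by-timescale risk estimates mentioned above concrete, here is a minimal sketch of the arithmetic involved. The 0.1% annual figure and the assumption of a constant, independent risk each year are purely illustrative – they are not estimates from this post or anywhere else.

```python
# Minimal sketch: how an assumed annual probability of a catastrophe
# compounds over the 5/10/25/50/100 year planning horizons.
# The 0.1% annual figure is an illustrative assumption, not an estimate.

annual_risk = 0.001  # assumed annual probability of the catastrophe

for horizon in (5, 10, 25, 50, 100):
    # Probability the risk materialises at least once within the horizon,
    # assuming the annual risk is constant and independent across years.
    cumulative = 1 - (1 - annual_risk) ** horizon
    print(f"{horizon:>3} years: {cumulative:.2%}")
```

Even this toy version shows why the different horizons matter: a risk that looks negligible on a 5-year view can dominate a 100-year plan, which is exactly what estimating explicitly at each timescale is meant to surface.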

 

 * P.S. Not sure anyone will have read this far, but if anyone is reading this and actually thinks it could be a good (or very bad) idea to start a research org with a focus on demonstrating impact, policy research, and planning-type tools – then do get in touch.

Part 2 – Also, a note on expectations

On a slightly separate point: maybe some of the challenge I feel here comes from me having had misplaced expectations. Before I dived into the longtermist academic research, I was hoping that the world looked like this:

| Most important qtns. | Ethics | Worldview | Cause | Intervention | Charity | Plan |
| -------------------- | ------ | --------- | ----- | ------------ | ------- | ---- |
| People solving them: | GPI and FHI, etc. | GPI and FHI, etc. | GPI and FHI, etc. | GPI and FHI, etc. | GPI and FHI, etc. | GPI and FHI, etc. |

and I could find the answers I needed and get on with driving change – YAY.

 

But maybe the world actually looks more like this:

| Most important qtns. | Ethics | Worldview | Cause | Intervention | Charity | Plan |
| -------------------- | ------ | --------- | ----- | ------------ | ------- | ---- |
| People solving them: | GPI and FHI | GPI and FHI | | | | |

and there is so much more to do – Awww. 


(Reminds me of talking to GovAI about policy: they said GovAI does not do applied policy research, but people often think that it does.)

I know it is not going to be top of anyone's to-do list, but I would love at some point to see an FHI post like this one from 80K, setting out what is in scope and what is out of scope – that could be great for showing others in the ecosystem what is left for them to do.


(* diagrams oversimplified again but hopefully they make the point)

But maybe the world actually looks more like this:

| Most important qtns. | Ethics | Worldview | Cause | Intervention | Charity | Plan |
| -------------------- | ------ | --------- | ----- | ------------ | ------- | ---- |
| People solving them: | GPI and FHI | GPI and FHI | | | | |

and there is so much more to do – Awww.

Is this fair? FHI's research seems to me to venture into the Cause and Intervention buckets, and they seem to be working with government and industry to spur implementation of important policies/interventions that come out of their research. E.g. for each of FHI's research areas:

  • Macrostrategy: the most recent publication, Bostrom's Vulnerable World Hypothesis, calls for greatly amplified capacities for preventive policing and global governance (Cause)
  • AI Governance: the research agenda discusses AI safety as a cause area, and much of the research should lead to interventions. For example, the inequality/job displacement section discusses potential governance solutions, the AI race section discusses potential routes for avoiding races / ending those underway (e.g. Third-Party Standards, Verification, Enforcement, and Control), and there is discussion of optimal design of institutions. Apparently researchers are active in international policy circles, regularly hosting discussions with leading academics in the field, and advising governments and industry leaders.
  • AI Safety: Apparently FHI collaborates with and advises leading AI research organisations, such as Google DeepMind on building safe AI.
  • Biosecurity: As well as research on the impacts of advanced biotech, FHI regularly advises policymakers, including the US President's Council on Bioethics, the US National Academy of Sciences, the Global Risk Register and the UK Synthetic Biology Leadership Council, as well as serving on the board of DARPA's SafeGenes programme and directing iGEM's safety and security system.

Overall it seems to me that FHI is spurring change from its research?

You may also find this interesting regarding AI interventions.

Your pushback here seems fair. These orgs certainly do some good work across this whole spectrum. My shoddy diagrams were supposed to be more illustrative of a high-level point than accurate, and perhaps they are somewhat exaggerated and critical. I still think the high-level point about expectations versus reality is worth making (like the point about people's expectations of GovAI).

I provided some comments on a draft of this post where I said that I was sceptical of the use of many of these tools for EA longtermists, although I felt they were very useful for policymakers looking to improve the future over a shorter timeframe. On a second read I feel more optimistic about their use for EA longtermists, but am still slightly uncertain.

For example, you suggest setting a vision for a good future in 20-30 years and then designing a range of 10-year targets that move the world towards that vision. This seems reasonable a lot of the time, but I'm still unsure whether it would be the best approach to reducing existential risk (currently the most widely accepted approach amongst EAs to improving the far future in expectation).

Take the existential risk of misaligned AI, for example. What would the 30-year vision be? What would the intermediate targets be? And what is wrong with the current approach of "shout about how hard the alignment problem is to make important people listen, whilst also carrying out alignment research, and getting people into influential positions so that they can act on this research"?

I guess my main point is that I'd like to see some applications of this framework (and some of the other frameworks you mention too) to important longtermist problems before I accept it as useful. I think the framework works well for more general goals like "let's make a happier world in the next few decades", which is very vague and needs to be broken down systematically, but I'm unsure it would work well for more specific goals such as "let's not let AI / nuclear weapons etc. destroy us". I'm not saying the framework won't work, but I'd like to see someone try to apply it.

Longtermists are going to have to make plans along the lines of: let's minimise the chance we fall into a bad attractor state and maximise the chance we fall into a good attractor state within the length of time that we can reasonably influence, which is 10-100s of years.

I'm also sceptical about the claim that we can't affect the probabilities of lock-in events that may happen beyond the next few decades. As I also say here, what about growing the effective altruism/longtermist community, or saving/investing money for the future, or improving values? These are all things that many EAs think can be credible longtermist interventions, and they could reasonably affect the chances of lock-in beyond the next few decades, as they essentially increase the number of thoughtful/good people in the future, or the amount of resources such people have at their disposal. I do think it is important for us to carefully consider how we can affect lock-in events over longer timescales.

I guess my main point is that I’d like to see some applications of this framework (and some of the other frameworks you mention too) to important longtermist problems, before I accept it as useful.

I 100% agree. I think a key point I want to make is that we should be testing all our frameworks against the real world and seeing what is useful. I would love to sit down with CLR or FHI or other organisations and see how this framework can be applied. (Also, expect a future post with details of some of the policy work I have been doing that uses some of this.)

(I would also love people who have alternative frameworks to test them similarly, in terms of how much they lead to real-world outputs or changes in decisions.)

 

I’m still unsure if this would be the best approach to reducing existential risk 

The aim here is to provide a toolkit that folk can use when needed.

For example, these tools are not that useful where solutions are technical and fairly obvious. I don't think you need to go through all these steps to conclude that we should be doing interpretability research on AI systems. But if you want to make plans to ensure that the incentives of the future researchers who develop a transformative AI are aligned to the global good, then you have a complex, high-uncertainty, long-term problem, and I expect these kinds of tools become the sort of thing you would want to use.

Also, as I say in the post, more bespoke tools beat more general tools. Even in specific cases there will be other toolboxes to use: organisational design methods for aligning future actors' incentives, vulnerability assessments for identifying risks, etc. The tools above are the most general form, for anyone to pick up and use.

 

I’m also sceptical about the claim that we can’t affect probabilities of lock-in events that may happen beyond the next few decades. As I also say here, what about growing the Effective Altruism/longtermist community, or saving/investing money for the future, or improving values?

I think this is a misunderstanding. You totally can affect those events. (I gave the example of patient philanthropy, which has non-negligible expected value even over 300 years.) But in most cases a good way of having an impact on timescales of more than a few decades is to map out high-level goals on shorter, decade-long timelines. On climate change we are trying to prevent disaster in 2100, but we do it by setting targets for 2050. The Forestry Commission might plant oak trees that will grow for hundreds of years, but it will make planting plans on 10-year cycles. Etc.

 

What would the 30 year vision be? What would intermediate targets be? 

Some examples here if helpful.


Have only done a first skim but this looks so useful Sam!

One of the best, most useful posts I've read on the forum. Terrific work, Sam.

A discount rate of 0.7% would suggest a fund could be passed forward about 300 years (details in Appendix). 

This is an interesting point. Naively compounding at expected market returns minus the discount rate would tell you the fund will grow forever, but what we should really expect is that the fund will eventually fail – and in the unlikely event that it is still running properly past 300 years (based on your discount rate), it will be massive, and we might not even be able to use most of it very usefully, given decreasing marginal altruistic returns to resources.

That being said, it's worth keeping in mind that EA has multiple such funds and could continue to start more, so it's far more likely that we'll still have at least one around for much longer.
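To illustrate the survival-versus-compounding point, here is a minimal sketch. It reads the 0.7% discount rate as an annual probability of the fund failing and assumes a 5% real market return; both numbers are illustrative assumptions rather than figures from the post or its appendix.

```python
# Minimal sketch: chance a long-lived fund survives versus how large it
# would be if it did. Failure rate and return are illustrative assumptions.

annual_failure = 0.007  # ~0.7% p.a., read as the chance the fund fails in a given year
real_return = 0.05      # assumed real market return

for years in (100, 200, 300):
    survival = (1 - annual_failure) ** years      # probability the fund is still running
    size_if_alive = (1 + real_return) ** years    # growth factor conditional on survival
    naive_expected = survival * size_if_alive     # expected growth, ignoring diminishing returns
    print(f"{years} yrs: P(survives) = {survival:.1%}, "
          f"size if alive = {size_if_alive:,.0f}x, "
          f"naive expected = {naive_expected:,.0f}x")
```

On these assumptions the fund reaches 300 years only around 12% of the time, but conditional on surviving it has grown by a factor in the millions – exactly the "probably fails, but massive if it doesn't" combination described above, before even accounting for diminishing marginal altruistic returns.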
