Will MacAskill’s What We Owe the Future was published on September 1st in Europe. In chapter 2, You Can Shape the Course of History, Will presents “a framework for assessing the long term value of an event”, with the dimensions 1) significance, 2) persistence, and 3) contingency. Will explains the terms in the following way (see the sketch after the list for one way the three might combine):

  • Start by assuming that some event or action will create a state of affairs. Then:
  • Significance is the average value added by bringing about that state of affairs
  • Persistence is how long the state of affairs is expected to last, once it is brought about
  • Contingency asks: if the event or action didn’t occur, would the state of affairs eventually have been brought about anyway? More precisely: in expectation, what share of the time spent in the state of affairs (the persistence from the bullet above) would have occurred anyway, in a world without the event or action?
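
To make that concrete, here is a minimal sketch of one way the three dimensions might combine into a single long-term value estimate. This is my own formalization of the bullets above, not a formula from the book, and the function and numbers are made up for illustration.

```python
def long_term_value(significance: float, persistence: float, share_expected_anyway: float) -> float:
    """Rough long-term value of bringing about a state of affairs.

    significance:          average value added per unit of time while the state holds
    persistence:           expected duration of the state of affairs (e.g. in years)
    share_expected_anyway: fraction of that duration that would have occurred even
                           without the event or action (the measure in the last bullet)
    """
    contingent_share = 1.0 - share_expected_anyway
    return significance * persistence * contingent_share


# Example: an action adds 2 units of value per year, the resulting state is
# expected to last 50 years, and 40% of that time would have happened anyway.
print(long_term_value(significance=2.0, persistence=50.0, share_expected_anyway=0.4))  # 60.0
```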


 

In this post I’ll start by presenting some visualizations that will hopefully make these concepts intuitive, and then use those visualizations to discuss the claim that we’re living in a period of plasticity. Note that my references to the book throughout this post are based on my best reading of it, and any errors are my own (I have not been in touch with Will’s team).


 

The significance, persistence and contingency framework

The basic tool I’ll use for the visualizations is a directed acyclic graph showing, on the x-axis, the flow of time (which could be measured in e.g., days, years, generations or millennia per node, depending on the question you’re asking), and on the y-axis, some normative value of the world. This is a big simplification compared to how I read Will as using the framework. Notably, when he refers to a “state of affairs”, that may refer to the existence of some institution or technology, which is just one of many variables that would determine my y-axis. Mapping all of this to one dimension may miss some nuance, so if you haven’t done it already, please buy Will’s book and read the more detailed discussion there!


 

What, exactly, you use to assign a value on the y-axis depends on your moral beliefs, but I won’t get into that discussion here. For now, all I’ll assume is that it is possible to assign some value on this axis, that the current value of the world is “1” (of some unit), and that going forward, the world can take higher or lower values, including values below “0”.


 


 

I will then assume that some event or action we can choose to take can shift the probability of different future scenarios, and in particular, that we can make good outcomes more likely.


 


 

For simplicity, I’ll start out considering a deterministic world, where we know a baseline scenario for the future, and we know how the action we’re considering would change said baseline. For the probability enthusiasts, don’t worry, I’ll add back some uncertainty later.


 


 

Under this framework, the significance of an action is illustrated by how much it lifts the value of civilization for as long as its effects last. In the chart below, this is represented by a green and a red action. If you prefer to think in examples, you can assume that the actions below are:

  • Green: the city of Oslo, where I live, decides to pay for seasonal flu vaccines for all its citizens
  • Red: the city of Oslo instead decides to pay the same amount for beautiful snow sculptures in public parks this winter
  • Note that both actions are likely to only matter for the winter, but one is likely to matter more than the other


 


 

Persistence measures how long it takes before the value of civilization converges back to the baseline path. In the figure below, you may consider that the red action is still to pay for snow sculptures, while the green action is to pay for stone sculptures instead, which won’t be as pretty, but which will last longer.
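
If it helps to see this numerically, here is a toy version of these two charts (my own illustration, with made-up numbers): given the baseline trajectory and the trajectory after an action, persistence is how many steps pass before the two converge again, and significance is the average uplift while they differ.

```python
import numpy as np

baseline    = np.array([1.0, 1.0, 1.0, 1.0, 1.0, 1.0])  # value of the world at each time step
with_action = np.array([1.0, 1.3, 1.3, 1.2, 1.0, 1.0])  # same world, after taking the action

uplift   = with_action - baseline
affected = uplift != 0

persistence  = int(affected.sum())      # steps until the value converges back to the baseline
significance = uplift[affected].mean()  # average value added while the effect lasts

print(f"persistence = {persistence} steps, significance = {significance:.2f} per step")
```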


 


 

The contingency is slightly harder to visualize, since it considers when specific changes would happen in a counterfactual scenario. Let’s therefore start by considering the following baseline, with a note on how the improvements in the consecutive time steps are brought about: first, intervention A is implemented, and then intervention B. For simplicity, let’s assume that both interventions have the same significance, and that both would persist until the end of humanity (which, in expectation, happens at the same time irrespective of these interventions).


 

Assume that we can take one action today - either intervention A or B - and that is all we’re allowed to do. How would that look?


 

Taking action A today would accelerate its implementation by only one node. In contrast, taking action B would accelerate it by two nodes. Therefore, the counterfactual impact of taking action B today is larger than that of taking action A.
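
Here is a small sketch of that acceleration logic, under the stylized assumptions above: both interventions have the same per-period significance, they persist to the end of the scenario, and in the baseline A happens after one period and B after two (the numbers are mine).

```python
significance_per_period = 1.0        # value added per period once an intervention is in place
baseline_start = {"A": 1, "B": 2}    # period in which each intervention would happen anyway

def counterfactual_impact(intervention: str) -> float:
    # Doing the intervention today (period 0) only buys the periods before it
    # would have been implemented in the baseline anyway.
    periods_gained = baseline_start[intervention]
    return periods_gained * significance_per_period

print(counterfactual_impact("A"))  # 1.0 -> accelerated by one node
print(counterfactual_impact("B"))  # 2.0 -> accelerated by two nodes
```

The absolute numbers are arbitrary; the only point is that B’s counterfactual impact is twice A’s, because it is accelerated by twice as many nodes.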


 


 

This is a bit of a simplified model, and one may balk at some of the assumptions. For instance, if we do A today, why don’t we just do B tomorrow, and end up with the same improvements? This is a fair objection to my specific scenarios, which aren’t optimized for plausibility, but rather for ease of visualization. It is possible to address the assumptions, for instance by adding a fixed cost for each intervention you choose to accelerate, and setting a budget so that you can only accelerate one intervention (e.g., because it requires you to spend your entire career on it). However, I don’t think this is critical for our purposes - this is just a stylized example, and there are other examples of how contingency can arise, though most are harder to visualize.

Notes on uncertainty

I will now add uncertainty back into the visualizations. Consider a baseline which includes different possible future scenarios, each with some estimated probability.


 


 

There are three ways an action could have positive significance compared to this baseline: 1) it could shift probabilities from the lower-valued to the higher-valued scenarios; 2) it could eliminate the lower-valued scenarios altogether; and 3) it could create new, higher-valued scenarios. These are illustrated below in blue, red and green respectively. 
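
As a toy expected-value calculation (the scenarios, probabilities and values are made up), here is what each of those three moves can look like relative to a baseline:

```python
def expected_value(scenarios: dict) -> float:
    """scenarios maps name -> (probability, value); probabilities should sum to 1."""
    return sum(p * v for p, v in scenarios.values())

baseline  = {"bad": (0.3, -1.0), "ok": (0.5, 1.0), "good": (0.2, 3.0)}

shift     = {"bad": (0.2, -1.0), "ok": (0.5, 1.0), "good": (0.3, 3.0)}   # 1) blue: move probability from bad to good
eliminate = {"ok": (0.8, 1.0), "good": (0.2, 3.0)}                       # 2) red: remove the bad scenario entirely
create    = {"bad": (0.3, -1.0), "ok": (0.4, 1.0), "good": (0.2, 3.0),
             "great": (0.1, 6.0)}                                        # 3) green: add a new, higher-valued scenario

for name, s in [("baseline", baseline), ("shift", shift), ("eliminate", eliminate), ("create", create)]:
    print(f"{name:9s} EV = {expected_value(s):.2f}")
```

All three variants end up with a higher expected value than the baseline, which is the only property the illustration is meant to capture.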


 

 


 

This gets even more complex when considering multiple stages in time. Even with just one more stage, I wouldn’t know where to start visualizing the potential impacts of an intervention. The chart below is a simplification even though it only shows the baseline, since it makes quite limiting assumptions about the possible future states of the world at each stage.


 


 

Complicating this picture even further, expected values can be dominated by tail outcomes - if an action could have a really large significance, persistence and/or contingency, this may dominate the moral calculus, even if the most likely outcomes of the action are more moderate. Some have objected that this has properties resembling Pascal’s Mugging, but the relevant probabilities may not be Pascalian - Will explains in the book that for some of the relevant actions we can consider today, the probabilities are significantly higher than e.g., the risk of your house burning down, a risk you presumably still insure against. Still, if the expected significance, persistence and contingency of our actions is dominated by outlier scenarios - maybe in the 1-5% range of probability, but with at least two orders of magnitude larger impact than the most likely scenarios - it poses an epistemic challenge, since it is much harder to get good data on the tail. This is part of the reason why some philosophers have argued that we are clueless about the long-run impact of our actions, and it is one of the arguments Will offers in the book for taking actions that are robustly good across plausible worldviews.
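
For a quick numeric illustration of that tail-dominance point (the probability and impact figures are mine, chosen only to match the rough ranges mentioned above):

```python
p_tail, tail_impact       = 0.03, 1000.0  # outlier scenario: ~3% probability, very large impact
p_typical, typical_impact = 0.97, 5.0     # most likely outcome: modest impact

ev_tail    = p_tail * tail_impact         # 30.0
ev_typical = p_typical * typical_impact   # 4.85

share = ev_tail / (ev_tail + ev_typical)
print(f"tail scenario contributes {share:.0%} of the expected value")  # ~86%
```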


 

Are we living in a period of plasticity?

In the book, Will argues that people alive today may be unusually well positioned to influence the long-run trajectory of civilization. This is a relevant question when deciding whether to try to influence the future by acting today, or by accumulating resources that can be used to take action in the future (potentially by your descendants in the very far future). It is also relevant when considering whether to use our resources today to try to help people in the future, or to help people and animals today.

The book presents the following argument:

  • Civilization can still follow many different future trajectories, so our actions today can drive contingent outcomes (chapter 3, Moral change)
  • In the future, the rate of change in society may decrease, with several implications:
    • The outcomes we create with our actions today may get locked in for a long time, so our actions today can also drive persistent outcomes (chapters 4 Value lock-in and 5 Extinction, and partially also 6 Collapse and 7 Stagnation)
    • Increasing ossification may decrease the potential for significant action in the future compared to today


 

I will not add much to this argument, beyond noting the following:

  • Historically, technology has typically increased our powers to shape the world. If technology continues getting more powerful, this would be an argument for assuming that the significance of our actions will increase going forward. This would favor a strategy of accumulating resources to take more action later.
  • The counterpoint to this is that technology may make irreversible changes (e.g., there may be no catching up with the first AGI, in which case the values it is optimizing for are likely to dominate the future), or that technology may shift who wields influence (e.g., shift the power balance between liberal democracies and autocracies). Both of these are arguments in favor of taking action before the power to influence the future is lost (either because of irreversible changes or power shifts).
  • Historically, our understanding of the world, the future, and the outcomes of our actions (in short, our “wisdom”) has also been increasing, although some have argued that our wisdom has not grown as fast as our powers (Bostrom, 2019). If this is the case, it would be an argument for accelerating our wisdom-building, as well as saving up to wield influence later.
  • However, a strategy of delaying action may not be available to us in practice. We have few mechanisms to reliably wield power far into the future - in his book, Will mentions Scotscare, a charity that is still operating long after its founders are dead, but which presumably is no longer achieving their goals. (In this specific case, the charity is focused on supporting Scots in London. This made sense when it was founded, since Scots in London were poor and discriminated against, but that is no longer the case, so it is reasonable to assume the founders would want the organization to shift strategy - which is no longer possible.) Until we have institutions that more reliably allow us to wield power in the future (in the way that we would actually want them to use it, and not just based on our best judgement today), we may be best off deploying our resources today rather than trying to save them for later.