
Following Ord (2023) I define the total value of the future as

V = ∫_0^T v(t) dt

where T is the length of time until extinction and v(t) is the instantaneous value of the world at time t. Of course, we are uncertain what value V will take, so we should consider a probability distribution of possible values of V.[1] On the y-axis in the following graphs is probability density, and on the x-axis is a pseudo-log transformed version of V that allows V to vary by sign and over many OOMs on the same axis.[2]

There are infinitely many possible distributions we might believe, but we can tease out some important distinguishing features of distributions of V, and map these onto plausible longtermist prioritisations of how to improve the future.

S-risk focused

If there is a significant chance of very bad futures (S-risks), then making those futures either less likely to occur, or less bad if they do occur, seems very valuable, regardless of the relative probability of extinction versus nice futures.

Ideal-future focused

If bad futures are very unlikely, and there is a very high variance in just how good positive futures are, then moving probability mass from moderately good to astronomically good futures could be even more valuable than moving probability mass from extinction to moderately good futures (keeping in mind the log-like transformation of the x-axis).
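As a toy illustration of this point, here is a quick expected-value comparison; the magnitudes below are placeholder assumptions of mine, not estimates from the post:

```python
# Toy comparison with made-up magnitudes: a "moderately good" future worth 1e40
# and an "astronomically good" future worth 1e50, in whatever units V is measured.
v_moderate, v_astro = 1e40, 1e50

# Moving 1 percentage point of probability from moderately good to astronomically good...
gain_upside = 0.01 * (v_astro - v_moderate)
# ...versus moving 10 percentage points from extinction (V ~ 0) to moderately good.
gain_xrisk = 0.10 * (v_moderate - 0)

print(gain_upside / gain_xrisk)  # ~1e9: under these assumptions the upside shift dominates
```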

X-risk focused

If there is a large probability of both near-term extinction and a good future, but both astronomically good and astronomically bad futures are ~impossible, then preventing X-risks (and thereby locking us into one of many possible low-variance moderately good futures) seems very important.
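For readers who want to play with these shapes, here is a minimal plotting sketch of three stylised distributions of V on an arcsinh-transformed axis. The mixture weights and scales are invented purely for illustration and are not claims about the actual probabilities of these futures:

```python
# Illustrative sketch of three stylised distributions of V (total value of the
# future), plotted against arcsinh(V). All weights and scales are placeholder
# choices for illustration only.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
N = 200_000

def sample(weights, scales, signs):
    """Sample V from a mixture of one-sided heavy-tailed components.

    Each component is a log-normal magnitude (scale sets roughly how many OOMs
    it spans) multiplied by +1 or -1; a scale of 0 stands for extinction (V ~ 0).
    """
    parts = []
    counts = rng.multinomial(N, weights)
    for n, scale, sign in zip(counts, scales, signs):
        if scale == 0:
            parts.append(np.zeros(n))
        else:
            parts.append(sign * rng.lognormal(mean=scale, sigma=scale / 2, size=n))
    return np.concatenate(parts)

worlds = {
    # weights: [very bad, ~zero (extinction), moderately good, astronomically good]
    "S-risk focused": ([0.25, 0.25, 0.40, 0.10], [30, 0, 10, 30], [-1, +1, +1, +1]),
    "Ideal-future focused": ([0.01, 0.15, 0.40, 0.44], [10, 0, 10, 40], [-1, +1, +1, +1]),
    "X-risk focused": ([0.02, 0.45, 0.51, 0.02], [5, 0, 10, 15], [-1, +1, +1, +1]),
}

fig, axes = plt.subplots(1, 3, figsize=(12, 3), sharey=True)
for ax, (name, params) in zip(axes, worlds.items()):
    V = sample(*params)
    ax.hist(np.arcsinh(V), bins=200, density=True)
    ax.set_title(name)
    ax.set_xlabel("arcsinh(V)")
axes[0].set_ylabel("probability density")
plt.tight_layout()
plt.show()
```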

Discussion

  • Some differences between these camps are normative, e.g. negative utilitarians are more likely to focus on S-risks, and person-affecting views are more likely to favour X-risk prevention over ensuring good futures are astronomically large. But significant prioritisation disagreement probably also arises from empirical disagreements about likely future trajectories, as stylistically represented by my three probability distributions. In flowchart form this is something like:
  • I have not encountered particularly strong arguments about what sort of distribution we should assign to V - my impression is that intuitions (implicit Bayesian priors) are doing a lot of the work, and it may be quite hard to change someone’s mind about the shape of this distribution. But I think explicitly describing and drawing these distributions can be useful in at least understanding our empirical disagreements.
  • I don’t have any particular conclusions; I just found this a helpful framing/visualisation for my thinking, and maybe it will be for others too.
  1. ^

    None of the ideas in this post are particularly original (see e.g. Beckstead and Bostrom here and Harling here). I haven't seen graphs quite like this presented before, but it is a simple visualisation so quite possibly others have done this before too!

  2. ^

     For the mathematicians among us, let’s use arcsinh(V), which is like a log scaling but crucially allows for negative values as well. For small values of V, arcsinh(V) ≈ V, and for large values of V, arcsinh(V) ≈ sign(V) * ln(2|V|), with nice smooth transitions between these regimes (desmos).
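A quick numerical check of the two regimes (a minimal sketch; only the arcsinh transform itself is taken from the footnote above):

```python
import numpy as np

V_small = np.array([1e-3, -1e-3, 0.1])
V_large = np.array([1e6, -1e6, 1e12])

# Near zero, arcsinh(V) is approximately V itself:
print(np.arcsinh(V_small))   # ~[0.001, -0.001, 0.0998]
# For large |V|, arcsinh(V) is approximately sign(V) * ln(2|V|):
print(np.arcsinh(V_large))   # ~[14.51, -14.51, 28.32]
print(np.sign(V_large) * np.log(2 * np.abs(V_large)))  # agrees to many decimal places
```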

Comments
[anonymous]:

I don't think that high x-risk implies that we should focus more on x-risk, all else equal - high x-risk means that the value of the future is lower. I think what we should care about is high tractability of x-risk, which sometimes, but not necessarily, corresponds to a high probability of x-risk.

Good point, I think if X-risk is very low it is less urgent/important to work on (so the conditional works in that direction I reckon). But I agree that the inverse - if X-risk is very high, it is very urgent/important to work on - isn't always true (though I think it usually is - generally bigger risks are easier to work on).

I think high X-risk makes working on X-risk more valuable only if you believe that you can have a durable effect on the level of X-risk - here's MacAskill talking about the hinge-of-history hypothesis (which is closely related to the 'time of perils' hypothesis):

Or perhaps extinction risk is high, but will stay high indefinitely, in which case in expectation we do not have a very long future ahead of us, and the grounds for thinking that extinction risk reduction is of enormous value fall away.

Hi Oscar,

I would be curious to know your thoughts on my post Reducing the nearterm risk of human extinction is not astronomically cost-effective? (feel free to comment there).

Summary

  • I believe many in the effective altruism community, including me in the past, have at some point concluded that reducing the nearterm risk of human extinction is astronomically cost-effective. For this to hold, it has to increase the chance that the future has an astronomical value, which is what drives its expected value.
  • Nevertheless, reducing the nearterm risk of human extinction only obviously makes worlds with close to 0 value less likely. It does not have to make ones with astronomical value significantly more likely. A priori, I would say the probability mass is moved to nearby worlds which are just slightly better than the ones where humans go extinct soon. Consequently, interventions reducing nearterm extinction risk need not be astronomically cost-effective.
  • I wonder whether the conclusion that reducing the nearterm risk of human extinction is astronomically cost-effective may be explained by:

Thanks, interesting ideas. I overall wasn't very persuaded - I think if we prevent an extinction event in the 21st century, the natural assumption is that probability mass is evenly distributed over all other futures, and we need to make arguments in specific cases as to why this isn't the case. I didn't read the whole dialogue but I think I mostly agree with Owen.

I think if we prevent an extinction event in the 21st century, the natural assumption is that probability mass is evenly distributed over all other futures, and we need to make arguments in specific cases as to why this isn't the case.

I make some specific arguments:

As far as I can tell, the (posterior) counterfactual impact of interventions whose effects can be accurately measured, like ones in global health and development, decays to 0 as time goes by, and can be modelled as increasing the value of the world for a few years or decades, far from astronomically.

[...]

Here are some intuition pumps for why reducing the nearterm risk of human extinction says practically nothing about changes to the expected value of the future (the arithmetic below is checked in the short sketch after this list). In terms of:

  • Human life expectancy:
    • I have around 1 life of value left, whereas I calculated an expected value of the future of 1.40*10^52 lives.
    • Ensuring the future survives over 1 year, i.e. over 8*10^7 lives (= 8*10^(9 - 2)) for a lifespan of 100 years, is analogous to ensuring I survive over 5.71*10^-45 lives (= 8*10^7/(1.40*10^52)), i.e. over 1.80*10^-35 seconds (= 5.71*10^-45*10^2*365.25*86400).
    • Decreasing my risk of death over such an infinitesimal period of time says basically nothing about whether I have significantly extended my life expectancy. In addition, I should be a priori very sceptical about claims that the expected value of my life will be significantly determined over that period (e.g. because my risk of death is concentrated there).
    • Similarly, I am guessing decreasing the nearterm risk of human extinction says practically nothing about changes to the expected value of the future. Additionally, I should be a priori very sceptical about claims that the expected value of the future will be significantly determined over the next few decades (e.g. because we are in a time of perils).
  • A missing pen:
    • If I leave my desk for 10 min, and a pen is missing when I come back, I should not assume the pen is equally likely to be at any 2 points inside a sphere of radius 180 M km (= 10*60*3*10^8 m, the distance light travels in 10 min) centred on my desk. Assuming the pen is around 180 M km away would be even less valid.
    • The probability of the pen being in my home will be much higher than of it being outside. The probability of it being outside Portugal will be negligible, the probability of it being outside Europe even lower, and of it being on Mars lower still[5].
    • Similarly, if an intervention makes the least valuable future worlds less likely, I should not assume the missing probability mass is as likely to be in slightly more valuable worlds as in astronomically valuable worlds. Assuming the probability mass is all moved to the astronomically valuable worlds would be even less valid.
  • Moving mass:
    • For a given cost/effort, the amount of physical mass one can transfer from one point to another decreases with the distance between them. If the distance is sufficiently large, basically no mass can be transferred.
    • Similarly, the probability mass which is transferred from the least valuable worlds to more valuable ones decreases with the distance (in value) between them. If the world is sufficiently far away (valuable), basically no mass can be transferred.
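For concreteness, the arithmetic in these intuition pumps can be reproduced in a few lines (a minimal sketch; all inputs are the estimates quoted above, not independently derived figures):

```python
# Quick check of the arithmetic in the intuition pumps above.
population = 8e9               # people alive now
lifespan_years = 100
future_value_lives = 1.40e52   # quoted expected value of the future, in lives

lives_per_year = population / lifespan_years          # 8e7 lives "lived" per year
fraction_of_future = lives_per_year / future_value_lives
print(fraction_of_future)                             # ~5.71e-45 lives
seconds_per_life = lifespan_years * 365.25 * 86400
print(fraction_of_future * seconds_per_life)          # ~1.80e-35 seconds

# Missing pen: radius light could travel in 10 minutes, in millions of km.
print(10 * 60 * 3e8 / 1e9)                            # ~180 M km (1.8e11 m)
```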