
# The value of x-risk reduction

6 min read · 21st May 2022 · 10 comments · 17 points

In his interview on the 80,000 Hours podcast after the publication of The Precipice, Toby Ord claimed that x-risk reduction was worth pursuing regardless of whether x-risk was high or low. The claim is that when x-risk is low, it's still worth working on, because a small decline in the probability of x-risk has very large returns.

This is because by reducing the value of x-risk you're increasing the value of the future and the value of the future is greatest when x-risk is smallest. On the other hand, when x-risk is large, the return is also high because x-risk is likely to be neglected and therefore the marginal impact of work on x-risk is likely to be very high. This post aims to show the cases when this does and doesn't hold true.

I'll first lay out intuitively the cases where it does and doesn't hold, then formalise these notions, and finally give some numerical examples.

## Intuition

There are two parts to the claim: the value-of-the-future part and the neglectedness part. The value of the future is the value of humanity per year divided by the probability of x-risk per year. I'm assuming that the probability of x-risk is constant every year, and I'll assume that throughout the post. This has some quite counter-intuitive effects. It means that decreasing the yearly probability of x-risk from 1/10 to 1/100 makes the future 10 times more valuable in expectation. However, the much larger percentage-point decline in the yearly probability of x-risk from 1/2 to 1/10 only increases the expected value of the future 5-fold.
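These ratios fall straight out of the constant-hazard model, under which the expected value of the future is u/r. A quick sketch (yearly utility u set to 1 for simplicity):

```python
def expected_value(r, u=1.0):
    """Expected value of the future when utility u accrues each year
    and extinction occurs with constant yearly probability r."""
    return u / r

# Cutting yearly x-risk from 1/10 to 1/100 multiplies the future's value by 10.
print(expected_value(1/100) / expected_value(1/10))  # 10x

# The much larger percentage-point drop from 1/2 to 1/10 gives only a 5x gain.
print(expected_value(1/10) / expected_value(1/2))    # 5x
```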

In general, a decline in x-risk from x to y increases the value of the future by a factor of x/y. This does indeed mean that the value of decreasing x-risk is very large when x-risk is very small; in fact, the marginal value of a fixed absolute reduction grows with the inverse square of x-risk. However, if you think that x-risk is very high, the value of x-risk reduction is proportionally as small.
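In symbols, writing V(r) for the expected value of the future at constant yearly risk r:

```latex
V(r) = \frac{u}{r}, \qquad
\frac{V(y)}{V(x)} = \frac{x}{y}, \qquad
\frac{\mathrm{d}V}{\mathrm{d}r} = -\frac{u}{r^{2}}
```

So the value of a fixed absolute reduction in r is proportional to 1/r², which is what makes reductions so valuable when r is already small.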

There's also a second factor to consider. We can think of the probability of an existential catastrophe in a given year as depending on all of the inputs going into x-risk reduction. This means that when we change one input we get two effects: the change in the probability of x-risk in that year, multiplied by everything the effect of that input is proportional to. For instance, the amount of x-risk per year could depend on the product of the amount of research being done and the amount of money being put into the researchers' ideas.

In that case, the effect of a small increase in research will be proportional to the product of the amount of money and the likelihood of x-risk at the current level of inputs. The upshot of this consideration is that the effect of increasing the inputs into x-risk reduction is highest when the yearly probability of x-risk is around 50-50. However, at those levels the value of reducing x-risk is very low, because we're almost certainly all going to die if x-risk is anywhere close to 50-50.
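To see why the marginal effect peaks near 50-50, here's a sketch assuming the normal-CDF hazard model used in the formalisation below: the sensitivity of the yearly risk r = Φ(−f) to the production output f is the normal pdf, which is maximised at f = 0, i.e. exactly where r = 0.5.

```python
from scipy.stats import norm

# Yearly x-risk modelled as r = Phi(-f), where f is production output.
# The marginal effect of raising f is |dr/df| = pdf(-f), largest at f = 0.
for f in [0.0, 0.5, 1.0, 2.0, 3.0]:
    risk = norm.cdf(-f)
    sensitivity = norm.pdf(-f)
    print(f"f={f:.1f}  risk={risk:.3f}  |dr/df|={sensitivity:.3f}")
```

The sensitivity falls off quickly: at f = 3 the risk is around 0.001 and the marginal effect is tiny, but there the surviving future u/r is huge, which is exactly the trade-off described above.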

The second part of the claim is that x-risk will be more neglected if it's large. There are two conditions that need to be met for neglectedness to increase the value of working on x-risk. The first is that x-risk must be neglected in a variable that we can actually increase. For instance, if preventing x-risk is proportional to the amount of political capital invested in it, or, even worse, requires some minimum amount of political capital, then increasing the amount of labour going into x-risk will have a small effect.

The second condition is diminishing returns to scale. This means that if all of the inputs into the x-risk reduction production function were scaled by the same factor, the output of that production function would scale by less than that factor. If diminishing returns to scale holds then, all else equal, the return to putting resources into x-risk reduction is higher the fewer resources are already in x-risk reduction.
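A quick illustration with a Cobb-Douglas production function f = K^α L^β (the functional form used later in the post): scaling both inputs by t scales output by t^(α+β), so α + β < 1 gives diminishing returns to scale.

```python
def output(K, L, alpha, beta):
    # Cobb-Douglas: scaling K and L by t scales output by t**(alpha + beta).
    return K**alpha * L**beta

K, L, t = 2.0, 3.0, 2.0

# Constant returns to scale (alpha + beta = 1): doubling inputs doubles output.
print(output(t*K, t*L, 0.5, 0.5) / output(K, L, 0.5, 0.5))  # 2.0

# Diminishing returns to scale (alpha + beta = 0.4): output grows by 2**0.4, about 1.32.
print(output(t*K, t*L, 0.2, 0.2) / output(K, L, 0.2, 0.2))
```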

But it's not at all obvious that this is the case! It could easily be that we need to reach a threshold of high-quality research before we can start deploying capital in a productive way. In that case, scaling everything by an amount that left research below that threshold would be useless, while the scaling that just got us over the threshold would be incredibly valuable!
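As a toy example (the functional form and numbers here are entirely hypothetical): suppose capital is only productive once cumulative research R clears some threshold.

```python
def threshold_output(R, K, threshold=10.0):
    # Hypothetical: capital K produces nothing until research R hits the threshold.
    return K * R if R >= threshold else 0.0

# Scaling research and capital up, but staying under the threshold, achieves nothing:
print(threshold_output(R=4.0, K=1.0))   # 0.0
print(threshold_output(R=8.0, K=2.0))   # 0.0

# The increment that clears the threshold is suddenly extremely valuable:
print(threshold_output(R=10.0, K=2.5))  # 25.0
```

This is increasing, not diminishing, returns to scale around the threshold, which is why the neglectedness argument can fail there.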

## Formalisation

The return to working on existential risk (x-risk) is, from a total utilitarian perspective, a function of three variables: the productivity of the marginal dollar put into x-risk reduction, the current level of existential risk, and the expected value of the future.

We can formalise this with a utility function and a production function. The utility function represents the total value of the future. The production function represents how the risk of existential catastrophe varies with various inputs.
We can represent the production function with the following expression:

$$f(A, K, L) = A K^{\alpha} L^{\beta}$$

where K is capital, L is labour and A is a productivity parameter.

And a utility function:

$$U = \int_{0}^{\infty} u \cdot e^{-rt}\,dt = \frac{u}{r}$$

where u is the utility we get from a civilisation that's able to become space-faring. For the purposes of this post, I'm going to assume that utility is binary, dependent only on whether an existential catastrophe has occurred: it's 0 when an x-risk has occurred and a constant otherwise.

The utility over time is discounted by r, which in this model is the yearly probability that an x-risk happens.

We can write the probability that an x-risk occurs in a given year as

$$r = G(-f(A, K, L))$$

where G is a CDF. In this model, the probability of extinction depends on the output of the production function.

The effect of increasing the amount of, for instance, labour working to reduce x-risk is given by

$$\frac{\partial U}{\partial L} = \frac{\partial}{\partial L}\left(\frac{u}{r}\right)$$

This is equal to

$$-\frac{u}{r^{2}} \cdot \frac{\partial r}{\partial L}$$

The partial derivative of the probability of x-risk per year with respect to labour is the product of the partial derivative of the production function with respect to labour and the derivative of the CDF, i.e. the pdf, evaluated at the current level of inputs:

$$\frac{\partial r}{\partial L} = -g(-f(A, K, L)) \cdot \frac{\partial f}{\partial L}$$
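As a sanity check on this derivative (a sketch, assuming the Cobb-Douglas form with A = 1, α = β = 1/2, and the standard normal CDF for G), the analytic expression matches a finite-difference estimate:

```python
from scipy.stats import norm

ALPHA, BETA = 0.5, 0.5

def f(K, L):
    return K**ALPHA * L**BETA  # production function (A = 1)

def risk(K, L):
    return norm.cdf(-f(K, L))  # yearly x-risk r = G(-f)

K, L = 0.5, 0.5

# Analytic: dr/dL = -g(-f) * df/dL, where df/dL = beta * f / L for Cobb-Douglas.
analytic = -norm.pdf(-f(K, L)) * BETA * f(K, L) / L

# Central finite difference for comparison.
h = 1e-6
numeric = (risk(K, L + h) - risk(K, L - h)) / (2 * h)

print(analytic, numeric)  # the two should agree to several decimal places
```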

This gives us the way in which value of working on x-risk depends both on the current value of x-risk, and the levels of other inputs going into x-risk reduction.

## Numerical example

Let the production function be

$$f(K, L) = K^{1/2} L^{1/2}$$

(i.e. A = 1 and α = β = 1/2), and let the probability of x-risk be given by the standard normal CDF evaluated at the negative of the production function's output:

$$r = \Phi(-f(K, L))$$

Let

$$K = L = 1/2$$

This gives a probability of x-risk of roughly 0.3. If we set utility per year equal to 1, this gives an expected value of the future of 3.24.

Doubling the amount of labour going into x-risk reduction (to L = 1) increases the expected value of the future to 4.17.

Now, if we instead start with half a unit of capital and 20 units of labour, we get an initial value of the future of 1268. Increasing the amount of capital to 1 increases the expected value of the future to 255,754. As you can see, this is a very dramatic increase in the value of the future from the same absolute increase in resources. This post has assumed a constant probability of extinction throughout; it's very unclear whether this is the case, and it seems extremely worthwhile to do the analysis for a variable rate of extinction, especially if a time-of-perils model is much better than a constant x-risk model.
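The numbers in this section can be reproduced in a few lines (assuming scipy, with u = 1; small differences from the figures quoted above may come from rounding):

```python
from scipy.stats import norm

def ev(K, L, alpha=0.5, beta=0.5, u=1.0):
    """Expected value of the future: u over the yearly risk Phi(-K^a L^b)."""
    return u / norm.cdf(-(K**alpha) * (L**beta))

print(ev(0.5, 0.5))  # approx. 3.24
print(ev(0.5, 1.0))  # approx. 4.17 (labour doubled)
print(ev(0.5, 20))   # on the order of 1.3e3
print(ev(1.0, 20))   # on the order of 2.6e5 (capital doubled)
```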

## Comments

That sort of analysis is what you get for constant non-vanishing rates over time. But most of the long-term EV comes from histories where you have a period of elevated risk and the potential to get it down to stably very low levels, i.e. a 'time of perils', which is the actual view Ord argues for in his book. And with that shape the value of risk reduction is ~proportional to the amount of risk you reduce in the time of perils. I guess this comment you're responding to might be just talking about the constant risk case?

Yeah, this is just about the constant risk case. I probably should have said explicitly that it doesn't cover a time of perils, although the same mechanism with neglectedness should still apply.

> Toby Ord claimed that x-risk reduction was worth pursuing regardless of whether x-risk was high or low.

A priori, it seems unlikely that the ROI to working on x-risk would be a constant function of the level of x-risk.

Using $\alpha + \beta = 1$ in $f = K^{\alpha} L^{\beta}$ is assuming constant returns to scale. If you have $\alpha + \beta < 1$, you get diminishing returns.

Messing around with some python code:

```python
from scipy.stats import norm
import numpy as np

def risk_reduction(K, L, alpha, beta):
    print('risk:', norm.cdf(-(K**alpha)*(L**beta)))
    print('expected value:', 1/norm.cdf(-(K**alpha)*(L**beta)))

    print('risk (2x):', norm.cdf(-((2*K)**alpha)*(L**beta)))
    print('expected value (2x):', 1/norm.cdf(-((2*K)**alpha)*(L**beta)))

    print('ratio:', (1/norm.cdf(-((2*K)**alpha)*(L**beta)))/(1/norm.cdf(-(K**alpha)*(L**beta))))

K, L = 0.5, 0.5
alpha, beta = 0.5, 0.5
risk_reduction(K, L, alpha, beta)

K, L = 0.5, 0.5
alpha, beta = 0.2, 0.2
risk_reduction(K, L, alpha, beta)

K, L = 0.5, 20
alpha, beta = 0.2, 0.2
risk_reduction(K, L, alpha, beta)

K, L = 0.5, 20
alpha, beta = 0.5, 0.5
risk_reduction(K, L, alpha, beta)
```

Are you using ?

This isn't rendering for me, but I don't know if that's just my computer or if everyone will see that.

It looks like you have double escaped the closing bracket.

Thanks! Fixed

I can get the equation to render by highlighting and using ctrl+4 on this:
U = \int_{0}^{\infty} u\cdot e^{-rt}\,dt = \frac{u}{r}
$$U = \int_{0}^{\infty} u\cdot e^{-rt}\,dt = \frac{u}{r}\$$

Huh, actually it renders in the comment box, but not in the published comment. Weird!