I had a random thought while preparing a lecture about the logic behind working to reduce existential risks, and I would love to hear your insights about it:

If a person is risk-averse when thinking about the impact of his career or donations, he probably would not want to donate to or work in jobs that aim to reduce existential risks, because he might have a low 'chance of impact'. For example, in donations, such a person might prefer to donate to GiveWell.

However, if a person is risk-averse when thinking about his life, his loved ones' lives, or people's lives in general, he would want to 'buy insurance' against anything bad happening, and then he would want to donate to or work in jobs that help reduce x-risks.

I think the same paradox might apply to risk-lovers.

I think that if this is true, it raises questions about the internal incentives to work on pressing global problems, and perhaps about which arguments are convincing.

What do you think?





Those are two different kinds of risk aversion: the first is difference-making risk aversion (see this paper and this post that define and criticize it), and the second is (standard) risk aversion, i.e. with respect to outcomes.


I remind you that "risk aversion" is a big deal in economics/finance because of the decreasing marginal utility of income. In fact, in economics and finance, risk aversion for rational agents is not a primitive parameter but a consequence of the CRRA parameter of your utility-of-consumption function.
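To make the CRRA point concrete, here is a minimal sketch with hypothetical numbers. It uses the standard CRRA form u(c) = c^(1-γ)/(1-γ), with log utility at γ = 1; the consumption levels and the γ value are illustrative, not taken from the discussion above:

```python
import math

def crra_utility(c, gamma):
    """CRRA utility: u(c) = c**(1 - gamma) / (1 - gamma); log utility when gamma == 1."""
    if gamma == 1:
        return math.log(c)
    return c ** (1 - gamma) / (1 - gamma)

# A 50/50 gamble between consumption of 50 and 150, versus a sure 100
# (same expected consumption). With gamma = 2 the agent rejects the fair gamble:
gamble_eu = 0.5 * crra_utility(50, 2) + 0.5 * crra_utility(150, 2)
sure_u = crra_utility(100, 2)
print(gamble_eu < sure_u)  # True: curvature alone produces "risk aversion"
```

The point is that nothing labeled "risk aversion" appears as a parameter anywhere; the agent maximizes expected utility risk-neutrally, and the aversion falls out of the curvature set by γ.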


So I think risk aversion becomes quite meaningless for non-monetary types of loss.

That's not quite what the article says, and it's absolutely meaningful for non-monetary loss - though, as the article does imply, you need to be careful about how you think about utility and about what your tradeoff between money and non-monetary goods would be.

Well, I think he says what he says: that risk aversion "isn't". That is, what you observe as "risk aversion" is mainly the result of taking expectations (risk-neutrally) over some non-linear function of payoffs.

"As it happens, the one thing in life you most want to do is to produce and bring up children. Thirty years is long enough to do that; fifteen is not. You grit your teeth and sign up for the operation. You are risk preferring in years of life, because years of life have increasing marginal utility to you."

So general comments on "risk aversion" are invariably wrong, because there is nothing fundamental about "risk aversion"; to discuss the consequences of risk aversion you must make explicit the non-linear transformation of the original input in the utility function relevant to the case.

You're treating utility as a given and actual outcomes as irrelevant, then concluding that risk preference is an artifact. But as you admitted, risk aversion over monetary outcomes exists, and it's the transformation to utility that removes it. Similarly, we'd expect risk aversion over non-monetary goods - having children and years of life are actual outcomes, and risk preference is secondary to those. So your example proves too much.

And yes, you can construct situations where preferences that are normally risk-averse become risk-loving by changing which concrete outcome you're discussing, because you put arbitrary rules in place. I can similarly make almost anyone risk-loving in money by stipulating that they die if they have too little money and must double their current money to survive - but that's an artifact of the scenario, and it says very little about risk preferences in less constrained scenarios.

As long as you have a non-linear utility function u() over an outcome x (preferably monotonic), you get some kind of "risk aversion".

But a clear narrative about the non-linearity of u() is necessary for any risk-aversion argument.
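The claim that any non-linear u() generates some kind of "risk aversion" (or risk-seeking) can be sketched in a few lines. The lottery, the concave choice of square root, and the convex choice of squaring are all hypothetical examples, chosen only to show the two directions of curvature:

```python
import math

def expected_utility(u, lottery):
    """Expected utility of a lottery given as [(probability, outcome), ...]."""
    return sum(p * u(x) for p, x in lottery)

# A 50/50 lottery over outcomes 4 and 16; its expected outcome is 10.
lottery = [(0.5, 4.0), (0.5, 16.0)]

concave = math.sqrt               # diminishing marginal utility -> risk-averse
def convex(x): return x ** 2      # increasing marginal utility -> risk-loving

# The concave agent values the gamble below the sure expected outcome...
print(expected_utility(concave, lottery) < concave(10.0))  # True
# ...while the convex agent values it above.
print(expected_utility(convex, lottery) > convex(10.0))    # True
```

This is Jensen's inequality in miniature: the "risk attitude" is entirely a restatement of which way u() curves, which is the commenter's point that any such argument needs the curvature spelled out.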

For example, I have some intuition that the same total human population spread over time is better than one concentrated in a given period. Something like "time is interesting; better to be represented across it".

This generates a kind of "risk aversion". The opposite intuition (the more human lives overlap, the more interesting) would lead to another.

There are many risk aversions, and they are very different; "risk aversion" is exactly like "non-linearity". How much of use can be said about "non-linearity"?

I agree with your point that risk aversion is "just" pointing out a non-linearity, but there is an incredible amount you can say about non-linearity. And the same goes for any concept: when reduced to "just X" it seems trivial, yet it is still often useful.

It could be useful, but I would say that the more explicit the u() and the x are, the easier it is to assess the validity of the argument. Thank you for the nice (and clarifying) discussion!
