
I'll illustrate this as follows:

Imagine you have two models of the world: an x-risk worldview and a near-termist worldview. Under the first, you expect AGI to arrive in 15 years on average; under the second, you expect it to arrive in 100 years on average.

For simplicity, we'll assume that money is only valuable before AGI arrives. Suppose you want to dedicate $A worth of net-present value to x-risk and $B worth of net-present value to near-termist causes. Then it seems like you'd want to adjust your spend rate to account for the possibility of one world having a longer timeline than the other. A naive model would just divide each budget evenly over that worldview's expected number of years.
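Here's a minimal sketch of that naive model in Python. The dollar figures are illustrative placeholders, not recommendations:

```python
# Naive spend-rate model: divide each worldview's budget evenly
# over that worldview's expected number of years until AGI.

budget_xrisk = 1_000_000        # $A, net-present value for x-risk (illustrative)
budget_neartermist = 500_000    # $B, net-present value for near-termist causes (illustrative)

expected_years_xrisk = 15         # mean AGI timeline under the x-risk worldview
expected_years_neartermist = 100  # mean AGI timeline under the near-termist worldview

annual_spend_xrisk = budget_xrisk / expected_years_xrisk
annual_spend_neartermist = budget_neartermist / expected_years_neartermist

print(f"x-risk: ${annual_spend_xrisk:,.0f}/year")
print(f"near-termist: ${annual_spend_neartermist:,.0f}/year")
```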

Of course, there are a huge number of factors here if you wanted a better model:

• Your timeline estimate should probably be a range rather than a point estimate (see the sketch after this list).
• Beyond this, there's the possibility of error outside the model. If the true timeline turns out shorter than your estimate, neither fund needs to dip into the other; but if it turns out longer, you'd feel pressure to dip into your near-termist funds, and the fact that this pressure could only ever run in one direction seems unfair.
• Your understanding of timelines might improve over time. In some cases, it might make sense to start investing more in the cause with the apparently shorter timeline and back off if this seems to have been a mistake.
• You might hope that your understanding of cause prioritisation would improve over time. Spending now can sometimes be used to set up the possibility of more useful spending later.
• You could discover a completely new cause area, so it might make sense to hold back money for that.
• You might expect the EA community's resources and ability to influence resources to increase over time.
• The opportunities for investing in a cause area might improve over time.
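To illustrate the first point, here is a hedged sketch of treating the timeline as a distribution rather than a point estimate. The lognormal shape and its parameters are arbitrary assumptions for illustration, not claims about actual timelines:

```python
import numpy as np

rng = np.random.default_rng(0)

# Treat the x-risk timeline as a distribution rather than a point estimate.
# Lognormal with median ~15 years; the spread (sigma) is an arbitrary assumption.
timelines = rng.lognormal(mean=np.log(15), sigma=0.5, size=100_000)

budget_xrisk = 1_000_000  # illustrative

# Spending budget/T per year for a drawn timeline T exhausts the budget
# exactly at year T. Comparing the naive rate budget/E[T] with the mean
# of the per-draw rates budget/T shows how uncertainty shifts the answer.
naive_rate = budget_xrisk / timelines.mean()
mean_of_rates = (budget_xrisk / timelines).mean()

print(f"naive rate (budget / mean timeline): ${naive_rate:,.0f}/year")
print(f"mean of per-draw rates:              ${mean_of_rates:,.0f}/year")
```

By Jensen's inequality, E[1/T] > 1/E[T], so the mean of the per-draw rates exceeds the naive rate. Which is the better guide depends on how you weigh overspending against underspending, which is exactly the one-directional asymmetry the second bullet points at.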

So, there are really a lot of factors there. Does anyone know of any existing research on this?

Answers

Is Pascal's Wager an example of cause prioritization being worldview-dependent? In some sense it's about trading off the expected utility of an infinite afterlife versus the expected utility of a finite worldly life. Is that the kind of thing you're thinking about?

It's relevant here; any fuller analysis of this problem would have to address that kind of issue.
