With the apparent increase in AI risk discussion, I was just wondering if anything like this existed. My silly imagining of this might be a ~1/6 x-risk penalty on projected lifetime donations (per Ord's existential risk estimate for this century in The Precipice), not that I think this should be the case or that the number is still representative.
I don't mean this as an out-of-the-blue criticism - mostly I'm just curious whether/how x-risk might be taken into account, since I'm beginning to think this way about my own life.
Hi Phib, Michael from the GWWC Research team here! In our latest impact evaluation we did need to consider how to think about future donations. We explain how we did this in the appendix "Our approach to discount rates". Essentially, it's a really complex topic, and you're right that existential risk plays into it (we note it as one of the key considerations). If you discount the future based just on Ord's existential risk estimates, then by some quick maths, a 1 in 6 chance over 100 years should discount each year by about 0.2% (1 - (1 - 1/6)^(1/100) ≈ 0.002).
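For anyone who wants to check that arithmetic, here's a minimal sketch; the assumption that the 1-in-6 risk is spread evenly across the century (a constant hazard) is mine for illustration:

```python
# Annualise a 1-in-6 existential risk over 100 years, assuming the
# risk is spread evenly across the century (constant hazard).
century_risk = 1 / 6

# The annual survival probability that compounds to 5/6 over 100 years.
annual_survival = (1 - century_risk) ** (1 / 100)

# The corresponding per-year discount from x-risk alone.
annual_discount = 1 - annual_survival

print(f"Annual x-risk discount: {annual_discount:.4%}")  # ~0.18% per year
```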
Yet there are many other considerations that also weigh into this, at least from GWWC's perspective. Most significant is how we should expect the cost-effectiveness of charities to change over time.
We chose to use a discount rate of 3.5% for our best-guess estimates (and 5% for our conservative estimates), based on the recommendation in the UK government's Green Book. We explain why we made that decision in our report; it was largely motivated by our framework of prioritising being useful/transparent/justifiable over being academically correct and thorough.
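To make those rates concrete, here's a hypothetical illustration of how a constant discount rate shrinks the present value of a multi-decade donation stream (the pledge size and horizon below are made up for the example, not figures from our report):

```python
# Present value of a hypothetical pledge of 1,000/year for 30 years,
# discounted at the best-guess (3.5%) and conservative (5%) rates.
def present_value(annual_donation: float, years: int, rate: float) -> float:
    """Sum each year's donation discounted back to today."""
    return sum(annual_donation / (1 + rate) ** t for t in range(1, years + 1))

for rate in (0.035, 0.05):
    pv = present_value(1_000, 30, rate)
    print(f"rate={rate:.1%}: PV ≈ {pv:,.0f}")
    # ~18,392 at 3.5% and ~15,372 at 5%, vs 30,000 undiscounted
```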
If you're interested in this topic, and in how to think about discount rates in general, you may find Founders Pledge's report on investing to give an interesting read.
I really think the discount rate equation used just doesn't capture my intuitions about how impactful x-risk would be, but I'll leave it at that and stop bugging you (thanks again for the thoughtful response).
Of course, at some point you have to stop the recursive utilitarian dilemma of analysis paralysis, and the approach in that report is probably a good place to do so.
Unsure as well, I think I'm at the point of waiting, and doing my best to learn, since I think any claims as to just how transformative AI might be regarding the economy and eve...