With the seeming increase in AI-risk discussion, I was just wondering if anything like this existed. A naive version might be a ~1/6 x-risk penalty on some lifetime donors' projected impact (per Ord's estimate in The Precipice of total existential risk this century), not that I think this should actually be done, or that the number is still representative.
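To make that concrete (back-of-the-envelope, and assuming purely for illustration that the 1/6 is spread evenly over the century, which it needn't be): a flat penalty would just multiply expected lifetime impact by $5/6 \approx 0.83$, while spreading it out as a constant annual hazard $h$ gives

$$(1-h)^{100} = \frac{5}{6} \quad\Longrightarrow\quad h = 1 - \left(\frac{5}{6}\right)^{1/100} \approx 0.18\%\ \text{per year}.$$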
I don’t mean this as out-of-the-blue criticism; I'm mostly just curious whether and how x-risk might be taken into account, since I'm beginning to think this way about my own life.
I still think the discount-rate equation the report uses just doesn't capture my intuitions about how much x-risk should matter, but I'll leave it at that and stop bugging you after one sketch of what I mean (thanks again for the thoughtful response).
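Roughly (my own toy framing, not anything from the report): a constant annual hazard $h$ acts exactly like an extra exponential discount factor, since surviving to year $t$ multiplies value by

$$(1-h)^t = e^{-t\,\ln\frac{1}{1-h}} \approx e^{-ht}.$$

But if the risk is concentrated, say with most of it falling in the next couple of decades around transformative AI, the survival curve drops steeply and then flattens, and no single constant discount rate reproduces that shape.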
Of course, at some point you have to cut off the recursion of utilitarian analysis before it turns into analysis paralysis, and where that report stops is probably a good place.
I'm unsure as well. I think I'm at the point of waiting and doing my best to learn, since any claim about just how transformative AI might be for the economy, or even for how we come to solve problems, comes down to probabilities I'm quite uncertain about… to the extent that having people seriously think about it is all I can ask (which it seems both you and GWWC are doing, in addition to all the impactful work y'all are doing - thank you again).