With the apparent increase in AI risk discussion, I was just wondering if anything like this existed. My silly imagining of this might be a ~1/6 x-risk penalty on some lifetime donors (per Ord's x-risk estimate for this century in The Precipice), not that I think this should be the case, or that the number is still representative.
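(To make the arithmetic concrete, and purely as an illustration taking that 1/6 at face value as a flat penalty on projected lifetime totals, I'm picturing something like

$$\text{risk-adjusted lifetime donations} \approx D \times \left(1 - \tfrac{1}{6}\right),$$

where D is the unadjusted projection; this is just a hypothetical placeholder that ignores when the donations would actually happen.)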
I don't mean this as an out-of-the-blue criticism - mostly I'm just curious if/how x-risk might be taken into account, since I myself am beginning to think this way about my own life.
Hi Michael, thank you for the response; I definitely should have checked out the full report to be more respectful of your time. Yeah, it honestly seems really complex, and I understand the need to prioritize. Thanks for sharing.
I'm not sure how to evaluate this. I see existential risk being more or less relegated to a bullet point in the appendix, and that may be a good place for it given the scope and sophistication of such a report... but I'm also trying to reconcile that with even (moderate?) estimates like Ord's, where merely humoring the chance seems to change a lot. I'm also unsure that discount rates really capture the loss of value from x-risk, but maybe that's a more classic near-term vs. longtermism argument.
Also, wouldn't the above 'x-risk discount rate' be 2% rather than 0.2%?
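(For what it's worth, the conversion I was attempting, which may be exactly where I'm confused: if p is the total risk over a horizon of T years, a flat annual "x-risk discount" r would satisfy

$$(1 - r)^{T} = 1 - p.$$

Taking Ord's p = 1/6 with T = 100 gives r ≈ 0.18%, i.e. roughly the 0.2% figure; something near 2% would only fall out if that same 1/6 were concentrated over something closer to a decade, so I may just be mixing up horizons or estimates.)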
I guess I am curious about this sort of tension between x-risk, transformative AI, and near-term plans for a lot of EA orgs (and this has been rather informative, thanks again!).