With the apparent increase in AI risk discussion, I was wondering whether anything like this exists. My (admittedly silly) first thought was a ~1/6 x-risk penalty applied to some lifetime donors (per Ord's x-risk estimate for this century in The Precipice), though I'm not claiming this should be done or that the number is still representative.
I don't mean this as an out-of-the-blue criticism - mostly I'm just curious whether/how x-risk might be taken into account, since I'm beginning to think this way about my own life.
No problem!
Regarding the discount rate:
There was a typo in my earlier answer: 1 − (1 − 1/6)^(1/100) ≈ 0.0018, which is ~0.2% (not 0.2), and is a fair amount smaller than the discount rate we actually used (3.5%). Still, if you assigned a greater probability of existential risk this century than Ord does, you could end up with a (potentially much) higher discount rate. Alternatively, even with a high existential-risk estimate, if you thought we would keep finding more and more cost-effective giving opportunities over time, then, at least for the purposes of our impact evaluation, these two effects could cancel out.
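To make the arithmetic concrete, here's a minimal sketch of the conversion from a century-level risk estimate to an annualized discount rate, assuming the risk is spread uniformly (constant hazard) across the century. This is just my own illustration, not part of GWWC's evaluation methodology:

```python
def annualized_xrisk_discount(century_risk: float, years: int = 100) -> float:
    """Convert a probability of existential catastrophe over `years`
    into a constant annual discount rate, assuming a constant hazard
    rate across the whole period."""
    survival = 1 - century_risk              # chance of making it through the century
    annual_survival = survival ** (1 / years)  # per-year survival probability
    return 1 - annual_survival               # per-year discount rate

# Ord's 1-in-6 estimate for this century -> ~0.18% per year
print(f"{annualized_xrisk_discount(1/6):.4f}")  # 0.0018

# Even a much more pessimistic 50% century-level risk -> ~0.69% per year,
# still well below the 3.5% rate we actually used
print(f"{annualized_xrisk_discount(0.5):.4f}")  # 0.0069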
I think if we spent more time trying to come to an all-things-considered view on this topic, we'd still be left with considerable uncertainty, so I think it was the right call for us to acknowledge that and take the pragmatic approach of deferring to the Green Book.
In terms of the general tension between potentially high x-risk and the chance of transformative AI, I can only speak personally (not on behalf of GWWC). It's something on my mind, but it's unclear to me what exactly the tension is. I still think it's great to move money to effective charities across a range of impactful causes, and I'm excited about building a culture of giving significantly and effectively throughout one's life (i.e., via the Pledge). I don't think GWWC should pivot to focus specifically on one cause (e.g., AI), and beyond that I'm not sure exactly what the potential for transformative AI should imply for GWWC.