Is there a quantitative model for money going to AI safety, like this one (http://globalprioritiesproject.org/2015/08/quantifyingaisafety/) but for donations? Not including far-future utopia effects, just x-risk reduction.
One can convert a utility-per-researcher estimate into utility-per-dollar by dividing by a cost per researcher. So if your model gives a 1e-6 x-risk reduction per researcher, and you also decide to value researchers at $1M per researcher, then the same estimate in cost terms is 1e-12 x-risk reduction per dollar.
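In code the conversion is just a division; here is a minimal sketch (the function name is mine, and the numbers are the illustrative ones from the paragraph above, not real estimates):

```python
def xrisk_reduction_per_dollar(xrisk_reduction_per_researcher: float,
                               cost_per_researcher: float) -> float:
    """Convert a per-researcher x-risk estimate into a per-dollar estimate."""
    return xrisk_reduction_per_researcher / cost_per_researcher

# 1e-6 x-risk reduction per researcher, researcher valued at $1M:
print(xrisk_reduction_per_dollar(1e-6, 1e6))  # -> 1e-12 x-risk reduction per dollar
```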
For some values (i.e., fake numbers, but still acceptable for comparing orders of magnitude across cause areas) that I've seen used:
- The Oxford Prioritisation Project uses $1.8 million (lognormal distribution between $1M and $3M) for a MIRI researcher over their career.
- 80,000 Hours implicitly uses ~$100,000/year/worker in their yardsticks comparing cause areas.
- Effective Altruism orgs in the 2018 talent survey say they value their junior hires at $450k and senior hires at $3M on average (over three years).
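As a rough sensitivity check, one can plug the cost figures above into the same division. A hedged sketch, assuming (which doesn't strictly hold) that the illustrative 1e-6 per-researcher figure covers the same time span as each cost figure:

```python
# Sensitivity check: how much does the per-dollar estimate move across the
# cost-per-researcher figures cited above? The 1e-6 figure is illustrative.
xrisk_per_researcher = 1e-6

cost_estimates = {
    "Oxford Prioritisation Project (MIRI researcher, career)": 1.8e6,
    "EA orgs 2018 talent survey, junior hire (3 years)": 4.5e5,
    "EA orgs 2018 talent survey, senior hire (3 years)": 3.0e6,
}

for label, cost in cost_estimates.items():
    print(f"{label}: {xrisk_per_researcher / cost:.1e} x-risk reduction per dollar")
```

Across those figures the per-dollar estimate only moves by about an order of magnitude, which is why they're still usable as order-of-magnitude yardsticks.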
The cost per researcher is typically larger than what the researcher actually gets paid, since it also includes overhead (administration costs, office space, etc.).