
Summary

  • GiveWell’s discount rate of 4% includes a 1.4% contribution from ‘temporal uncertainty’ arising from the possibility of major events radically changing the world.
  • This is incompatible with the transformative artificial intelligence (TAI) timelines of many AI safety researchers.
  • I argue that GiveWell should increase its discount rate, or at least provide a justification for differing significantly from the commonly-held (in EA) view that TAI could come soon.

Epistemic Status: timelines are hard, and I don’t have novel takes, but I worry that GiveWell doesn’t either, and that they are dissenting unintentionally.

In my accompanying post I argued that GiveWell should use a probability distribution over discount rates; I will set that aside here, and just consider whether their point estimate is appropriate.

GiveWell’s current discount rate of 4% is calculated as the sum of three factors. Quoting their explanations from this document:

  • Improving circumstances over time: 1.7%. “Increases in consumption over time meaning marginal increases in consumption in the future are less valuable.”
  • Compounding non-monetary benefits: 0.9%. “There are non-monetary returns not captured in our cost-effectiveness analysis which likely compound over time and are causally intertwined with consumption. These include reduced stress and improved nutrition.”
  • Temporal uncertainty: 1.4%. “Uncertainty increases with projections into the future, meaning the projected benefits may fail to materialize. James [a GiveWell researcher] recommended a rate of 1.4% based on judgement on the annual likelihood of an unforeseen event or longer term change causing the expected benefits to not be realized. Examples of such events are major changes in economic structure, catastrophe, or political instability.”

I do not have a good understanding of how these numbers were derived, and have no reason to think the first two are unfair estimates. I think the third is a significant underestimate.

TAI is precisely the sort of “major change” meant to be captured by the temporal uncertainty factor. I have no insights to add on the question of TAI timelines, but I think that, absent GiveWell providing justification to the contrary, they should default towards using the timelines of people who have thought about this a lot. One such person is Ajeya Cotra, who in August reported a 50% credence in TAI being developed by 2040. I do not claim, and nor does Ajeya, that this is authoritative; however, it seems a reasonable starting point for GiveWell to use, given they have not and (I think rightly) probably will not put significant work into forming independent timelines. Also in August, a broader survey of 738 experts by AI Impacts gave a median year for TAI of 2059. This source has the advantage of including many more people, but conversely most of them will not have spent much time thinking carefully about timelines.

I will not give a sophisticated instantiation of what I propose, but rather gesture at what I think a good approach would be, and give a toy example to improve on the status quo. A naive approach would be to imagine that there is a fixed annual probability of developing TAI, conditional on not having developed it to date. This method gives an annual probability of 3.8% under Ajeya’s timelines, and 1.9% under the AI Impacts timelines.[1] In reality, more of our probability mass should be placed on the later years between now and 2040 (or 2059), and we should not simply stop at 2040 (or 2059). A proper model would likely need to dispense with a constant discount rate entirely, and instead track the probability that the world has not seen a “major change” by each year. A model that accomplishes something like this was published on the EA Forum in July, and suggests interventions that actualise their benefit sooner should be favoured in shorter-timeline worlds.
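To make the arithmetic concrete, here is a minimal sketch in Python of both the naive constant-hazard calculation and the kind of year-by-year survival-probability discounting a proper model might use. The timeline inputs and the way I fold the hazard into GiveWell’s other discount components are my own illustrative assumptions, not GiveWell’s methodology.

```python
# Toy sketch: back out a constant annual probability of TAI from a median
# timeline, then show one (assumed) way to fold it into year-by-year
# discounting instead of a single constant rate.

def constant_annual_tai_probability(years_to_median: int) -> float:
    """Annual probability x such that (1 - x)^years_to_median = 0.5."""
    return 1 - 0.5 ** (1 / years_to_median)

ajeya = constant_annual_tai_probability(18)       # 50% by 2040 -> ~0.038
ai_impacts = constant_annual_tai_probability(37)  # median 2059 -> ~0.019

def present_value_weight(year: int, hazard: float, other_rate: float = 0.026) -> float:
    """Weight on a benefit `year` years out: the probability no 'major change'
    has yet occurred, times ordinary discounting at the sum of GiveWell's
    other two components (1.7% + 0.9% = 2.6%). A placeholder structure only:
    a real model would let the hazard vary by year rather than stay constant."""
    survival = (1 - hazard) ** year
    return survival / (1 + other_rate) ** year

print(f"Implied annual probability (Ajeya):      {ajeya:.3f}")
print(f"Implied annual probability (AI Impacts): {ai_impacts:.3f}")
print(f"Weight on a benefit 20 years out (Ajeya hazard): "
      f"{present_value_weight(20, ajeya):.2f}")
```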

Clearly, TAI is not the only such event that could upend the world, so the contribution of temporal uncertainty to the discount rate should be greater than that from AI alone. Thus, the temporal uncertainty parameter, and therefore the overall discount rate, seems to be significantly too low. If we instead use a discount rate of 7%, this reduces the cost-effectiveness of the deworming charities by 45–46%, while a discount rate of 5% would still lead to a reduction in cost-effectiveness of 19%, both compared to the current value of 4%.[2]
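For intuition on why the hit to cost-effectiveness is that large: deworming benefits are modelled as income gains that accrue decades after treatment, so they are particularly sensitive to the discount rate. The sketch below uses placeholder timing assumptions (a unit annual benefit starting ~10 years after treatment and lasting ~40 years), not GiveWell’s actual parameters, but it produces reductions in the same ballpark; the precise figures above come from the spreadsheet linked in footnote 2.

```python
# Toy illustration of how raising the discount rate shrinks the present value
# of a deworming-style benefit stream. Timing parameters are placeholder
# assumptions for illustration, not GiveWell's cost-effectiveness model.

def present_value(rate: float, start_year: int = 10, duration: int = 40) -> float:
    """Present value of a unit annual benefit lasting `duration` years,
    beginning `start_year` years from now, discounted at `rate`."""
    return sum(1 / (1 + rate) ** t for t in range(start_year, start_year + duration))

baseline = present_value(0.04)  # GiveWell's current 4% rate
for rate in (0.05, 0.07):
    reduction = 1 - present_value(rate) / baseline
    print(f"At a {rate:.0%} discount rate, present value falls by ~{reduction:.0%}")
```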

All this relies on judgement calls in the challenging domain of forecasting AI timelines, where reasonable people can and do disagree dramatically, so if GiveWell decides they have long timelines, this is fine. However, if this is the case, it should be communicated in a better justification for the temporal uncertainty parameter.

Notes


  1. If there is an annual probability, x, of TAI for the next 18 years, and a 50% chance of TAI within 18 years, then (1-x)^18 = 0.5 => x = 1 - 2^(-1/18) = 0.038. Similarly, 37 years => 0.0186. ↩︎

  2. The naive AI temporal uncertainty parameter of 3.8% (Ajeya’s timelines) is 2.4 percentage points higher than GiveWell’s value of 1.4%, but I assume much of that original 1.4% was for non-AI reasons (else presumably they would have called it ‘AI risk’ or at least talked about AI specifically in the description). So I will round up from 2.4% to 3% to account for whatever other factors GiveWell was thinking of (Biorisk? A world war? Radically new medical technology? Unknown unknowns?). Likewise, I use 5% as indicative of the AI Impacts timelines. Data in the accompanying spreadsheet. ↩︎

Comments

If TAI arrives and doesn't cause extinction, it could still be years before the poorest countries are significantly impacted. So, the probability of TAI arrival could be too high a rate to discount by (or at least too high for AI's contribution).

Also, a life saved might become more valuable and life-saving charities might do more good than otherwise, in case the beneficiaries' quality of life or life expectancies improve due to the arrival of TAI! I'd guess you'd only want to discount by the probability of extinction or global catastrophe for life-saving interventions. I suppose there's also a chance that between your donation and its use saving a life, the beneficiary would have been saved through the benefits from TAI, but I think GiveWell has been recommending donations for benefits in the couple of years after they're received, so this seems unlikely.

The extreme person-affecting tails involve far longer lives from life extension tech and mind uploading.

Income/wealth gains would probably become less valuable if GiveWell charity beneficiaries benefit from TAI.

All good points. Yes, in slower take-off scenarios there would be a larger lag; I suppose I was implicitly thinking of cases where the world quickly moves to collapse or >=20% annual economic growth, but true, this does weaken my conclusion. Ah, interesting thought about saving lives being especially valuable given the possibility of life-extension tech. Perhaps our best-guess 'life expectancy' for someone alive today should be >100 years then, and maybe far more, if there is even a small chance of entering post-death worlds.

I think it requires either a disagreement in definitions, or very pessimistic views about how tractable certain scientific problems will prove to be, to think that the "transformative" bit will take long enough to impact the discount rate by more than a few percent (total). But yes, it will be non-zero.

Thanks for your entry!