Should Good Ventures focus on current giving opportunities, or save for future giving opportunities?

by Milan_Griffes, 7th Nov 2016

Around this time of year, GiveWell traditionally spends a lot of time thinking about game-theoretic considerations – specifically, what funding recommendation it ought to make to Good Ventures so that Good Ventures allocates its resources wisely. (Here are GiveWell's game-theoretic posts from 2014 & 2015.)

The main considerations here are:

  1. How should Good Ventures act in an environment where individual donors & other foundations are also giving money?
  2. How should Good Ventures value its current giving opportunities compared to the giving opportunities it will have in the future?

I'm more interested in the second consideration, so that's what I'll engage with here. If present-day opportunities seem better than expected future opportunities, Good Ventures should fully take advantage of its current opportunities, because they are the best giving opportunities it will ever encounter. Conversely, if present-day opportunities seem worse than expected future opportunities, Good Ventures should give sparingly now, preserving its resources for the superior upcoming opportunities.

Personally, I'm bullish on present-day opportunities. Present-day opportunities seem more attractive than future ones for a few reasons:

  1. The world is improving, so giving opportunities will get worse if current trends continue.
  2. There's a non-negligible chance that a global catastrophic risk (GCR) occurs within Good Ventures' lifetime (it's a "burn-down" foundation), thus nullifying any future giving opportunities.
  3. Strong AI might emerge sometime in the next 30 years. This could be a global catastrophe, or it could ferry humanity into a post-scarcity environment, wherein philanthropic giving opportunities are either dramatically reduced or entirely absent.

So far, my reasoning has been qualitative, and if it's worth doing, it's worth doing with made-up numbers, so let's assign some subjective probabilities to the different scenarios we could encounter (in the next 30 years):

  • P(current, broad trend of humanitarian improvement stalls out or reverses; no strong AI, no GCRs) = 30%
  • P(current, broad trend of humanitarian improvement continues; no GCRs, if strong AI occurs it doesn't lead to post-scarcity) = 56%
  • P(strong AI leads to a post-scarcity economy) = 5%
  • P(strong AI leads to a global catastrophe) = 2%
  • P(a different GCR occurs) = 7%

To assess the expected value of these scenarios, we also have to assign a utility score to each scenario (obviously, the following is incredibly rough):

  • Current, broad trend of humanitarian improvement stalls out or reverses; no strong AI, no GCRs = Baseline
  • Current, broad trend of humanitarian improvement continues; no GCRs, if strong AI occurs it doesn't lead to post-scarcity = 2x as good as baseline
  • Strong AI leads to a post-scarcity economy = 100x as good as baseline
  • Strong AI leads to a global catastrophe = 0x as good as baseline
  • A different GCR occurs = 0x as good as baseline

Before calculating the expected value of each scenario, let's unpack my assessments a bit. I'm imagining "baseline" goodness as essentially things as they are right now, with no dramatic changes to human happiness in the next 30 years. If quality of life broadly construed continues to improve over the next 30 years, I assess that as twice as good as the baseline scenario.

Achieving post-scarcity in the next 30 years is assessed as 100x as good as the baseline scenario of no improvement. (Arguably this could be nearly infinitely better than baseline, but to avoid Pascal's mugging we'll cap it at 100x.)

A global catastrophe in the next 30 years is assessed as 0x as good as baseline.

Again, this is all very rough.

Now, calculating the expected value of each outcome is straightforward:

  • Expected value of current, broad trend of humanitarian improvement stalls out or reverses; no strong AI, no GCRs = 0.3 x 1 = 0.3
  • Expected value of current, broad trend of humanitarian improvement continues; no GCRs, if strong AI occurs it doesn't lead to post-scarcity = 0.56 x 2 = 1.12
  • Expected value of strong AI leads to a post-scarcity economy = 0.05 x 100 = 5
  • Expected value of strong AI leads to a global catastrophe = 0.02 x 0 = 0
  • Expected value of a different GCR occurs = 0.07 x 0 = 0

And each scenario maps to a now-or-later giving decision:

  • Current, broad trend of humanitarian improvement stalls out or reverses; no strong AI, no GCRs –> Give later (because new opportunities may be discovered)
  • Current, broad trend of humanitarian improvement continues; no GCRs, if strong AI occurs it doesn't lead to post-scarcity –> Give now (because the best giving opportunities are the ones we're currently aware of)
  • Strong AI leads to a post-scarcity economy –> Give now (because philanthropy is obsolete in post-scarcity)
  • Strong AI leads to a global catastrophe (GCR) –> Give now (because philanthropy is nullified by a global catastrophe)
  • A different GCR occurs –> Give now (because philanthropy is nullified by a global catastrophe)

So, we can add up the expected values of all the "give now" scenarios and all the "give later" scenarios, and see which sum is higher:

  • Give now total expected value = 1.12 + 5 + 0 + 0 = 6.12
  • Give later total expected value = 0.3
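For concreteness, the whole exercise fits in a few lines of Python. This is just a sketch of the arithmetic above – the probabilities and utility multipliers are the same made-up numbers from the scenarios listed earlier:

```python
# A minimal sketch of the expected-value arithmetic in this post.
# All probabilities and utility multipliers are the made-up numbers
# assigned to the five scenarios above.
scenarios = {
    "improvement stalls or reverses": {"p": 0.30, "utility": 1,   "decision": "later"},
    "improvement continues":          {"p": 0.56, "utility": 2,   "decision": "now"},
    "strong AI, post-scarcity":       {"p": 0.05, "utility": 100, "decision": "now"},
    "strong AI, global catastrophe":  {"p": 0.02, "utility": 0,   "decision": "now"},
    "a different GCR occurs":         {"p": 0.07, "utility": 0,   "decision": "now"},
}

# Sanity check: the subjective probabilities should cover all outcomes.
assert abs(sum(s["p"] for s in scenarios.values()) - 1.0) < 1e-9

# Sum expected value (probability x utility) into the two buckets.
totals = {"now": 0.0, "later": 0.0}
for s in scenarios.values():
    totals[s["decision"]] += s["p"] * s["utility"]

print(round(totals["now"], 2), round(totals["later"], 2))  # 6.12 0.3
```

One nice property of writing it out this way is that anyone who disagrees with the inputs can swap in their own probabilities and utilities and immediately see how sensitive the conclusion is.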

This is a little strange because GCR outcomes are given no weight, but in reality, if we were faced with a substantial risk of a global catastrophe, that would strongly influence our decision-making. Maybe the proper way to do this is to assign a negative value to GCR outcomes and include them in the "give later" bucket, but that pushes even further in the direction of "give now", so I'm not going to fiddle with it here.

Comparing the sums shows that, in expectation, giving now will lead to substantially more value. Most of this is driven by the post-scarcity variable, but even with post-scarcity outcomes excluded, I still assess "give now" scenarios to have about 4x the expected value of "give later" scenarios.

Yes, this exercise is ad hoc and a little silly. Others could assign different probabilities & utilities, which would lead them to different conclusions. But the point the exercise illustrates is important: if you're like me in thinking that, over the next 30 years, things are most likely going to continue slowly improving with some chance of a trend reversal and a tail risk of major societal disruption, then in expectation, present-day giving opportunities are a better bet than future giving opportunities.

 ---

Disclosure: I used to work at GiveWell.

A version of this post appeared on my personal blog.