
I haven't seen much public discussion of Holden's post regarding Good Ventures' giving plans for the near future, especially in relation to how effective the last dollar they give away will be compared to donations to GiveWell's current top charities. Perhaps I missed something?

The post itself is very well written, and it's long enough that I'll summarize the points I believe are most important by quoting the piece directly:

To start, here's OPP's overall take on how effective the last dollar they give away will be:

We have a great deal of uncertainty about the value of giving later. We could imagine that funds we save and give later will end up doing much less good than donations to GiveWell’s highest rated top charities would - or much more. On balance, our very tentative, unstable guess is the “last dollar” we will give (from the pool of currently available capital) has higher expected value than gifts to GiveWell’s top charities today. This is a notable change from last year’s position, and our position could easily change again in the near future.

And here's Holden's explanation of their reasoning:

The points above [not quoted, for brevity] list several possible ways in which we might later come to believe that there are billions of dollars’ worth of giving opportunities that we would prefer over further support of GiveWell’s top charities. I don’t think any one of them is highly likely to play out, but I think there are reasonable odds that at least one of them does. One very rough way of thinking about this is to imagine that there are four such possibilities (one featuring our views about the moral value of the far future; one featuring our views about the moral significance of helping animals; one featuring our estimated “room for more funding” for some outstanding cause such as potential risks from advanced AI or biosecurity and pandemic preparedness; and “unknown unknowns”), each with an independent 10% chance of leading to a “last dollar” that seems 5x as good as GiveWell’s current top charities. In aggregate, this would imply a 34% chance that there is some way to spend the “last dollar” 5x as well as GiveWell’s current top charities. This would imply that GiveWell’s current top charities should be considered less cost-effective in expectation than the “last dollar.”
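
As a quick sanity check on that 34% figure: it appears to be the chance that at least one of the four independent 10% possibilities pans out, i.e.

$$1 - (1 - 0.1)^4 = 1 - 0.9^4 \approx 0.344$$

Even under the conservative assumption (mine, not necessarily Holden's) that the "last dollar" does no good at all in the remaining ~66% of cases, the expected multiple relative to GiveWell's top charities would be roughly 0.344 × 5 ≈ 1.7.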

I'll also highlight one point from his conclusion [emphasis in original]:

In 2017, I hope to put significantly more time into the issues that were preliminarily addressed in this post, such as the value of the “last dollar” and what sort of conservatism we should practice.

And one last bit from Holden that I found interesting, and that I'm curious to hear people's opinions on:

I haven’t yet seen a formal approach I find satisfying and compelling for questions like “How should I behave when I perceive a significant risk that I’m badly misguided in a fundamental way?”

I suspect that a helpful discussion on this topic is possible, though I don't know what the most reasonable answers to the questions posed here look like. I agree that, for decision-theoretic reasons, it's worth putting lots of effort into thinking about the value of OPP's last dollar. And, as always, I think there's value in reading the whole post itself, in addition to this summary.

Oh, and one other thing! It's worth mentioning that my interest in this topic was likely influenced by the fact that it's new and hard to make progress on. To the extent that challenging and groundbreaking topics are more interesting in general, I'm likely to point them out, to the marked exclusion of topics where notable progress has already been made. So, just as I'm focusing on a problem OPP staff still have to face, I'd also like to take a moment to appreciate the work OPP and GiveWell have done on everything else.

Comments (1)

I haven’t yet seen a formal approach I find satisfying and compelling for questions like “How should I behave when I perceive a significant risk that I’m badly misguided in a fundamental way?”

Seems like the obvious thing would be to frontload testing your hypotheses, try things that break quickly and perceptibly if a key belief is wrong, minimize the extent to which you try to control the behavior of other agents in ways other than sharing information, and share resources when you happen to be extraordinarily lucky. In other words, behave like you'd like other agents who might be badly misguided in a fundamental way to behave.
