Chris Smith


Drowning children are rare

(I used to work for GiveWell)

Hey Ben,

I'm sympathetic to a lot of the points you make in this post, but I think your conclusions are far more negative than is reasonable.

Here's the stuff I largely agree with you on:

- Saving lives with global health interventions probably isn't nearly as easy as Singer's thought experiment suggests

- Entities other than GiveWell use GiveWell's estimates without the appropriate level of nuance and detail about where the estimates come from and how uncertain they are

- There isn't anything close to a $50,000,000,000 funding gap for ultra cost-effective interventions to save lives

- GiveWell's cost-effectiveness estimates are probably overly optimistic

That said, I find a few of the things you say in this post frustrating:

"Either charities like the Gates Foundation and Good Ventures are hoarding money at the price of millions of preventable deaths, or the low cost-per-life-saved numbers are wildly exaggerated."

I don't think anyone at GiveWell believes millions of lives could be saved today at an ultra-low cost. GiveWell regularly publishes its room-for-more-funding analyses, which indicate that the funding gaps for its recommended interventions amount to far, far less than $50 billion per year.

As far as I can tell, people at Good Ventures & Open Phil sincerely believe that funding in cause areas other than global health may be incredibly cost-effective. I think Good Ventures funds other stuff because they think each $5,000 of funding given to those causes may do more good than an additional $5,000 given to GiveWell's recommended charities. They might be dead wrong, but I don't think they rationalize their choices with, "Well, GiveWell's estimates are just BS so let's not take them seriously."

"They were worried that this would be an unfair way to save lives."

I find this way of describing GW's motivations awfully uncharitable.

"[The cost-effectiveness estimates are] marketing copy designed to control your behavior, not unbiased estimates designed to improve the quality of your decisionmaking process."

GiveWell puts a ton of effort into coming up with these numbers and drawing on them as they make decisions. None of that would happen if the numbers were just created for the purposes of marketing and manipulation. I have significant reservations about how GiveWell's estimates are created and used. I don't have significant reservations about GiveWell's sincerity when sharing the estimates.

[Link] The Optimizer's Curse & Wrong-Way Reductions

That's interesting, and something I may not have considered enough. I think there's a real possibility that there could be excessive quantification in some areas of the EA community but not enough of it in others.

For what it's worth, I may have made this post too broad. I wanted to point out a handful of issues that I felt all kind of fell under the umbrella of "having excessive faith in systematic or mathematical thinking styles." Maybe I should have written several posts on specific topics that get at areas of disagreement a bit more concretely. I might get around to those posts at some point in the future.

[Link] The Optimizer's Curse & Wrong-Way Reductions

Again, none of this is to say that Bayesianism is fundamentally broken or that high-level Bayesian-ish things like "I have a very skeptical prior so I should not take this estimate of impact at face value" are crazy.

[Link] The Optimizer's Curse & Wrong-Way Reductions

As a real-world example:

Venture capitalists frequently fund things that they're extremely uncertain about. It's my impression that Bayesian calculations rarely play into these decisions. Instead, smart VCs think hard and critically and come to conclusions through processes that they probably don't fully understand themselves.

It could be that VCs have just failed to realize the amazingness of Bayesianism. However, given that they're smart & there's a ton of money on the table, I think the much more plausible explanation is that hardcore Bayesianism wouldn't lead to better results than whatever it is that successful VCs actually do.

[Link] The Optimizer's Curse & Wrong-Way Reductions

It's always worth entertaining multiple models if you can do that at no cost. However, doing so often comes at some cost (money, time, etc.). In situations with lots of uncertainty (where the optimizer's curse is liable to cause significant problems), it's worth paying much higher costs to entertain multiple models (or do the other things I suggested) than it is in cases where the optimizer's curse is unlikely to cause serious problems.
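
To gesture at why paying that cost can be worthwhile, here's a toy simulation (every number below is invented for illustration, not taken from any real analysis): averaging several independent noisy models per option shrinks the upward bias the optimizer's curse puts on whichever option you end up selecting.

```python
# Toy sketch: averaging independent models per option reduces the
# optimizer's-curse bias, since the combined estimate is less noisy.
# All parameters are made up for illustration.
import random

random.seed(1)

def average_overestimate(n_models, n_options=20, noise_sd=0.5, n_trials=5_000):
    """Average amount the chosen option's estimate exceeds its true value (1.0)."""
    total = 0.0
    for _ in range(n_trials):
        # Each option's estimate is the mean of n_models unbiased noisy estimates.
        estimates = [
            sum(random.gauss(1.0, noise_sd) for _ in range(n_models)) / n_models
            for _ in range(n_options)
        ]
        total += max(estimates) - 1.0
    return total / n_trials

for k in (1, 4, 16):
    print(f"{k:>2} models per option -> average overestimate {average_overestimate(k):.2f}")
# The bias shrinks roughly like 1/sqrt(k): about 0.9, 0.5, and 0.2.
```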

[Link] The Optimizer's Curse & Wrong-Way Reductions

Hey Kyle, I'd stopped responding since I felt like we were well beyond the point where we were likely to convince one another or say things that those reading the comments would find insightful.

I understand why you think "good prior" needs to be defined better.

As I try to communicate (but may not quite say explicitly) in my post, I think that in situations where uncertainty is poorly understood, it's hard to come up with priors that are good enough that choosing actions based on explicit Bayesian calculations will lead to better outcomes than choosing actions based on a combination of careful skepticism, information gathering, hunches, and critical thinking.
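
To make "explicit Bayesian calculation" concrete, here's a minimal sketch using a standard normal-normal update; every number is invented for illustration:

```python
# Minimal sketch of a skeptical-prior Bayesian update (normal-normal model).
# All numbers are invented; units might be something like lives saved per $10k.

prior_mean = 1.0          # skeptical prior centered on a modest impact
prior_var = 0.5 ** 2      # tight prior: we don't expect miracles
estimate = 5.0            # an optimistic explicit cost-effectiveness estimate
estimate_var = 3.0 ** 2   # but the estimate is very noisy

# Standard precision-weighted (conjugate) update.
posterior_var = 1 / (1 / prior_var + 1 / estimate_var)
posterior_mean = posterior_var * (prior_mean / prior_var + estimate / estimate_var)

print(f"posterior mean: {posterior_mean:.2f}")  # ~1.11, far below the raw 5.0
```

The arithmetic here is trivial; my worry is that the answer is dominated by the prior parameters, and those are exactly what we can't pin down when uncertainty is poorly understood.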

[Link] The Optimizer's Curse & Wrong-Way Reductions

I'd also be excited to see more people in the EA movement doing the sort of work that I think would put society in a good position to handle future problems when they arrive. E.g., I think a lot of people who associate with EA might be awfully good at pushing for progress in metascience/open science or promoting a free & open internet.

[Link] The Optimizer's Curse & Wrong-Way Reductions

Thanks for raising this.

To be clear, I'm still a huge fan of GiveWell. GiveWell only shows up in so many examples in my post because I'm so familiar with the organization.

I mostly agree with the points Holden makes in his cluster thinking post (and his other related posts). Despite that, I still have serious reservations about some of the decision-making strategies used both at GW and in the EA community at large. It could be that Holden and I mostly agree but other people take different positions. It could also be that Holden and I agree at a high level but have significantly different perspectives about how that high-level agreement should manifest in concrete decision making.

For what it's worth, I do feel like the page you linked to from GiveWell's website may downplay the role cost-effectiveness plays in its final recommendations (though GiveWell may have a good rebuttal).

In a response to Taymon's comment, I left a specific example of something I'd like to see change. In general, I'd like people to be more reluctant to brute-force push their way through uncertainty by putting numbers on things. I don't think people need to stop doing that entirely, but I think it should be done while keeping in mind something like: "I'm using lots of probabilities in a domain where I have no idea if I'm well-calibrated...I need to be extra skeptical of whatever conclusions I reach."

[Link] The Optimizer's Curse & Wrong-Way Reductions

Just to be clear, much of the deworming work supported by people in the EA community happens in areas where worm infections are more intense or are caused by worm species other than Trichuris & Ascaris. However, I believe a non-trivial amount of deworming done by charities supported by the EA community occurs in areas with primarily light infections from those worms.

[Link] The Optimizer's Curse & Wrong-Way Reductions

Sure. To be clear, I think most of what I'm concerned about applies to prioritization decisions made in highly uncertain scenarios. So far, I think the EA community has had very few opportunities to look back and conclusively assess whether highly uncertain things it prioritized turned out to be worthwhile. (Ben makes a similar point at https://www.lesswrong.com/posts/Kb9HeG2jHy2GehHDY/effective-altruism-is-self-recommending.)

That said, there are cases where I believe mistakes are being made. For example, I think mass deworming in areas where almost all worm infections are light cases of trichuriasis or ascariasis is almost certainly not among the most cost-effective global health interventions.

Neither trichuriasis nor ascariasis appears to have common, significant, or easily measured symptoms when infections are light (i.e., when there are not many worms in an infected person's body). To reach the conclusion that treating these infections has a high expected value, extrapolations are made from the results of a study that had some weird features and occurred in a very different environment (an environment with far heavier infections and additional types of worm infections). When GiveWell makes its extrapolations, lots of discounts, assumptions, probabilities, etc. are used. I don't think people can make this kind of extrapolation reliably (even if they're skeptical, smart, and thinking carefully). When unreliable estimates are combined with an optimization procedure, I worry about the optimizer's curse.
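
To make that worry concrete, here's a toy simulation of the optimizer's curse (the numbers are hypothetical and have nothing to do with GiveWell's actual models):

```python
# Toy simulation: even when every estimate is individually unbiased,
# the estimate attached to the option you *select* is biased upward.
# All parameters are hypothetical.
import random

random.seed(0)
n_options, noise_sd, n_trials = 20, 0.5, 10_000

chosen_estimates = []
for _ in range(n_trials):
    # Every option has the same true value (1.0); estimates are unbiased but noisy.
    estimates = [random.gauss(1.0, noise_sd) for _ in range(n_options)]
    chosen_estimates.append(max(estimates))  # fund the best-looking option

print(f"average estimate of chosen option: {sum(chosen_estimates) / n_trials:.2f}")
# Prints about 1.93, even though the chosen option's true value is always 1.0.
```

The noisier the individual estimates, the larger the overestimate for whatever happens to look best.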

Someone who is generally skeptical of people's ability to productively use models in highly uncertain situations might instead survey experts about the value of treating light trichuriasis & ascariasis infections. Faced with the decision of funding either this kind of deworming or a different health program that looked highly effective, I think the person in this example who ran the surveys would choose the latter.
