Just saw this comment; I'm also super late to the party in responding to you!
It actually seems to me that it might have been worth emphasising more, as I think a casual reader could think this post was a critique of formal/explicit/quantitative models in particular.
Totally agree! Honestly, I had several goals with this post, and I almost completely failed on two of them:
Just found this post, coming in to comment a year late. Thanks Michael for the thoughtful post and Ozzie for the thoughtful comments!
I'm not saying that these are easy to solve, but rather that there is a mathematical strategy that generally fixes them in ways that make sense intuitively. The best we can do is try to approximate that mathematical approach directly, or go with a method that, in expectation, does a decent job of approximating it.
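To make this concrete, here's a minimal simulation (my own sketch, not anything from the post; all parameters are made up) of the kind of fix I mean, in the spirit of Smith & Winkler's correction for the optimizer's curse: picking the option with the highest raw estimate systematically overstates the winner's true value, while shrinking each estimate toward a prior before choosing does better.

```python
import random

random.seed(0)

N_OPTIONS = 20
PRIOR_MEAN, PRIOR_SD = 1.0, 0.5  # assumed prior over true option values
TRIALS = 10_000

naive_regret = 0.0
bayes_regret = 0.0
naive_overestimate = 0.0

for _ in range(TRIALS):
    true_vals = [random.gauss(PRIOR_MEAN, PRIOR_SD) for _ in range(N_OPTIONS)]
    # Each option's estimate has its own noise level (some options are
    # well understood, some poorly understood).
    noise_sds = [random.uniform(0.2, 2.0) for _ in range(N_OPTIONS)]
    estimates = [v + random.gauss(0, s) for v, s in zip(true_vals, noise_sds)]

    # Naive: take the highest raw estimate at face value.
    naive_pick = max(range(N_OPTIONS), key=lambda i: estimates[i])

    # Bayesian fix: shrink each estimate toward the prior mean, shrinking
    # more when the estimate is noisier, then pick the max posterior mean.
    posterior = [
        PRIOR_MEAN + (PRIOR_SD**2 / (PRIOR_SD**2 + s**2)) * (e - PRIOR_MEAN)
        for e, s in zip(estimates, noise_sds)
    ]
    bayes_pick = max(range(N_OPTIONS), key=lambda i: posterior[i])

    best = max(true_vals)
    naive_regret += best - true_vals[naive_pick]
    bayes_regret += best - true_vals[bayes_pick]
    naive_overestimate += estimates[naive_pick] - true_vals[naive_pick]

print("naive winner overstates its true value by:", naive_overestimate / TRIALS)
print("avg shortfall vs. best option, naive:     ", naive_regret / TRIALS)
print("avg shortfall vs. best option, Bayesian:  ", bayes_regret / TRIALS)
```

Of course, the simulation gets to assume the prior and noise levels are known exactly; in practice they aren't, which is exactly why I say the goal is to approximate this approach rather than execute it perfectly.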
I might agree with you about what's (in some sense) mathematically possible (in pri...
(I used to work for GiveWell)
Hey Ben,
I'm sympathetic to a lot of the points you make in this post, but I think your conclusions are far more negative than is reasonable.
Here's the stuff I largely agree with you on:
-The opportunities to save lives w/ global health interventions probably aren't nearly as easy as Singer's thought experiment suggests
-Entities other than GiveWell use GiveWell's estimates without the appropriate level of nuance and detail about where the estimates come from and how uncertain they are
-There's not an...
That's interesting, and something I may not have considered enough. I think there's a real possibility that there could be excessive quantification in some areas of EA but not enough of it in other areas.
For what it's worth, I may have made this post too broad. I wanted to point out a handful of issues that I felt all kind of fell under the umbrella of "having excessive faith in systematic or mathematical thinking styles." Maybe I should have written several posts on specific topics that get at areas of disagreement a bit more concretely. I might get around to those posts at some point in the future.
FWIW, as someone who was and is broadly sympathetic to the aims of the OP, my general impression agrees with "excessive quantification in some areas of EA but not enough of it in other areas."
(I think the full picture has more nuance than I can easily convey, e.g. rather than 'more vs. less quantification' it often seems more important to me how quantitative estimates are being used - what role they play in the overall decision-making or discussion process.)
Again, none of this is to say that Bayesianism is fundamentally broken or that high-level Bayesian-ish things like "I have a very skeptical prior so I should not take this estimate of impact at face value" are crazy.
As a real world example:
Venture capitalists frequently fund things that they're extremely uncertain about. It's my impression that Bayesian calculations rarely play into these situations. Instead, smart VCs think hard and critically and come to conclusions based on processes that they probably don't fully understand themselves.
It could be that VCs have just failed to realize the amazingness of Bayesianism. However, given that they're smart & there's a ton of money on the table, I think the much more plausible explanation is that hardcore Bayesianism wouldn't lead to better results than whatever it is that successful VCs actually do.
It's always worth entertaining multiple models if you can do that at no cost. However, doing that often comes at some cost (money, time, etc). In situations with lots of uncertainty (where the optimizer's curse is liable to cause significant problems), it's worth paying much higher costs to entertain multiple models (or do other things I suggested) than it is in cases where the optimizer's curse is unlikely to cause serious problems.
Hey Kyle, I'd stopped responding since I felt like we were well beyond the point where we were likely to convince one another or say things that those reading the comments would find insightful.
I understand why you think "good prior" needs to be defined better.
As I try to communicate (but may not quite say explicitly) in my post, I think that in situations where uncertainty is poorly understood, it's hard to come up with priors that are good enough that choosing actions based on explicit Bayesian calculations will lead to better outcomes than choosing actions based on a combination of careful skepticism, information gathering, hunches, and critical thinking.
I'd also be excited to see more people in the EA movement doing the sort of work that I think would put society in a good position for handling future problems when they arrive. E.g., I think a lot of people who associate with EA might be awfully good at pushing for progress in metascience/open science or promoting a free & open internet.
Thanks for raising this.
To be clear, I'm still a huge fan of GiveWell. GiveWell only shows up in so many examples in my post because I'm so familiar with the organization.
I mostly agree with the points Holden makes in his cluster thinking post (and his other related posts). Despite that, I still have serious reservations about some of the decision-making strategies used both at GW and in the EA community at large. It could be that Holden and I mostly agree, but other people take different positions. It could be that Holden and I agree about a lo...
Fair enough. I remain in almost-total agreement, so I guess I'll just have to try and keep an eye out for what you describe. But based on what I've seen within EA, which is evidently very different to what you've seen, I'm more worried about little-to-zero quantification than excessive quantification.
Just to be clear, much of the deworming work supported by people in the EA community happens in areas where worm infections are more intense or are caused by worm species other than Trichuris & Ascaris. However, I believe a non-trivial amount of deworming done by charities supported by the EA community occurs in areas w/ primarily light infections from those worms.
Sure. To be clear, I think most of what I'm concerned about applies to prioritization decisions made in highly-uncertain scenarios. So far, I think the EA community has had very few opportunities to look back and conclusively assess whether highly-uncertain things it prioritized turned out to be worthwhile. (Ben makes a similar point at https://www.lesswrong.com/posts/Kb9HeG2jHy2GehHDY/effective-altruism-is-self-recommending.)
That said, there are cases where I believe mistakes are being made. For example, I think mass deworming in areas where almost ...
I think it's super exciting—a really useful application of probability!
I don't know as much as I'd like to about Tetlock's work. My understanding is that the work has focused mostly on geopolitical events where forecasters have been awfully successful. Geopolitical events are a kind of thing I think people are in an OK position for predicting—i.e. we've seen a lot of geopolitical events in the past that are similar to the events we expect to see in the future. We have decent theories that can explain why certain events came to pas...
I'm struggling to understand how your proposed new group avoids the optimizer's curse, and I'm worried we're already talking past each other. To be clear, I don't believe there's something wrong with Bayesian methods in the abstract. Those methods are correct in a technical sense. They clearly work in situations where everything that matters can be completely quantified.
The position I'm taking is that the scope of real-world problems that those methods are useful for is limited because our ability to precisely qua...
Thanks Max! That paper looks interesting—I'll have to give it a closer read at some point.
I agree with you that how the reliability of assessments varies between options is crucial.
Can you expand on how you would directly estimate the reliability of charity evaluations? I feel like there are a lot of realistic situations where this would be extremely difficult to do well.
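To make my question concrete, here's the kind of thing I imagine "directly estimating reliability" would look like: a hypothetical sketch with made-up numbers, where the spread across several independent evaluations of the same charity serves as the reliability estimate that drives the shrinkage.

```python
from statistics import mean, variance

PRIOR_MEAN, PRIOR_VAR = 5.0, 4.0  # assumed prior over true cost-effectiveness

# Hypothetical independent evaluations of each charity
# (e.g., lives saved per $10k); all numbers invented.
evaluations = {
    "charity_A": [9.0, 3.5, 12.0, 6.5],  # evaluators disagree a lot
    "charity_B": [6.2, 5.8, 6.5, 6.1],   # evaluators mostly agree
}

for name, evals in evaluations.items():
    est = mean(evals)
    # The sample variance of the evaluations, divided by their count,
    # estimates how unreliable the averaged estimate is.
    noise_var = variance(evals) / len(evals)
    # Shrink less toward the prior when the estimate is more reliable.
    w = PRIOR_VAR / (PRIOR_VAR + noise_var)
    adjusted = PRIOR_MEAN + w * (est - PRIOR_MEAN)
    print(f"{name}: raw={est:.2f}, adjusted={adjusted:.2f}, weight={w:.2f}")
```

My worry is that for most charities we don't have several genuinely independent evaluations, so the noise variance itself would have to be guessed, and then we're back to the original problem.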
Thanks for the detailed comment!
I expect we’ll remain in disagreement, but I’ll clarify where I stand on a couple of points you raised:
“Optimizer's curse only matters when comparing better-understood projects to worse-understood projects, but you are talking about 'prioritizing among funding opportunities that involve substantial, poorly understood uncertainty.'”
Certainly, the optimizer’s curse may be a big deal when well-understood projects are compared with poorly-understood projects. However, I don’t think it’s the case that all projects ...
It's definitely an interesting phenomenon & worth thinking about seriously.
Any procedures for optimizing for expected impact could go wrong if the value of long-term alliances and relationships isn't accounted for.
Thanks Milan—I probably should have been a bit more detailed in my summary.
Here are the main issues I see:
-The optimizer's curse is an underappreciated threat to those who prioritize among causes and programs that involve substantial, poorly understood uncertainty.
-I think EAs are unusually prone to wrong-way reductions: a fallacy where people try to solve messy, hard problems with tidy, formulaic approaches that actually create more issues than they resolve.
--I argue that trying to turn all uncertainty into something like numeric probability estimat...
I largely agree with what you said in this comment, though I'd say the line between data collection and data processing is often blurred in real-world scenarios.
I think we are talking past each other (not in a bad faith way though!), so I want to stop myself from digging us deeper into an unproductive rabbit hole.