This is a linkpost for https://confusopoly.com/2019/04/03/the-optimizers-curse-wrong-way-reductions/.
Summary
I spent about two and a half years as a research analyst at GiveWell. For most of my time there, I was the point person on GiveWell’s main cost-effectiveness analyses. I’ve come to believe there are serious, underappreciated issues with the methods the effective altruism (EA) community at large uses to prioritize causes and programs. While effective altruists approach prioritization in a number of different ways, most approaches involve (a) roughly estimating the possible impacts funding opportunities could have and (b) assessing the probability that possible impacts will be realized if an opportunity is funded.
I discuss the phenomenon of the optimizer’s curse: when assessments of activities’ impacts are uncertain, engaging in the activities that look most promising will tend to have a smaller impact than anticipated. I argue that the optimizer’s curse should be extremely concerning when prioritizing among funding opportunities that involve substantial, poorly understood uncertainty. I further argue that proposed Bayesian approaches to avoiding the optimizer’s curse are often unrealistic. I maintain that it is a mistake to try to understand all uncertainty in terms of precise probability estimates.
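The selection effect behind the optimizer's curse can be shown with a minimal simulation (not from the original post; all names and parameter values here are illustrative assumptions): give many options the same true impact, add noise to each impact estimate, and pick the option with the highest estimate. The winner's estimate systematically overstates its true impact, because choosing the maximum preferentially selects upward errors.

```python
import random

random.seed(0)

N_OPTIONS = 20       # candidate funding opportunities (illustrative)
TRUE_IMPACT = 10.0   # every option has the same true impact
NOISE_SD = 5.0       # noise in each impact estimate
N_TRIALS = 10_000

gap_total = 0.0
for _ in range(N_TRIALS):
    # Noisy estimate of each option's impact.
    estimates = [TRUE_IMPACT + random.gauss(0, NOISE_SD) for _ in range(N_OPTIONS)]
    # Fund the option that *looks* best.
    best_estimate = max(estimates)
    # Its true impact is still TRUE_IMPACT, so the estimate overshoots
    # by (best_estimate - TRUE_IMPACT) on average.
    gap_total += best_estimate - TRUE_IMPACT

avg_overshoot = gap_total / N_TRIALS
print(f"Average overestimate of the selected option: {avg_overshoot:.2f}")
```

With these (made-up) numbers the average overshoot is large relative to the noise scale, even though every individual estimate is unbiased; the bias comes purely from selecting the maximum of noisy values.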
I go into a lot more detail in the full post.
I think I agree with everything you've said there, except that I'd prefer to stay away from the term "Knightian", since it is so often taken to refer to an absolute, binary distinction. It seems you wouldn't endorse that binary distinction yourself, given that you say "Knightian-ish", and that in your post you write:
But I think that, whatever one's own intentions, the term "Knightian" sneaks in a lot of baggage and connotations. And on top of that, the term is interpreted in many different ways by different people. For example, I recently saw events very similar to those you contrasted with cases of Knightian-ish uncertainty used as examples to explain the concept of Knightian uncertainty (in this paper):
So I see the term "Knightian" as introducing more confusion than it's worth, and I'd prefer to use it only when I also give caveats to that effect, or when highlighting the confusions it causes. Typically, I'd prefer to rely instead on terms like more or less resilient, precise, or (your term) hazy probabilities/credences. (I collected various terms that can be used for this sort of idea here.)
[I know this comment is very late to the party, but I'm working on some posts about the idea of a risk-uncertainty distinction, and was re-reading your post to help inform that.]