This is a linkpost for https://confusopoly.com/2019/04/03/the-optimizers-curse-wrong-way-reductions/.
Summary
I spent about two and a half years as a research analyst at GiveWell. For most of my time there, I was the point person on GiveWell’s main cost-effectiveness analyses. I’ve come to believe there are serious, underappreciated issues with the methods the effective altruism (EA) community at large uses to prioritize causes and programs. While effective altruists approach prioritization in a number of different ways, most approaches involve (a) roughly estimating the possible impacts funding opportunities could have and (b) assessing the probability that possible impacts will be realized if an opportunity is funded.
I discuss the phenomenon of the optimizer’s curse: when assessments of activities’ impacts are uncertain, engaging in the activities that look most promising will tend to have a smaller impact than anticipated. I argue that the optimizer’s curse should be extremely concerning when prioritizing among funding opportunities that involve substantial, poorly understood uncertainty. I further argue that proposed Bayesian approaches to avoiding the optimizer’s curse are often unrealistic. I maintain that it is a mistake to try to understand all uncertainty in terms of precise probability estimates.
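To make the effect concrete, here is a minimal simulation sketch of the optimizer’s curse. The normal distributions, the number of options, and the noise level are assumptions chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

n_trials = 10_000   # number of simulated prioritization exercises
n_options = 20      # funding opportunities considered per exercise
noise_sd = 1.0      # standard deviation of estimation error (illustrative)

# True impacts drawn from a standard normal; estimates are true impact plus noise.
true_impact = rng.normal(0.0, 1.0, size=(n_trials, n_options))
estimate = true_impact + rng.normal(0.0, noise_sd, size=(n_trials, n_options))

# In each trial, fund the option with the highest estimated impact.
best_idx = estimate.argmax(axis=1)
rows = np.arange(n_trials)

print("mean estimate of chosen option:   ", estimate[rows, best_idx].mean())
print("mean true impact of chosen option:", true_impact[rows, best_idx].mean())
# The estimate of the chosen option systematically exceeds its true impact.
```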
I go into a lot more detail in the full post.
I haven't had time yet to think about your specific claims, but I'm glad to see attention being paid to this issue. Thank you for contributing to what I suspect is an important discussion!
You might be interested in the following paper, which essentially shows that, under an additional assumption, the Optimizer's Curse not only makes us overestimate the value of the apparent top option but can in fact make us predictably choose the wrong option.
The crucial assumption, roughly, is that the reliability of our assessments varies sufficiently across options. Intuitively, I'm concerned that this might apply when EAs consider interventions across different cause areas: e.g., our uncertainty about the value of AI safety research is much larger than our uncertainty about the short-term benefits of unconditional cash transfers.
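To illustrate the kind of failure I have in mind, here is a toy simulation sketch; the two groups of options and all of the numbers are made up for illustration and are not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 10_000

# One well-understood option (think cash transfers): true value 1.0, precise estimate.
# Ten poorly understood options: true value 0.5 each, but very noisy estimates.
true_values = np.array([1.0] + [0.5] * 10)
noise_sd    = np.array([0.1] + [2.0] * 10)

estimates = true_values + rng.normal(0.0, 1.0, size=(n_trials, 11)) * noise_sd
chosen = estimates.argmax(axis=1)

print("share of trials in which an inferior option wins:", (chosen != 0).mean())
print("mean true value of the chosen option:            ", true_values[chosen].mean())
# With noise this unbalanced, the naive "take the highest estimate" rule
# usually selects one of the inferior but noisily assessed options.
```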
(See also the part on the Optimizer's Curse and endnote [6] on Denrell and Liu (2012) in this post by me, though I suspect it won't teach you anything new.)
Hmm. This made me wonder whether the paper's results depend on the decision-maker being uncertain about which options have been estimated reliably vs. unreliably. It seems possible that the effect could disappear if the reliability of my estimates varies but I know that the variance of my value estimate for option 1 is v_1, the one for option 2 is v_2, etc. (even if the v_i vary a lot). (I don't have time to check the paper or get clear on this, I'm afraid.)
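Concretely, the kind of setup I have in mind is something like the following toy sketch, in which the decision-maker is assumed to know each v_i exactly and shrinks each estimate toward a common prior accordingly (all numbers invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_options = 10_000, 20

# True values share a common N(0, 1) prior; per-option noise variances v_i are
# known to the decision-maker but differ a lot across options.
prior_var = 1.0
v = np.geomspace(0.05, 20.0, n_options)          # v_1, ..., v_n (known)

true_vals = rng.normal(0.0, np.sqrt(prior_var), size=(n_trials, n_options))
estimates = true_vals + rng.normal(0.0, np.sqrt(v), size=(n_trials, n_options))

# Rule 1: take the option with the highest raw estimate.
naive = estimates.argmax(axis=1)

# Rule 2: shrink each estimate toward the prior mean in proportion to its known
# v_i, then take the option with the highest posterior mean.
posterior_mean = estimates * prior_var / (prior_var + v)
shrunk = posterior_mean.argmax(axis=1)

rows = np.arange(n_trials)
print("mean true value, naive rule:   ", true_vals[rows, naive].mean())
print("mean true value, shrunken rule:", true_vals[rows, shrunk].mean())
print("mean shortfall vs. posterior mean (shrunken rule):",
      (posterior_mean[rows, shrunk] - true_vals[rows, shrunk]).mean())
```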
Is this what you were trying to say here?