I have recently read Holden Karnofsky's post "Why we can't take expected value estimates literally (even when they're unbiased)". Below are my notes. Any errors/misinterpretations are my own.

Conclusions

  • Any approach to decision-making that relies only on rough estimates of expected value – and does not incorporate preferences for better-grounded estimates over shakier estimates – is flawed.
  • When aiming to maximize expected positive impact, it is not advisable to make giving decisions based fully on explicit formulas. Proper Bayesian adjustments are important, and they are usually too difficult to formalize.
  • The above point is a general defense of resisting "Pascal's Mugging"-style arguments that both (a) seem intuitively problematic and (b) have thin evidential support and/or room for significant error.

Informal objections to EEV (explicit expected-value) decision-making

  • There seems to be nothing in EEV that penalizes relative ignorance or relatively poorly grounded estimates, or that rewards investigation and the forming of particularly well-grounded estimates.
  • EEV doesn’t seem to allow rewarding charities for transparency or penalizing them for opacity: it simply recommends giving to the charity with the highest estimated expected value, regardless of how well-grounded the estimate is.
  • If you are basing your actions on EEV analysis, it seems that you’re very open to being exploited by Pascal’s Mugging.
  • If I’m deciding between eating at a new restaurant with 3 Yelp reviews averaging 5 stars and eating at an older restaurant with 200 Yelp reviews averaging 4.75 stars, EEV seems to imply (using Yelp rating as a stand-in for “expected value of the experience”) that I should opt for the former.

Applying Bayesian adjustments to cost-effectiveness estimates for donations, actions, etc.

  • The more one feels confident in one’s pre-existing view of how cost-effective a donation or action should be, the smaller the variance of the “prior”.
  • The more one feels confident in the cost-effectiveness estimate itself, the smaller the variance of the “estimate error”.
  • When one applies Bayes’s rule to obtain a distribution for cost-effectiveness based on (a) a normally distributed prior distribution and (b) a normally distributed “estimate error”, one obtains a distribution with (see this; a minimal code sketch follows this list):
    • Mean equal to the average of the two means, weighted by their inverse variances.
    • Variance equal to the harmonic sum of the two variances.
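
Below is a minimal sketch of this update in Python. The function and variable names are my own, chosen for illustration; the formulas are just the standard normal-normal update described in the two bullets above.

```python
def bayes_adjust(prior_mean, prior_var, est_mean, est_var):
    """Posterior mean and variance when a normal prior is combined
    with an estimate carrying normally distributed error."""
    # Precision (inverse variance) measures how much each source is trusted.
    prior_prec = 1.0 / prior_var
    est_prec = 1.0 / est_var
    # Posterior mean: average of the two means, weighted by inverse variances.
    post_mean = (prior_mean * prior_prec + est_mean * est_prec) / (prior_prec + est_prec)
    # Posterior variance: harmonic sum of the two variances.
    post_var = 1.0 / (prior_prec + est_prec)
    return post_mean, post_var
```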

Consequences

  • If one has a relatively reliable estimate (narrow confidence interval / small variance of “estimate error”), the Bayesian-adjusted conclusion ends up very close to the estimate.
  • When an estimate comes from highly precise and well-understood methods, we can take it (almost) literally.
  • When the estimate is relatively unreliable (wide confidence interval / large variance of “estimate error”), it has little effect on the final expectation of cost-effectiveness.
  • At the point where the one-standard-deviation bands include zero cost-effectiveness (where there is a pretty strong probability that the whole cost-effectiveness estimate is worthless), the estimate ends up having practically no effect on one’s final view.
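
To make the two regimes concrete, here are made-up numbers run through the sketch above (prior: mean 1, variance 1; estimate mean 10 in both cases):

```python
# Reliable estimate (tiny error variance): posterior lands near the estimate.
print(bayes_adjust(1.0, 1.0, 10.0, 0.01))   # ~(9.91, 0.0099)

# Unreliable estimate (huge error variance): posterior stays near the prior.
print(bayes_adjust(1.0, 1.0, 10.0, 100.0))  # ~(1.09, 0.99)
```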

Tackling Pascal’s Mugging

  • You believe, based on life experience, in a “prior distribution” for the value of your actions with a mean of 0 and a standard deviation of 1.
  • Giving in to the mugger’s demands has an estimated expected value of X, but the estimate is so rough that the right expected value could easily be 0 or 2X (i.e., the “estimate error” has a standard deviation of roughly X).
  • The expected value, after the Bayesian adjustment, is X/(X^2 + 1), or just under 1/X.
    • In this framework, the greater X is, the lower the expected value of giving in.
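
Reading “could easily be 0 or 2X” as a one-standard-deviation band gives the estimate a standard deviation of X (variance X^2); plugging that into the sketch above reproduces the X/(X^2 + 1) result:

```python
for X in [10.0, 1000.0, 1e6]:
    post_mean, _ = bayes_adjust(0.0, 1.0, X, X**2)
    # Posterior mean matches X / (X**2 + 1) and shrinks as X grows.
    print(X, post_mean, X / (X**2 + 1))
```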

Approaches to Bayesian adjustment that I oppose

  • “I have a very weak (or uninformative) prior, which means I can more or less take rough estimates literally”.
    • Even just a sense of the values of the small set of actions you have taken in your life, and whose consequences you have observed, gives you something to work with.
    • This sense probably ought to have high variance, but, when dealing with a rough estimate that has very high variance of its own, it may still be quite a meaningful prior.
  • Acknowledging that estimates seem too optimistic and so making various “downward adjustments”, multiplying EEV by apparently ad hoc figures (1%, 10%, 20%).
    • It is unclear whether the size of the adjustment has the correct relationship to:
      • The weakness of the estimate itself.
      • The strength of the prior.
      • The distance of the estimate from the prior.
    • In the “Pascal’s Mugging” analysis above: assigning one’s framework a 99.99% chance of being totally wrong may seem amply conservative, but in fact the proper Bayesian adjustment is much larger and leads to a completely different conclusion (see the numeric comparison below).
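
To see the gap numerically (my own illustrative numbers, reusing the sketch above): a flat 99.99% discount still grows without bound in X, while the Bayesian adjustment shrinks toward zero:

```python
X = 1e10  # an astronomically large claimed payoff

ad_hoc = 0.0001 * X                              # "99.99% chance of being wrong"
bayes_mean, _ = bayes_adjust(0.0, 1.0, X, X**2)  # proper Bayesian adjustment

print(ad_hoc)      # 1e6    -- still enormous; the mugger wins
print(bayes_mean)  # ~1e-10 -- negligible; the mugger loses
```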

Heuristics I use to address whether I’m making an appropriate prior-based adjustment

  • The more action is asked of me, the more evidence I require.
  • I pay attention to how much of the variation I see between estimates is likely to be driven by true variation vs. estimate error.
  • I put much more weight on conclusions that seem to be supported by multiple different lines of analysis, as unrelated to one another as possible.
  • I am hesitant to embrace arguments that seem to have anti-common-sense implications (unless the evidence behind these arguments is strong).
  • My prior for charity is generally skeptical, as outlined in this post.
