This is the first in what might become a series of posts picking out issues from statistics and probability that are relevant to EA. The format will be informal and fairly bite-size. Hopefully none of it will be original.
Expectations are not outcomes
Here we attempt to trim back the intuition that an expected value can be safely thought of as a representative value of the random variable.
Situation 1
A Rademacher random variable X takes the value 1 with probability 1/2 and the value -1 otherwise. Its expectation is zero, yet we will almost surely never see any value other than -1 or 1.
So the expected value might not be a number the distribution can produce; we might not even be able to get arbitrarily close to it.
Imagine walking up to a table in a casino and betting that the next roll of a die will be 7/2, the die's expected value.
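A quick simulation makes this concrete (a minimal sketch of mine, not part of the original argument): the sample mean hovers near the expectation of zero, yet no individual draw is ever anything but -1 or 1.

```python
import random

# Simulate a Rademacher variable: +1 or -1, each with probability 1/2.
draws = [random.choice([-1, 1]) for _ in range(100_000)]

print(sum(draws) / len(draws))  # sample mean: close to the expectation, 0
print(set(draws))               # values actually observed: only {-1, 1}
```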
Situation 2
Researchers create a natural language simulation model. Upon receiving a piece of text as stimulus, it outputs a random short story. What is the expectation of the story?
Let’s think about the first word. There will be some implied probability distribution over a dictionary. Its expectation is some fractional combination of every word in the dictionary. Whatever that means, and whatever it is useful for, it is not the start of a legible story, and should not be used as one.
What is the expected length of the story? What would a solution to that problem mean? Could one, for example, print the expected story?
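As a rough sketch of the first-word problem (the vocabulary and probabilities below are invented for illustration), one can encode each word as a one-hot vector and take the expectation; the result is a fractional blend that matches no word in the dictionary.

```python
import numpy as np

# Hypothetical distribution over a tiny three-word dictionary for the first word.
vocab = ["once", "the", "in"]
probs = np.array([0.5, 0.3, 0.2])

# One-hot encode each word, then take the expectation over the distribution.
one_hot = np.eye(len(vocab))
expected_first_word = probs @ one_hot

print(expected_first_word)  # [0.5 0.3 0.2] -- a blend, not any entry of vocab
```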
Situation 3
Some distributions have very fat tails. The Cauchy distribution, for instance, has an undefined expectation, so there is no expected value to plan around in the first place.
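A quick experiment shows the symptom (sample size and seed are arbitrary choices of mine): for a distribution with a finite mean the running average settles down, but for Cauchy samples it keeps lurching as new extreme draws arrive.

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.standard_cauchy(1_000_000)

# Running mean after n samples. With a finite expectation this would
# converge (law of large numbers); for the Cauchy it never does.
running_mean = np.cumsum(samples) / np.arange(1, len(samples) + 1)
print(running_mean[[999, 9_999, 99_999, 999_999]])
```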
Implication
It is tempting to substitute an expectation freely as a stand-in for its random variable. Suppose we applied the following procedure across the board:
- We are faced with a decision depending on an uncertain outcome.
- We take the expected value of the outcome.
- We use the expectation as a scenario to plan around.
Step three is unsafe in principle, even if it is sometimes harmless in practice.
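One way to see why (a sketch with an invented payoff function): planning around the expected scenario evaluates the plan at E[X], but what matters is the expected result of the plan, and in general f(E[X]) ≠ E[f(X)].

```python
# Uncertain outcome: demand is either 0 or 100, each with probability 1/2,
# so the expected demand is 50.
outcomes = [0, 100]

def payoff(demand, capacity=50):
    # Made-up nonlinear payoff: unmet demand is penalized twice as heavily
    # as served demand is rewarded; spare capacity earns nothing.
    return min(demand, capacity) - 2 * max(demand - capacity, 0)

payoff_at_expectation = payoff(50)                                  # f(E[X]) = 50
expected_payoff = sum(payoff(d) for d in outcomes) / len(outcomes)  # E[f(X)] = -25

print(payoff_at_expectation, expected_payoff)
```

Planning for the expected scenario promises a comfortable +50; the plan's actual expected payoff is -25.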
If there is a next time (the length of this series is currently fractional), I hope to touch on some scenarios less easily dismissed as the concerns of a pedant.
I’ve found Christian Tarsney’s “Exceeding Expectations” insightful when it comes to recognizing and maybe coping with the limits of expected value.
See also the post/sequence by Daniel Kokotajlo, “Tiny Probabilities of Vast Utilities”. I’m linking to the post that was most valuable to me, but by default it might make sense to start with the first one in the sequence. ^^
Thanks, that last link was one I'd come across and liked when looking for previous coverage. My sole previous blog post was about Pascal's Wager. When speaking about it, though, I found I was assuming too much for some of the audience I wanted to bring along, notwithstanding my sloppy writing :D So, I'm going to attempt to stay focused and incremental.
Thanks for writing this! It's always useful to get reminders of the sort of mistakes we can fail to notice even when they're significant.
I also think it would be a lot more helpful to walk through how this mistake could happen in some real scenarios in the context of EA (even though these scenarios would naturally be less clear-cut and more complex).
Lastly, it might be worth noting the many other tools we have for representing random variables. Some options off the top of my head (sketched in code below):
* Expectation & variance: Sometimes useful for normal distributions and other intuitive distributions (eg QALY per $ for many interventions at scale).
* Confidence intervals: Useful for many cases where the result is likely to be in a specific range (eg effect size for a specific treatment).
* Probabilities for specific outcomes or events: Sometimes useful for distributions with important anomalies (eg impact of a new organization), or when looking for specific combinations of multiple distributions (eg the probability that AGI is coming soon and also that current alignment research is useful).
* Full model of the distribution: Sometimes useful for simple/common distributions (all the examples that come to mind aren’t in the context of EA, oh well).
One small note: The examples are there to make the category clearer. These aren’t all cases where expected value is wrong/inappropriate to use. Specifically, for some of them, I think using expected value works great.
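To make these concrete, here is a rough sketch (the distribution and numbers are invented for illustration) applying each tool to the same simulated quantity:

```python
import numpy as np

rng = np.random.default_rng(1)
# Invented stand-in for some uncertain quantity of interest.
samples = rng.normal(loc=10, scale=3, size=100_000)

# Expectation & variance
print(samples.mean(), samples.var())

# An interval estimate: the central 95% of the distribution
print(np.quantile(samples, [0.025, 0.975]))

# Probability of a specific event
print((samples < 5).mean())

# Full model: here we know it is Normal(10, 3) only because we built it;
# in practice pinning down the full distribution is the hardest option.
```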
Hopefully, we'll get there! It'll be mostly Bayesian though :)
Thanks for writing this. I hadn't thought about this explicitly and think it's useful. The bite-sized format is great. A series of posts would be great too.