UnexpectedValues

I'm a theoretical CS grad student at Columbia specializing in mechanism design. I write a blog called Unexpected Values which you can find here: https://ericneyman.wordpress.com/. My academic website can be found here: https://sites.google.com/view/ericneyman/.

Comments

2-week summer course in "economic theory and global prioritization": LMK if interested!

Thanks for putting this together; I might be interested!

I just want to flag that if your goal is to avoid internships, then (at least for American students) I think the right time to do this would be late May-early June rather than late June-early July as you suggest on the Airtable form. I think the most common day for internships to start is the day after Memorial Day, which in 2022 will be May 31st. (Someone correct me if I'm wrong.)

AMA: Jeremiah Johnson, Director/Founder of the Neoliberal Project

My understanding is that the Neoliberal Project is a part of the Progressive Policy Institute, a DC think tank (correct me if I'm wrong).

Are you guys trying to lobby for any causes, and if so, what has your experience been on the lobbying front? Are there any lessons you've learned that may be helpful to EAs lobbying for EA causes like pandemic preparedness funding?

AMA: Jeremiah Johnson, Director/Founder of the Neoliberal Project

There sort of is -- I've seen some EAs use the light bulb emoji 💡 on Twitter (I assume this comes from the EA logo) -- but it's not widely used, and it's unclear to me whether it means "identifies as an EA" or "is a practicing EA" (i.e. donates a substantial percentage of their income to EA causes and/or does direct work on those causes).

I'm unsure whether I want there to be an easy way to "identify as EA", since identities do seem to make people worse at thinking clearly. I've thought/written about this (in the context of a neoliberal identity too, as it happens), and my conclusion was basically that a strong EA identity would be okay so long as the centerpiece of the identity continues to be a question ("How can we do the most good?") as opposed to any particular answer. I'm not sure how realistic that is, though.

When pooling forecasts, use the geometric mean of odds

Thanks for writing this up; I agree with your conclusions.

There's a neat one-to-one correspondence between proper scoring rules and probabilistic opinion pooling methods satisfying certain axioms, and this correspondence maps Brier's quadratic scoring rule to arithmetic pooling (averaging probabilities) and the log scoring rule to logarithmic pooling (geometric mean of odds). I'll illustrate the correspondence with an example.

Let's say you have two experts: one says 10% and one says 50%. You see these predictions and need to come up with your own prediction, and you'll be scored using the Brier loss: (1 - x)^2, where x is the probability you assign to whichever outcome ends up happening (you want to minimize this). Suppose you know nothing about pooling; one really basic thing you can do is to pick an expert to trust at random: report 10% with probability 1/2 and 50% with probability 1/2. Your expected Brier loss in the case of YES is (0.81 + 0.25)/2 = 0.53, and your expected loss in the case of NO is (0.01 + 0.25)/2 = 0.13.

But, you can do better. Suppose you say 35% -- then your loss is 0.4225 in the case of YES and 0.1225 in the case of NO -- better in both cases! So you might ask: what is the strategy that gives me the largest possible guaranteed improvement over choosing a random expert? The answer is linear pooling (averaging the experts' probabilities), which here gives 30%. This gets you 0.49 in the case of YES and 0.09 in the case of NO (an improvement of 0.04 in each case).
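
Here's a quick Python sketch of that comparison (the helper name is mine; this is just the arithmetic above):

```python
# Two experts' probabilities for YES.
p1, p2 = 0.10, 0.50

def brier_loss(p, outcome):
    """Squared-error loss of the probability assigned to the realized outcome."""
    prob_assigned = p if outcome == "YES" else 1 - p
    return (1 - prob_assigned) ** 2

for outcome in ("YES", "NO"):
    random_expert = (brier_loss(p1, outcome) + brier_loss(p2, outcome)) / 2
    linear_pool = brier_loss((p1 + p2) / 2, outcome)  # average the probabilities
    print(outcome, random_expert, linear_pool, random_expert - linear_pool)
# YES: 0.53 vs 0.49; NO: 0.13 vs 0.09 -- a guaranteed improvement of 0.04.
```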

Now suppose you were instead being scored with a log loss -- so your loss is -ln(x), where x is the probability you assign to whichever outcome ends up happening. Your expected log loss in the case of YES is (-ln(0.1) - ln(0.5))/2 ~ 1.498, and in the case of NO is (-ln(0.9) - ln(0.5))/2 ~ 0.399.

Again you can ask: what is the strategy that gives you the largest possible guaranteed improvement over this "choose a random expert" strategy? This time, the answer is logarithmic pooling (taking the geometric mean of the odds): the odds are 1:9 and 1:1, whose geometric mean is 1:3, i.e. 25%. This has a loss of 1.386 in the case of YES and 0.288 in the case of NO, an improvement of about 0.111 in each case.

(This works just as well with weights: say you trust one expert more than the other. You could choose an expert at random in proportion to these weights; the strategy that guarantees the largest improvement over this is to take the weighted pool of the experts' probabilities.)
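
And a minimal sketch of the log-loss version, with an optional weights argument for the weighted variant (the function names are my own, not from any library):

```python
import math

def log_loss(p, outcome):
    """Negative log of the probability assigned to the realized outcome."""
    return -math.log(p if outcome == "YES" else 1 - p)

def logarithmic_pool(probs, weights=None):
    """Weighted geometric mean of odds, mapped back to a probability."""
    weights = weights or [1 / len(probs)] * len(probs)
    log_odds = sum(w * math.log(p / (1 - p)) for p, w in zip(probs, weights))
    odds = math.exp(log_odds)
    return odds / (1 + odds)

probs = [0.10, 0.50]
pooled = logarithmic_pool(probs)  # 0.25
for outcome in ("YES", "NO"):
    random_expert = sum(log_loss(p, outcome) for p in probs) / len(probs)
    print(outcome, random_expert, log_loss(pooled, outcome))
# YES: ~1.498 vs ~1.386; NO: ~0.399 vs ~0.288 -- an improvement of ~0.111 in both cases.
```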

This generalizes to other scoring rules as well. I co-wrote a paper about this, which you can find here, or here's a talk if you prefer.

What's the moral here? I wouldn't say that it's "use arithmetic pooling if you're being scored with the Brier score and logarithmic pooling if you're being scored with the log score"; as Simon's data somewhat convincingly demonstrated (and as I think I would have predicted), logarithmic pooling works better regardless of the scoring rule.

Instead I would say: the same judgments that would influence your decision about which scoring rule to use should also influence your decision about which pooling method to use. The log scoring rule is useful for distinguishing between extreme probabilities; it treats 0.01% as substantially different from 1%. Logarithmic pooling does the same thing: the pool of 1% and 50% is about 10%, and the pool of 0.01% and 50% is about 1%. By contrast, if you don't care about the difference between 0.01% and 1% ("they both round to zero"), perhaps you should use the quadratic scoring rule; and if you're already not taking distinctions between low and extremely low probabilities seriously, you might as well use linear pooling.
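
To see that contrast numerically (same unweighted formulas as in the sketches above):

```python
import math

def linear_pool(probs):
    return sum(probs) / len(probs)

def logarithmic_pool(probs):
    log_odds = sum(math.log(p / (1 - p)) for p in probs) / len(probs)
    odds = math.exp(log_odds)
    return odds / (1 + odds)

for p in (0.01, 0.0001):
    print(p, linear_pool([p, 0.5]), logarithmic_pool([p, 0.5]))
# Linear pooling barely distinguishes the two cases (25.5% vs ~25.0%),
# while logarithmic pooling gives ~9.1% vs ~1.0%.
```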

Epistemic Trade: A quick proof sketch with one example

Cool idea! Some thoughts I have:

  • A different thing you could do, instead of trading models, is compromise by assuming that there's a 50% chance that your model is right and a 50% chance that your peer's model is right. Then you can do utility calculations under this uncertainty (see the toy sketch after this list). Note that this would have the same effect as the one you desire in your motivating example: Alice would scrub surfaces and Bob would wear a mask.
    • This would, however, make utility calculations twice as difficult compared to just using your own model, since you'd need to compute the expected utility under each model. But note that this amount of computation is already assumed by the premise that it makes sense for Alice and Bob to trade models: to reach that conclusion, each needs to compute their utility for each action under each of the models.
    • I would say that this is more epistemically sound than switching models with your peer, since it's reasonably well-motivated by the notion that you are epistemic peers and could have ended up in a world where you had had the information your peer has and vice versa.
  • But the fundamental issue you're getting at here is that reaching an agreement can be hard, and we'd like to make good/informed decisions anyway. This motivates the question: how can you effectively improve your decision making without paying the cost required by trying to reach an agreement?
    • One answer is that you can share partial information with your peer. For instance, maybe Alice and Bob decide that they will simply tell each other their best guess about the percentage of COVID transmission that is airborne and leave it at that (without trying to resolve subsequent disagreement). This is enough to, in most circumstances, cause each of them to update a lot (and thus be much better informed in expectation) without requiring a huge amount of communication.
  • Which is better: acting as if each model is 50% to be correct, or sharing limited information and then updating? I think the answer depends on (1) how well you can conceptualize your peer's model, (2) how hard updating is, and (3) whether you'll want to make similar decisions in the future but without communicating. The sort of case when the first approach is better is when both Alice and Bob have simple-to-describe models and will want to make good COVID-related decisions in the future without consulting each other. The sort of case when the second approach is better is when Alice and Bob have difficult-to-describe models, but have pretty good heuristics about how to update their probabilities based on the other's probabilities.
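
Here's the toy sketch promised above. All of the utility numbers are invented purely for illustration; the point is just the mechanics of averaging over models:

```python
# Hypothetical utilities of each protective action under each model.
# "airborne": most transmission is airborne; "surface": most is via surfaces.
utility = {
    ("wear mask",      "airborne"): 10, ("wear mask",      "surface"): 1,
    ("scrub surfaces", "airborne"): 1,  ("scrub surfaces", "surface"): 10,
}

def expected_utility(action, model_weights):
    """Average an action's utility over models, weighted by credence in each."""
    return sum(w * utility[(action, m)] for m, w in model_weights.items())

# Each person treats their own model and their peer's as equally likely.
credences = {"airborne": 0.5, "surface": 0.5}
for action in ("wear mask", "scrub surfaces"):
    print(action, expected_utility(action, credences))
# Both actions come out to 5.5, so if either action costs less than that,
# it's worth taking: Alice starts scrubbing and Bob starts masking.
```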

I started making a formal model of the "sharing partial information" approach and came up with an example where it makes sense for Alice and Bob to swap behaviors upon sharing partial information. But ultimately this wasn't super interesting, because the underlying behavior was that they were updating on the partial information. So while there are some really interesting questions of the form "How can you improve your expected outcome the most while talking to the other person as little as possible?", ultimately you're getting at something different (if I understand correctly) -- that adopting a different model might be easier than updating your own. I'd love to see a formal approach to this (and may think some more about it later!)

Status update: Getting money out of politics and into charity

Yeah -- I think it's unlikely that Pact would become a really large player and have distortionary effects. If that happens, we'll solve that problem when we get there :)

The broader point that the marginal dollar might be more valuable to one campaign than to another is an important one. You could try to deal with this by making an actual market, where the ratio at which people trade campaign dollars isn't fixed at 1, but I think that will complicate the platform and end up doing more harm than good.

Status update: Getting money out of politics and into charity

Yeah, there are various incentives issues like this one that are definitely worth thinking about! I wrote about some of them in this blog post: https://ericneyman.wordpress.com/2019/09/15/incentives-in-the-election-charity-platform/

The issue you point out can be mostly resolved by saying that half of a pledger's contributions will go to their chosen candidate no matter what -- but this has the unfortunate effect of decreasing the amount of money that gets sent to charity. My guess is that it's not worth it, though maybe some nominal fraction like 5% is worth it, so as to discourage e.g. liberals who care mostly about charity from donating to the Republican candidate.
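
A toy sketch of that trade-off, under my paraphrase of the mechanism (opposing pledges cancel dollar-for-dollar, matched money goes to charity, and the surplus goes to the larger side's candidate; f is the fraction of each pledge that goes to the candidate no matter what -- the exact platform rules may differ):

```python
def settle(dem_pledged, rep_pledged, f=0.0):
    """Toy model: a fraction f of each pledge bypasses matching entirely."""
    dem, rep = dem_pledged * (1 - f), rep_pledged * (1 - f)
    matched = min(dem, rep)        # matched dollars from each side go to charity
    to_charity = 2 * matched
    to_dem = dem_pledged * f + (dem - matched)
    to_rep = rep_pledged * f + (rep - matched)
    return to_charity, to_dem, to_rep

print(settle(100, 80, f=0.0))   # (160.0, 20.0, 0.0)  -- most money to charity
print(settle(100, 80, f=0.5))   # (80.0, 60.0, 40.0)  -- charity money halved
print(settle(100, 80, f=0.05))  # (152.0, 24.0, 4.0)  -- the nominal 5% version
```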

Status update: Getting money out of politics and into charity

We want a Republican on our team; unfortunately, in our experience, Democrats are disproportionately more interested in the idea -- and this is on top of the fact that our circles already have very few Republicans. (This could be a byproduct of how we're framing things, which is part of why we're trying to experiment with framing and talk to Republican consultants.) So we've been unsuccessful so far, but I agree that this is important.

Status update: Getting money out of politics and into charity

This is a cool idea that we hadn't considered. Thank you!

Status update: Getting money out of politics and into charity

This definitely sounds like it's worth trying, and it turns out that there's at least one prominent politician who's a fan of this idea. I do have the intuition that almost none of them would actually do it, because having more money directly benefits their staff.
