Otis Reid

48 karma · Joined Feb 2022


By far the biggest problem with interpreting (certain) prediction markets as probabilities, especially in the tails, is their fee structures. PredictIt charges 10% of all winnings plus a 5% withdrawal fee. The latter fee in particular strongly disincentivizes adding new money to arbitrage away small deviations in the tails. E.g. if a "yes" contract is trading at 98 cents and you are 100% sure it will happen (and you're correct), then investing 98 cents now and withdrawing the resulting $1 after the market resolves (even if it ends tomorrow, so we can ignore the time value of money) yields a loss of 0.98 − ((1 − 0.98) × 0.9 + 0.98) × 0.95 = 3.19 cents. This problem is much smaller when betting on long-shot outcomes, so it will tend to push prediction markets away from "certainty" on outcomes. You can make a copy of this calculator for PredictIt to see the effects.
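The arithmetic above can be sketched in a few lines. This is just an illustration of the fee structure described in this comment (a 10% fee on winnings plus a 5% fee on withdrawals); the function name and parameters are mine:

```python
def predictit_net(price, payout=1.0, profit_fee=0.10, withdrawal_fee=0.05):
    """Amount returned per contract after PredictIt-style fees.

    price: purchase price of the contract (dollars)
    payout: what the contract pays if it resolves in your favor
    profit_fee: 10% fee charged on winnings (payout minus price)
    withdrawal_fee: 5% fee charged on the full withdrawn balance
    """
    winnings = payout - price
    after_profit_fee = price + winnings * (1 - profit_fee)
    return after_profit_fee * (1 - withdrawal_fee)

# Buying a "yes" contract at 98 cents that resolves to $1:
loss = 0.98 - predictit_net(0.98)
print(f"loss per contract: {loss:.4f}")  # a ~3.19-cent loss despite being right
```

So even a bettor who is certain (and correct) loses money closing the 98-cent gap, which is why the tails stay mispriced.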

This problem is somewhat ameliorated when traders make multiple sequential bets, since this amortizes the 5% withdrawal fee over more bets (in the limit, I think that if you made an infinite number of on-average-winning bets, the 5% withdrawal fee would shrink to just a fee on profits), but I think it's a significant issue in the tails.
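The amortization point can be illustrated by rolling one stake through a series of winning bets and paying the withdrawal fee only once at the end. The function, its parameters, and the assumption that every bet wins at the same price are all hypothetical simplifications:

```python
def net_after_sequential_bets(n_bets, price=0.90, payout=1.0,
                              profit_fee=0.10, withdrawal_fee=0.05,
                              stake=1.0):
    """Roll a stake through n winning bets at the same price,
    paying the 10% profit fee after each bet, then withdraw once."""
    balance = stake
    for _ in range(n_bets):
        contracts = balance / price
        winnings = contracts * (payout - price)
        balance += winnings * (1 - profit_fee)
    return balance * (1 - withdrawal_fee)

# Per-bet growth rate, with the one-time withdrawal fee spread over n bets:
for n in [1, 5, 20]:
    growth = net_after_sequential_bets(n) ** (1 / n)
    print(f"{n:2d} bets: per-bet growth factor {growth:.4f}")
```

As n grows, the per-bet growth factor approaches the fee-free-withdrawal rate, so the 5% charge matters less and less per bet, consistent with the limit described above.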

This article is several years old, but as of 2019 their machine translation tool was quite poor, and my experience is that articles can have vastly different levels of depth in different languages, so simply bringing French/Spanish/etc. articles up to the level of their English-language analogues might be an easy win.

Here are a couple of social science papers on the evidence that (well-written) Wikipedia articles have an impact on real world outcomes:

I think the main caveat (also mentioned in other comments) is that these papers are predicated on high-quality edits or page creations that align with Wikipedia's standards.

Nice post! Using the individual-level data, are you able to answer the question of whether forecasts also get better if you start with the single "best" forecaster and then progressively add the next best, then the next best, etc., where "best" is defined ex ante (e.g., prior lifetime Metaculus score)? It's a different question, but it might also be of interest for thinking about optimal aggregation.
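For concreteness, here is one way the exercise could be set up. Everything here is hypothetical: the data are simulated, "best" is proxied by a made-up noise level, and the aggregation rule (simple mean of probabilities) and Brier score are my assumptions, not anything from the post:

```python
import numpy as np

rng = np.random.default_rng(0)
n_forecasters, n_questions = 50, 200

# Simulated skill: lower noise = "better" forecaster (the ex ante ranking).
noise = rng.uniform(0.05, 0.30, n_forecasters)
outcomes = rng.integers(0, 2, n_questions)
truth = np.where(outcomes == 1, 0.8, 0.2)  # latent calibrated probability

# Each forecaster reports truth plus their own noise, clipped to (0, 1).
forecasts = np.clip(
    truth + rng.normal(0.0, noise[:, None], (n_forecasters, n_questions)),
    0.01, 0.99,
)

order = np.argsort(noise)  # best (least noisy) forecaster first
for k in [1, 5, 10, 25, 50]:
    agg = forecasts[order[:k]].mean(axis=0)   # aggregate the top-k forecasts
    brier = np.mean((agg - outcomes) ** 2)    # lower is better
    print(f"top-{k:2d} forecasters: Brier = {brier:.4f}")
```

Plotting Brier score against k on real Metaculus data would show whether adding progressively worse forecasters keeps helping or starts to hurt.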


Enjoyed the post! Open Philanthropy hired Tom Adamczewski (not sure what his user name is on here) to create https://valueofinfo.com/ for us, which is designed to do this sort of calculation and accommodates a limited number of non-normal distributions (currently just Pareto and normal, though if you work with the core package, you can do more). Just an FYI if you want to play more with this!