
Otis Reid

53 karma · Joined Feb 2022

Comments (7)

I don't think of this as really a medical intervention, FWIW -- it's a community-level water infrastructure intervention, which is very much in his wheelhouse (it would also be in the wheelhouse of other folks, to be clear).

FWIW, the 50% loss rate isn't that different from the one in the Roessler paper, which still found effects in spite of the losses/sales/breakage (see pg. 9).

"As expected, by endline self-reported phone ownership in the treatment conditions far outpaced control (72% in the pooled phone conditions to 27%) but also revealing the attenuating effects of handset turnover. Pinning down the exact mechanisms that account for handset loss among those in the phone groups is challenging because of survey demand effects.11 We could, however, verify whether participants had the program handset on their person during the endline survey: while 50.7% of those in the basic group still had the basic handset, only 34.2% in the smartphone group did. Below we refer to subjects displaying the project phone at endline as compliers."

By far the biggest problem with interpreting (certain) prediction market prices as probabilities, especially in the tails, is the fee structure. PredictIt charges 10% of all winnings plus a 5% withdrawal fee. The latter fee in particular strongly disincentivizes adding new money to arbitrage away small deviations in the tails. For example, suppose a "yes" contract is trading at 98 cents and you are 100% sure it will happen (and you're correct). Investing 98 cents now and withdrawing the resulting $1 after the market resolves (even if it resolves tomorrow, so we can abstract from discounting) yields a loss of 0.98 - ((1 - 0.98) * 0.9 + 0.98) * 0.95 = 0.0319, i.e. about 3.19 cents. This problem is much smaller when betting on long-shot outcomes, so it will tend to push prediction markets away from "certainty" on outcomes. You can make a copy of this calculator for PredictIt to see the effects.
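To make that arithmetic concrete, here is a minimal sketch of the fee calculation (the helper name is just illustrative; it only encodes the 10% winnings fee and 5% withdrawal fee described above):

```python
def predictit_net_change(price, payout=1.0, winnings_fee=0.10, withdrawal_fee=0.05):
    """Net change (in dollars) from buying one winning contract at `price`
    and withdrawing everything once the market resolves.
    Assumes PredictIt's 10% fee on winnings plus 5% fee on withdrawals."""
    winnings = payout - price                          # profit before fees
    balance = price + winnings * (1 - winnings_fee)    # account balance after the winnings fee
    withdrawn = balance * (1 - withdrawal_fee)         # cash out, paying the withdrawal fee
    return withdrawn - price                           # gain/loss relative to the stake

print(predictit_net_change(0.98))  # ~ -0.0319: a near-certain "yes" loses ~3.19 cents after fees
print(predictit_net_change(0.10))  # ~ +0.76: a winning long shot is still clearly profitable
```

The asymmetry between those two cases is what keeps new money from pushing tail prices all the way to 0 or 1.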

This problem is somewhat ameliorated when people make multiple sequential bets, since that amortizes the 5% withdrawal fee over more bets -- in the limit, I think that if you made an infinite number of (on average winning) bets, the 5% withdrawal fee would become just a fee on profits -- but I think it remains a significant issue in the tails.
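A rough way to see the amortization point (an illustrative sketch only: it assumes every bet wins, a fixed 2-cent gross edge per dollar, and full reinvestment between bets):

```python
def withdrawal_fee_vs_profit(n_bets, edge=0.02, winnings_fee=0.10, withdrawal_fee=0.05):
    """Withdrawal fee as a fraction of profit when the 5% fee is paid only once,
    after n_bets compounding bets, each with gross return `edge` per dollar
    (the 10% winnings fee is applied to each bet)."""
    balance = (1 + edge * (1 - winnings_fee)) ** n_bets  # balance per dollar staked, pre-withdrawal
    return (balance * withdrawal_fee) / (balance - 1)    # fee relative to profit

for n in (1, 10, 100, 500):
    print(n, round(withdrawal_fee_vs_profit(n), 2))
# 1 -> 2.83, 10 -> 0.31, 100 -> 0.06, 500 -> 0.05: the fee approaches a flat 5% of profits
```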

This article is several years old, but as of 2019 their machine translation tool was quite poor, and in my experience articles can have vastly different levels of depth across languages, so simply getting French/Spanish/etc. articles up to the level of their English-language analogues might be an easy win.

Here are a couple of social science papers on the evidence that (well-written) Wikipedia articles have an impact on real-world outcomes:

I think the main caveat (also mentioned in other comments) is that these papers are predicated on high-quality edits or page creations that align with Wikipedia's standards.

Nice post! Using the individual-level data, are you able to answer whether forecasts also get better if you start with the single "best" forecaster and then progressively add the next best, the next best, and so on, where "best" is defined ex ante (e.g. by prior lifetime Metaculus score)? It's a different question, but it might also be of interest for thinking about optimal aggregation.
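In case it's useful, here's one way that check could be set up (a hypothetical sketch only: it assumes forecasts for a single question, ranks by prior lifetime score, and pools by averaging log-odds, which is just one reasonable choice):

```python
import numpy as np

def prefix_aggregates(probs, prior_scores):
    """Aggregate probability using the best 1, best 2, ..., best n forecasters,
    where "best" is ranked ex ante by prior_scores (higher = historically better).
    Pooling rule here: mean of log-odds, mapped back to a probability."""
    order = np.argsort(prior_scores)[::-1]                 # best forecaster first
    logodds = np.log(probs[order] / (1 - probs[order]))
    running_mean = np.cumsum(logodds) / np.arange(1, len(probs) + 1)
    return 1 / (1 + np.exp(-running_mean))

# Toy example with made-up numbers:
probs = np.array([0.70, 0.55, 0.90, 0.60])    # forecasts for one question
scores = np.array([120.0, 80.0, 95.0, 60.0])  # prior lifetime scores
print(prefix_aggregates(probs, scores))        # aggregate as each forecaster is added
```

You'd then score each prefix against resolved outcomes to see where (or whether) adding weaker forecasters stops helping.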

Enjoyed the post! Open Philanthropy hired Tom Adamczewski (not sure what his user name is on here) to create https://valueofinfo.com/ for us, which is designed to do this sort of calculation and accommodates a limited number of distributions (currently just normal and Pareto, though if you work with the core package you can do more). Just an FYI if you want to play more with this!