Effective Altruism Prediction Registry

by ghabs · 28th Jan 2016 · 23 comments



Predictions matter because they separate the signal from the noise. Anyone can make a vague proclamation that sometime in the future something good or bad will happen; it’s the people who put definitive dates and measurable claims on these statements that make it possible to evaluate whether they were actually right or wrong, and figure out what we should do next.

Superforecasting makes a compelling case that tracking predictions is useful at both the individual and the collective level. When we track a prediction we get feedback on whether we were right or wrong, and over time we get better at making predictions. Tracking also helps groups make better decisions by forcing disciplined thinking, and by countering vague opinions and HiPPO (the Highest Paid Person's Opinion).


Whenever you make a decision you’re making a prediction. These can be small predictions (if I leave at 7:30, I’ll make it to work on time) or big ones (if I attend a prestigious university, I’ll be able to make a lot of money when I graduate). For effective altruists these decisions are often related to charitable giving: not just which cause is the ‘best’ to give to, but also what marginal improvement you expect your donation to produce. I’m increasingly convinced that we, as a community, need to do a better job of systematically tracking predictions about important topics.

If I give $3,000 to the Against Malaria Foundation, I’m predicting I will be able to save one life. Or, if I think that EAs should be making appeals based on equality or justice, instead of individual rights or outcomes, I’m predicting that this strategy, if implemented, will drive more donations than the other.

So we’re making predictions all the time, but they sit in isolation from one another, scattered across posts in the various EA Facebook groups and forums. And they lack the rigor of actual deadlines and numbers telling us when and how to assess the claims. This is a problem, because we’re missing out on the wealth of knowledge that would come from learning whether, and how, these ideas actually worked.

I think this is low-hanging fruit for improving EA. If we tracked predictions about the outcomes of campaigns, interventions, and so on, we’d see a number of benefits:

  • Establish a strong track record of success for top performing charities

  • Provide guidance for decision makers: predictions from orgs and individuals about EA campaigns could help guide donations.

  • Elevate good forecasters, and boost effective ideas.

In particular, EA orgs could make more explicit predictions about what the outcome of giving them money would be. It was difficult to track down concrete predictions from notable EA orgs for the Winter 2015 giving season, but two I found were from 80,000 Hours, predicting 50 plan changes by 10/31/2016, and CFAR, predicting 1,000 new alumni by 12/31/2016.

What I’d like to see is some type of central repository of predictions being made about EA, where people can comment, offer their own predictions, and update them as new evidence comes in. This could be as simple as a blog post, or a full-fledged system or market. Prediction trackers have a long history but little successful adoption; given the analytical culture of EAs, though, I think this tool is uniquely well suited to our community.

I see three steps to doing this:

  • A simple forum where anyone can submit a declarative prediction about EA related events.

    • By December 31st, 2016 there will be 5,000 people signed up for the Giving What We Can pledge

  • Anyone can submit their predictions tied to this event

    • JohnSmith89 thinks there is a 20% probability this is true

  • After the deadline has passed, judges (or the community) vote on whether the event actually happened, and everyone can see who predicted it successfully and with what accuracy.

With 75% confidence I’d say that by February 10th at least 15 people will have expressed interest in predictions about effective altruism.


Very much support the thrust of this post. Oliver Habryka on the EA Outreach team is currently chatting with the Good Judgment Project team about implementing a prediction market in EA.

Update: the Good Judgment Project has just launched Good Judgment Open. https://www.gjopen.com/

To get a good prediction market, we need more participation than the EA community would provide at its current size.

This is going to be a problem where the superset - creating a prediction market accessible by everyone - is easier to solve than the specific case - making a prediction market for the EA community.

PredictionBook is already taking on this role - a prediction market by EAs - and would welcome help.

edit: fixed spelling

Prediction markets benefit a lot from liquidity. Making it EA specific doesn't seem to gain all that much. But EAs should definitely practice forecasting formally and getting rewarded for reliable predictions.

This comment has a lot of spelling mistakes and it's hard to understand. (I'm guessing you wrote it on mobile.) Can you go over it and remove the spelling mistakes?

My best translation...

To get a good prediction market, we need more participation than the EA community would provide at its current size.

This is going to be a problem where the superset - a prediction market accessible by everyone - is easier to solve than the specific case - a prediction market for the EA community.

Prediction is already taking on this role - a prediction market by EAs - and would welcome help.

I believe "Predictipn bool" is supposed to be "PredictionBook".

Not sure why that happened. Fixed.

It gave me a mental image of a drunk Ryan, slurring his words while making a coherent argument.

Something that surprised me from the Superforecasting book is that just having a registry helps, even when those predictions aren't part of a prediction market.

Maybe a prediction market is overkill right now? I think that registering predictions could be valuable even without the critical mass necessary for the market to have much liquidity. It seems that the advantage of prediction markets is in incentivizing people to try to participate and do well, but if we're just trying to track predictions that EAs are already trying to make then that might be enough.

Also, one of FLI's cofounders (Anthony Aguirre) started a prediction registry: http://www.metaculus.com/ , http://futureoflife.org/2016/01/24/predicting-the-future-of-life/
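The point above - that a registry is useful even without market liquidity - comes down to being able to check calibration from the record alone. A minimal sketch (the function and the binning scheme are my own illustration, not something the commenters specify): bucket a forecaster's resolved predictions by stated confidence and compare against the observed frequency.

```python
from collections import defaultdict

def calibration(predictions):
    """predictions: list of (stated_probability, came_true) pairs.
    Returns observed hit rate per 10%-wide confidence bucket.
    A well-calibrated forecaster's 80% predictions come true ~80% of the time."""
    buckets = defaultdict(list)
    for p, outcome in predictions:
        buckets[round(p, 1)].append(1.0 if outcome else 0.0)
    return {p: sum(hits) / len(hits) for p, hits in sorted(buckets.items())}

# Hypothetical resolved predictions from one registry user:
history = [(0.8, True), (0.8, True), (0.8, False), (0.8, True),
           (0.5, True), (0.5, False)]
print(calibration(history))  # {0.5: 0.5, 0.8: 0.75}
```

Here the forecaster's 50% predictions are perfectly calibrated, while their 80% predictions came true only 75% of the time - slightly overconfident. None of this requires a market, only resolved claims.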

What tools for prediction markets are there besides http://predictionbook.com/ ? Any comments on what features they have or which are best for which purposes?

The only other one I know of is https://called.it/ which is mobile only (h/t John Maxwell).

People may be interested in https://www.facebook.com/groups/eapredictons/

Augur (http://www.augur.net/) - a decentralised prediction market.

I'm not familiar with many "personal" prediction sites - ones where you can register your own predictions - outside of PredictionBook.

Zocalo (http://zocalo.sourceforge.net/) is a toolkit for building prediction markets, but isn't currently supported. https://www.cultivatelabs.com/ builds enterprise prediction markets. And Augur is a cryptocurrency-based prediction market currently in alpha, though you can spin up your own test nodes if you want to run a separate network.

I can't access the Facebook group - is it public? Would be interested to check it out!

I can't access the Facebook group - is it public?

You have to join to see posts.

With 75% confidence I’d say that by February 10th at least 15 people will have expressed interest in predictions about effective altruism.

I hereby express interest. Others can do so in a comment under this!

Am interested.

Me also.

Also interested, would prefer something not facebook-based. If something needed to be setup/maintained/whatnot, I'd be happy to help.

What's the count up to now, counting all sources?

13 in total, from this post and an EA meetup - so 48 hours to find two more to be "well calibrated" on the prediction.

/interest expressed