Predictions matter because they separate the signal from the noise. Anyone can make a vague proclamation that sometime in the future something good or bad will happen; it's the people who put definitive dates and measurable claims on these statements who make it possible to evaluate whether they were actually right or wrong, and to figure out what we should do next.

Superforecasting makes a compelling case that tracking predictions is useful at both the individual and the collective level. When we track a prediction, we get feedback on whether we were right or wrong, and over time we get better at making predictions. Tracking also helps groups make better decisions by forcing disciplined thinking and cutting through unclear, confusing opinions and HiPPO (the highest-paid person's opinion).


Whenever you make a decision, you're making a prediction. These can be small predictions, like "if I leave at 7:30 I'll make it to work on time," or big predictions, like "if I attend a prestigious university I'll be able to make a lot of money when I graduate." For effective altruists these decisions are often related to charitable giving: not just which cause is the 'best' to give to, but also what marginal improvement you expect your donation to cause. I'm increasingly convinced that, as a community, we need to do a better job of systematically tracking predictions about important topics.


If I give $3,000 to the Against Malaria Foundation, I'm predicting I will be able to save one life. Or, if I think that EAs should be making appeals based on equality or justice instead of individual rights or outcomes, I'm predicting that this strategy, if implemented, will drive more donations than the alternative.


So we're making predictions all the time, but those predictions sit in isolation from one another, scattered across posts in the various EA Facebook groups and forums. And they lack the rigor of actual deadlines and numbers that would tell us when and how to assess the claims. This is a problem, because we're missing out on the wealth of knowledge that would come from learning whether and how these ideas actually worked.


I think this is low-hanging fruit for improving EA. If we tracked predictions about the outcomes of campaigns, interventions, etc., we'd see a number of benefits:

  • Establish a strong track record of success for top-performing charities.

  • Provide guidance for decision makers: predictions from orgs and individuals about EA campaigns could help guide donations.

  • Elevate good forecasters and boost effective ideas.


In particular, EA orgs could make more explicit predictions about what the outcomes of donations would be. It was difficult to track down concrete predictions from notable EA orgs for the Winter 2015 giving season, but two I found were from 80,000 Hours, predicting 50 plan changes by 10/31/2016, and CFAR, predicting 1,000 new alumni by 12/31/2016.

What I'd like to see is some type of central repository of predictions about EA, where people can comment, provide their own predictions, and update them as new evidence comes in. This could be as simple as a blog post or as elaborate as a full-fledged system or market. Prediction trackers have a long history but little successful adoption; given the analytical culture and unique origins of the EA movement, though, this approach could be uniquely well suited to our community.

I see three steps to doing this:

  • A simple forum where anyone can submit a declarative prediction about EA-related events, for example:

    • By December 31st, 2016 there will be 5,000 people signed up for the Giving What We Can pledge

  • Anyone can submit their own predictions tied to this event, for example:

    • JohnSmith89 thinks there is a 20% probability this is true

  • After the deadline has passed, judges or community members can vote on whether the event actually happened, and you can see who successfully predicted it and with what accuracy. (A minimal sketch of how such a registry might work follows below.)
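To make these steps concrete, here is a minimal sketch of what such a registry might look like. Everything in it is invented for illustration (the Claim and Prediction names, the judging flow); the scoring uses Brier scores, the squared-error measure used by the Good Judgment Project, where 0 is perfect and lower is better.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Prediction:
    """One user's probability estimate for a claim."""
    user: str
    probability: float  # e.g. 0.20 for "20% probability this is true"

@dataclass
class Claim:
    """A declarative, deadline-bound statement to be judged true or false."""
    statement: str
    deadline: date
    predictions: list[Prediction] = field(default_factory=list)
    outcome: bool | None = None  # set by judges after the deadline

    def submit(self, user: str, probability: float) -> None:
        if not 0.0 <= probability <= 1.0:
            raise ValueError("probability must be between 0 and 1")
        self.predictions.append(Prediction(user, probability))

    def resolve(self, outcome: bool) -> None:
        self.outcome = outcome

    def brier_scores(self) -> dict[str, float]:
        """Squared error between each forecast and the outcome; 0 is perfect."""
        assert self.outcome is not None, "claim not yet resolved"
        actual = 1.0 if self.outcome else 0.0
        return {p.user: (p.probability - actual) ** 2 for p in self.predictions}

# Mirroring the example above:
claim = Claim("5,000 people signed up for the Giving What We Can pledge",
              deadline=date(2016, 12, 31))
claim.submit("JohnSmith89", 0.20)
claim.resolve(True)
print(claim.brier_scores())  # {'JohnSmith89': 0.64}
```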


With 75% confidence I’d say that by February 10th at least 15 people will have expressed interest in predictions about effective altruism.


Comments

Very much support the thrust of this post. Oliver Habryka on the EA Outreach team is currently chatting with the Good Judgment Project team about implementing a prediction market in EA.

Update: the Good Judgment Project has just launched Good Judgment Open. https://www.gjopen.com/

To get a good prediction market, we need more participation than the EA community would provide at its current size.

This is going to be a problem where the superset - creating a prediction market accessible by everyone - is easier to solve than the specific case - making a prediction market for the EA community.

PredictionBook is already taking on this role - a prediction market by EAs - and would welcome help.

edit: fixed spelling

Prediction markets benefit a lot from liquidity. Making it EA specific doesn't seem to gain all that much. But EAs should definitely practice forecasting formally and getting rewarded for reliable predictions.
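For context on why liquidity matters so much: many automated market makers use Hanson's logarithmic market scoring rule (LMSR), in which a liquidity parameter b sets how far a single trade moves the price. The sketch below uses only the textbook LMSR formulas (it is not any real market's code) to show the same purchase swinging a thin market dramatically while barely nudging a deep one, which is the failure mode a small, EA-only market would risk.

```python
import math

def lmsr_price(q_yes: float, q_no: float, b: float) -> float:
    """Current YES price under Hanson's LMSR with liquidity parameter b."""
    e_yes, e_no = math.exp(q_yes / b), math.exp(q_no / b)
    return e_yes / (e_yes + e_no)

def lmsr_cost(q_yes: float, q_no: float, b: float) -> float:
    """LMSR cost function; a trade costs C(after) - C(before)."""
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

# The same 10-share YES purchase moves a thin market far more than a deep one:
for b in (5, 100):
    before, after = lmsr_price(0, 0, b), lmsr_price(10, 0, b)
    cost = lmsr_cost(10, 0, b) - lmsr_cost(0, 0, b)
    print(f"b={b}: price {before:.2f} -> {after:.2f} (trade cost {cost:.2f})")
# b=5: price 0.50 -> 0.88 (trade cost 7.17)
# b=100: price 0.50 -> 0.52 (trade cost 5.12)
```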

This comment has a lot of spelling mistakes and it's hard to understand. (I'm guessing you wrote it on mobile.) Can you go over it and remove the spelling mistakes?

My best translation...

To get a good prediction market, we need more participation than the EA community would provide at its current size.

This is going to be a problem where the superset - a prediction market accessible by everyone - is easier to solve than the specific case - a prediction market for the EA community.

Prediction is already taking on this role - a prediction market by EAs - and would welcome help.

I believe "Predictipn bool" is supposed to be "PredictionBook".

Not sure why that happened. Fixed.

It gave me a mental image of a drunk Ryan, slurring his words while making a coherent argument.

Something that surprised me from the Superforecasting book is that just having a registry helps, even when those predictions aren't part of a prediction market.

Maybe a prediction market is overkill right now? I think that registering predictions could be valuable even without the critical mass necessary for a market to have much liquidity. The advantage of prediction markets is that they incentivize people to participate and forecast well, but if we're just trying to track predictions that EAs are already making, a simple registry might be enough.
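A registry also makes the Superforecasting-style feedback loop easy to automate even without market incentives: bucket everyone's resolved predictions by stated probability and check how often each bucket actually came true. A minimal sketch, with illustrative names and made-up data rather than any existing tool:

```python
from collections import defaultdict

def calibration_report(forecasts: list[tuple[float, bool]]) -> None:
    """Bucket resolved forecasts by stated probability (rounded to the nearest
    10%) and compare stated confidence with the observed frequency of 'true'."""
    buckets: dict[float, list[bool]] = defaultdict(list)
    for prob, came_true in forecasts:
        buckets[round(prob, 1)].append(came_true)
    for stated in sorted(buckets):
        outcomes = buckets[stated]
        observed = sum(outcomes) / len(outcomes)
        print(f"stated {stated:.0%}: came true {observed:.0%} of the time (n={len(outcomes)})")

# A well-calibrated forecaster's 70% predictions come true about 70% of the time:
calibration_report([(0.7, True), (0.7, True), (0.7, False), (0.9, True), (0.2, False)])
# stated 20%: came true 0% of the time (n=1)
# stated 70%: came true 67% of the time (n=3)
# stated 90%: came true 100% of the time (n=1)
```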

Also, one of FLI's cofounders (Anthony Aguirre) started a prediction registry: http://www.metaculus.com/ , http://futureoflife.org/2016/01/24/predicting-the-future-of-life/

What tools for prediction markets are there besides http://predictionbook.com/ ? Any comments on what features they have or which are best for which purposes?

The only other one I know of is https://called.it/ which is mobile only (h/t John Maxwell).

People may be interested in https://www.facebook.com/groups/eapredictons/

Augur (http://www.augur.net/) - a decentralised prediction market.

I'm not familiar with many "personal" prediction sites, ones where you can register your own predictions (outside of PredictionBook).

Zocalo (http://zocalo.sourceforge.net/) is a toolkit for building prediction markets, but it isn't currently supported. https://www.cultivatelabs.com/ creates enterprise prediction markets. And Augur is a cryptocurrency-based prediction market that is currently in alpha, but you can spin up your own test nodes if you want to run a separate network.

I can't access the Facebook group; is it public? I'd be interested to check it out!

I can't access the Facebook group; is it public?

You have to join to see posts.

With 75% confidence I’d say that by February 10th at least 15 people will have expressed interest in predictions about effective altruism.

I hereby express interest. Others can do so in a comment under this!

Am interested.

And me

Me also.

Also interested, though I'd prefer something not Facebook-based. If something needs to be set up, maintained, or whatnot, I'd be happy to help.

What's the count up to now, counting all sources?

13 in total, from this post and an EA meetup - so 48 hours to find two more to be "well calibrated" on the prediction.

/interest expressed
