Adam Binks

660 karma · Joined Nov 2020 · Pursuing a doctoral degree (e.g. PhD) · Working (0-5 years) · London, UK


I'm building tools for forecasting and thinking at Sage. Currently building Quantified Intuitions and Fatebook.

Previously I was doing a PhD in HCI at St Andrews, and worked at Clearer Thinking.


Tweeting, sometimes about EA.


Topic Contributions

I think some subreddits do a good job of moderating to create a culture that's different from the default reddit culture, e.g. /r/askhistorians. See this post for an example: a bunch of comments were deleted, including one answer that didn't cite enough sources. Maybe this is what you have in mind when you refer to "moderating with an iron fist" though, which you mention might be destructive!

Seems like the challenge with reddit moderation is that users are travelling between subreddits all the time, and most have low quality/effort discussion norms. Whereas on the Forum, the userbase is more siloed, which I guess would make good quality moderation easier.

We've added a new deck of questions to the calibration training app - The World, then and now.

What was the world like 200 years ago, and how has it changed? Featuring charts from Our World in Data.

Thanks to Johanna Einsiedler and Jakob Graabak for helping build this deck!

We've also split the existing questions into decks, so you can focus on the topics you're most interested in:

Ah thank you! I've just pushed what should be a fix for this (hard to fully test as I'm in the UK).

The July Estimation Game is now live: a 10 question Fermi estimation game all about big picture history!

Question 1:

I was also wondering this - did 80k link to it in their newsletter (which has a big audience)?

Relatedly, I wonder if you can see differences in reported source depending on where respondents navigated to the survey from?

Thank you!

Do you look at non-anonymized user data in your analytics and tracking?

No - we don't look at non-anonymised user data in our analytics. We use Google Analytics events, so we can see e.g. a graph of how many forecasts are made each day, and this tracks the ID of each user so we can see e.g. how many users made forecasts each day (to disambiguate a small number of power-users from lots of light users). IDs are random strings of text that might look like cwudksndspdkwj. I think technically you'd call this "pseudo-anonymised" because user IDs are stored, not sure!
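(For illustration only - this isn't Fatebook's actual implementation - a pseudonymous ID of that kind is just a random string, generated with something like:)

```python
import secrets
import string

def make_pseudonymous_id(length: int = 14) -> str:
    """Generate a random lowercase string to use as a pseudonymous user ID.

    The ID carries no personal information; it only lets separate
    analytics events be linked to the same (anonymous) user.
    """
    alphabet = string.ascii_lowercase
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Produces strings like "cwudksndspdkwj", a fresh random string each call
print(make_pseudonymous_id())
```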

Who specifically gets access to user submitted predictions (can't quite tell how large your team is, for instance)

Your predictions are private to you unless you share them. I and the other two devs who have helped out with parts of the project have access to the production database, but we commit to not looking at users' questions unless you specifically share them with us (e.g. to help us debug something). I'm interested in encrypting the questions in the database so that we're theoretically unable to access them, but haven't got round to implementing this yet (I want to focus on some bigger user-visible improvements first!)

Hope this makes sense! Thanks for your kind words and for checking about this, let me know if you think we could improve on any of this!

Thank you! I'm interested to hear how you find it!

often lacks the motivation to do so consistently

Very relatable! The 10 Conditions for Change framework might be helpful for thinking of ways to do it more consistently (if on reflection you really want to!) Fatebook aims to help with 1, 2, 4, 7, and 8, I think.

One way to do more prediction I'm interested in is integrating prediction into workflows. Here are some made-up examples:

  • At the start of a work project, you always forecast how long it'll take (I think this is almost always an important question, and getting good at predicting this is powerful)
  • When you notice you're concerned about some uncertainty (e.g. some risk) you operationalise it and write it down as a question
  • In your weekly review with your manager, you make forecasts about how likely you are to meet each of your goals. Then you discuss strategies to raise the P(success) on the important goals
  • When there's a disagreement between two team members about what to prioritise, you operationalise it as a forecasting question and get the whole team's view. If the team as a whole disagrees, you look for ways to get more information, or if the team agrees (after sharing info) you follow that prioritisation

If anyone who has prediction as part of their workflow, or would like to, is interested in chatting, lmk!

In many ways Fatebook is a successor to PredictionBook (now >11 years old!) If you've used PredictionBook in the past, you can import all your PredictionBook questions and scores to Fatebook.

In a perfect world, this would also integrate with Alfred on my mac so that it becomes extremely easy and quick to create a new private question

I'm thinking of creating a Chrome extension that will let you type /forecast Will x happen? anywhere on the internet, and it'll create and embed an interactive Fatebook question.

I'm thinking of primarily focussing on Google Docs, because I think the EA community could get a lot of mileage out of making and tracking predictions embedded in reports, strategy docs, etc. This extension would also work in messaging apps, on social media, and even here on the forum (though first-party support might be better for the forum!). 
