This is a linkpost for https://fatebook.io

Announcing Fatebook: a website that makes it extremely low friction to make and track predictions.

It's designed to be very fast - just open a new tab, go to fatebook.io, type your prediction, and hit enter. Later, you'll get an email reminding you to resolve your question as YES, NO, or AMBIGUOUS.

It's private by default, so you can track personal questions and give forecasts that you don't want to share publicly. You can also share questions with specific people, or publicly.

Fatebook syncs with Fatebook for Slack - if you log in with the email you use for Slack, you’ll see all of your questions on the website.

As you resolve your forecasts, you'll build a track record - Brier score and Relative Brier score - and see your calibration chart. You can use this to track the development of your forecasting skills.
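
If you're new to the metric: your Brier score is the mean squared error between the probabilities you gave and the eventual outcomes (0 for NO, 1 for YES), so lower is better and always saying 50% scores 0.25. Here's a minimal sketch of the calculation (illustrative only, not Fatebook's actual code):

```typescript
// Illustrative sketch of a Brier score calculation - not Fatebook's code.
interface ResolvedForecast {
  probability: number; // the forecast you gave, e.g. 0.7
  outcome: 0 | 1;      // how the question resolved: 0 = NO, 1 = YES
}

// Mean squared error between forecasts and outcomes; lower is better.
function brierScore(forecasts: ResolvedForecast[]): number {
  const total = forecasts.reduce(
    (sum, f) => sum + (f.probability - f.outcome) ** 2,
    0
  );
  return total / forecasts.length;
}

// Example: ((0.8 - 1)^2 + (0.3 - 1)^2) / 2 = 0.265
console.log(brierScore([
  { probability: 0.8, outcome: 1 },
  { probability: 0.3, outcome: 1 },
]));
```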

Some stories of outcomes I hope Fatebook will enable

I hope people interested in EA use Fatebook to track many more of the predictions they’re making!

Some example stories:

  1. During 1-1s at EAG, it’s common to pull out your phone and jot down predictions on Fatebook about cruxes of disagreement
  2. Before you start projects, you and your team make your underlying assumptions explicit and put probabilities on them - then, as your plans make contact with reality, you update your estimates
  3. As part of your monthly review process, you might make forecasts about your goals and wellbeing
  4. If you’re exploring career options and doing cheap tests like reading or interning, you first make predictions about what you’ll learn. Then you return to these periodically to reflect on how valuable more exploration might be
  5. Intro programs to EA (e.g. university reading groups, AGISF) and to rationality (e.g. ESPR, Atlas, Leaf) use Fatebook to make both on- and off-topic predictions. Participants get a chance to try forecasting on questions that are relevant to their interests and lives

 

As a result, I hope that we’ll reap some of the benefits of tracking predictions, e.g.:

  1. Truth-seeking incentives that reduce motivated reasoning => better decisions
  2. Probabilities and concrete questions reduce talking past each other => clearer communication
  3. Track records help people improve their forecasting skills, and help identify people with excellent abilities (not restricted to the domains typically covered on public platforms like Metaculus and Manifold, such as tech and geopolitics) => forecasting skill development and talent-spotting

 

Ultimately, the platform is pretty flexible - I’m interested to see what unexpected use cases people find for it, and what (if anything) actually seems useful about it in practice!

Your feedback or thoughts would be very useful - we can chat in the comments here, in our Discord, or by email.
 

You can try Fatebook at fatebook.io

 

Thanks to the Atlas Fellowship for supporting this project, and thanks to everyone who's given feedback on earlier versions of the tool.

Comments (15)



This is great! I love the simplicity and how fast and frictionless the experience is.

I think I might be part of the ideal target market, as someone who has long wanted to get more into the habit of concretely writing out his predictions but often lacks the motivation to do so consistently.

Thank you! I'm interested to hear how you find it!

often lacks the motivation to do so consistently

Very relatable! The 10 Conditions for Change framework might be helpful for thinking of ways to do it more consistently (if on reflection you really want to!). Fatebook aims to help with 1, 2, 4, 7, and 8, I think.

One way to do more prediction I'm interested in is integrating prediction into workflows. Here are some made-up examples:

  • At the start of a work project, you always forecast how long it'll take (I think this is almost always an important question, and getting good at predicting this is powerful)
  • When you notice you're concerned about some uncertainty (e.g. some risk) you operationalise it and write it down as a question
  • In your weekly review with your manager, you make forecasts about how likely you are to meet each of your goals. Then you discuss strategies to raise the P(success) on the important goals
  • When there's a disagreement between two team members about what to prioritise, you operationalise it as a forecasting question and get the whole team's view. If the team as a whole disagrees, you look for ways to get more information; if the team agrees (after sharing info), you follow that prioritisation

If anyone who has prediction as part of their workflow, or would like to, is interested in chatting, let me know!

In many ways Fatebook is a successor to PredictionBook (now >11 years old!). If you've used PredictionBook in the past, you can import all your PredictionBook questions and scores to Fatebook.

I really love this <3

Compared to more public prediction platforms (e.g. Manifold), I think the biggest value adds for me are: (a) being ridiculously easy to set up + use, and (b) being able to make private predictions.

On (b), I saw the privacy policy is currently a canned template. I'm curious if you could say more on:

  • How and when you access user data
    • e.g. Do you look at non-anonymized user data in your analytics and tracking?
  • Who specifically gets access to user submitted predictions (can't quite tell how large your team is, for instance)

:) I'm a really big fan of Sage's work, thank you so much!

Thank you!

Do you look at non-anonymized user data in your analytics and tracking?

No - we don't look at non-anonymised user data in our analytics. We use Google Analytics events, so we can see e.g. a graph of how many forecasts are made each day, and this tracks the ID of each user so we can see e.g. how many users made forecasts each day (to disambiguate a small number of power-users from lots of light users). IDs are random strings of text that might look like cwudksndspdkwj. I think technically you'd call this "pseudo-anonymised" because user IDs are stored - not sure!
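
To give a concrete sense of the shape of this, here's a hypothetical sketch of pseudo-anonymised event tracking (not our actual analytics code - the real setup may differ):

```typescript
// Hypothetical sketch of pseudo-anonymised event tracking - not Fatebook's
// actual analytics code. Events are tied to a random opaque ID rather than
// to anything identifying like a name or email.
declare function gtag(...args: unknown[]): void; // provided by the GA snippet

// A random opaque ID (something like "cwudksndspdkwj"), generated once
// per user and reused for their events.
function makePseudonymousId(length = 14): string {
  const alphabet = "abcdefghijklmnopqrstuvwxyz";
  return Array.from({ length }, () =>
    alphabet[Math.floor(Math.random() * alphabet.length)]
  ).join("");
}

// "G-XXXXXXX" is a placeholder measurement ID. Associating events with the
// pseudonymous ID lets analytics show forecasts per day and active
// forecasters per day without revealing who anyone is.
gtag("config", "G-XXXXXXX", { user_id: makePseudonymousId() });
gtag("event", "forecast_created");
```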

Who specifically gets access to user submitted predictions (can't quite tell how large your team is, for instance)

Your predictions are private to you unless you share them. I and the other two devs who have helped out with parts of the project have access to the production database, but we commit to not looking at users' questions unless you specifically share them with us (e.g. to help us debug something). I am interested in encrypting the questions in the database so that we're theoretically unable to access them, but haven't got round to implementing this yet (I want to focus on some bigger user-visible improvements first!)

Hope this makes sense! Thanks for your kind words and for checking about this, let me know if you think we could improve on any of this!

Thanks for the fast response, all of this sounds very reasonable! :)

By the way, very tiny bug report: The datestamps are rendering a bit weird? I see the correct date stamp for today under the date select, but the description text in italics is rendering as 'Yesterday', and the 'data-tip' value in the HTML is wrong.

Obviously not a big deal, just passing it on :) I'm currently in PST time, where it is 9:39am on 2023.07.25, if it matters. (Let me know if you'd prefer to receive bug reports somewhere else?)

Ah thank you! I've just pushed what should be a fix for this (hard to fully test as I'm in the UK).

Thanks so much! :) FYI that the top level helper text seems fixed:

But the prediction-level helper text is still not locale aware:

(Again, not a big deal at all :) )

Nice! 

Readers might also be interested in the linux utility version of this: https://github.com/NunoSempere/PredictResolveTally 

Awesome! I have been wanting something like this for a while and am looking forward to trying it out.

See this previous comment of mine for some potentially interesting suggestions:

https://forum.effectivealtruism.org/posts/cbtoajkfeXqJAzhRi/metaculus-year-in-review-2022?commentId=dotzeW2wxM5Avm7jL

(Excuse formatting; on mobile)

In a perfect world, this would also integrate with Alfred on my mac so that it becomes extremely easy and quick to create a new private question


I'm thinking of creating a Chrome extension that will let you type /forecast Will x happen? anywhere on the internet, and it'll create and embed an interactive Fatebook question. EDIT: we created this, the Fatebook browser extension.

I'm thinking of primarily focussing on Google Docs, because I think the EA community could get a lot of mileage out of making and tracking predictions embedded in reports, strategy docs, etc. This extension would also work in messaging apps, on social media, and even here on the forum (though first-party support might be better for the forum!). 

Great, thanks!

The format could be "[question text]? [resolve date]" where the question mark serves as the indicator for the end of the question text, and the resolve date part can interpret things like "1w", "1y", "eoy", "5d"
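
Purely to illustrate the suggested format (this isn't an existing Fatebook feature), a parser for it might look something like:

```typescript
// Hypothetical sketch of the suggested "[question text]? [resolve date]"
// format with shortcut dates like "1w", "1y", "eoy", "5d".
function parseShortcutDate(shortcut: string, now = new Date()): Date | null {
  if (shortcut === "eoy") {
    return new Date(now.getFullYear(), 11, 31); // end of year
  }
  const match = shortcut.match(/^(\d+)([dwmy])$/); // e.g. "5d", "1w", "3m", "1y"
  if (!match) return null;
  const n = Number(match[1]);
  const date = new Date(now);
  switch (match[2]) {
    case "d": date.setDate(date.getDate() + n); break;
    case "w": date.setDate(date.getDate() + 7 * n); break;
    case "m": date.setMonth(date.getMonth() + n); break;
    case "y": date.setFullYear(date.getFullYear() + n); break;
  }
  return date;
}

// The question mark marks the end of the question text; whatever follows
// is treated as the resolve-date shortcut.
function parseQuestion(input: string, now = new Date()) {
  const qIndex = input.indexOf("?");
  if (qIndex === -1) return { title: input.trim(), resolveBy: null };
  return {
    title: input.slice(0, qIndex + 1).trim(),
    resolveBy: parseShortcutDate(input.slice(qIndex + 1).trim(), now),
  };
}

console.log(parseQuestion("Will the launch slip? 2w"));
```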

I'm interested in adding power user shortcuts like this! 

Currently, if your question text includes a date that Fatebook can recognise, it'll prepopulate the "Resolve by" field with that date. This works for a bunch of common phrases, e.g. "in two weeks" "by next month" "by Jan 2025" "by February" "by tomorrow".
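
As a general illustration of how this kind of natural-language date extraction can work (just a sketch using the open-source chrono-node library, not a description of our exact implementation):

```typescript
// Illustration only: pulling a date out of question text with chrono-node,
// an open-source natural-language date parser. Not necessarily how Fatebook
// itself does it.
import * as chrono from "chrono-node";

console.log(chrono.parseDate("Will we hit our Q3 target by next month?")); // a Date about a month out
console.log(chrono.parseDate("Will the feature ship by Jan 2025?"));       // a Date in Jan 2025
console.log(chrono.parseDate("No date phrase here"));                      // null
```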

If you play around with the site, I'd be interested to hear if you find yourself still keen for the addition of concise shortcuts like "2w" or if the current natural language date parsing works well for you.

I absolutely love that it infers resolving dates from the text! I was positively delighted when the field populated itself when I wrote "by the beginning of september". This is especially important on mobile.

Excited to see if this is a useful tool. Very polished, nice work!

It looks fantastic! Great job as always
