markus_over

Comments

EA Münster Predictions 2021

"Bei 80% der Treffen der EA Münster Lokalgruppe in 2021 waren mehr als 5 Personen anwesend" - how will cancelled meetups (due to lack of attendees, if that ever happens) count into this? Not at all, or as <=5 attendees? (kind of reminds me of how the Deutsche Bahn decided to not count cancelled trains as delayed)

Also, coming from EA Bonn where our average attendance is ~4 people, I find the implications of this question impressive. :D

One’s Future Behavior as a Domain of Calibration

I see, so at the end of the day you're assigning a number representing how productive the day was, and you're considering predicting that number the day before? If that rating is based on your feeling about the day rather than on objectively predefined criteria, the "predictions affect outcomes" issue might indeed be a bit larger here than described in the post: the prediction would potentially affect not only your behavior but also the rating itself, which could decouple the metric from reality to a degree.

If you end up doing this, I'd be very interested in how things go. May I message you in a month or so?

One’s Future Behavior as a Domain of Calibration

Good point. I also make predictions about quarterly goals (which I update twice a month) as well as about my plans for the year. I find the latter especially difficult, as quite a lot can change within a year, including my perspective on and prioritization of the goals. For short-term goals you basically only need to predict to what degree you will act in accordance with your preferences, whereas for longer-term goals you also need to take potential changes of your preferences into account.

It does appear to me that calibration can differ between the different time frames. I seem to be well calibrated regarding weekly plans, decently calibrated on the quarter level, and probably less so on the year level (I don't yet have any data for the latter). Admittedly that weakens the "calibration can be achieved quickly in this domain" claim to a degree, as calibrating on "behavior over the next year" might still take a year or two to improve significantly.

One’s Future Behavior as a Domain of Calibration

I personally tend to stick to the following system:

  • Every Monday morning I plan my week, usually collecting anything between 20 and 50 tasks I’d like to get done that week (this planning step usually takes me ~20 minutes)
    • Most such tasks are clear enough that I don’t need to specify any further definition of done; examples would be “publish a post in the EA forum”, “work 3 hours on project X”, “water the plants” or “attend my local group’s EA social” – very little “wiggle room” or risk of not knowing whether any of these evaluates to true or false in the end
    • In a few cases, I do need to specify in greater detail what it means for the task to be done; e.g. “tidy up bedroom” isn’t very concrete, and I thus either timebox it or add a less ambiguous evaluation criterion
  • Then I go through my predictions from the week before and evaluate them based on which items are crossed off my weekly to-do list (~3 minutes)
    • “Evaluate” at first only means writing a 1 or a 0 in my spreadsheet next to the predicted probability
    • There are rare exceptions where I drop individual predictions entirely due to inability to evaluate them properly, e.g. because the criterion seemed clear during planning, but it later turned out I had failed to take some aspect or event into consideration[1], or because I deliberately decided to not do the task for unforeseeable reasons[2]. Of course I could invest more time into bulletproofing my predictions to prevent such cases altogether, but my impression is that it wouldn’t be worth the effort.
  • After that I check my performance for that week as well as for the most recent 250 predictions (~2 minutes)
    • For the week itself, I usually only compare the expected value (sum of probabilities) with the number of actually resolved tasks, to check for general over- or underconfidence, as there aren’t enough predictions to evaluate individual percentage ranges
    • For the most recent 250 predictions I check my calibration by sorting the predictions into probability ranges of 0..9%, 10..19%, …, 90..99%[3] and checking how much the average outcome ratio of each range deviates from the average of predictions in that range (a rough sketch of this check follows below the list). This is just a quick visual check, which lets me know in which percentage range I tend to be far off.
    • I try to use both these results in order to adjust my predictions for the upcoming week in the next step
  • Finally I assign probabilities to all the tasks. I keep this list of predictions hidden from myself throughout the following week in order to minimize the undesired effect of my predictions affecting my behavior (~5 minutes)
    • These predictions are very much System 1 based and any single prediction usually takes no more than a few seconds.
    • I can’t remember how difficult this was when I started this system ~1.5 years ago, but by now coming up with probabilities feels highly natural and I differentiate between things being e.g. 81% likely or 83% likely without the distinction feeling arbitrary.
    • Depending on how striking the results from the evaluation steps were, I slightly adjust the intuitively generated numbers. This also happens intuitively as opposed to following some formal mathematical process.

While this may sound complex when spelled out, I added the time estimates to the list above to demonstrate that all of these steps are pretty quick and easy. Spending these 10 minutes[4] each week seems like a fair price for the benefits it brings.
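For concreteness, here's a minimal sketch in Python of what the two evaluation checks compute. My actual setup is just a spreadsheet; the data and function names below are made up for illustration:

```python
# Minimal sketch of the weekly checks described above, with made-up data.
# Each prediction is a (probability, outcome) pair, outcome being 1 or 0.

def expected_vs_actual(predictions):
    """Compare the sum of probabilities with the number of resolved tasks
    to spot general over- or underconfidence within a single week."""
    expected = sum(p for p, _ in predictions)
    actual = sum(o for _, o in predictions)
    return expected, actual

def calibration_by_bucket(predictions):
    """Sort predictions into 0..9%, 10..19%, ..., 90..99% buckets and
    compare average predicted probability with average outcome per bucket."""
    buckets = {}
    for p, o in predictions:
        idx = min(int(p * 10), 9)  # e.g. 0.83 -> bucket 8 (80..89%)
        buckets.setdefault(idx, []).append((p, o))
    for idx in sorted(buckets):
        entries = buckets[idx]
        avg_p = sum(p for p, _ in entries) / len(entries)
        avg_o = sum(o for _, o in entries) / len(entries)
        print(f"{idx * 10:2d}..{idx * 10 + 9}%: predicted {avg_p:.0%}, "
              f"resolved {avg_o:.0%} ({len(entries)} predictions)")

week = [(0.9, 1), (0.8, 1), (0.6, 0), (0.3, 0), (0.95, 1)]  # made-up data
expected, actual = expected_vs_actual(week)
print(f"Expected {expected:.2f} tasks done, actually did {actual}")
calibration_by_bucket(week)
```

On this made-up week it would report an expected 3.55 tasks against 3 actually done, plus the per-bucket comparison over whatever set of past predictions you feed it.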


  1. An example would be “make a check-up appointment with my dentist”, where calling during the week revealed the dentist was on vacation and no appointment could be made; given there’s no time pressure and I prefer making an appointment there later to calling a different dentist, the task itself was not achieved, yet my behavior was as desired; as there are arguments for evaluating this both as true and as false, I often just drop such cases entirely from my evaluation ↩︎

  2. I once had the task “sign up for library membership” on my list, but then during the week realized that membership was more expensive than I had thought, and thus decided to drop that goal; here too, you could either argue “the goal is concluded” (no todo remains open at the end of the week) or “I failed the task” (as I didn’t do the formulated action), so I usually ignore those cases instead of evaluating them arbitrarily ↩︎

  3. One could argue that a 5% and a 95% prediction should really end up in the same bucket, as they entail the same level of certainty; my experience with this particular forecasting domain, however, is that the symmetry implied by this argument is not necessarily given here. The category of things you’re very likely to do seems quite different in nature from the category of things you’re very unlikely to do. This lack of symmetry can also be observed in the fact that 90% predictions are ~10x more frequent for me in this domain than 10% predictions. ↩︎

  4. It’s 30 minutes total, but the first 20 are just the planning process itself, whereas the 3+2+5 afterwards are the actual forecasting & calibration training. ↩︎

Announcing the Forecasting Innovation Prize

"Before January 1st" in any particular time zone? I'll probably (85%) publish something within the next ~32h at the time of writing this comment. In case you're based in e.g. Australia or Asia that might then be January 1st already. Hope that still qualifies. :)

Make a Public Commitment to Writing EA Forum Posts

Indeed, thank you. :) I haven't started the other, forecasting-related one, but intend to spend some time on it next week and hopefully come up with something publishable before the end of the year.

CFAR Workshop in Hindsight

My thoughts on how to best prepare for the workshop (as mentioned in the post):

  • Write down your expectations, i.e. what you personally hope to take away from the workshop (and if you’re fancy, maybe even add quantifications/probability estimates to each point)
  • Make sure you can go into the workshop with a clear head and without any distractions
  • Don’t make the same mistake I made, which was booking a flight home way too early on the day after the workshop ended. I hadn’t realized how difficult it would be to get from the workshop venue to the airport, and figuring out a solution stressed me quite a bit during the week (in the end the super kind ops people solved it for me)
  • Do your best in the week(s) before to stay healthy
  • Sleep enough the nights before
  • Maybe prepare a bug list and take it with you; creating one will also be among the first sessions of the workshop, but the more bugs you arrive with, the better
  • Don’t panic; if you don’t manage to prepare in any significant way, the workshop is still extremely well designed and you’ll do just fine.

Announcing the Forecasting Innovation Prize

Sure. Those I can mention without providing too much context:

  • calibrating on one's future behavior by making a large amount of systematic predictions on a weekly basis
  • utilizing quantitative predictions in the process of setting goals and making plans
  • not prediction-related, but another thing your post triggered: applying the "game jam principle" (developing a complete video game in a very short amount of time, such as 48 hours) to EA forum posts, i.e. trying to get from idea to published post within a single day; I realized that writing a forum post is (for me, and a few others I've spoken to) often a multi-week-to-month endeavour, yet it doesn't have to be that way, and there are surely diminishing returns to the amount of polishing you put into a post

If anybody actually ends up planning to write a post on any of these, feel free to let me know so I'll make sure to focus on something else.

Make a Public Commitment to Writing EA Forum Posts

Good timing and great idea. Considering I've just read this: https://forum.effectivealtruism.org/posts/8Nwy3tX2WnDDSTRoi/announcing-the-forecasting-innovation-prize I'll gladly commit to submitting at least one forum post to the Forecasting Innovation Prize (precise topic remains to be determined). This entails writing and publishing a post here or on LessWrong before the end of the year.

I further commit to publishing a second post (which I'd already been working on for a while) before the end of the year.

If anybody would like to hold me accountable, feel free to contact me around December 20th and be very disappointed if I haven't published a single post by then. 

Thanks for the prompt, Neel!

Announcing the Forecasting Innovation Prize

Nice! In the few minutes of reading this post I came up with five ideas for related things I could (and maybe should) write a post on. My only issue is that there are only 6 weeks left for this, and I'm not sure that will be enough for me to finish even one given my current schedule. But I'll see what I can do. It may even be the right kind of pressure, as otherwise I'd surely follow Parkinson's law and work on a post for way too long.

(The many examples you posted were very helpful, by the way; without them I would have assumed I don't have much to contribute here.)
