In "How to Measure Anything," chapter 5, Douglas Hubbard describes the calibration training he provides to individuals and organizations that want to improve their estimation skills.  He provides a sample test based on general-knowledge trivia, with questions like

 "What is the air distance from LA to NY?" 

for which the student is supposed to provide a 90% confidence interval.  There are also some true/false questions where you state your level of confidence in the answer, e.g.

"Napoleon was born on Corsica".  

In the following few pages he describes some of the data he's collected about his trainees, implying that this sort of practice helps people become better estimators of various things, including forecasting the likelihood of future events.  For example, he describes CTOs making more accurate predictions about new technology after completing the training.
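For concreteness, calibration on a test like this is usually scored by checking whether roughly 90% of the true answers fall inside the stated 90% intervals, and whether stated confidence on the true/false questions matches the fraction answered correctly. Here is a minimal sketch of that scoring; the sample data are invented for illustration, not Hubbard's:

```python
# Minimal sketch of scoring a calibration test of the kind described above.
# The sample answers below are invented illustrations, not real trainee data.

def interval_hit_rate(answers):
    """Fraction of true values falling inside the stated 90% intervals.
    A well-calibrated estimator should score close to 0.90."""
    hits = sum(lo <= truth <= hi for (lo, hi, truth) in answers)
    return hits / len(answers)

def binary_calibration(answers):
    """For true/false questions, compare average stated confidence
    with the actual fraction answered correctly."""
    avg_confidence = sum(conf for conf, _ in answers) / len(answers)
    accuracy = sum(correct for _, correct in answers) / len(answers)
    return avg_confidence, accuracy

# (low, high, true value) for each 90%-interval question
interval_answers = [
    (2000, 3000, 2451),   # LA-NY air distance in miles: inside the interval
    (100, 200, 451),      # a miss: interval stated too narrowly
    (1700, 1800, 1769),   # Napoleon's birth year: inside the interval
]

# (stated confidence, answered correctly?) for each true/false question
binary_answers = [(0.9, True), (0.6, False), (0.8, True)]

print(interval_hit_rate(interval_answers))
print(binary_calibration(binary_answers))
```

On these made-up answers the interval hit rate is 2/3, well below the 90% target, which is the typical overconfidence pattern Hubbard says training corrects.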

My question: Is there evidence this approach works?  Does practice making probabilistic estimates about trivia improve people's ability to forecast non-trivial matters?  Have there been published studies?

Thanks!



This seems like a good place to look for studies:

The research I’ve reviewed broadly supports this impression. For example:

  • Rieber (2004) lists “training for calibration feedback” as his first recommendation for improving calibration, and summarizes a number of studies indicating both short- and long-term improvements on calibration.[4] In particular, decades ago, Royal Dutch Shell began to provide calibration for their geologists, who are now (reportedly) quite well-calibrated when forecasting which sites will produce oil.[5]
  • Since 2001, Hubbard Decision Research has trained over 1,000 people across a variety of industries. Analyzing the data from these participants, Doug Hubbard reports that 80% of people achieve perfect calibration (on trivia questions) after just a few hours of training. He also claims that, according to his data and at least one controlled (but not randomized) trial, this training predicts subsequent real-world forecasting success.

Thanks for the reply.  

First bullet: I read citation #4, and as far as I could tell it describes improvement in a lab setting within the same domain (e.g. trivia), not across domains (e.g. trivia => world events).  The Shell example is also within-domain.

The second bullet is the same information shared in Hubbard's book; it isn't a randomized controlled trial, and he doesn't provide the underlying data.

Unfortunately, I don't think any of this info is very persuasive for answering the question about cross-domain applicability.


And more generally: does calibration in one area transfer to other areas? If I'm calibrated on tech trends, will I also be calibrated on politics?