I am doing an Ask Me Anything. Work and other time constraints permitting, I intend to start answering questions on Sunday, 2020/07/05 12:01PM PDT.
__________________________
I am in the Top 20 (currently #11) out of 1,000+ forecasters on covid-19 questions on the amateur forecasting website Metaculus. I also do fairly well on other prediction tournaments, and my guess is that my thoughts are given a fair amount of respect in the nascent amateur forecasting space. Note that I am not a professional epidemiologist and have very little training in epidemiology and adjacent fields, so there are bound to be considerations I will miss as an amateur forecaster.
I also do forecasting semi-professionally, though I will not be answering questions related to work. Other than forecasting, my past hobbies and experiences include an undergraduate degree in economics and mathematics, a data science internship in the early days of Impossible Foods (a plant-based meat company), software engineering at Google, running the largest utilitarian memes page on Facebook, various EA meetups and outreach projects, long-form interviews of EAs on Huffington Post, lots of random thoughts on EA questions, and at one point being near the top of several obscure games.
For this AMA, I am most excited about answering high-level questions/reflections on forecasting (e.g., what EAs get wrong about forecasting, my own past mistakes, outside views and/or expert deference, limits of judgmental forecasting, limits of expertise, why log-loss is not always the best metric, calibration, analogies between human forecasting and ML, why pure accuracy is overrated, the future of forecasting...), rather than doing object-level forecasts.
I am also excited to talk about interests unrelated to forecasting or covid-19. In general, you can ask me anything, though I might not be able to answer everything. All opinions are, of course, my own, and do not represent those of past, current or future employers.
My guess is to just forecast a lot! The most important part is probably practicing a lot and then evaluating how well you did.
Beyond that, my instinct is that the closer you can get to deliberate practice, the more you can improve. My guess is that there are multiple desiderata that are hard to satisfy all at once, so you do have to make some tradeoffs between them.
In case you're not aware of this, I think there's also some evidence that calibration games, like OpenPhil's app, are pretty helpful.
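As a rough sketch of what that kind of calibration feedback looks like (my own illustration, with made-up numbers, not a description of any particular app), you can bucket your past forecasts by stated probability and compare each bucket's average stated probability to how often the event actually happened:

```python
# Minimal calibration check: bucket forecasts by stated probability and
# compare the average stated probability in each bucket to the observed
# frequency of the event. The (probability, outcome) pairs are made up.
from collections import defaultdict

forecasts = [(0.7, 1), (0.6, 0), (0.9, 1), (0.2, 0), (0.3, 1), (0.8, 1)]

buckets = defaultdict(list)
for p, outcome in forecasts:
    buckets[round(p, 1)].append((p, outcome))

for bucket in sorted(buckets):
    pairs = buckets[bucket]
    avg_p = sum(p for p, _ in pairs) / len(pairs)
    freq = sum(o for _, o in pairs) / len(pairs)
    print(f"stated ~{avg_p:.0%}: happened {freq:.0%} of the time (n={len(pairs)})")
```

If the "happened" side is consistently lower than your stated probabilities, you're overconfident; if it's consistently higher, you're underconfident.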
Being metacognitive and reflecting on your mistakes likely helps too.
In particular, beyond just calibration, you want a strong internal sense of when and how much your forecasts should update based on new information. If you update too much, that's probably evidence that your beliefs should be closer to the naive prior (if you went from 20% to 80% to 20% to 75% to 40% in one day, you probably didn't really believe it was 20% to start with). If you update too little, then maybe your bar of evidence for changing your mind is too high.
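To make "how much should I update?" a bit more concrete, here's a small sketch (my own illustration, not part of the original answer) of a Bayesian update in odds form: the justified size of an update depends on the likelihood ratio of the new evidence, i.e., how much more likely the evidence is if the thing is true than if it's false.

```python
# Bayesian update in odds form: posterior odds = prior odds * likelihood ratio.
# The numbers below are illustrative only.
def update(prior_p: float, likelihood_ratio: float) -> float:
    """Posterior probability after seeing evidence with the given likelihood ratio."""
    prior_odds = prior_p / (1 - prior_p)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

print(update(0.20, 16))  # 0.8   -- going from 20% to 80% requires 16:1 evidence
print(update(0.20, 2))   # ~0.33 -- weak (2:1) evidence only justifies 20% -> ~33%
```

Repeatedly swinging between 20% and 80% within a day would mean you keep seeing roughly 16:1 evidence in alternating directions, which is rarely what's actually happening.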
Before I started forecasting seriously, I attended several forecasting meetups that my co-organizer of South Bay Effective Altruism ran. Maybe going through the worksheets will be helpful here?
One thing I did that was pretty extreme was following a lot of the forecasting-relevant details of covid-19 very closely. I didn't learn a lot of theoretical epidemiology, but when I was most "on top of things" (I think around late April to early May), I was closely following the disease trajectories, policies, and data ambiguities of ~20 different countries. I also read pretty much every halfway decent paper on covid-19 fatality rates that I could find, and skimmed the rest.
I think this is really extreme and I suspect very few forecasters do it to that level. Even I stopped trying to keep up because it was getting to be too much (and I started forecasting narrower questions professionally, plus had more of a social life). However, I think it is generally the case that forecasters know quite a lot of specific details about the thing they're forecasting: nowhere near as much as subject matter experts, but with a lot more focus on the forecasting-relevant details, as opposed to grand theories or interesting frontiers of research.
That being said, I think it's plausible that a lot of this knowledge is a spandrel and not actually that helpful for making forecasts. This answer is already too long, but I might go into more detail about why I believe factual knowledge is a little overrated in other answers.
I also think that by the time I started forecasting seriously, I probably had a large leg up because (as many of you know) I spend a lot of my time arguing online. I highly doubt it's the most effective way to train forecasting skills (see the first point above), and I'm dubious it's a good use of time in general. However, if we ignore efficiency, I definitely think the way I argued/communicated was a decent way to build above-average general epistemology and understanding of the world.
Other forecasters often have backgrounds (whether serious hobbies or professional expertise) in areas that require or strongly benefit from a strong intuitive understanding of probability. Examples include semi-professional poker, working in finance, data science, some academic subfields (e.g., AI, psychology), and sometimes domain expertise (e.g., epidemiology).
It is unclear to me how much of this is selection effects vs. training, but I suspect that at this stage, a lot of the difference in forecasting success (>60%?) is explained by practice and training, or just literally forecasting a lot.