I am doing an Ask Me Anything. Work and other time constraints permitting, I intend to start answering questions on Sunday, 2020/07/05 12:01PM PDT.

__________________________

I am Top 20 (currently #11) out of 1000+ on covid-19 questions on the amateur forecasting website Metaculus. I also do fairly well in other prediction tournaments, and my guess is that my thoughts command a fair amount of respect in the nascent amateur forecasting space. Note that I am not a professional epidemiologist and have very little training in epidemiology and adjacent fields, so there are bound to be considerations I miss as an amateur forecaster.

I also do forecasting semi-professionally, though I will not be answering questions related to work. Other than forecasting, my past hobbies and experiences include an undergrad degree in economics and mathematics, a data science internship in the early days of Impossible Foods (a plant-based meat company), software engineering at Google, running the largest utilitarian memes page on Facebook, various EA meetups and outreach projects, long-form interviews of EAs on Huffington Post, lots of random thoughts on EA questions, and at one point being near the top of several obscure games.

For this AMA, I am most excited about answering high-level questions/reflections on forecasting (eg, what EAs get wrong about forecasting, my own past mistakes, outside views and/or expert deference, limits of judgmental forecasting, limits of expertise, why log-loss is not always the best metric, calibration, analogies between human forecasting and ML, why pure accuracy is overrated, the future of forecasting...), rather than doing object-level forecasts.

I am also excited to talk about interests unrelated to forecasting or covid-19. In general, you can ask me anything, though I might not be able to answer everything. All opinions are, of course, my own, and do not represent those of past, current or future employers.

Comments (80)

Most of the forecasting work covered in Expert Political Judgment and Superforecasting related to questions with time horizons of 1-6 months. It doesn't seem like we know much about the feasibility or usefulness of forecasting on longer timescales. Do you think longer-range forecasting, e.g. on timescales relevant to existential risk, is feasible? Do you think it's useful now, or do you think we need to do more research on how to make these forecasts first?

3
MichaelA🔸
I think this is a very important question. In case you haven't seen it, I'd point you to Luke Muehlhauser's post How Feasible Is Long-range Forecasting? (I'd highly recommend reading the whole post, and also the comments on the EA Forum link post.)
5
Stephen Clare
Yeah, I don't blame Linch for passing on this question, since I think the answer is basically "We don't know and it seems really hard to find out." That said, it seems that forecasting research has legitimately helped us get better at sussing out nonsense and improving predictions about geopolitical events. Maybe it can improve our epistemics on x-risks too. Given that there don't seem to be too many other promising candidates in this space, more work to gauge the feasibility of long-term forecasting and test different techniques for improving it seems like it would be valuable.
2
Linch
I agree with what you said at a high-level! Both that it's hard and that I'm bullish on it being plausibly useful. FWIW, I still intend to answer this question eventually, hopefully before the question becomes moot!
2
MichaelA🔸
Yeah, I share the view that that sort of research could be very useful and seems worth trying to do, despite the challenges. (Though I hold that view with relatively low confidence, due to having relatively little relevant expertise.)

Some potentially useful links: I discuss the importance and challenges of estimating existential risk in my EAGx lightning talk and Unconference talk, provide some other useful links (including to papers and to a database of all x-risk estimates I know of) in this post, and quote from and comment on a great recent paper here.

I think there are at least two approaches to investigating this topic: solicit new forecasts about the future and then see how calibrated they are, or find past forecasts and see how calibrated they were. The latter is what Muehlhauser did, and he found it very difficult to get useful results. But it still seems possible there'd be room for further work taking that general approach, so I mention it in a list of history topics that might be very valuable to investigate. Hopefully some historically minded EA has a crack at researching that someday! (Though of course that depends on whether it'd be more valuable than other things they could be doing.)

(One could also perhaps solicit new forecasts about what'll happen in some actual historical scenario, from people who don't know what ended up happening. I seem to recall Tetlock discussing this idea on 80k, but I'm not sure.)
3
Linch
Hi smclare! This is a very interesting question and I've been spending quite a bit of time mulling over it! Just want to let you know that me not answering (yet) is a result of me wanting to spend some time giving the question the gravity it deserves, rather than deliberately ignoring you!

What do you think helps make you a better forecaster than the other 989+ people?

What do you think makes the other ~10 people better forecasters than you?

Hey, I want to give a more directly informative answer later, but since this might color other people's questions too: I just want to flag that I don't think I'm a better forecaster than all the 989+ people below me on the leaderboards, and I also would not be surprised if I'm better than some of the people above me on the leaderboard. There are several reasons for this:

  • Reality is often underpowered. While medium-term covid-19 forecasting is less prone to this than many other EA questions, you still have a lot of fundamental uncertainty about how good you actually are. Being correct on one question often relies on a "bet" that's loosely correlated with being correct on another question. At or near the top, there are not enough questions to tell whether you just got lucky in a bunch of correlated ways that others slightly below you in the ranks got unlucky on, or whether you are actually more skilled. The differences are things like whether you "called" it correctly at 90% when others put 80%, or conversely whether you were well calibrated at 70% when others were overconfident (or just unlucky) at 90% (see the toy simulation after this list).
  • Metacul
... (read more)
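A toy simulation of the "reality is underpowered" point (an editorial illustration with made-up numbers, not something from the original thread): even a genuinely better-calibrated forecaster, given only a few dozen resolved questions, fails to come out ahead in a meaningful fraction of head-to-head Brier-score comparisons.

```python
import random

def brier(p, outcome):
    # Brier score for one binary forecast: (p - outcome)^2; lower is better.
    return (p - outcome) ** 2

def share_of_wins(n_questions=40, true_p=0.9, p_a=0.9, p_b=0.8, n_trials=10_000, seed=0):
    """How often forecaster A (always says 90%) beats forecaster B (always says 80%)
    on average Brier score, when events truly resolve "yes" 90% of the time.
    All parameters are illustrative assumptions."""
    rng = random.Random(seed)
    a_wins = 0
    for _ in range(n_trials):
        outcomes = [1 if rng.random() < true_p else 0 for _ in range(n_questions)]
        score_a = sum(brier(p_a, o) for o in outcomes) / n_questions
        score_b = sum(brier(p_b, o) for o in outcomes) / n_questions
        if score_a < score_b:  # lower Brier score = better
            a_wins += 1
    return a_wins / n_trials

print(share_of_wins())
# A is better calibrated and better in expectation, but with only 40 questions
# A still comes out ahead in well under 100% of trials: luck decides the rest.
```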

This was a lot of good discussion of epistemics, and I highly valued that, but I was also hoping for some hot forecasting tips. ;) I'll try asking the question differently.

6
Linch
I understood your intent! :) I actually plan to answer the spirit of your question on Sunday; I just decided to break my general plan to "not answer questions until my official AMA time" because I thought the caveat was sufficiently important to have in the open!
What do you think helps make you a better forecaster than the other 989+ people?

I'll instead answer this as:

What helps you have a higher rating than most of the people below you on the leaderboard?
  • I probably answered more questions than most of them.
  • I updated my forecasts more quickly than most of them, particularly in March and April.
    • Activity has consistently been shown to be one of (often, the) strongest predictors of overall accuracy in the academic literature.
  • I suspect I have a much stronger intuitive sense of probability/calibration.
    • For example, 17% (1:5) intuitively feels very different to me than 20% (1:4), and my sense is that this isn't too common (see the odds-conversion sketch after this list).
    • This could just be arrogance, however; there isn't enough data for me to actually check this for real predictions (as opposed to just calibration games).
  • I feel like I actually have lower epistemic humility compared to most forecasters who are top 100 or so on Metaculus. "Epistemic humility" defined narrowly as "willingness to make updates based on arguments I don't find internally plausible just because others believed them."
    • Caveat is that I'm making this comparison solely to top X% (in
... (read more)
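An illustrative odds-conversion sketch for the 17%-vs-20% point above (an editorial addition; the helper name is arbitrary):

```python
def odds_against(p):
    """A probability p corresponds to odds of roughly 1 : odds_against(p)."""
    return (1 - p) / p

for p in (0.17, 0.20, 0.25):
    print(f"{p:.0%} ~ 1:{odds_against(p):.1f}")
# 17% ~ 1:4.9, 20% ~ 1:4.0, 25% ~ 1:3.0
# In odds space, 17% vs. 20% is roughly "one extra miss per hit", which can feel
# like a bigger difference than three percentage points suggests.
```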

1.) This is amazing, thank you. Strongly upvoted - I learned a lot.

2.) Can we have an AMA with JGalt where he teaches us how to read all the news?

Non-forecasting question: have you ever felt like an outsider in any of the communities you consider yourself to be a part of?

Yes, I think feeling like you don't belong is just a pretty normal part of being human! So from the outside, this should very much be expected.

But specifically:

  • I think of myself as very culturally Americanized (or perhaps more accurately EA + general "Western internet culture"), so I don't really feel like I belong among Chinese people anymore. However, I also have a heavy (Chinese) accent, so I think I'm not usually seen as "one of them" among Americans, or perhaps Westerners in general.
    • I mitigate this a lot by hanging out in a largely international group. I also strongly prefer written communication to talking, especially with strangers, likely in large part for this reason (though it's usually not conscious).
    • I also keep meaning to train my accent, but get lazy about it.
  • I think most EAs are natives to, for want of a better word, "elite culture":
    • eg they went to elite high schools and universities,
    • they may have done regular travel in the summers,
    • most, but not all, of them had rich or upper-class parents even by Western standards
    • Some of them go to Burning Man and take recreational drugs
    • Some of them are much more naturally comforta
... (read more)

Is forecasting plausibly a high-value use of one's time if one is a top-5% or top-1% forecaster?

What are the most important/valuable questions or forecasting tournaments for top forecasters to forecast or participate in? Are they likely questions/tournaments that will happen at a later time (e.g. during a future pandemic)? If so, how valuable is it to become a top forecaster and establish a track record of being a top forecaster ahead of time?

3
Linch
Yes, it's plausible. My sense is that right now there's a market mismatch, with an oversupply of high forecasting talent relative to direct demand/actual willingness/ability to use said talent. I'm not sure why this is; intuitively, there are so many things in the world where having a higher-precision understanding of our uncertainty is just extremely helpful. One thing I'd love to do is help figure out how to solve this and find lots of really useful things for people to forecast on.

Here are a ton of questions; pick your favourites to answer. What's your typical forecasting workflow like? Subquestions:

  • Do you tend to make guesstimate/elicit/other models, or mostly go qualitative? If this differs for different kinds of questions, how?

  • How long do you spend on initial forecasts and how long on updates? (Per question and per update would both be interesting)

  • Do you adjust towards the community median and if so how/why?

More general forecasting:

  • What's the most important piece of advice for new forecasters that isn't contained in Tetlock's Superforecasting?

  • Do you forecast everyday things in your own life other than Twitter followers?

  • What unresolved question are you furthest from the community median on?

  • If you look at your forecasting mistakes, do they have a common thread?
  • How is your experience acquiring expertise at forecasting similar or different to acquiring expertise in other domains, e.g. obscure board games? How so?
  • Any forecasting resources you recommend?
  • Who do you look up to?
  • How does the distribution of skill vs. hours of effort look for forecasting, for you?
  • Do you want to wax poetic or ramble disorganizedly about any aspects of forecasting?
  • Any secrets of reality you've discovered & which you'd like to share?
If you look at your forecasting mistakes, do they have a common thread?

A botched Tolstoy quote comes to mind:

Good forecasts are all alike; every mistaken forecast is wrong in its own way

Of course that's not literally true. But when I reflect on my various mistakes, it's hard to find a true pattern. To the extent there is one, I'm guessing that the highest-order bit is that many of my mistakes are emotional rather than technical. For example,

  • doubling down on something in the face of contrary evidence,
  • or at least not updating enough because I was arrogant,
  • getting burned that way and then updating too much from minor factors
  • "updating" from a conversation because it was socially polite to not ignore people rather than their points actually being persuasive, etc.

If the emotion hypothesis is true, to get better at forecasting, the most important thing might well be to look inwards, rather than, say, a) learning more statistics or b) acquiring more facts about the "real world."

3
Davidmanheim
I think that as you forecast different domains, more common themes can start to emerge. And I certainly find that my calibration is off when I feel personally invested in the answer. And re: how skill and effort trade off, I would say there's a sharp cutoff in terms of needing a minimal level of understanding (which seems to be fairly high, but certainly isn't above, say, the 10th percentile). After that, it's mostly effort, and skill that is gained via feedback.
4
Linch
Just FYI, I do not consider myself an "expert" on forecasting. I haven't put my 10,000 hours in, and my inside view is that there's so much ambiguity and confusion about so many different parameters. I also basically think judgmental amateur forecasting is a nascent field and there are very few experts[1], with the possible exception of the older superforecasters. Nor do I actually think I'm an expert in those games, for similar reasons. I basically think "amateur, but first (or 10th, or 100th, as the case might be) among equals" is a healthier and more honest presentation.

That said, I think the main commonalities for acquiring skill in forecasting and obscure games include:
  • Focus on generalist optimization for a well-specified score in a constrained system
    • I think it's pretty natural for both humans and AI to do better in more limited scenarios.
    • However, I think in practice, I am much more drawn to those types of problems than my peers (eg I have a lower novelty instinct and I enjoy optimization more).
  • Deliberate practice through fast feedback loops
    • Games often have feedback loops on the order of tens of seconds/minutes (Dominion) or hundreds of milliseconds/seconds (Beat Saber)
    • Forecasting has slower feedback loops, but often you can form an opinion in <30 minutes (sometimes <3 if it's a domain you're familiar with), and have it checked in a few days.
    • In contrast, the feedback loops for other things EAs are interested in are often much slower. For example, research might have initial projects on the span of months and have them checked in the span of years; architecture in software engineering might take days to do and weeks to check (and sometimes the time to check is never)
  • Focus on easy problems
    • For me personally, it's often easier to get "really good" in less-contested domains than kinda good in very contested domains
    • For example, I got quite good at Dominion but I bounced pretty quickly off Magic,

What do EAs get wrong about forecasting?

I think the biggest is that EAs (definitely including myself before I started forecasting!) often underestimate the degree to which judgmental forecasting is very much a nascent, pre-paradigm field. This has a lot of knock-on effects, including but not limited to:

  • Thinking that the final word on forecasting is the judgmental forecasting literature
    • For example, the forecasting research/literature is focused entirely on accuracy, which has its pitfalls.
    • There are many fields of human study that do things like forecasting, even if it's not always called that, including but not limited to:
      • Weather forecasting (where Brier score came from!)
      • Intelligence analysis
      • Data science
      • Statistics
      • Finance
      • some types of consulting
      • insurance/reinsurance
      • epidemiology
      • ...
        • More broadly, any quantified science needs to make testable predictions
  • Over-estimating how much superforecasters "have it figured out"
  • Relatedly, overestimating how much other good forecasters/aggregation platforms have things figured out.
    • For example, I think some people over-estimate the added accuracy of prediction markets like PredictIt, or aggregation engines like Metaculus/GJO, or that of top
... (read more)

So you've done quite a few different things. Right now, would you rather go into research or entrepreneurship, and why?

I would like to hear your thoughts on the generalist vs. specialist debate.

    • Advice for someone early as a generalist?
    • Did you stumble upon these different fields of interest on your own, or did you surround yourself with smart people to get good understandings of various fields?
    • Thoughts on impact comparisons? (Eg can a generalist maybe bring knowledge/wisdom from intuitively non-adjacent disciplines into a project and help advance it?)
    • What skills are you lacking, or which ones would you like to acquire, to become a "Jack of all trades"?
    • Are you aiming to become even more of a generalist? Yes or no; please elaborate.

Hmm, this doesn't answer any of your questions directly, but it might be helpful context to set: my impression is that relatively few people actually set out to become generalists! I think it's more accurate to think of some people as being willing to do what needs to get done (or doing things they find interesting, or that have high exploration value, or a myriad of other reasons). And if those things keep seeming like highly impactful things to do (or continue to be interesting, have high learning/exploration value, etc), they keep doing them, and then eventually become specialists in that domain.

If this impression is correct, specialists start off as generalists who eventually specialize more and more, though when they start specializing might vary a lot. (Some people continue to be excited about the first thing they tried, so are set on their life path by the time they are 12. Others might have tried 30 different things before settling on the right one.)

(I obviously can't speak for other EAs; these are just my own vague impressions. Don't take it too seriously, etc)

Advice for someone early as a generalist?

Hmm, I don't feel too strongly about this... (read more)

I vaguely recall hearing something like 'the skill of developing the right questions to pose in forecasting tournaments is more important than the skill of making accurate forecasts on those questions.' What are your thoughts on this and the value of developing questions to pose to forecasters?

4
Linch
Yeah I think Tara Kirk Sell mentioned this on the 80k podcast. I think I mostly agree, with the minor technical caveat that if you were trying to get people to forecast numerical questions, getting the ranges exactly right matters more when you have buckets (like in the JHU Disease Prediction Project that Tara ran, and Good Judgement 2.0), but asking people to forecast a distribution (like in Metaculus) allows the question asker to be more agnostic about ranges. Though the specific thing I would agree with is something like: I think other elements of the forecasting pipeline plausibly matter even more, which I talked about in my answer to JP's question.
6
Davidmanheim
"The right question" has 2 components. First is that the thing you're asking about is related to what you actually want to know, and second is that it's a clear and unambiguously resolvable target. These are often in tension with each other. One clear example is COVID-19 cases - you probably care about total cases much more than confirmed cases, but confirmed cases are much easier to use for a resolution criteria. You can make more complex questions to try to deal with this, but that makes them harder to forecast. Forecasting excess deaths, for example, gets into whether people are more or less likely to die in a car accident during COVID-19, and whether COVID reduction measures also blunt the spread of influenza. And forecasting retrospective population percentages that are antibody positive runs into issues with sampling, test accuracy, and the timeline for when such estimates are made - not to mention relying on data that might not be gathered as of when you want to resolve the question.

Can you give your reflections on the limits of expertise?

Relatedly, on the nature of expertise. What's the relative importance of domain-specific knowledge and domain-general forecasting abilities (and which facets of those are most important)?

What should a typical EA who is informed on the standard forecasting advice do if they actually want to become good at forecasting? What did you do to hone your skill?

My guess is to just forecast a lot! The most important part is probably just practicing a lot and evaluating how well you did.

Beyond that, my instinct is that the closer you can get to deliberate practice, the more you can improve. My guess is that there are multiple desiderata that are hard to satisfy all at once, so you do have to make some tradeoffs between them. (An illustrative calibration-scoring sketch follows the list below.)

  • As close to the target domain of what you actually care about as possible. For example, if you care about having accurate forecasts on which psychological results are true, covid-19 tournaments or geo-political forecasting are less helpful than replication markets.
  • Can answer lots of questions and have fast feedback loops. For example, if the question you really care about is "will humans be extinct by 3000 AD?" you probably want to answer a bunch of other short term questions first to build up your forecasting muscles to actually have a better sense of these harder questions.
  • Can initially be easy to evaluate well. For example, if you want to answer "will AI turn out well?" it might be helpful to answer a bunch of easy-to-evaluate questions first and grade them.
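As an illustration of the "evaluate how well you did" step (an editorial sketch with hypothetical data, not a tool mentioned in the thread), one simple check is to bucket your resolved forecasts by stated probability and compare against the realized frequency:

```python
from collections import defaultdict

def calibration_table(forecasts):
    """forecasts: iterable of (stated_probability, outcome) pairs, outcome in {0, 1}.
    Groups forecasts into 10%-wide bins and compares stated vs. realized frequency."""
    buckets = defaultdict(list)
    for p, outcome in forecasts:
        edge = min(int(round(p * 100)) // 10, 9) * 10  # lower edge of the bin, in percent
        buckets[edge].append((p, outcome))
    rows = []
    for edge in sorted(buckets):
        items = buckets[edge]
        mean_p = sum(p for p, _ in items) / len(items)
        freq = sum(o for _, o in items) / len(items)
        rows.append((edge, mean_p, freq, len(items)))
    return rows

# Hypothetical resolved forecasts: (stated probability, did it happen?)
history = [(0.9, 1), (0.85, 1), (0.9, 0), (0.6, 1), (0.55, 0), (0.2, 0), (0.15, 0), (0.25, 1)]
for edge, mean_p, freq, n in calibration_table(history):
    print(f"{edge}-{edge + 10}%: said {mean_p:.0%}, happened {freq:.0%} (n={n})")
```

For a well-calibrated forecaster, the "said" and "happened" columns converge as the number of resolved questions grows; large gaps in well-populated buckets are the thing to look at.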

In case you're not aware of t... (read more)

1
agent18
What sort of training material did you use to predict and get feedback on (#deliberate practice)?
7
Linch
I mostly just forecasted the covid-19 questions on Metaculus directly. I do think predicting covid early on (before May?) was a near-ideal epistemic environment for this, because of various factors. The feedback cycles (maybe several times a week for some individual questions) are still slower than what the deliberate practice research was focused on (specific techniques in arts and sports with sub-minute feedback), but much, much better than for other plausibly important things. I probably also benefited from practice through the South Bay EA meetups[1] and the Open Phil calibration game[2].

[1] If going through all the worksheets is intimidating, I recommend just trying this one (start with "Intro to forecasting" and then do the "Intro to forecasting worksheet"). EDIT 2020/07/04: Fixed worksheet.

[2] https://www.openphilanthropy.org/blog/new-web-app-calibration-training

How many Twitter followers will you have next week?

6
Davidmanheim
I already said I'd stop messing with him now.
3
EdoArad
Didn't you just violate that?

Why is pure accuracy overrated?

3
Linch
There's a bunch of things going on here, but roughly speaking I think there are at least two:
  • When people think of "success from judgmental forecasting", they usually think of a narrow thing that looks like the end product of the most open part of what Metaculus and Good Judgement.* do: coming up with good answers to specific, well-defined and useful questions. But a lot of the value of forecasting comes before and after that.
  • Even in the near-ideal situation of specific, well-defined forecasts, there are often metrics other than pure accuracy (beyond a certain baseline) that matter more.

For the first point, Ozzie Gooen (and I'm sure many other people) has thought a lot more about this. But my sense is that there's a long pipeline of things that makes a forecast actually useful for people:
  • Noticing quantifiable uncertainty. I think a lot of the value of forecasting comes from the pre-question operationalization stage. This is being able to recognize both that something that might be relevant to a practical decision that you (or a client, or the world) rely on is a) uncertain and b) can be reasonably quantified. A lot of our assumptions we do not recognize as such, or the uncertainty is not crisp enough that we can even think of it as a question we can ask others.
  • Data collection. Not sure where this fits in the pipeline, but often precise forecasts of the future are contextualized in the world of the relevant data that you have.
  • Question operationalization. This is what William Kiely's question is referring to, which I'll answer in more detail there. But roughly, it's making your quantifiable uncertainty into a precise, well-defined question that can be evaluated and scored later.
  • Actual judgmental forecasting. This is mostly what I did, what the leaderboards are ranked on, and what people think about when they think about "forecasting."
  • Making those forecasts useful. If this is for yourself, it's usually ea

Are you using or do you plan to use your forecasting skills for investing?

9
Linch
No. At a high level, I don't think I'm that good at forecasting, and the bar for being better at day-to-day investing than the Efficient Market Hypothesis must be really hard to beat. Also, finding financial market inefficiencies is very much not neglected, so even if by some miracle I discovered some small inefficiency, I doubt the payoff would be worth it, relative to finding more neglected things to forecast on.

At a lower level, the few times I actually attempted to do forecasting on economic indicators, I did much worse than even I expected. For example, I didn't predict the May jobs rally, and I'm also still pretty confused about why the S&P 500 is so high now.

I think it's possible for EAs to sometimes predictably beat the stock market without intense effort. However, the way to do this isn't by doing the typical forecaster thing of having a strong intuitive sense of probabilities and doing the homework (because that's the bare minimum that I assume everybody in finance has). Rather, I think the thing to maybe focus on is that EAs and adjacent communities in a very real sense "live in the future." For example, I think covid and the rise of Bitcoin were both moderately predictable way earlier than the stock market caught on (in Bitcoin's case, not that it would definitely take off, but it would have been reasonable to assign >1% chance of it taking off), and in fact both were predicted by people in our community. So we're maybe really good in relative terms at having an interdisciplinary understanding of discontinuities/black swans that only touch finance indirectly.

The financial world will be watching for the next pandemic, but maybe the next time we see the glimmers of something real and big on the horizon (localized nuclear war, AI advances, some large technological shift, something else entirely?), we might be able to act fast and make a lot of (in expectation) money. Or at least lose less money by encouraging our friends and EAs with lots of financial

Thanks for the answer. Makes sense!

I'm also still pretty confused about why the S&P 500 is so high now.

Some possible insight: the NASDAQ is doing even better (it's at its all-time high and wasn't hit as hard initially), and the equal-weight S&P 500 is doing worse than the regular S&P 500 (which weights by market cap). This tells me that disproportionately large companies (and tech companies) are still growing pretty fast. Some of these companies may even have benefitted in some ways, like Amazon (online shopping and streaming) and Netflix (streaming).

20% of the S&P 500 is Microsoft, Apple, Amazon, Facebook and Google. Only Google is still down from its pre-crash February peak; the rest are up 5-15%, other than Amazon (4% of the S&P 500), which is up 40%!

Say an expert (or a prediction market median) is much stronger than you, but you have a strong inside view. What's your thought process for validating it? What's your thought process if you choose to defer?

5
Linch
I know this isn't the answer you want, but I think the short answer here is that I really don't know, because I don't think this situation is common, so I don't have a good reference class/list of case studies to describe how I'd react in it. If this were to happen often for a specific reference class of questions (where some people just very obviously do better than me on those questions), I imagine I'd quickly get out of the predictions business for those questions, and start predicting on other things instead.

As a forecaster, I'm mostly philosophically opposed to updating strongly (arguably at all) based on other people's predictions. If I updated strongly, I worry that this would cause information cascades. However, if I were in a different role, eg making action-relevant decisions myself, or "representing forecasters" to decision-makers, I might try to present a broader community view, or highlight specific experts.

Past work on this includes the comments on Greg Lewis's excellent EA Forum article on epistemic modesty, Scott Sumner on why the US Fed should use market notions of monetary policy rather than what the chairperson of the Fed believes, and Immanuel Kant's notion of public vs. private uses of reason. I also raised this question on Metaculus.
4
Linch
Footnote on why I think this scenario is uncommon in practice: the ideal example in my head for showcasing what you describe goes something like this:
  • An expert/expert consensus/prediction market median that I respect strongly (as predictors) puts high probability on X.
  • I strongly believe not-X (or equivalently, put very low probability on X).
  • I have strong inside views for why I believe not-X.
  • X is the answer to a well-operationalized question
    • with a specific definition...
    • that everybody agrees on the definition of.
  • I learned about the expert view very soon after they made it.
  • I do not think there is new information that the experts are not updating on.
  • The question resolves in the near future, in a context where I have both inside-view and outside-view confidence in our relative track records (in either direction).

I basically think that there are very few examples of situations like this, for various reasons:
  • For starters, I don't think I have very strong inside views on a lot of questions.
    • Though sometimes the outside views look something like "this simple model predicts stuff around X, and the outside view is that this class of simple models outpredicts both experts and my own more complicated models."
      • Eg, 20 countries have curves that look like this, and I don't have enough Bayesian evidence that this particular country's progression will be different.
    • There are also weird outside views on people's speech acts; for example, "our country will be different" is on a meta-level something that people from many countries believe, and this conveys almost no information.
    • These outsideish views can of course be wrong (for example I was wrong about Japan and plausibly Pakistan).
    • Unfortunately, what is and isn't a good outside view is often easy to self-hack by accident.
    • Note that outside view doesn't necessarily look like expert deference.
  • Usually if there are experts or other aggregations whose opinion as for

Lots of EAs seem pretty excited about forecasting, and especially how it might be applied to help assess the value of existential risk projects. Do you think forecasting is underrated or overrated in the EA community?

Good forecasts seem kind of like a public good to me: valuable to the world, but costly to produce and the forecaster doesn't benefit much personally. What motivates you to spend time forecasting?

When I look at most forecasting questions, they seem Goodhart-y in a very strong sense. For example, the Goodhart tower for COVID might look something like:

1. How hard should I quarantine?

2. How hard I should quarantine is affected by how "bad" COVID will be.

3. How "bad" COVID should be caches out into something like "how many people", "when vaccine coming", "what is death rate", etc.

By the time something I care about becomes specific enough to be predictable/forecastable, it seems like most of the thing I actually cared about has been lost.

Do you have a sense of how questions can be better constructed to lose less of the thing that might have inspired the question?

Meta: Wow, thanks a lot for these questions. They're very insightful and have made me think a lot, please keep the questions (and voting on them) coming! <3

It turns out I had some prior social commitments on Sunday that I forgot about, so I'm going to start answering these questions tonight plus Saturday, and maybe Friday evening too.

But *please* don't feel discouraged from continuing to ask questions; reading these questions has been a load of fun and I might keep answering things for a while.

4
Linch
Okay, I answered some questions! All the questions are great, keep them coming! If you have a highly upvoted question that I have yet to answer, then it's because I thought answering it was hard and I need to think more before answering! But I intend to get around to answering as many questions as I can eventually (especially highly upvoted ones!)

What do you think you do that other forecasters don't do?

[anonymous]

What news sites, data sources, and/or experts have you found to be most helpful for informing your forecasts on COVID-19?

For Covid-19 spread, what seems to be the relative importance of: 1) climate, 2) behaviour, and 3) seroprevalence?

9
Linch
Tl;dr: In the short run (a few weeks), seroprevalence; in the medium run (months), behavior. In the long run, likely behavior as well, but other factors like wealth and technological access might start to dominate in hard-to-predict ways.

Thanks for the question! When I made this AMA, I was worried that all the questions would be about covid. Since there's only one, I might as well devote a bunch of time to it.

There are of course factors other than those three, unless you stretch "behavior" to be maximally inclusive. For example, having large family sizes in a small house means it's a lot harder to control disease spread within the home (in-house physical distancing is basically impossible if 7 people live in the same room). Density (population-weighted) more generally probably means it's harder to control disease spread. One large factor is state capacity, which I operationalize roughly as "to the extent your gov't can be said to be a single entity, how much can it carry out the actions it wants to carry out." Poverty and sanitation norms more generally likely matter a lot, though I haven't seen enough data to be sure. Among high-income countries, I also will not be surprised if within-country inequality is a large factor, though I am unsure what the causal mechanism will be.

On the timescale you need to think about for prioritizing hospital resources and other emergency measures, aka "the short run" of say a few weeks, seroprevalence of the virus (how many people are infected and infectious) dominates by a very large margin. There's so much we still don't know about how the disease spreads, so I think (~90%) by far the most predictive factors for how many cases there will be in a few weeks are high-level questions like how many people are currently infected and what the current growth rate is, with a few important caveats like noting that confirmed infections definitely do NOT equal active infections.

In the medium run (2+ months), I think (~85%), at least if I
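A minimal sketch of why current prevalence and the growth rate dominate at a few-weeks horizon (an editorial illustration; all inputs are placeholders, not real data for any country): a naive constant-growth extrapolation already captures most of what a short-run case forecast is doing, and small errors in the growth-rate estimate compound quickly.

```python
def naive_case_projection(current_active, weekly_growth_rate, weeks):
    """Project active infections forward assuming a constant weekly growth rate.
    Purely illustrative; real forecasts adjust for reporting lags, behavior change, etc."""
    return current_active * (1 + weekly_growth_rate) ** weeks

# Placeholder inputs.
current_active = 50_000
weekly_growth_rate = 0.25  # +25% per week
for weeks in (1, 2, 3, 4):
    projected = naive_case_projection(current_active, weekly_growth_rate, weeks)
    print(f"week {weeks}: ~{projected:,.0f} active infections")
```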

Forecast your win probability in a fight against:

500 horses, each with the mass of an average duck.

1 duck, with the mass of an average horse.

(numbers chosen so mass is roughly equal)

5
Linch
I actually answered this before, on the meme page, and I still stand by it: maybe 85% that I can win against the duck, and 20% against the horses? Depends a lot on the initial starting position, of course.
1
Engineer_Jayce314
Speaking of gut instincts, cognitive psychology looks A LOT into the forms gut instincts take and how they fool us into bad answers or bad lines of reasoning; they'd call them cognitive biases. When building models, how do you ensure that there is as little of this bias as possible in the model? To add to that, does some of the uncertainty you mentioned in other answers come from these biases, or is it purely statistical?

How important do you think it is that your or others' forecasts are better understood or valued among policy-makers? And if you think they should listen to forecasts more often, how do you think we should go about making them more aware?

I'm very motivated to make accurate decisions about when it will be safe for me to see the people I love again. I'm in Hong Kong and they're in the UK, though I'm sure readers will prefer generalizable stuff. Do you have any recommendations about how I can accurately make this judgement, and who or what I should follow to keep it up to date?

2
Linch
For your second question: within our community, Owain Evans seems to have good thoughts on the UK. alexrj (on this forum) and Vidur Kapur are based in the UK and both do forecasting pretty actively, so they presumably have reasonable thoughts/internal models about different covid-19 related issues for the UK. To know more, you probably want to follow UK-based domain experts too. I don't know who the best epidemiologists to follow in the UK are, though you can probably figure this out pretty quickly from who Owain/Alex/Vidur listen to.

For your first question, I have neither a really good generalizable model nor object-level insights to convey at this moment, sorry. I'll update you if something comes up!

As someone with some fuzzy reasons to believe in my own judgement, but little explicit evidence of whether I would be good at forecasting or not, what advice do you have for figuring out if I would be good at it, and how much do you think it's worth focusing on?

How much time do you spend forecasting? (Both explicitly forecasting on Metaculus and maybe implicitly doing things related to forecasting, though the latter I suspect is currently a full-time job for you?)

How optimistic are you about "amplification" forecast schemes, where forecasters answer questions like "will a panel of experts say <answer> when considering <question> in <n> years?"

I've recently gotten into forecasting and have also been a strategy game addict/enthusiast at several points in my life. I'm curious about your thoughts on the links between the two:

  • How correlated is skill at forecasting and strategy games?
  • Does playing strategy games make you better at forecasting?
3
Linch
I’m not very good at strategy games, so hopefully not much! The less quippy answer is that strategy games are probably good training grounds for deliberate practice and quick optimization loops, so that likely counts for something (see my answer to Nuno about games). There are also more prosaic channels, like general cognitive ability and willingness to spend time in front of a computer. I’m guessing that knowing how to do deliberate practice and getting good at a specific type of optimization is somewhat generalizable, and it's good to do that in something you like (though getting good at things you dislike is also plausibly quite useful). I think specific training usually trumps general training, so I very much doubt playing strategy games is the most efficient way to get better at forecasting, unless maybe you’re trying to forecast results of strategy games.

What were your reasons for getting more involved in forecasting?

Hi Linch! So what's up with the Utilitarian Memes page? Can you tell more about it? Any deep lessons from utilitarian memes?

Do you think people who are bad at forecasting or related skills (e.g. calibration) should try to become mediocre at it? (Do you think people who are mediocre should try to become decent but not great? etc.)

What's your process like for tackling a forecast?

Do you think forecasting has a place in improving the decision making in business?

How much time do you spend on forecasting, including researching the topics?

[anonymous]

Forecasting has become slightly prestigious in my social circle. At current margins of forecastingness, this seems like a good thing. Do you predict much corruption or waste if the hobby got much more prestigious than it currently is? This question is not precise and comes from a soup of vaguely-related imagery.

In what meaningful ways can forecasting questions be categorized?

This is really broad, but one possible categorization might be questions that have inside view predictions versus questions that have outside view predictions.

I will forecast a personal question for you, e.g. "How many new friends will I make this year?" What do you want to ask me?

2
Linch
In 2021, what percentage of my working hours will I spend on things that I would consider to be forecasting or forecasting-adjacent?
2
jungofthewon
I'll make a distribution. Do you want to make a distribution too and then we can compare?
6
jungofthewon
My distribution: https://elicit.ought.org/builder/RT9kxWoF9 Good question, Linch; it had a fun mix of investigative LinkedIn sleuthing + decomposition + reasoning about Linch + thoughts that I could sense others might disagree with.

Thanks for doing this AMA! In case you still might answer questions, I'm curious as to how much value you think there'd be in: 

  • further research into forecasting techniques
  • improving existing forecasting tools and platforms
  • developing better tools and platforms

E.g., if someone asked you for advice on whether to do work in academia similar to Tetlock's work, or build things like Metaculus or calibration games, or do something else EAs often think is valuable, what might you say? 

(I ask in part because you wrote about judgemental forecasting being "ve

... (read more)

Oftentimes, it seems to me, machine learning models reveal solutions or insights that, while researchers may have known them already, are actually closely linked to the problem they're modelling. In your experience, does this happen often with ML? If so, does that mean ML is a very good tool to use in Effective Altruism? If not, then where exactly does this tendency come from?


(As an example of this 'tendency', this study used neural networks to find that estrogen exposure and folate deficiency were closely correlated to breast cancer. Source: https://www.sciencedirect.com/science/article/abs/pii/S0378111916000706 )

Which types of forecasting questions do you like / dislike more?
