I wrote a draft outline on bottlenecks to more impactful crowd forecasting that I decided to share in its current form rather than clean up into a post.
I really enjoyed your outline, thank you! I have a few questions/notes:
Please note that I have no real relevant background (and am neither a forecast stakeholder nor a proper forecaster).
Hi Lizka, thanks for your feedback! I think it touched on some of the sections that I'm most unsure about / could most use some revision, which is great!
[Bottlenecks] You suggest "Organizations and individuals (stakeholders) making important decisions are willing to use crowd forecasting to help inform decision making" as a crucial step in the "story" of crowd forecasting’s success (the "pathway to impact"?) --- this seems very true to me. But then you write "I doubt this is the main bottleneck right now but it may be in the future" (and don't really return to this).
I'll say up front it's possible I'm just wrong about the importance of the bottleneck here, and I think it also interacts with the other bottlenecks in a tricky way. E.g. if there were a clearer pipeline for creating important questions which get very high quality crowd forecasts which then affect decisions, more organizations would be interested.
That being said, my intuition that this is not the bottleneck comes from some personal experiences I've had with forecasts solicited by orgs that already are interested in using crowd forecasts to inform decision making. Speaking from the perspective of a forecaster, I personally wouldn't have trusted the forecasts produced as an input into important decisions.
Some examples: [Disclaimer: These are my personal impressions. Creating impactful questions and incentivizing forecaster effort is really hard and I respect OP//RP/Metaculus a lot for giving it a shot, and would love to be proven wrong about the impact of current initiatives like these]
So my argument is: given that AFAIK we haven't had consistent success using crowd forecasts to help institutions make important decisions, the main bottleneck seems to be helping the interested institutions rather than getting more institutions interested.
If, say, the CDC (or important people there, etc.) were interested in using Metaculus to inform their decision-making, do you think they would be unable to do so due to a lack of interest (among forecasters) and/or a lack of relevant forecasting questions? (But then, could they not suggest questions they felt were relevant to their decisions?) Or do you think that the quality of answers they would get (or the amount of faith they would be able to put into those answers) wouldn't be sufficient?
[Caveat: I don't feel too qualified to opine on this point since I'm not a stakeholder nor have I interviewed any, but I'll give my best guess.]
I think for the CDC example:
Overall, I'd expect somewhat decent forecasts on good but not great questions, and I think that this isn't really enough to move the needle, so to speak. I also think reasoning would need to be given behind the forecasts for stakeholders to understand them, and trust in crowd forecasts would need to be built up over time.
Part of the reason it seems tricky to have impactful forecasts is that often there are competing people/"camps" with different world models, and a person whom the crowd forecast disagrees with may be reluctant to change their mind unless (a) the question is well targeted at cruxes of the disagreement and (b) they have built up trust in the forecasters and their reasoning process. The more this is true within the CDC, the harder it seems for forecasting questions to be impactful.
2. [Separate, minor confusion] You say: "Forecasts are impactful to the extent that they affect important decisions," and then you suggest examples a-d ("from an EA perspective") that range from career decisions or what seem like personal donation choices to widely applicable questions like "Should AI alignment researchers be preparing more for a world with shorter or longer timelines?" and "What actions should we recommend the US government take to minimize pandemic risk?" This makes me confused about the space (or range) of decisions and decision-makers that you are considering here.
Yeah I think this is basically right, I will edit the draft.
[Side note] I loved the section "Idea for question creation process: double crux creation," and in general the number of possible solutions that you list, and really hope that people try these out or study them more. (I also think you identify other really important bottlenecks).
I hope so too, appreciate it!
Speaking from the perspective of a forecaster, I personally wouldn't have trusted the forecasts produced as an input into important decisions.
Fwiw, I expect to very often see forecasts as an input into important decisions, but I also usually see them as a somewhat/very crappy input. I just also think that, for many questions that are key to my decisions or to the decisions of stakeholders I seek to influence, most or all of the available inputs are (by themselves) somewhat/very crappy, and so often the best I can do is:
(See also consilience.)
(I really appreciated your draft outline and left a bunch of comments there. Just jumping in here with one small point.)
I liked this document quite a bit, and I think it would be a reasonable Forum post even without further cleanup — you could basically copy over this Shortform, minus the bit about not cleaning it up. This lets the post be tagged, be visible to more people, etc. (Though I understand if you'd rather leave it in a less-trafficked area.)
Appreciate the compliment. I am interested in making it a Forum post, but might want to do some more editing/cleanup or writing over the next few weeks/months (it got more interest than I was expecting, so this seems more likely to be worth it now). Might also post as is; will think about it more soon.
The efforts by https://1daysooner.org/ to use human challenge trials to speed up vaccine development make me think about the potential of advocacy for "human challenge" type experiments in other domains, where consequentialists might conclude there hasn't been enough "ethically questionable" randomized experimentation on humans. Two examples come to mind:
My impression of the nutrition field is that it's very hard to get causal evidence because people won't change their diet at random for an experiment.
Why We Sleep has been a very influential book, but the sleep science research it draws upon is usually observational and/or relies on short time-spans. Alexey Guzey's critique and self-experiment have both cast doubt on its conclusions to some extent.
Getting 1,000 people to sign up and randomly contracting 500 of them to do X for a year, where X is something like being vegan or sleeping for 6.5 hours per day, could be valuable.
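(To make the design concrete: the assignment step here is just simple randomization. A minimal sketch in Python, assuming 1,000 enrolled participants and a 500/500 treatment/control split; the names and seed are purely illustrative.)

```python
import random

# Hypothetical participant IDs for the 1,000 people who signed up
participants = list(range(1000))

random.seed(42)  # fixed seed so the assignment is reproducible/auditable

# Randomly contract 500 of them to do X (e.g. be vegan, sleep 6.5 h/day)
treatment = set(random.sample(participants, 500))
control = [p for p in participants if p not in treatment]
```

The point of the random split is that, with groups this size, differences in outcomes after a year can more plausibly be attributed to X rather than to self-selection, which is the main weakness of the observational nutrition and sleep evidence mentioned above.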
Challenge trials face resistance for very valid historical reasons - this podcast has a good summary. https://80000hours.org/podcast/episodes/marc-lipsitch-winning-or-losing-against-covid19-and-epidemiology/