🔎 DISCOVER MORE QUESTIONS RELEVANT TO YOU WITH METACULUS'S NEW FILTER & SORT TOOLS
Where do you disagree with other forecasters? Which community predictions have
shifted the most? And what was that nanotech forecast you meant to update?
Metaculus has introduced new filter & sort tools that provide more control over
the forecast feed so you can find the questions that matter to you.
Learn more
[https://www.metaculus.com/questions/15386/-discover-more-with-new-filter--sort-tools/]
A theoretical idea that could be implemented on Metaculus
tl;dr: add an option to submit models of how to forecast a question, and also
to vote on the models.
To be more concrete: when someone submits a question, then in addition to
forecasting the question, you can submit a Squiggle model -- or just a plain
mathematical model -- of your best current guess of how to approach the
problem. You define each subcomponent that is important to the final forecast,
and also how these subcomponents combine into the final forecast. Each
subcomponent automatically becomes another forecasting question on the site
(if it is not already one), which people can treat the same way.
Then, in addition to a normal forecast as we make right now, people can also
forecast the subcomponents of the models, as well as vote on the models. If a
model includes previously forecasted questions, those forecasts automatically
populate in the model.
The voting system on models could either just draw attention to the best
models and encourage forecasting of the subcomponents, or even weight the
models' estimates into the overall forecast of the question. I have no idea
whether this would improve forecasting, but it might make it more transparent
and scalable.
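To make the mechanics concrete, here is a minimal Python sketch of how submitted models, subcomponent forecasts, and model votes could combine. All model structures, subcomponent names, and numbers are invented for illustration; a real implementation would live on Metaculus's side.

```python
from typing import Callable, Dict, List, Tuple

# Two hypothetical submitted models for the same question. Each model says
# how its subcomponent questions combine into a final probability.
def model_a(c: Dict[str, float]) -> float:
    # P(question) = P(tech works) * P(deployed | tech works)
    return c["tech_works"] * c["deployed_given_works"]

def model_b(c: Dict[str, float]) -> float:
    # A rival decomposition of the same question.
    return c["any_actor_tries"] * c["succeeds_given_try"]

# Community forecasts on each subcomponent (each would itself be a
# forecasting question on the site, auto-created if it doesn't exist).
component_forecasts = {
    "tech_works": 0.6,
    "deployed_given_works": 0.3,
    "any_actor_tries": 0.8,
    "succeeds_given_try": 0.25,
}

# (model, votes) pairs: votes weight each model's estimate into an
# overall model-based forecast of the question.
models: List[Tuple[Callable[[Dict[str, float]], float], int]] = [
    (model_a, 120),
    (model_b, 45),
]

total_votes = sum(votes for _, votes in models)
weighted = sum(
    (votes / total_votes) * model(component_forecasts)
    for model, votes in models
)
print(f"Vote-weighted model forecast: {weighted:.3f}")  # ~0.185
```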
I wrote a bit more in this Google Doc
[https://docs.google.com/document/d/1yifAcxXjXtWKndp-kTyNtxwIlxOVWOwik2tU04HEYvc/edit?usp=sharing]
if you're interested.
edit: I think this might just be Guesstimate with memoization.
Hi everyone, I am Jia, co-founder of Shamiri Health, an affordable mental
health start-up in Kenya. I am thinking of writing up something on the DALY
cost-effectiveness of investing in our company. I am very new to the
community, and I would like to solicit some suggestions on a good framework
for evaluating the cost-effectiveness of impact investment in healthcare
companies.
I think there could be two ways to go about this:
1. Take an investment amount and, using some cashflow modeling, figure out
how many users we can reach with that investment; then calculate based on
the largest user base we can reach with that investment amount.
2. Do a comparative analysis with a more mature company in a different
country and use its % of population reach as our "terminal impact reach";
then use that terminal user base as the basis of the calculation.
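A toy sketch of the two reach estimates, with every number invented for illustration (none of these are Shamiri Health figures):

```python
# Approach 1: bottom-up reach from the investment itself via a crude
# cashflow-style model (assumed all-in cost per user served).
investment = 1_000_000       # USD (hypothetical)
cost_per_user = 25           # USD per user acquired and served (assumed)
reach_bottom_up = investment / cost_per_user

# Approach 2: top-down "terminal impact reach" borrowed from a more
# mature comparator company in a different country.
population = 55_000_000      # rough population of Kenya
comparator_reach_pct = 0.02  # comparator's % of population reached (assumed)
reach_terminal = population * comparator_reach_pct

print(f"Approach 1 (bottom-up): {reach_bottom_up:,.0f} users")  # 40,000
print(f"Approach 2 (terminal):  {reach_terminal:,.0f} users")   # 1,100,000
```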
The first approach is no doubt more conservative, but the latter, in my
opinion, captures the true impact counterfactual: without the investment, we
would likely not be able to raise enough funding, since our TAM is not
particularly attractive to non-impact investors. The challenge with the
latter is estimating the "likelihood of success" of us carrying out the plan
to reach our terminal user base. How would you go about this likelihood
number? I would think it varies case by case, and one should factor in the
team, the business model, the user goal, and the market, which is closer to
how venture capital evaluates companies. What is the average rate at which
impact ventures succeed?
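One way to make the likelihood number explicit is to treat it as a single success probability that discounts the terminal estimate before converting reach into DALYs. Continuing the sketch above (all inputs are placeholders; a serious model would want a distribution rather than a point estimate):

```python
investment = 1_000_000            # USD (hypothetical, as above)
dalys_averted_per_user = 0.05     # assumed effect size per user reached
p_success = 0.15                  # assumed chance of reaching terminal scale

reach_bottom_up = 40_000          # approach 1, taken at face value
reach_terminal = 1_100_000        # approach 2, discounted by p_success

dalys_1 = reach_bottom_up * dalys_averted_per_user
dalys_2 = p_success * reach_terminal * dalys_averted_per_user

print(f"Approach 1: {investment / dalys_1:,.0f} USD per DALY averted")  # 500
print(f"Approach 2: {investment / dalys_2:,.0f} USD per DALY averted")  # ~121
```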
TLDR:
1. What is the counterfactual of impact investing: the immediate DALYs that
could be averted, or the terminal DALYs that could be averted?
2. What is the average success rate of impact healthcare ventures in
reaching their impact goals?
Metaculus is conducting its first user survey in nearly three years. If you have
read analyses, consumed forecasts, or made predictions on Metaculus, we want to
hear from you! Your feedback helps us better meet the needs of the forecasting
community and is incredibly important to us.
Take the short survey here
[https://rutgers.ca1.qualtrics.com/jfe/form/SV_0OhWGuJZg0XpHh4] — we truly
appreciate it! (We'll be sure to share what we learn.)
For a long time I found the question of how to bet fairly when you and
someone else disagree on a probability surprisingly nonintuitive, so I made a
spreadsheet that did it, which then expanded into some other things.
* Spreadsheet here
  [https://docs.google.com/spreadsheets/d/1GqdutJuvVDUJVIPLvcwe5a2fuFcgAC2jITV1LlxrjKM/edit#gid=0],
  which has four tabs based on different views on how best to pick the fair
  place to bet when you and someone else disagree. (I didn't make the fourth
  tab at all; it was added by Luke Sabor, who was passionate about the
  standard deviation method!)
* People have different beliefs / intuitions about what's fair!
* An alternative to the mean probability would be to use the product of the
  odds. Then, if one person thinks .9 and the other thinks .99, the "fair
  bet" will have an implied probability of more than .945.
* The problem with using the geometric mean can be highlighted if player 1
  estimates 0.99 and player 2 estimates 0.01. This would actually lead
  player 2 to contribute ~90% of the bet for an EV of 0.09, while player 1
  contributes ~10% for an EV of 0.89. I don't like that bet. In this case,
  mean prob and Z-score mean both agree at 50% contribution and equal EVs.
  (These numbers are reproduced in the code sketch after this list.)
* "The tradeoff here is that using Mean Prob gives equal expected values
(see underlined bit), but I don't feel it accurately reflects "put your
money where your mouth is". If you're 100 times more confident than the
other player, you should be willing to put up 100 times more money. In
the Mean prob case, me being 100 times more confident only leads me to
put up 20 times the amount of money, even though expected values are more
equal."
* Then I ended up making an explainer video
  [https://www.youtube.com/watch?v=KOQ7OugP-Kc] because I was excited about
  it.
Other spreadsheets I've seen in the space:
* Brier score bet
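To make the comparisons in the list above reproducible, here is a short Python sketch of three of the candidate rules (the Z-score method from the spreadsheet is omitted). The stake convention assumed here is that the bet is struck at implied probability p*, with the YES bettor posting p* of the pot and the NO bettor posting 1 - p*:

```python
from math import sqrt

def fair_probs(p1: float, p2: float) -> dict:
    """Candidate 'fair' implied probabilities; p1 backs YES, p2 backs NO."""
    odds = lambda p: p / (1 - p)
    prod_odds = odds(p1) * odds(p2)
    return {
        "mean prob": (p1 + p2) / 2,
        "geo mean prob": sqrt(p1 * p2),
        "product of odds": prod_odds / (1 + prod_odds),
    }

def expected_values(p1: float, p2: float, p_star: float):
    """EV per unit pot for each player at implied probability p_star."""
    ev_yes = p1 * (1 - p_star) - (1 - p1) * p_star  # player 1 stakes p_star
    ev_no = (1 - p2) * p_star - p2 * (1 - p_star)   # player 2 stakes 1 - p_star
    return ev_yes, ev_no

for p1, p2 in [(0.99, 0.90), (0.99, 0.01)]:
    print(f"p1={p1}, p2={p2}")
    for name, p_star in fair_probs(p1, p2).items():
        ev1, ev2 = expected_values(p1, p2, p_star)
        print(f"  {name:16s} p*={p_star:.4f}  EV1={ev1:+.4f}  EV2={ev2:+.4f}")
```

For (0.99, 0.01) this reproduces the geometric-mean example above: p* of about 0.0995, so player 2 stakes ~90% of the pot for an EV of ~0.09 while player 1 stakes ~10% for an EV of ~0.89, whereas mean prob splits the stakes 50/50 with equal EVs.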
Hi all!
Nice to see that there is now a sub-forum dedicated to Forecasting. This
seems like a good place to ask what might be a silly question.
I am doing some work on integrating forecasting with government
decision-making. There are several roadblocks to this, but one of them is
generating good questions (see the Rigor-Relevance trade-off
[https://goodjudgment.com/question_clusters/], among other things).
One way to avoid this might be to simply ask questions about the targets the
government has already set for itself. A lot of these are formulated in a
SMART [1] way and are thus pretty forecastable. Forecasts on whether the
government will reach its targets also seem like they would be immediately
actionable for decision makers. This seems like a decent strategy to me, but
I have not seen it mentioned very often. So my question is simple: is there
some sort of major problem here that I am overlooking?
The one major problem I could think of is that there might be an incentive
for a sort of circular reasoning: if forecasters in aggregate think that the
government is not on its way to achieving a certain target, then the
government might announce new policy to remedy the situation. Smart
forecasters might see this coming and start their initial forecast higher.
I think you can balance this by having forecasters predict intermediate
targets as well. For example, most countries have international obligations
to reduce their CO2 emissions by X% by 2030; instead of just forecasting the
2030 target, you could forecast all the intermediate years as well.
1. ^ SMART stands for: Specific, Measurable, Assignable, Realistic,
Time-related. See https://en.wikipedia.org/wiki/SMART_criteria
On January 6, 2023, at 4pm GMT, I am going to host a Gather.town meetup
[https://app.gather.town/app/aPVfK3G76UukgiHx/lesswrong-campus] to go through
Scott Alexander's Prediction Competition
[https://astralcodexten.substack.com/p/2023-prediction-contest] in Blind
Mode, which means you spend a maximum of 5 minutes on each question.
Because of that, and also possibly because those are the rules (I'm finding
out), we likely won't collaborate (though if the rules allow it, maybe we
will!). But if you've been wanting to enter and haven't yet made time, come
along, and we'll set some pomodoros and have a good time!
Event link here:
https://forum.effectivealtruism.org/events/wENgADx63Cs86b6A2/enter-scott-alexander-s-prediction-competition
I've heard a variety of takes on this, ranging from "people/decision-makers just
don't use forecasting/prediction markets when they should," to "the main issue
is that it's hard to come up with (and operationalize) useful questions," to
"forecasting methods (including aggregation, etc.) and platforms are just subpar
right now; improving them is the main priority." I'd be interested in what
people think.
Of course, there could also be a meta-take like "this is not the right question"
— I'd be interested in discussing that, too.
Forecasting and estimation are important tools for understanding future risks and planning interventions appropriately. This topic covers methods as well as specific examples of forecasts or estimates on topics relevant to doing good.