NunoSempere

I do research around longtermism, forecasting and quantification, as well as some programming, at QURI.

I'm also a hobbyist forecaster: I am LokiOdinevich on GoodJudgementOpen, and Loki on CSET-Foretell. I have been running a Forecasting Newsletter since April 2020, and built Metaforecast.org, a search tool which aggregates predictions from many different platforms. I also generally enjoy winning bets against people too confident in their beliefs.

I like to spend my time acquiring deeper models of the world, and generally becoming more formidable. A good fraction of my research is available either on the EA Forum or on nunosempere.github.io.

I was a Future of Humanity Institute 2020 Summer Research Fellow, and then worked on a grant from the Long Term Future Fund to do "independent research on forecasting and optimal paths to improve the long-term."

Before that, I studied Maths and Philosophy, dropped out in exasperation at the inefficiency, picked up some development economics; helped implement the European Summer Program on Rationality during 2017, 2018 and 2019, and SPARC during 2020; worked as a contractor for various forecasting and programming projects; volunteered for various Effective Altruism organizations, and carried out many independent research projects. In a past life, I also wrote a popular Spanish literature blog, and remain keenly interested in Spanish poetry.

You can share feedback anonymously with me here.

Sequences

Estimating value
Forecasting Newsletter


Comments

An estimate of the value of Metaculus questions

My thoughts are that this problem is, well, not exactly solved, but perhaps solved in practice if you have competent and aligned forecasters, because then you can ask conditional questions, not all of which will resolve.

  • Given such-and-such measures, what will the spread of COVID be?
  • Given the lack of such-and-such measures, what will the spread of COVID be?

Then you can still get forecasts for both, even if you only expect the first to go through.

This does require forecasters to give probabilities even when the question they are going to forecast on doesn't resolve.

This is easier to do with EAs, because you can separate the training and the deployment steps for forecasters. That is, once you have an EA who is a trustworthy forecaster, you can in principle query them without paying that much attention to scoring rules.
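To make that concrete, here is a minimal sketch of the setup, assuming a Brier scoring rule and made-up probabilities (both my choices for illustration, not something specified above): the branch whose condition occurs gets scored, while the other branch is still elicited but never resolves.

```python
# Two conditional questions about the same event; only the branch whose
# condition actually occurred can resolve, but forecasters state
# probabilities for both up front.
conditional_forecasts = {
    "spread_given_measures":    {"p": 0.30, "condition_occurred": True,  "outcome": False},
    "spread_given_no_measures": {"p": 0.80, "condition_occurred": False, "outcome": None},
}

def brier(p, outcome):
    """Brier score for a binary question: lower is better."""
    return (p - float(outcome)) ** 2

for name, q in conditional_forecasts.items():
    if q["condition_occurred"]:
        print(f"{name}: resolved, Brier score = {brier(q['p'], q['outcome']):.3f}")
    else:
        # The forecast still existed and could inform the decision,
        # even though it will never be scored.
        print(f"{name}: condition did not occur, forecast stays unscored")
```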

An estimate of the value of Metaculus questions

Here is this point in a picture.

In particular, I'd tend to think that changing decisions and influencing the event being forecasted might be the main pathways to impact, but I could be wrong.

An estimate of the value of Metaculus questions

I agree, but I do think that interestingness is important. In particular, putting on my forecaster hat, I think that avoiding drudgerous questions is very important.

An estimate of the value of Metaculus questions

I see what you mean, particularly for scale, but not so much for decision-relevance and forecasting fit.

Also, the threshold for decision-relevance is in a sense lower for larger events, so I think that evens out some of the variance.

An estimate of the value of Metaculus questions

I got feedback that this post was too verbose and rambling, so here is a condensed Twitter thread instead.

An estimate of the value of Metaculus questions

Good question; I am not sure what the answer would be.

An estimate of the value of Metaculus questions

Even though I think that something like a US-China war would be hard to prevent, I think that forecasts on its likelihood are still valuable because they affect many possible plans. A post that goes through Metaculus questions about whether “shit is going to hit the fan” (US-China war, a war between nuclear powers, nuclear weapons used, etc.), and tries to outline what implications those scenarios would have for the EA community—and perhaps what cheap mitigation steps could be taken—might be a small but valuable project. Note that per Laplace’s law, the likelihood of another great war in the medium term is not that low.
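For concreteness, a minimal sketch of that Laplace's-law arithmetic, where the reference class (years since 1945 with no direct great-power war) is my illustrative assumption, not something the comment specifies:

```python
# Laplace's rule of succession: with s successes in n trials and a uniform
# prior, P(success on the next trial) = (s + 1) / (n + 2).
def rule_of_succession(successes, trials):
    return (successes + 1) / (trials + 2)

# Illustrative reference class (my assumption): ~79 years since 1945,
# with zero wars fought directly between great powers.
n = 79
print(rule_of_succession(0, n))       # ~0.012 per year

# Under the same prior, P(no success in the next k trials) = (n + 1) / (n + k + 1),
# so the chance of at least one great war in the next 30 years comes out to:
k = 30
print(1 - (n + 1) / (n + k + 1))      # ~0.27
```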

An estimate of the value of Metaculus questions

Bailey: We should give $225k to Metaculus every year.

Motte: This very specific method of estimating the value of Metaculus questions leads to a back-of-the-napkin guess that Metaculus questions might be worth on the order of $225k/year to the EA community, but I could imagine this being an overestimate, particularly if nobody ends up changing any decisions because of Metaculus predictions, or if the estimate of $2000 per highly valuable question is too high.

What I actually think: I think that Metaculus is great, but I was worried that their questions might not have any effect on decisions, and thus ultimately not be valuable. After a brief investigation, I think that a fair number of its questions are valuable. To incentivize questions that the EA community finds valuable, and to ensure that Metaculus remains on a good financial footing, the EA community could each year try to estimate the value Metaculus questions produce and pay Metaculus (proportionally to) that amount. I think this would require a bit more effort than my back-of-the-napkin calculation above, but not that much if EA is still vetting-constrained.
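For what it's worth, here is the arithmetic the headline figure seems to imply, assuming (my reading, not necessarily the original calculation) that the $225k/year comes from multiplying a count of highly valuable questions by the $2000-per-question estimate:

```python
# My reconstruction of the implied back-of-the-napkin structure, not the
# original calculation: total yearly value = valuable questions * $/question.
value_per_question = 2_000    # dollars, the per-question estimate above
total_per_year = 225_000      # dollars, the headline figure
print(total_per_year / value_per_question)  # ~112 highly valuable questions/year
```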
