Forecasting
Discussion of forecasting methods, as well as specific forecasts relevant to doing good

Quick takes

Not that we can do much about it, but I find the idea of Trump being president at a time when we're getting closer and closer to AGI pretty terrifying. A second Trump term is going to have a lot more craziness and far fewer checks on his power, and I expect it to have significant effects on the global trajectory of AI.
As someone predisposed to like modeling, the key takeaway I got from Justin Sandefur's Asterisk essay PEPFAR and the Costs of Cost-Benefit Analysis was this corrective reminder – emphasis mine, focusing on what changed my mind:

More detail:

Tangentially, I suspect this sort of attitude (Iraq invasion notwithstanding) would naturally arise out of a definite optimism mindset (that essay by Dan Wang is incidentally a great read; his follow-up is more comprehensive and clearly argued, but I prefer the original for inspiration). It seems to me that Justin has this mindset as well, cf. his analogy to climate change in comparing economists' carbon taxes and cap-and-trade schemes vs. progressive activists pushing for green tech investment to bend the cost curve. He concludes:

Aside from his climate change example above, I'd be curious to know what other domains economists are making analytical mistakes in w.r.t. cost-benefit modeling, since I'm probably predisposed to making the same kinds of mistakes.
Metaculus Concludes Nuclear Risk Tournament Supporting Rethink Priorities

Metaculus has concluded the Nuclear Risk Tournament, supporting Rethink Priorities' efforts by helping to inform funding, policy, research, and career decisions aimed at reducing existential risk. Thank you to Rethink Priorities for sponsoring the tournament, thank you to the forecasters who contributed their talents, and congratulations to the tournament winners.
[Question] How should we think about the decision relevance of models estimating p(doom)?

(Epistemic status: confused & dissatisfied by what I've seen published, but haven't spent more than a few hours looking. Question motivated by Open Philanthropy's AI Worldviews Contest; this comment thread asking how OP updated reminded me of my dissatisfaction. I've asked this before on LW but got no response; curious to retry, hence the repost.)

To illustrate what I mean, switching from p(doom) to timelines:

* The recent post AGI Timelines in Governance: Different Strategies for Different Timeframes was useful to me in pushing back against Miles Brundage's argument that "timeline discourse might be overrated", by showing how the choice of actions (in particular in the AI governance context) really does depend on whether we think AGI will be developed in ~5-10 years or after that.
* A separate takeaway of mine is that decision-relevant estimation "granularity" need not be that fine-grained, and in fact nothing beyond a simple "before or after ~2030" is relevant (again in the AI governance context).
* Finally, that post was useful to me in concretely specifying which actions are influenced by timelines estimates.

Question: Is there something like this for p(doom) estimates? More specifically, following the above points as pushback against the strawman(?) that "p(doom) discourse, including rigorous modeling of it, is overrated":

1. What concrete high-level actions do most alignment researchers agree are influenced by p(doom) estimates, and would benefit from more rigorous modeling (vs. just best guesses, even by top researchers, e.g. Paul Christiano's views)?
2. What's the right level of granularity for estimating p(doom) from a decision-relevant perspective? Is it just a single bit ("below or above some threshold X%"), like estimating timelines for AI governance strategy, or an order of magnitude (e.g. 0.1% vs 1% vs 10% vs >50%), or something else?

* I suppose the easy answer is "t
This December is the last month in which unlimited redemption of Manifold Markets currency for charity donations is assured: https://manifoldmarkets.notion.site/The-New-Deal-for-Manifold-s-Charity-Program-1527421b89224370a30dc1c7820c23ec I highly recommend redeeming for donations this month, since there is orders of magnitude more currency outstanding than can be donated in future months.
Metaculus launches round 2 of the Chinese AI Chips Tournament

Help bring clarity to key questions in AI governance and support research by the Institute for AI Policy and Strategy (IAPS). Start forecasting on new questions tackling broader themes of Chinese AI capability, such as:

* Will we see a frontier Chinese AI model before 2027?
* Will a Chinese firm order a large number of domestic AI chips?
* Will a Chinese firm order a large number of US or US-allied AI chips?
Now You Can Create Multiple Choice Questions on Metaculus

Create multiple choice questions and bring greater clarity to topics with multiple potential outcomes where one and only one will occur. To get started, simply Create a Question and set the Question Type to 'multiple choice'. Give the Group Variable a clear label, e.g. 'Option', 'Team', 'Country'. Fill in the Multiple Choice Options, adding more fields as needed. After you share additional details, including background information on your topic, we'll be excited to review and publish your multiple choice question!
This is some advice I wrote about doing back-of-the-envelope calculations (BOTECs) and uncertainty estimation, which are often useful as part of forecasting. This advice isn't supposed to be a comprehensive guide by any means; it originated from specific questions that someone I was mentoring asked me. Note that I'm still fairly inexperienced with forecasting. If you're someone with experience in forecasting, uncertainty estimation, or BOTECs, I'd love to hear how you would expand on or deviate from this advice.

1. How to do uncertainty estimation?
   1. A BOTEC estimates one number from a series of calculations. So I think a good way to estimate uncertainty is to assign credible intervals to each input of the calculation, then propagate the uncertainty in the inputs through to the output of the calculation (see the sketch after this list).
      1. I recommend Squiggle for this (the Python version is https://github.com/rethinkpriorities/squigglepy/).
2. How to assign a credible interval:
   1. Normally I choose a 90% interval. This is the default in Squiggle.
   2. If you have a lot of data about the thing (say, >10 values), and the sample of data doesn't seem particularly biased, then it might be reasonable to use the standard deviation of the data. (Measure this in log-space if you have reason to think it's distributed log-normally; see the next point about choosing the distribution.) Then compute the 90% credible interval as ±1.645 × std, assuming a (log-)normal distribution.
3. How to choose the distribution:
   1. It's usually a choice between log-normal and normal.
   2. If the variable seems like the sort of thing that could vary by orders of magnitude, then log-normal is best. Otherwise, normal.
      1. You can use the data points you have, or the credible interval you chose, to inform this.
   3. When in doubt, I'd say that most of the time (for AI-related BOTECs), the log-normal distribution is a good choice. Log-normal is the default distribution
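To make the interval-and-propagation workflow concrete, here is a minimal sketch using squigglepy, the Python port linked above. The BOTEC itself (a two-input cost estimate) and every number in it are invented for illustration; it assumes squigglepy's interval-based constructors sq.lognorm and sq.norm, which interpret their arguments as a 90% credible interval by default, plus sq.sample for Monte Carlo sampling.

```python
# Minimal BOTEC sketch with uncertainty propagation via squigglepy.
# All quantities and numbers are hypothetical placeholders.
import numpy as np
import squigglepy as sq

# Step 1: assign 90% credible intervals (squigglepy's default) to each input.
cost_per_unit = sq.lognorm(10, 100)  # could vary by orders of magnitude -> log-normal
units_needed = sq.norm(800, 1200)    # roughly symmetric uncertainty -> normal

# Step 2: propagate the input uncertainty to the output by sampling.
total_cost = sq.sample(cost_per_unit * units_needed, n=10_000)

print(f"median total cost: {np.median(total_cost):,.0f}")
print(f"90% credible interval: {np.percentile(total_cost, 5):,.0f}"
      f" to {np.percentile(total_cost, 95):,.0f}")

# Aside, per point 2.2: deriving a 90% interval from data believed to be
# log-normal, by working in log-space and using mean +/- 1.645 * std.
data = np.array([12, 18, 25, 31, 47, 60, 85, 110, 150, 240], dtype=float)
log_mean, log_std = np.log(data).mean(), np.log(data).std(ddof=1)
lo, hi = np.exp(log_mean - 1.645 * log_std), np.exp(log_mean + 1.645 * log_std)
print(f"90% interval from data: {lo:.1f} to {hi:.1f}")
```

Sampling sidesteps having to derive the distribution of the product analytically, which is what makes Squiggle-style tools convenient for BOTECs: you declare each input's interval and shape, then read the credible interval off the output samples.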