## EA Forum

NunoSempere

I do research around longtermism, forecasting and quantification, as well as some programming, at the Quantified Uncertainty Research Institute (QURI).

I'm also a hobbyist forecaster: I am LokiOdinevich on Good Judgment Open, and Loki on CSET-Foretell. I have been running a Forecasting Newsletter since April 2020, and have built Metaforecast.org, a search tool which aggregates predictions from many different platforms. I also generally enjoy winning bets against people too confident in their beliefs.

I like to spend my time acquiring deeper models of the world, and generally becoming more formidable. A good fraction of my research is available either on the EA Forum or on nunosempere.github.io.

I was a Future of Humanity Institute 2020 Summer Research Fellow, and then worked on a grant from the Long Term Future Fund to do "independent research on forecasting and optimal paths to improve the long-term."

Before that, I studied Maths and Philosophy, dropped out in exasperation at the inefficiency, picked up some development economics; helped implement the European Summer Program on Rationality during 2017, 2018 and 2019, and SPARC during 2020; worked as a contractor for various forecasting and programming projects; volunteered for various Effective Altruism organizations, and carried out many independent research projects. In a past life, I also wrote a popular Spanish literature blog, and remain keenly interested in Spanish poetry.

You can share feedback anonymously with me here.

# Wiki Contributions

NunoSempere's Shortform

## How to get into forecasting, and why?

Taken from this answer; written quickly, so I might iterate.

As another answer mentioned, I have a forecasting newsletter which might be of interest; going through back-issues and following the links that catch your interest could give you some background information.

For reference works, the Superforecasting book is a good introduction. For the background behind the practice, personally, I would also recommend E.T. Jaynes' Probability Theory: The Logic of Science (find a well-formatted edition, some of the pdfs online are kind of bad), though it's been a hit-or-miss kind of book (some other recommendations can be found in The Best Textbook on every subject thread over on LessWrong).

As for the why, because knowledge of the world enables control of the world. Leaning into the perhaps-corny badassery, there is a von Neumann quote that goes "All stable processes we shall predict. All unstable processes we shall control". So one can aim for that.

But it's easy to pretend to have models, or to have models that don't really help you navigate the world. And at its best, forecasting enables you to create better models of the world, by discarding the models that don't end up predicting the future and polishing those that do. Other threads that also point to this are "rationality", "good judgment", "good epistemics", "Bayesian statistics".

For a personal example, I have a list of all times I've felt particularly bad, and all the times that I felt all right the next morning. Then I can use Laplace's rule of succession when I'm feeling bad to realize that I'll probably feel ok the next morning.
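As a sketch of that calculation, Laplace's rule of succession estimates the probability of an event that has occurred s times in n trials as (s + 1) / (n + 2). The counts below are made up for illustration:

```python
# Laplace's rule of succession: after s "successes" in n trials,
# estimate the probability of success on the next trial as (s + 1) / (n + 2).
def laplace(successes: int, trials: int) -> float:
    return (successes + 1) / (trials + 2)

# Hypothetical numbers: felt particularly bad on 20 evenings,
# felt all right the next morning on 18 of them.
p_ok_tomorrow = laplace(18, 20)
print(round(p_ok_tomorrow, 3))  # → 0.864
```

A nice property of the rule is that with no data at all it returns 1/2 rather than dividing by zero, so it degrades gracefully when you start a fresh list.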

For a more EA example, see the probability estimates on Shallow evaluations of longtermist organizations, or maybe pathways to impact for forecasting and evaluation for something more abstract.

But it's also very possible to get into forecasting, or into prediction markets with other goals. For instance, one can go in the making money direction, or in the "high-speed trading" or "playing the market" (predicting what the market will predict) directions. Personally, I do see the appeal of making lots of money, but I dispositionally like the part where I get better models of the world more.

Lastly, I sometimes see people who kind of get into forecasting but don't really make that many predictions, or who are good forecasters aspirationally only. I'd emphasize that even though the community can be quite welcoming to newcomers, deliberate practice is in fact needed to get good at forecasting. For a more wholesome way to put this, see this thread. So good places to start practicing are probably Metaculus (for the community), PredictionBook or a spreadsheet if you want to go solo, or Good Judgment Open if you want the "superforecaster" title.

More importantly, I want to distinguish our goals from the range of possible outcomes

Yeah, this is probably a cultural difference.

Our goal is to direct $1B per month to EA causes by 2030 vs. 5% chance the platform directs over $500M/year to EA

These are fairly different, and it feels kind of misleading to start with the first one in the head and change to the second one in the main body.

I'd still want to bet against "5% chance the platform directs over $500M/year to EA [by 2030]", but not as eagerly after a second re-read.

Momentum has moved over $10M with our software from 40,000 donors. In our mobile app, 87% of donations went to our recommended charities

Wait, is this 87% of the $10M, or does this include the donation pages? What's the % for total donations as a whole? Do you have a rough sense of how many of those donations are counterfactual?

In any case, cheers and best of luck, this feels like an ambitious undertaking that could have a large impact.

Where is a good place to start learning about Forecasting?

Open Thread: Winter 2021

Hey, I have a series of js snippets that I've put some love into that might be of help, do reach out via PM.

SHOW: A framework for shaping your talent for direct work

This post influenced my own career to a non-insignificant extent. I am grateful for its existence, and think it's a great and clear way to think about the problem. As an example, this model of patient spending was the result of me pushing the "get humble" button for a while. This post also stands out to me in that I've come back to it again and again.

If I value this post at 0.5% of my career, which I ¿do? ¿there aren't really 200 posts which have influenced me that much?, it was worth 400 hours of my time, or $4,000 to $40,000 of my money.
I probably wouldn't pay that upfront, but it might plausibly be worth that much when looking back at a career.
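The back-of-the-envelope arithmetic behind those figures, assuming an 80,000-hour career and a $10 to $100 per hour valuation of one's time (both are assumptions chosen to be consistent with the numbers in the comment):

```python
# Value of a post rated at 0.5% of a career.
career_hours = 80_000   # assumed total career length in hours
fraction = 0.005        # 0.5%

hours = career_hours * fraction
print(hours)  # → 400.0

# Assumed dollar valuation of one's time, per hour.
low_rate, high_rate = 10, 100
print(hours * low_rate, hours * high_rate)  # → 4000.0 40000.0
```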

I think the above is probably an overestimate, but I think that it points at something true, particularly if the post reached many people, as it probably did (though I'm probably an unrepresentative fan). If a final book is produced out of this 10 year review, I'd want this post to be in it.

AI Governance: Opportunity and Theory of Impact

This continues to be one of the most clearly written explanations of a speculative or longtermist intervention that I have ever read.

Prediction Markets in The Corporate Setting

These tools forecast issues that managers are not traditionally expected to be able to forecast

The thing is, not really. Some of these ML companies offer predictions for employee retention or project timelines, which managers would in fact be expected to forecast.

Prediction Markets in The Corporate Setting

I've seen similar a few times before and am pretty tired of it at this point

I think I'd sort of encountered the issue theoretically, and maybe some ambiguous cases, but I researched this one at some depth, and it was more shocking.

Fair point on 2. (prediction markets being too restrictive) and 3. ()

4. I think this is a feature of the report being aimed at a particular company, so considerations around e.g., office politics making prediction markets fail are still important. As you kind of point out, overall this isn't really the report I would have written for EA, and I'm glad I got bought out of that.

5. I don't think this is what we meant, e.g., see:

Like Eli below, I am also in favour of starting with small interventions and titrating one's way towards more significant ones.

For internal predictions, start with interventions that take the least amount of employee time

I.e., we agree that small experiments (e.g., "Delphi-like automatic prediction markets built on top of dead-simple polls") are great. This could maybe have been expressed more clearly.

On the other hand, I didn't really have the impression that there was someone inside Upstart willing to put in the time to do the experiments if we didn't.

6. Sure. One thing we were afraid of was cultures having an incentive to pretend they were more candid than they really are. Social desirability bias feels strong.

7. (experimentation having positive externalities.) Yep!

Prediction Markets in The Corporate Setting

Hey, I appreciate this comment. I've shared this post with a few prediction markets people; we'll see if any of them want to become the Gitlab of prediction markets.