NunoSempere

Researcher @ Quantified Uncertainty Research Institute
10262 · Joined Nov 2018
nunosempere.com

Bio

I do research around longtermism, forecasting and quantification, as well as some programming, at the Quantified Uncertainty Research Institute (QURI). 

I write a Forecasting Newsletter, and built Metaforecast.org, a search tool that aggregates predictions from many different platforms. I also generally enjoy winning bets against people too confident in their beliefs.

I'm also a fairly good forecaster: I started out predicting on Good Judgment Open and CSET-Foretell, but now do most of my forecasting through Samotsvety, of which Scott Alexander writes:

Enter Samotsvety Forecasts. This is a team of some of the best superforecasters in the world. They won the CSET-Foretell forecasting competition by an absolutely obscene margin, “around twice as good as the next-best team in terms of the relative Brier score”. If the point of forecasting tournaments is to figure out who you can trust, the science has spoken, and the answer is “these guys”.

Otherwise, I like to spend my time acquiring deeper models of the world, and generally becoming more formidable. A good fraction of my research is available either on the EA Forum or backed up on nunosempere.com, which also hosts my more casual personal blog.

I was a Future of Humanity Institute 2020 Summer Research Fellow, and then worked on a grant from the Long Term Future Fund to do "independent research on forecasting and optimal paths to improve the long-term." After that, I joined QURI and spent some time in the Bahamas as part of the FTX EA Fellowship.

Before that, I studied Maths and Philosophy, dropped out in exasperation at the inefficiency, picked up some development economics; helped implement the European Summer Program on Rationality during 2017, 2018, 2019, 2020 and 2022; worked as a contractor for various forecasting and programming projects; volunteered for various Effective Altruism organizations, and carried out many independent research projects. In a past life, I also wrote a popular Spanish literature blog, and remain keenly interested in Spanish poetry.

You can share feedback anonymously with me here.

Note: You can sign up for all my posts here: <https://nunosempere.com/.subscribe/>, including posts that I don't feel like posting on the EA Forum, or for which I don't think it is an appropriate venue.

Sequences
3

Vantage Points
Estimating value
Forecasting Newsletter

Comments
973

Topic Contributions
14

Not affiliated with ATLAS, and just a guess. But, with regards to the $50k:

Because being a young adult with $50k gives you way more options for acting on the world than being a broke student does. Having rich parents doesn't really help if they aren't in fact willing to give the $50k to the members.

Reaching back to my own experience: I spent some time I wish I could get back selling unexciting corporate software and giving math classes to pay for living expenses while doing some of my early research. $50k would have afforded me significantly more freedom of action.

Note that the Thiel Fellowship, which aims to steer promising people towards being formidable forces for good in the world, gives students $100k. If you look at their notable recipients, and imagine a similar cohort producing a similar amount of impact in a less capitalistic direction, it seems at least conceivable that such a bet could be well worth it.

That's my 2cts.

Because at face value it makes sense to tailor the severity of the countermeasure to the severity of the offense, and I imagine that Wise was commenting on incidents an order of magnitude less severe than the ones mentioned in the article.

(I was expecting "Breaking: awkward nerds don't know how to flirt", not "One recalled being “groomed” by a powerful man nearly twice her age who argued that “pedophilic relationships” were both perfectly natural and highly educational".

Significantly worse than what I was expecting, so sorry to hear.)

Saw your original comment: <https://astralcodexten.substack.com/p/mantic-monday-twitter-chaos-edition/comment/10671562#comment-10678957>. I think the answer is that the market updates on you not publicly making that bet in the second/nth year.

I'm not following. Suppose X="I will tweet by tomorrow". Then in your example I have to both tweet and not tweet?

Here is a CSS snippet to make the forum a bit cleaner: <https://gist.github.com/NunoSempere/3062bc92531be5024587473e64bb2984>. I also like ea.greaterwrong.com under the brutalist setting and with maximum width.

Ideally I'd want the bounty to be proportional to the value produced. Some problems with this are:

  • any such estimate is going to be noisy, so this only works if people recognize that the estimate of value is going to be uncalibrated some of the time—e.g., "here's $100 for writing something which took you 50 hours and which you view as both traumatic and deeply important" isn't a great look
  • it's unclear how you would even go about calculating that estimate; maybe (value of the community team) * (percentage improvement). The first factor is unclear, but it could be bounded by the money that's spent on the team.
    • So, e.g., ($40k to $90k per FTE-year) * (1 to 5 FTEs) * (0.1% to 3% improvement) * (1 to 3 years) = $170 to $9.4k (mean: $2.7k); see the sketch after this list.
    • Ehh, this seems decently valuable.
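For concreteness, here is a minimal Monte Carlo sketch of that estimate in Python. It assumes each "low to high" range denotes the 90% confidence interval of a lognormal distribution (as in Squiggle's `x to y` syntax); the distribution choice is my assumption, not part of the original comment, and a different choice would shift the tails.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
Z_95 = 1.6449  # z-score of the 95th percentile of a standard normal

def lognormal_from_90ci(low, high):
    """Sample a lognormal whose 5th/95th percentiles are low/high.

    Assumption: "low to high" is read as a 90% CI, Squiggle-style."""
    mu = (np.log(low) + np.log(high)) / 2
    sigma = (np.log(high) - np.log(low)) / (2 * Z_95)
    return rng.lognormal(mu, sigma, N)

cost_per_fte = lognormal_from_90ci(40_000, 90_000)  # $ per FTE-year
ftes = lognormal_from_90ci(1, 5)                    # team size
improvement = lognormal_from_90ci(0.001, 0.03)      # 0.1% to 3%
years = lognormal_from_90ci(1, 3)                   # duration of effect

value = cost_per_fte * ftes * improvement * years
low, high = np.percentile(value, [5, 95])
print(f"90% CI: ${low:,.0f} to ${high:,.0f}; mean: ${value.mean():,.0f}")
# -> roughly $170 to $9.4k, mean ~$2.7k, matching the figures above
```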