
NunoSempere

Researcher @ Quantified Uncertainty Research Institute
10996 karma · Joined Nov 2018
nunosempere.com

Bio

I do research around longtermism, forecasting and quantification, as well as some programming, at the Quantified Uncertainty Research Institute (QURI). 

I write a Forecasting Newsletter, and have programmed Metaforecast.org, a search tool which aggregates predictions from many different platforms. I also generally enjoy winning bets against people too confident in their beliefs.

I'm also a fairly good forecaster: I started out predicting on Good Judgment Open and CSET-Foretell, but now do most of my forecasting through Samotsvety, of which Scott Alexander writes:

> Enter Samotsvety Forecasts. This is a team of some of the best superforecasters in the world. They won the CSET-Foretell forecasting competition by an absolutely obscene margin, “around twice as good as the next-best team in terms of the relative Brier score”. If the point of forecasting tournaments is to figure out who you can trust, the science has spoken, and the answer is “these guys”.

Otherwise, I like to spend my time acquiring deeper models of the world, and generally becoming more formidable. A good fraction of my research is available either on the EA Forum or backed-up on nunosempere.com, which also hosts my more casual personal blog.

I was a Future of Humanity Institute 2020 Summer Research Fellow, and then worked on a grant from the Long Term Future Fund to do "independent research on forecasting and optimal paths to improve the long-term." After that, I joined QURI and spent some time in the Bahamas as part of the FTX EA Fellowship.

Before that, I studied Maths and Philosophy, dropped out in exasperation at the inefficiency, and picked up some development economics; helped implement the European Summer Program on Rationality during 2017, 2018, 2019, 2020 and 2022; worked as a contractor for various forecasting and programming projects; volunteered for various Effective Altruism organizations; and carried out many independent research projects. In a past life, I also wrote a popular Spanish literature blog, and remain keenly interested in Spanish poetry.

You can share feedback anonymously with me here.

Note: You can sign up for all my posts here: <https://nunosempere.com/.subscribe/>, including posts that I don't feel like posting on the EA Forum, or for which I don't think it is an appropriate venue.

Sequences (3)

Vantage Points
Estimating value
Forecasting Newsletter

Comments (1073)


> We have an open API on the Forum, and people can and do set up bots to scrape the site for various reasons. This has been causing a few performance issues recently (and in fact for a fairly long time), so we have set up a separate environment for bots to use. This is exactly the same as the regular Forum, with all the same data, just running on different servers: https://forum-bots.effectivealtruism.org/

Also for the /graphql endpoint? Anyway, I've moved forum.nunosempere.com to that endpoint. But I'd expect this to be a small burden on others running their own tooling.
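
For anyone else pointing scripts at the new environment, here is a minimal sketch of a query against the bots mirror. The query shape (posts → input.terms → results) follows my understanding of the ForumMagnum GraphQL schema; treat the exact field names as assumptions and check them against the /graphql explorer before relying on them.

```python
# Minimal sketch: querying the EA Forum via the bots environment.
# The field names below are assumptions based on the ForumMagnum schema;
# verify them in the GraphQL explorer before building anything on top.
import requests

BOTS_ENDPOINT = "https://forum-bots.effectivealtruism.org/graphql"

query = """
{
  posts(input: {terms: {limit: 5, sortedBy: "new"}}) {
    results {
      title
      pageUrl
    }
  }
}
"""

response = requests.post(BOTS_ENDPOINT, json={"query": query}, timeout=30)
response.raise_for_status()
for post in response.json()["data"]["posts"]["results"]:
    print(post["title"], "-", post["pageUrl"])
```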

Personally, I would give more weight to epistemics than to making people feel welcome and safe.

> including an animal-inclusive bucket, but no animal-only bucket

Good point!

> decide

not "decide", but "introspect", or "reflect upon", or "estimate". This is in the same way that I can estimate probabilities.

I think the post from Holden that you point to isn't really enough to go from "we think really hardcore estimation is perilous" to "we should do worldview diversification". Worldview diversification is fairly specific, and there are other ways that you could rein in optimization even if you don't have worldviews, e.g., adhering to deontological constraints, reducing "intenseness", giving good salaries to employees, and so on.

> it seems like you're committed to some minimal moral realism

I don't think I am. In particular, from a moral relativist perspective, I can notice that Open Philanthropy's funding comes from one person, notice that they have some altruistic & consequentialist inclinations, and then wonder whether worldview diversification is really the best way to go about satisfying those inclinations.

Or even simpler, I could be saying something like: "as a moral relativist with consequentialist sympathies, this is not how I would spend my billions if I had them, because I find the dangling relative values thing inelegant."

I'm not sure whether the following example does anything for you; it could be that our intuitions about what is "elegant" are just very different:

Imagine that I kill all animals except one koala. Then, under a worldview diversification that put some weight on animals, you would spend the remainder of that worldview's budget on the koala. But you could buy way more human QALYs or units of x-risk work per unit of animal welfare.

More generally, setting things up such that you sometimes end up valuing, e.g., a salmon at 0.01% of a human and other times at 10% of a human just seems pretty inelegant.
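
To make the "inelegant" point concrete, here is a toy calculation (all numbers made up): if the last dollar in each bucket is judged equally worth spending, then the ratio of marginal costs acts as an implicit relative valuation, and it swings around as opportunities within each bucket change.

```python
# Toy illustration (all numbers made up) of the implicit valuations that
# fixed per-worldview budgets create. If the last dollar in each bucket is
# equally worth spending, one salmon QALY is implicitly worth
# (cost of a salmon QALY) / (cost of a human QALY) human QALYs at the margin.

def implicit_salmon_value(cost_per_human_qaly: float, cost_per_salmon_qaly: float) -> float:
    """Value of a salmon QALY, in human-QALY units, implied by marginal costs."""
    return cost_per_salmon_qaly / cost_per_human_qaly

# Plenty of cheap animal interventions: a salmon is implicitly ~0.01% of a human.
print(implicit_salmon_value(cost_per_human_qaly=100, cost_per_salmon_qaly=0.01))  # 0.0001

# Animal opportunities dry up (say, only one koala is left) but the animal
# bucket keeps its budget: a salmon is now implicitly ~10% of a human.
print(implicit_salmon_value(cost_per_human_qaly=100, cost_per_salmon_qaly=10))  # 0.1
```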

So for example, if every year the AI risk worldview gets more and more alarmed, then it might "borrow" more and more money from the factory farming worldview, with the promise to pay back whenever it starts getting less alarmed. But the whole point of doing the bucketing in the first place is so that the factory farming worldview can protect itself from the possibility of the AI risk worldview being totally wrong/unhinged, and so you can't assume that the AI risk worldview is just as likely to update down as to update upwards.

Suppose that as the AI risk worldview becomes more alarmed, you are paying more and more units of x-risk prevention (according to the AI risk worldview) for every additional farmed animal QALY (as estimated by the farmed animal worldview). I find that very unappealing.
