NunoSempere

Researcher @ Shapley Maximizers
11425 karma · Joined Nov 2018
nunosempere.com

Bio

I am an independent researcher and programmer working at my own consultancy, Shapley Maximizers ÖU. I like to spend my time acquiring deeper models of the world, and generally becoming more formidable. I'm also a fairly good forecaster: I started out predicting on Good Judgment Open and CSET-Foretell, but now do most of my forecasting through Samotsvety, of which Scott Alexander writes:

Enter Samotsvety Forecasts. This is a team of some of the best superforecasters in the world. They won the CSET-Foretell forecasting competition by an absolutely obscene margin, “around twice as good as the next-best team in terms of the relative Brier score”. If the point of forecasting tournaments is to figure out who you can trust, the science has spoken, and the answer is “these guys”.


I used to post prolifically on the EA Forum, but nowadays I post my research and thoughts at nunosempere.com / nunosempere.com/blog rather than on this forum, because:

  • I disagree with the EA Forum's moderation policy—they've banned a few disagreeable people whom I like, and I think they're generally a bit too censorious for my liking. 
  • The Forum website has become more annoying to me over time: more cluttered, and pushier about curated and pinned posts (I've partially mitigated this by writing my own minimalistic frontend).
  • The above two issues have made me notice that the EA Forum is beyond my control, and it feels like a dumb move to host my research on a platform whose goals differ from my own.

But a good fraction of my past research is still available here on the EA Forum. I'm particularly fond of my series on Estimating Value. And I haven't left the forum entirely: I remain subscribed to its RSS, and generally tend to at least skim all interesting posts.


I used to do research around longtermism, forecasting and quantification, as well as some programming, at the Quantified Uncertainty Research Institute (QURI). At QURI, I programmed Metaforecast.org, a search tool that aggregates predictions from many different platforms, and which I still maintain. I spent some time in the Bahamas as part of the FTX EA Fellowship. Previously, I was a Future of Humanity Institute 2020 Summer Research Fellow, and then worked on a grant from the Long Term Future Fund to do "independent research on forecasting and optimal paths to improve the long-term." I used to write a Forecasting Newsletter which gathered a few thousand subscribers, but I stopped as the value of my time rose. I also generally enjoy winning bets against people too confident in their beliefs.

Before that, I studied Maths and Philosophy, dropped out in exasperation at the inefficiency, and picked up some development economics; helped implement the European Summer Program on Rationality during 2017, 2018, 2019, 2020 and 2022; worked as a contractor for various forecasting and programming projects; volunteered for various Effective Altruism organizations; and carried out many independent research projects. In a past life, I also wrote a popular Spanish literature blog, and remain keenly interested in Spanish poetry.


You can share feedback anonymously with me here.

Note: You can sign up for all my posts here: <https://nunosempere.com/.newsletter/>, or subscribe to my posts' RSS here: <https://nunosempere.com/blog/index.rss>

Sequences (3)

Vantage Points
Estimating value
Forecasting Newsletter

Comments (1128)

Topic Contributions (14)

This isn’t to say that the Forum can claim 100% counterfactual value for every interaction that happens in this space

This isn't a convincing line of analysis to me, as these two things can both be true at the same time:

  • The EA Forum as a whole is very valuable
  • The marginal $1.8M spent on it isn't that valuable

i.e., you don't seem to be thinking on the margin.
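
As a toy illustration of that distinction (all numbers are made up, including the diminishing-returns curve; this is not an estimate of the Forum's actual value):

```python
# Toy model: total value can be large while the marginal value of the
# last $1.8M is small. The log curve and the spend figures are invented
# purely for illustration.
import math

def total_value(spend_millions: float) -> float:
    """Hypothetical diminishing-returns value curve."""
    return 10 * math.log(1 + spend_millions)

prior_spend = 5.0   # invented cumulative spend, in $M
increment = 1.8     # the marginal $1.8M under discussion

total = total_value(prior_spend + increment)
marginal = total - total_value(prior_spend)
print(f"total value: {total:.1f}; value added by the marginal $1.8M: {marginal:.1f}")
```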

because of how averages work

I think that, with a strong prior, you should conclude that RP's research doesn't correctly represent your values.

This answer seems very diplomatically phrased, and also compatible with many different probabilities for a question like: "in the next 10 years, will any nuclear-capable states (wikipedia list to save some people a search) cease to be so?"

  • Does the conclusion flip if you don't value 30 shrimp/shrimp-moments the same as a human?
  • It might be more meaningful to present your results as a function, e.g., if you value shrimp and chicken at xyz, then the overall value is negative/positive; see the sketch after this list.
  • Particularly in uncertain domains, it might have been worth it to consider uncertainty explicitly, and RP does give confidence intervals.
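
To gesture at what the second bullet could look like, here is a minimal sketch; all the magnitudes are placeholders, not RP's actual figures:

```python
# Present the conclusion as a function of the reader's moral weights,
# instead of baking one weight in. All numbers below are placeholders.

def net_value(shrimp_days_improved: float, shrimp_weight: float,
              human_daly_cost: float) -> float:
    """Net value in human-DALY equivalents, given a moral weight for shrimp."""
    return shrimp_weight * shrimp_days_improved - human_daly_cost

# The sign of the conclusion flips depending on the weight the reader picks:
for weight in [1/30, 1/1_000, 1/100_000]:
    v = net_value(shrimp_days_improved=1e6, shrimp_weight=weight,
                  human_daly_cost=100.0)
    print(f"shrimp weight {weight:.0e}: net value {v:+,.0f}")
```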

1/2-2/3 of people to already have sunscreen in their group and likely using their own

Yeah, good point; in the back of my mind I would have been inclined to model this not as the sunscreen going to those who don't have it, but as it having some chance of going to people who would otherwise have had their own.
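
A minimal sketch of that adjustment, using the 1/2 to 2/3 range from the parent comment (the 100-use figure is invented):

```python
# Discount each use of the shared sunscreen by the chance the person
# would otherwise have used their own.

def counterfactual_uses(total_uses: float, p_had_own: float) -> float:
    """Expected uses by people who would otherwise have gone without."""
    return total_uses * (1 - p_had_own)

for p in [1/2, 2/3]:
    print(f"p(had their own) = {p:.2f}: "
          f"{counterfactual_uses(100, p):.0f} of 100 uses are counterfactual")
```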

Nice!

Two comments:

  • Sunburn risk without shared sunscreen seems a bit too high; do 30% of people at such concerts get sunburnt?
  • I recently got a sunburn, and I was thinking about the DALY weight. A weight of 0.1 would mean preferring the experience of 9 days without a sunburn over 10 days with one, which seems... ¿reasonable? But also something confuses me here; see the arithmetic below.
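
Spelling out the arithmetic behind that second bullet, assuming the 0.1 weight applies for the sunburn's full duration: 10 days × (1 − 0.1) = 9 healthy-day equivalents, i.e., the weight amounts to the claim that 10 days with a sunburn and 9 days without one are equally preferable.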

There isn't actually any public grant saying that Open Phil funded Anthropic

I was looking into this topic, and found this source:

Anthropic has raised a $124 million Series A led by Skype co-founder Jaan Tallinn, with participation from James McClave, Dustin Moskovitz, the Center for Emerging Risk Research and Eric Schmidt. The company is a developer of AI systems.

Speculating, conditional on the Pitchbook data being correct: I don't think that Moskovitz funded Anthropic because of his object-level beliefs about their value, or because they're such good pals; rather, I'm guessing he received a recommendation from Open Philanthropy, even if Open Philanthropy wasn't the vehicle he used to transfer the funds.

Also note that Luke Muehlhauser is part of the board of Anthropic; see footnote 4 here.

Talks about "calibrated trust", has no forecasters; sad. In the absence of someone to represent my exact niche interest, I guess that if we were doing transferable votes, mine would go to Habryka.
