I am an independent researcher and programmer working at my own consultancy, Shapley Maximizers OÜ. I like to spend my time acquiring deeper models of the world, and generally becoming more formidable. I'm also a fairly good forecaster: I started out predicting on Good Judgment Open and CSET-Foretell, but now do most of my forecasting through Samotsvety, of which Scott Alexander writes:
Enter Samotsvety Forecasts. This is a team of some of the best superforecasters in the world. They won the CSET-Foretell forecasting competition by an absolutely obscene margin, “around twice as good as the next-best team in terms of the relative Brier score”. If the point of forecasting tournaments is to figure out who you can trust, the science has spoken, and the answer is “these guys”.
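Since the quote turns on relative Brier scores, here is a minimal sketch of how such a score might be computed, assuming binary questions and taking "relative" to mean the difference from the crowd's average score on the same questions; the actual CSET-Foretell scoring rule may have differed in its details.

```python
# A minimal sketch of Brier scoring for binary questions.
# All names and numbers are illustrative, not the actual
# CSET-Foretell leaderboard computation.

def brier_score(forecast: float, outcome: int) -> float:
    """Squared error between a probability and a 0/1 outcome (lower is better)."""
    return (forecast - outcome) ** 2

def mean_brier(forecasts: list[float], outcomes: list[int]) -> float:
    """Average Brier score over a set of resolved questions."""
    return sum(brier_score(f, o) for f, o in zip(forecasts, outcomes)) / len(forecasts)

def relative_brier(own: list[float], crowd: list[float], outcomes: list[int]) -> float:
    """Difference from the crowd's average score; negative means better than the crowd."""
    return mean_brier(own, outcomes) - mean_brier(crowd, outcomes)

# Example: a confident, mostly correct forecaster vs. an uncertain crowd.
outcomes = [1, 0, 1]
print(relative_brier([0.9, 0.1, 0.8], [0.6, 0.5, 0.5], outcomes))  # -0.2
```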
I used to post prolifically on the EA Forum, but nowadays, I post my research and thoughts at nunosempere.com / nunosempere.com/blog rather than on this forum, because:
But a good fraction of my past research is still available here on the EA Forum. I'm particularly fond of my series on Estimating Value. And I haven't left the forum entirely: I remain subscribed to its RSS, and generally tend to at least skim all interesting posts.
I used to do research around longtermism, forecasting and quantification, as well as some programming, at the Quantified Uncertainty Research Institute (QURI). At QURI, I programmed Metaforecast.org, a search tool that aggregates predictions from many different platforms and which I still maintain. I spent some time in the Bahamas as part of the FTX EA Fellowship. Previously, I was a Future of Humanity Institute 2020 Summer Research Fellow, and then worked on a grant from the Long Term Future Fund to do "independent research on forecasting and optimal paths to improve the long-term." I used to write a Forecasting Newsletter which gathered a few thousand subscribers, but I stopped as the value of my time rose. I also generally enjoy winning bets against people too confident in their beliefs.
Before that, I studied Maths and Philosophy, dropped out in exasperation at the inefficiency, picked up some development economics; helped implement the European Summer Program on Rationality during 2017, 2018, 2019, 2020 and 2022; worked as a contractor for various forecasting and programming projects; volunteered for various Effective Altruism organizations, and carried out many independent research projects. In a past life, I also wrote a popular Spanish literature blog, and remain keenly interested in Spanish poetry.
You can share feedback anonymously with me here.
Note: You can sign up for all my posts here: <https://nunosempere.com/.newsletter/>, or subscribe to my posts' RSS here: <https://nunosempere.com/blog/index.rss>
This isn't to say that the Forum can claim 100% counterfactual value for every interaction that happens in this space.
This isn't a convincing lens of analysis to me, as these two things can both be true at the same time:
i.e., you don't seem to be thinking on the margin.
This answer seems very diplomatically phrased, and also compatible with many different probabilities for a question like: "in the next 10 years, will any nuclear-capable states (Wikipedia list, to save some people a search) cease to be so?"
1/2–2/3 of people in their group to already have sunscreen, and to likely be using their own
Yeah, good point; in the back of my mind I would have been inclined to model this not as the sunscreen going to those who don't have it, but as it having some chance of going to people who would otherwise have had their own.
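To make the point concrete, here is a toy calculation; the numbers are made up for illustration (only the 1/2–2/3 range comes from the discussion above):

```python
# Toy model of counterfactual sunscreen impact. The bottle count and
# the exact probability are assumptions for illustration.

bottles_given = 100
p_already_has_own = 0.6  # assumed: chance a recipient had their own (between 1/2 and 2/3)

# Only recipients without their own sunscreen represent counterfactual impact.
counterfactual_bottles = bottles_given * (1 - p_already_has_own)
print(counterfactual_bottles)  # 40.0, vs. the naive estimate of 100
```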
Nice!
Two comments:
There isn't actually any public grant saying that Open Phil funded Anthropic.
I was looking into this topic, and found this source:
Anthropic has raised a $124 million Series A led by Stripe co-founder Jaan Tallinn, with participation from James McClave, Dustin Moskovitz, the Center for Emerging Risk Research and Eric Schmidt. The company is a developer of AI systems.
Speculating, conditional on the PitchBook data being correct, I don't think that Moskovitz funded Anthropic because of his object-level beliefs about their value or because they're such good pals; rather, I'm guessing he received a recommendation from Open Philanthropy, even if Open Philanthropy wasn't the vehicle he used to transfer the funds.
Also note that Luke Muehlhauser is on the board of Anthropic; see footnote 4 here.
Talks about "calibrated trust", has no forecasters, sad. In the absence of someone to represent my exact niche interest, I guess that if we were doing transferable votes, mine would go to Habryka.
Nonlinear was this, and then...