I do research around longtermism, forecasting and quantification, as well as some programming, at the Quantified Uncertainty Research Institute (QURI).
I write a Forecasting Newsletter, and have programmed Metaforecast.org, a search tool which aggregates predictions from many different platforms. I also generally enjoy winning bets against people too confident in their beliefs.
I'm also a fairly good forecaster: I started out predicting on Good Judgment Open and CSET-Foretell, but now do most of my forecasting through Samotsvety, of which Scott Alexander writes:
> Enter Samotsvety Forecasts. This is a team of some of the best superforecasters in the world. They won the CSET-Foretell forecasting competition by an absolutely obscene margin, “around twice as good as the next-best team in terms of the relative Brier score”. If the point of forecasting tournaments is to figure out who you can trust, the science has spoken, and the answer is “these guys”.
Otherwise, I like to spend my time acquiring deeper models of the world, and generally becoming more formidable. A good fraction of my research is available either on the EA Forum or backed-up on nunosempere.com, which also hosts my more casual personal blog.
I was a Future of Humanity Institute 2020 Summer Research Fellow, and then worked on a grant from the Long-Term Future Fund to do "independent research on forecasting and optimal paths to improve the long-term." After that, I joined QURI and spent some time in the Bahamas as part of the FTX EA Fellowship.
Before that, I studied Maths and Philosophy, dropped out in exasperation at the inefficiency, and picked up some development economics; helped implement the European Summer Program on Rationality during 2017, 2018, 2019, 2020 and 2022; worked as a contractor for various forecasting and programming projects; volunteered for various Effective Altruism organizations; and carried out many independent research projects. In a past life, I also wrote a popular Spanish literature blog, and remain keenly interested in Spanish poetry.
You can share feedback anonymously with me here.
Note: You can sign up for all my posts here: <https://nunosempere.com/.subscribe/>, including posts that I don't feel like posting on the EA Forum, or for which I don't think it is an appropriate venue.
Yeah, you also see this with criticism: for any given piece of criticism, you could put more effort into it and make it more effective. But having that as a standard (even as a personal one) means that criticism will happen less.
So I don't think we disagree on the fact that there is a demand curve? Maybe we disagree that I want to have more sapphires and less politeness, on the margin?
Oh nice. Maybe there could also be a setting for "do not show pinned post if you've already read it".
Also, you can turn off the sidebar and the Intercom widget with something like
```css
@-moz-document domain("forum.effectivealtruism.org") {
    .SingleColumnSection-root {
        /* To do: tweak for current redesign */
        /* width: 1000px; */
        /* margin-left: 60.5px; */
        /* max-width: 1200px */
    }
    .NavigationStandalone-sidebar {
        display: none;
    }
    .intercom-lightweight-app {
        display: none;
    }
}
```
in an extension like Stylus.
I continue to dislike how much space pinned and highlighted posts occupy:
Compare, for instance, with Hacker News:
In my particular case, I prefer to use an RSS reader with a denser presentation, in this case Newsboat. Other users who prefer packed information might want to explore similar setups:
Also, I personally found it a shame that the community posts were removed from the general RSS feed. Is there an RSS feed for the community posts specifically?
Yeah, I agree they seem acceptable/good on an absolute level (though as mentioned I think that much better interventions exist).
This was great, thanks.
Hey, thanks for writing this. You might want to check out the work by Just Impact, and the estimates Open Philanthropy made when they were working in this area.
I strongly, strongly, strongly disagree with this decision.
Per my own values and style of communication, I think that welcoming people like sapphire or Sabs, who a) are or can be intensely disagreeable, and b) have points worth sharing and processing, is strongly worth doing, even if c) they make other people uncomfortable, and d) they occasionally misfire, and even if they are wrong most of the time, as long as the expected value of the stuff they say remains high.
In particular, I think that doing so is good for arriving at correct beliefs and for becoming stronger, which I value a whole lot. It is the kind of communication we use in my forecasting group, where the goal is to arrive at correct beliefs.
I understand that the EA Forum moderators may have different values, and that they may want to make the forum a less spiky place. Know that this has the predictable consequence of losing a Nuño, and it is part of the reason why I've bothered to create a blog and to add comments to it in a way which I expect to be fairly uncensorable[1].
Separately, I do think it is the case that EA "simps" for tech billionaires[2]. An answer I would have preferred to see would be a steelmanning of why that is good, or an argument for why this isn't the case.
[1]: Uncensorable by others: I am hosting the blog on top of nja.la and the comments on my own servers. Not uncensorable by me; I can and will censor stuff that I think is low value by my own utilitarian/consequentialist lights.
I'm less sure about AI companies, but you could also make that case; e.g., 80,000 Hours does recommend positions at OpenAI (<https://jobs.80000hours.org/?query=OpenAI>).
Here is a Squiggle model which addresses some of these concerns:
```squiggle
p_student_donates = beta(2.355081, 49.726) // 0.01 to 0.1, i.e., 1% to 10%, from <https://nunosempere.com/blog/2023/03/15/fit-beta/>
amount_student_donates = 5 to 20 // dollars
p_parent_donates = beta(3.287, 17.7577) // 0.05 to 0.3, i.e., 5% to 30%
amount_parent_donates = 50 to 500 // dollars
num_parents = 5 to 20
num_students = 15 to 90
expected_impact = num_students * p_student_donates * amount_student_donates + num_parents * p_parent_donates * amount_parent_donates
```
To run it, you can paste it into the Squiggle playground: <https://www.squiggle-language.com/playground>.
Neat post, and nice to see Squiggle in the wild.
Some points:
- You could create a mixture distribution (see the sketch after this list).
- You could fit a lognormal whose x% confidence interval is the range expressed by the points you've already found.
- You could use your subjective judgment to come up with a distribution which could fit it.
- You could use kernel density estimation (<https://en.wikipedia.org/wiki/Kernel_density_estimation>).
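As a minimal Squiggle sketch of the first two options, assuming two made-up point estimates (the numbers, weights, and spreads below are illustrations, not values from the post):

```squiggle
// Two hypothetical point estimates gathered from different sources
low_estimate = 3
high_estimate = 30

// Option 1: a mixture distribution over the two estimates, here as
// equally-weighted normals centered on each (mx is Squiggle's mixture function)
mixture_dist = mx(normal(low_estimate, 1), normal(high_estimate, 5), [0.5, 0.5])

// Option 2: a lognormal spanning the estimates; in Squiggle, "x to y"
// produces a lognormal whose 90% confidence interval is [x, y]
lognormal_dist = low_estimate to high_estimate
```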
In your number of habitable planets estimate, you have a planetsPerHabitablePlanet estimate. This is an interesting decomposition. I would instead have looked at the fraction of planets which are habitable, and probably fit a beta distribution to it, given that we know the fraction is between 0 and 1. This seems a bit like a matter of personal taste, though.
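As a sketch of that alternative, with made-up numbers (the beta parameters and planet counts below are illustrations, not estimates from the post):

```squiggle
// Hypothetical fraction of planets which are habitable; a beta distribution
// is constrained to [0, 1], which matches a fraction
fraction_habitable = beta(2, 18) // mean of 2 / (2 + 18) = 0.1

num_planets = 1e9 to 1e12 // hypothetical total number of planets

num_habitable_planets = num_planets * fraction_habitable
```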