NunoSempere

Researcher @ Shapley Maximizers
12194 karma · Joined Nov 2018
nunosempere.com/blog

Bio

I am an independent researcher and programmer working at my own consultancy, Shapley Maximizers OÜ. I like to spend my time acquiring deeper models of the world, and generally becoming more formidable. I'm also a fairly good forecaster: I started out predicting on Good Judgment Open and CSET-Foretell, but now do most of my forecasting through Samotsvety, of which Scott Alexander writes:

Enter Samotsvety Forecasts. This is a team of some of the best superforecasters in the world. They won the CSET-Foretell forecasting competition by an absolutely obscene margin, “around twice as good as the next-best team in terms of the relative Brier score”. If the point of forecasting tournaments is to figure out who you can trust, the science has spoken, and the answer is “these guys”.


I used to post prolifically on the EA Forum, but nowadays, I post my research and thoughts at nunosempere.com / nunosempere.com/blog rather than on this forum, because:

  • I disagree with the EA Forum's moderation policy—they've banned a few disagreeable people whom I like, and I think they're generally a bit too censorious for my liking. 
  • The Forum website has become more annoying to me over time: more cluttered, and pushier about curated and pinned posts (I've partially mitigated this by writing my own minimalistic frontend).
  • The above two issues have made me notice that the EA Forum is beyond my control, and it feels like a dumb move to host my research on a platform that has goals different from my own. 

But a good fraction of my past research is still available here on the EA Forum. I'm particularly fond of my series on Estimating Value.


I used to do research around longtermism, forecasting and quantification, as well as some programming, at the Quantified Uncertainty Research Institute (QURI). At QURI, I programmed Metaforecast.org, a search tool which aggregates predictions from many different platforms, which I still maintain. I spent some time in the Bahamas as part of the FTX EA Fellowship, and did a bunch of work for the FTX Foundation, which then went to waste when it evaporated. 

Previously, I was a Future of Humanity Institute 2020 Summer Research Fellow, and then worked on a grant from the Long Term Future Fund to do "independent research on forecasting and optimal paths to improve the long-term." I used to write a Forecasting Newsletter which gathered a few thousand subscribers, but I stopped as the value of my time rose. I also generally enjoy winning bets against people too confident in their beliefs.

Before that, I studied Maths and Philosophy, dropped out in exasperation at the inefficiency, picked up some development economics; helped implement the European Summer Program on Rationality during 2017, 2018, 2019, 2020 and 2022; worked as a contractor for various forecasting and programming projects; volunteered for various Effective Altruism organizations, and carried out many independent research projects. In a past life, I also wrote a popular Spanish literature blog, and remain keenly interested in Spanish poetry.


You can share feedback anonymously with me here.

Note: You can sign up for all my posts here: <https://nunosempere.com/.newsletter/>, or subscribe to my posts' RSS here: <https://nunosempere.com/blog/index.rss>

Sequences
3

Vantage Points
Estimating value
Forecasting Newsletter

Comments
1165

Topic contributions
14

Yes, and also I was extra-skeptical beyond that because you were getting too little early traction.

Iirc I was skeptical but uncertain about GiveWiki/your approach specifically, and so my recommendation was to set some threshold such that you would fail fast if you didn't meet it. This still seems correct in hindsight.

In practice I don't think these trades happen, making my point relevant again.

My understanding though is that the (somewhat implicit but more reasonable) assumption being made is that under any given worldview, philanthropy in that worldview's preferred cause area will always win out in utility calculations

I'm not sure exactly what you are proposing. Say you have three incommensurable views of the world (say, global health, animals, xrisk), and each of them beats the others according to its own idiosyncratic expected value methodology. You then assign 1/3rd of your wealth to each. But then:

  • What happens when you have more information about the world? Say there is a malaria vaccine, and global health interventions after that are less cost-effective.
  • What happens when you have more information about what you value? Say you reflect and you think that animals matter more than before/that the animal worldview is more likely to be the case.
  • What happens when you find a way to compare the worldviews? What if you have trolley problems comparing humans to animals, or you realize that units of existential risk avoided correspond to humans who don't die, or...

Then you either add the epicycles or you're doing something really dumb.

My understanding though is that the (somewhat implicit but more reasonable) assumption being made is that under any given worldview, philanthropy in that worldview's preferred cause area will always win out in utility calculations, which makes sort of deals proposed in "A flaw in a simple version of worldview diversification" not possible/use

I think looking at the relative value of marginal grants in each worldview is going to be a good intuition pump for worldview diversification type stuff. Then even if, every year, every worldview prefers its marginal grants over those of other worldviews, you can still have cases where the worldviews shift money between years and each end up with more of what they want.
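A minimal numerical sketch of that intuition pump (the numbers, budgets, and two-worldview setup are hypothetical, and marginal value is assumed roughly constant over the amounts involved): within any single year each worldview prefers its own marginal grants, yet shifting money between years leaves both better off by their own lights.

```python
# Hypothetical setup: two worldviews, A and B, each with a $1M budget per year for two years.
# Each values the other's grants at ~0 by its own lights, so within any single year
# each worldview prefers funding its own marginal grants.

# utility (in each worldview's own units) per $1M spent on its own marginal grants
value_A = {1: 10, 2: 2}   # A's opportunities happen to be unusually good in year 1
value_B = {1: 2, 2: 10}   # B's opportunities happen to be unusually good in year 2

# Status quo: each worldview spends its own $1M every year.
status_quo_A = value_A[1] + value_A[2]   # 12 utils-A
status_quo_B = value_B[1] + value_B[2]   # 12 utils-B

# Trade across years: B hands its year-1 budget to A, and A repays with its year-2 budget,
# so A spends $2M in year 1 and B spends $2M in year 2.
trade_A = 2 * value_A[1]                 # 20 utils-A
trade_B = 2 * value_B[2]                 # 20 utils-B

# Both worldviews end up strictly better off by their own lights.
assert trade_A > status_quo_A and trade_B > status_quo_B
print(trade_A, trade_B)
```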

Unflattering things about the EA machine/OpenPhil-Industrial-Complex', it's titled "Unflattering things about EA". Since EA is, to me, a set of beliefs I think are good, then it reads as an attack on the whole thing which is then reduced to 'the EA machine', which seems to further reduce to OpenPhil

I think this reduction is correct. Like, in practice, I think some people start with the abstract ideas but then suffer a switcheroo where it's like: oh well, I guess I'm now optimizing for getting funding from Open Phil/getting hired at this limited set of institutions/etc. I think the switcheroo is bad. And I think that conceptualizing EA as a set of beliefs is just unhelpful for noticing this dynamic.

But I'm repeating myself, because this is one of the main threads in the post. I have the weird feeling that I'm not being your interlocutor here.

I think people will find a very similar criticism expressed more clearly and helpfully in Michael Plant's What is Effective Altruism? How could it be improved? post

I disagree with this. I think that by reducing the ideas in my post to those of that previous one, you are missing something important in the reduction.

I see that saying I disagree with the EA Forum's "approach to life" rubbed you the wrong way. It seemed low cost to change, so I've changed it to something wordier.

Hey, thanks for the comment. Indeed, something I was worried about with the later post was whether I was being a bit unhinged (but the converse is: am I afraid to point out dynamics that I think are correct?). I dealt with this by first asking friends for feedback, then posting it but not distributing it very widely; then, once I got some comments (some of them private) saying that this also corresponded to other people's impressions, I decided to share it more widely.

The examples Nuño gives...

You are picking on the weakest example. The strongest one might be Sapphire. A more recent one might have been John Halstead, who had a bad day, and despite his longstanding contributions to the community was treated with very little empathy and left the forum.

Furthermore, while criticising OpenPhil/EA 'leadership', Nuño doesn't make points that EA is too naïve/consequentialist/ignoring common-sense enough. Instead, they don't think we've gone far enough into that direction.[1] See in Alternate Visions of EA, the claim "if you aren’t producing an SBF or two, then your movement isn’t being ambitious enough". In a comment reply in Why are we not harder, better, faster, stronger?, they say "There is a perspective from which having a few SBFs is a healthy sign." While many of you may be very critical of EA leadership and OpenPhil, I suspect many of you will be critiquing that orthodoxy from exactly the opposite direction. Be aware of this if you're upvoting.

I think this paragraph misrepresents me:

  • I don't claim that "if you aren’t producing an SBF or two, then your movement isn’t being ambitious enough". I explore different ways EA could look, and then write "From the creators of “if you haven’t missed a flight, you are spending too much time in airports” comes “if you aren’t producing an SBF or two, then your movement isn’t being ambitious enough.”". The first is a bold assertion, the second one is reasonable to present in the context of exploring varied possibilities.
  • The full context for the other quote is "There is a perspective from which having a few SBFs is a healthy sign. Sure, you would rather have zero, but the extreme in which one of your members scams billions seems better than one in which your followers are bogged down writing boring plays, or never organize to do meaningful action. I'm not actually sure that I do agree with this perspective, but I think there is something to it." (bold mine). Another way to word this less provocatively is: even with SBF, I think the EA community has had positive impact.
  • In general, I think picking quotes out of context just seems deeply hostile.

"First, their priorities are different from mine" (so what?)

So if leadership has priorities different from the rest of the movement, the rest of the movement should be more reluctant to follow. But this is for people to decide individually, I think.

"the EA machine has been making some weird and mediocre moves" (would like some actual examples on the object-level of this)

but without evidence to back this up

You can see some examples in section 5.

A view towards maximisation above all,

I think the strongest version of my current beliefs is that quantification is underdeployed on the margin and that it can unearth Pareto improvements. This is joined with an impression that we should generally be much more ambitious. This doesn't require me to believe that more maximization will always be good, just that, at the current margin, more ambition is.

Over the last few years, the EA Forum has taken a few turns that have annoyed me:

  • It has become heavier and slower to load
  • It has added bells and whistles, and shiny notifications that annoy me
  • It hasn't made space for disagreeable people who I think would have a lot to add. Maybe they had a bad day, and instead of working with them, the forum banned them.
  • It has added banners, recommended posts, pinned posts, newsletter banners, etc., meaning that new posts are harder to find and get less attention.
    • To me, getting positive, genuine exchanges in the forum as I was posting my early research was hugely motivating, but I think this is less likely to happen if the forum is steering readers somewhere else
    • If I'm trying to build an audience as a researcher, the forum offers a larger audience in the short term in exchange for less control over the long-term, which I think ends up being a bad bargain
  • It has become very expensive (in light of which this seems like a good move), and it just "doesn't feel right".

Initially I dealt with this by writing my own frontend, but I ended up just switching to my blog instead.
