This is a novelty account created by Nuño Sempere for the purpose of providing perhaps-flawed frank criticism.
The problems it's intended to mitigate arise when providing criticism:
And yet, criticism seems particularly valuable in that it can change the actions of its recipients. For this reason, I thought it would be worthwhile to try to flag potentially flawed and upsetting criticism as such, and maybe to develop a set of standard disclaimers around it.
Past examples of the pattern I intend to use this account for include this comment and the examples in this post.
An example interaction might look like:
I am open to negative feedback requests, either here or through my main account.
Which of the categories are you putting me in?
I don't think this is an important question; it's not like "tall people" and "short people" are distinct clusters. There is going to be a spectrum, and you would be somewhere in the middle. But still, using labels is a convenient shorthand.
So the thing that worries me is that if someone is optimizing for something different, they might reward other people for doing the same thing. A case that has been on my mind recently is one where someone is a respected member of the community, but what they are doing is not optimal, and it would be awkward to point that out. But it's still necessary, even if it loses one brownie points socially.
Overall, I don't really read minds, and I don't know what you would or wouldn't do.
EA should accept/reward people in proportion to (or rather, as a monotonically increasing function of) how much good they do.
I think this would work if one actually did it, but not if impact is distributed with long tails (e.g., a power law) and people take offense at being accepted very little.
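To make this concrete, here is a minimal sketch with made-up numbers (a Pareto distribution standing in for "long-tailed impact"; nothing here is drawn from real data) of what strictly proportional acceptance would look like:

```python
import numpy as np

# Illustrative only: draw "impact" for 1,000 hypothetical community members
# from a heavy-tailed (Pareto) distribution.
rng = np.random.default_rng(0)
impact = rng.pareto(a=1.5, size=1_000) + 1  # shifted so the minimum impact is 1

# If acceptance/reward is proportional to impact, each person's share is:
shares = impact / impact.sum()
top_10 = np.sort(shares)[-10:].sum()
print(f"Top 1% of people receive {top_10:.0%} of the total acceptance")
print(f"The median person receives {np.median(shares):.3%}")
```

Under parameters like these, the median person ends up with a tiny sliver of the total, which is exactly the "being accepted very little" that people would take offense at.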
Thanks, Matthijs!
One "classic internet essay" analyzing this phenomenon is Geeks, MOPs, and sociopaths in subculture evolution. A phrase commonly used in EA would be "keep EA weird". The point is that adding too many people like Eric would dillute EA, and make the social incentive gradients point to places we don't want them to point to.
I really enjoy socializing and working with other EAs, more so than with any other community I’ve found. The career outcomes that are all the way up (and pretty far to the right) are ones where I do cool work at a longtermist office space, hanging out with the awesome people there during lunch and after work.
My understanding is that this is a common desire. I'm not sure what proportion of hardcore EAs vs. chill people would be optimal, and I could imagine it being 100% hardcore EAs.
Circling back to this, this report hits almost none of the notes in lukeprog's "Features that make a report especially helpful to me", which might be one reason why I got the impression that the authors were speaking a different dialect.
I get the impression that some parts of CSER are fairly valuable, whereas others are essentially dead weight. E.g., if I imagine pairwise-ranking all the work referenced in your presentation, my impression is that value would span 2+ orders of magnitude between the most valuable and the least valuable items.
Is that also your impression? Even if not, how possible is it to fund some parts of CSER, but not others?
These were written as I was reading the post, so some of them are addressed by points brought up later. They are also a bit too sardonic.
Epistemic status: not too sure. See account description.
Sections 2.5 ("We can engage and provide value to both our research and policy audiences") and 2.7 ("Our current research capacity is limited but our thinking valued") are fairly strong. On 2.7, you could also commission or suggest the kinds of research you are most excited about, even if you don't have the ability to do them yourself.
I also thought that these paragraphs were very strong:
Building on these reference points, the UN’s Our Common Agenda presents a rare opportunity to directly work on future generations and existential risks in national and international political agendas. This is because they are explicitly introduced in the report and 17 out of its 69 proposals are directly relevant to longtermist goals.
Our biggest upcoming opportunity is that we have been solicited by the UN SG Office to contribute to the development of Our Common Agenda. We evaluate it to be a unique opportunity for impact as it has introduced the concepts of existential risk and intergenerational global public goods to the international policy discourse, as well as put a strong focus on future generations in a way that is separate from youth engagement.
I would have put them at the beginning, rather than somewhere in the post. I would have also been more explicit at the beginning, rather than talking about a "mandate" in the abstract. What exactly does the mandate involve?
I would also have liked to see more emphasis on your work as experimentation. This strikes me as the kind of writeup that a more mature organization might produce. But your organization is much younger and has just one to three FTEs, so I would have liked to see an emphasis on your capacity for experimentation, ability to pivot, and general competence.
Some key points I would look at if I had unlimited time:
Overall, these seem hard to resolve. But this grant seems like it still makes a lot of sense from a hits-based perspective.
Our main fundraising targeted the EA Funds, resulting in six proposals for a total of ca. $600,000. Of these proposals, only $46,000 was granted. Another application to the Survival and Flourishing Fund for network-building activities was rejected without comment. Founders Pledge investigated our plans and concluded SI to be promising but with an insufficient track record for a recommendation.
The main reason for worry seems to have been our limited track record in downside-risk-conscious policy engagement, given SI’s ambitious, public-facing plans. Thus, the EA Infrastructure Fund decided to fund a minimal version of SI to first gather more data for a more telling evaluation after March 2022.
So my understanding here is that EA Funds didn't really trust your judgment, and wanted to see a track record before funding you further. I'm not sure to what extent this post achieves that goal with regards to the good judgment part.
Furthermore, as SI works at the interface between existential risk research and related policymaking, we have to frame our work differently to various sides and communicate these choices transparently to all sides. Some framings appear insufficiently technical to existential risk researchers while others appear too theoretical to our policy audiences. We are planning to conduct message testing studies to improve cross-context coherence, beyond local salience.
Yeah, this is tricky. For instance, I'm not sure to what extent you are thinking clearly but expressing your thoughts a bit too verbosely or in essentially a different dialect, or directly thinking unclearly.
I've put some specific nitpicks in a separate comment below. Surprisingly, many of these nitpicks relate to the way you word things, rather than to your actual content. But I think that wording things formally also leads you to leave out a fair amount of detail.
Particularly around the UN's "Our Common Agenda". I can imagine situations that would make it worth it for you to be fully funded, but also situations where it's not all that exciting. A year ago I spent some time trying to figure out how the Sustainable Development Goals were decided, and I really couldn't exactly tell. I also couldn't tell by how much the SDGs ended up affecting, for instance, US aid funding. I think you probably have a bunch of implicit models around how this could be valuable, but this post would have been a good time to write them up.
In an ideal world, you could also have an estimate of how much better $4M to you would make the OCA, and how much funding/resources the OCA would influence, to get a reasonable estimate of impact. For instance, here is a guesstimate model that assumes that the only pathway to impact is Simon Institute -> OCA -> SDGs -> foreign aid. I'm guessing that this is wrong, but I also don't have a good sense of what the impact of the OCA is mediated through. So I'd be curious to get a less fake estimate using your own models.
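For illustration, here is a rough sketch of the kind of multiplicative model I have in mind, in the spirit of the guesstimate above. Every distribution and parameter below is a placeholder I made up, not an estimate from your post or from my model:

```python
import numpy as np

# Toy Monte Carlo over the pathway Simon Institute -> OCA -> SDGs -> foreign aid.
# All numbers are made-up placeholders for illustration.
rng = np.random.default_rng(0)
n = 100_000

aid_linked_to_sdgs = rng.lognormal(mean=np.log(1e9), sigma=1.5, size=n)  # $/year of aid influenced by the SDGs
oca_shift_of_sdgs  = rng.uniform(0.0, 0.05, size=n)  # fraction of that aid the OCA redirects for the better
si_share_of_oca    = rng.uniform(0.0, 0.10, size=n)  # fraction of the OCA shift attributable to SI's $4M

impact = aid_linked_to_sdgs * oca_shift_of_sdgs * si_share_of_oca
print(f"Median: ${np.median(impact):,.0f}/year; 90th percentile: ${np.percentile(impact, 90):,.0f}/year")
```

Even a back-of-the-envelope version of this, with your own numbers and your own pathways in place of my placeholders, would make the case for full funding much more legible.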
Overall, I'm left hoping that EA would have its own Kissinger to be able to evaluate whether this kind of thing is really promising.
I recently read a post which:
Normally, I would just ask if they wanted to get a comment from this account. Or just downvote it and explain my reasons for doing so. Or just tear it apart. But today, I am low on energy, and I can't help but feel: What's the point? Sure, if I were more tactful, more charismatic, and glibber, I might be able both to explain the mistakes I see and to provide social cover for myself and for the author I'm criticizing. But I'm not, not today. And besides, if the author is such that they produced a pretty sloppy piece, I don't think I'm going to change their mind.