NegativeNuno

This is a novelty account created by Nuño Sempere for the purpose of providing frank, perhaps-flawed criticism.

It is intended to mitigate the following problems with providing criticism:

  • criticism which is on-point and yet emotionally thoughtful is doubly hard to produce.
  • leaving negative criticism can sometimes give the impression that I think a project is not worth it, whereas I think that providing criticism is most valuable for projects which are in fact worth it.
  • conversely, it's sometimes hard to communicate in a polite manner that a project is ~essentially worthless, and that the author should probably be doing something else. E.g., "your theory of impact sucks". But someone has to do it.
  • mechanisms to signal good intent can become trite and fake to read and to produce, like shit sandwiches or the phrase "fairly good".
  • mechanisms to signal good intent can vary from culture to culture, and result in miscommunication.
  • the above points become trickier in the presence of uncertainty about whether the criticism is on-point or stemming from confusion.

And yet, criticism seems particularly valuable in that it can change the actions of its recipients. For this reason, I thought it would be worth it to try to signal potentially flawed and upsetting criticism as such, and maybe develop a set of standard disclaimers around it.

Past examples of the patterns I intend this account to manifest include this comment or the examples in this post.

An example interaction might look like:

  • NN: Hey, do you want to hear some negative feedback under Crocker's rules?
  • A: No, thanks.

  • NN: Hey, do you want to hear some negative feedback under Crocker's rules?
  • A: Sure, why not.
  • NN: [negative feedback]

I am open to negative feedback requests, either here or through my main account.

Comments

NegativeNuno's Shortform

I recently read a post which:

  • I thought was treating the reader like an idiot
  • I thought was below-par in terms of addressing the considerations of the topic it broached
  • I would nonetheless expect to be influential, because [censored]

Normally, I would just ask if they wanted to get a comment from this account. Or just downvote it and explain my reasons for doing so. Or just tear it apart. But today, I am low on energy, and I can't help but feel: what's the point? Sure, if I were more tactful, more charismatic, and more glib, I might both be able to explain the mistakes I see and provide social cover, both for myself and for the author I'm criticizing. But I'm not, not today. And besides, if the author was such that they produced a pretty sloppy piece, I don't think I'm going to change their mind.

My bargain with the EA machine

which of the categories are you putting me in?

I don't think this is an important question; it's not as if "tall people" and "short people" form distinct clusters. There is going to be a spectrum, and you would be somewhere in the middle. But using labels is still a convenient shorthand.

So the thing that worries me is that if someone is optimizing for something different, they might reward other people for doing the same thing. A case that has been on my mind recently is one where someone is a respected member of the community, but what they are doing is not optimal, and it would be awkward to point that out. But pointing it out would still be necessary, even if it loses one brownie points socially.

Overall, I don't really read minds, and I don't know what you would or wouldn't do.

My bargain with the EA machine

EA should accept/reward people in proportion to (or rather, in a monotone increasing fashion of) how much good they do.

I think this would work if one actually did it, but not if impact is distributed with long tails (e.g., a power law) and people take offense at being accepted only a little.
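To illustrate the long-tails point, here is a minimal sketch; the Pareto shape parameter and population size are arbitrary assumptions of mine, not anything from the post. With a classic "80/20"-ish power law, acceptance proportional to impact leaves the median person with a vanishingly small share.

```python
import numpy as np

# Minimal sketch: if impact follows a power law (Pareto), then acceptance
# proportional to impact gives almost everyone close to nothing.
# alpha=1.16 (the "80/20" shape) and n=10,000 are arbitrary assumptions.
rng = np.random.default_rng(0)
impact = rng.pareto(1.16, size=10_000) + 1  # hypothetical impact per person

acceptance = impact / impact.sum()          # acceptance proportional to impact
top_1pct_share = np.sort(acceptance)[-100:].sum()
print(f"Share of total acceptance going to the top 1%: {top_1pct_share:.0%}")
print(f"Median person's share: {np.median(acceptance):.2e}")
```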

My bargain with the EA machine

One "classic internet essay" analyzing this phenomenon is Geeks, MOPs, and sociopaths in subculture evolution. A phrase commonly used in EA would be "keep EA weird". The point is that adding too many people like Eric would dillute EA, and make the social incentive gradients point to places we don't want them to point to.

I really enjoy socializing and working with other EAs, more so than with any other community I’ve found. The career outcomes that are all the way up (and pretty far to the right) are ones where I do cool work at a longtermist office space, hanging out with the awesome people there during lunch and after work

My understanding is that this is a common desire. I'm not sure what proportion of hardcore EAs vs. chill people would be optimal, and I could imagine it being 100% hardcore EAs.

Update on the Simon Institute: Year One

Circling back: this report hits almost none of the notes in lukeprog's "Features that make a report especially helpful to me", which might be one reason why I got the impression that the authors were speaking a different dialect.

A primer & some reflections on recent CSER work (EAB talk)

I get the impression that some parts of CSER are fairly valuable, whereas others are essentially dead weight. E.g., if I imagine pairwise-ranking all the work referenced in your presentation, my impression is that value would range across 2+ orders of magnitude between the most valuable and the least valuable items.

Is that also your impression? Even if not, how possible is it to fund some parts of CSER, but not others?

Update on the Simon Institute: Year One

Specific nitpicks

These were written as I was reading the post, so some of them are addressed by points brought up later. They are also a bit too sardonic.

  • "For all key risks, humanity’s path to existential security cannot be brought about by the actions of any single country, making more effective international cooperation essential"
    • Is this actually true? Not sure. For instance, if the US, China and maybe the UK decide to not do anything too crazy like getting into an AI arms race, that seems like it might leave us in a decent position, AI policy-wise.
  • "This combination of activities has granted SI mandates from UN institutions, as well as the Swiss government, to directly work on policy processes relevant to existential risk reduction"
    • Mandates but no money? "Mandates" sounds good, but not sure what it means.
    • [note: my initial impression was wrong; for instance, later you say] "We have signed a grant agreement of CHF 50,000 from the Swiss Government’s International Public Law Division for a project on existential risk governance led by SI’s board member Igor Linkov. Our collaboration with the Geneva Science-Policy Interface has yielded another grant agreement of CHF 30,000 for work on the tabletop exercise on pandemic preparedness." Nice. But is this the same "mandate"?
  • "Science-policy interface" is a really neat construction, but I wouldn't call Global Priorities research a "science".
  • "This is why SI could fill a gap in an information-rich but time-scarce environment". More plainly expressed sentences could also fill a gap in verbiage-rich but transparency-scarce environments. Ok, this is mean. But, for instance, I think I could get a better idea of what you are doing if you worded this as: "We try to build relationships with and make recommendations to really busy bureaucrats who are nonetheless a bit altruistically inclined. Eventually, we could position ourselves so as to build international institutions for existential risk reduction, like a new treaty or the International Atomic Energy Agency". But is that what you are doing? I sort of get the vague feeling that I don't know how you are spending most of your hours.
  • "SI is embedded in one of the few international policy hubs - Geneva - and adapts its strategy in response to arising opportunities". The word "embedded" rubs me the wrong way.
  • "2021 has marked a breakthrough in international policymakers’ responsiveness to longtermist concerns." => Policymakers are now more worried about weird and unexpected things because COVID provides a salient example.
  • "Leveraging this window of opportunity, SI works with them to reduce existential risks and further long-term governance via the impact pathways of the international system.". I am really not sure what proportion of impact of this leveraging SI is claiming, or what proportion should be allocated to it.
  • "Began to work on the UN’s Our Common Agenda processes". Directly??? In what capacity?
  • "co-develop a workshop series": Would the series have happened in your absence? What % of the value was due to your participation?
  • "Some international organizations, like the UN Office for Disaster Risk Reduction (UNDRR) and the International Science Council, also explicitly stated an interest in developing a better understanding of GCRs as a result". My first thought on reading this is "nice!". But "stated an interest in developing a better understanding" is actually extremely non-committal. What do you think it will/could cash out to?
  • "Without SI, FHI Bio would have been less likely to get an in-depth look behind the scenes of international diplomacy and unlikely to connect to key stakeholders in a setting where personal discussion allowed for the divulging of insider information and personal opinion". How much less likely?
  • "Beyond taking a significant amount of time out of their busy days, many diplomats would also have needed security clearance to participate.". Why the security clearance?

Update on the Simon Institute: Year One

Epistemic status: not too sure. See account description.

Overall thoughts

  • The first few sections of this post came across to me as a bit "fake-ish", and really put me off as a reader. Some sardonic notes on that below.
  • Depending on the details, the work on the UN's "Our Common Agenda" (OCA) and your work with the Swiss government seem fairly to very exciting! I'd be curious to get a few more details on them.

What parts I'm most excited about, and how I would have structured this post

  1. Sections 2.5 ("We can engage and provide value to both our research and policy audiences") and 2.7 ("Our current research capacity is limited but our thinking valued") are fairly strong. On 2.7, you could also commission or suggest the kinds of research you are most excited about, even if you don't have the ability to do it yourself.

  2. I also thought that these paragraphs were very strong:

Building on these reference points, the UN’s Our Common Agenda presents a rare opportunity to directly work on future generations and existential risks in national and international political agendas. This is because they are explicitly introduced in the report and 17 out of its 69 proposals are directly relevant to longtermist goals

Our biggest upcoming opportunity is that we have been solicited by the UN SG Office to contribute to the development of Our Common Agenda. We evaluate it to be a unique opportunity for impact as it has introduced the concepts of existential risk and intergenerational global public goods to the international policy discourse, as well as put a strong focus on future generations in a way that is separate from youth engagement.

  3. The fact that your funding appeal is directly connected to your work on the UN's "Our common agenda" specifically makes it stronger.

I would have put them at the beginning, rather than partway through the post. I would also have been more explicit at the beginning, rather than talking about a "mandate" in the abstract. What exactly does the mandate involve?

I would also have liked to see more emphasis on your work as experimentation. This writeup strikes me as the kind of writeup that a more mature organization might produce. But your organization is much younger and has just one to three FTEs, so I would have liked to see an emphasis on your capacity for experimentation, ability to pivot, and general competence.

Key points

Some key points I would look at if I had unlimited time:

  • Does the UN actually matter? I come away with the impression that you believe that this is the case. But my prior on the UN being decidedly mediocre and not that relevant is quite strong, and it is not clear to me how much I should defer to your expert understanding.
    • For instance, weapon reduction treaties such as New START were not conceived as part of a UN framework. Similarly, the fact that the Biological Weapons Convention has so low a budget makes me update very negatively on the UN's competence.
    • One factor contributing to this is the US' reluctance to take part in the UN's structures, preferring instead to do its own thing.
  • Because you have mostly been doing collaborations, I am really unsure about the Simon Institute's (SI) Shapley and counterfactual value.

Overall, these seem hard to resolve. But this grant seems like it still makes a lot of sense from a hits-based perspective.

Thoughts on fundraising

Our main fundraising targeted the EA Funds, resulting in six proposals for a total of ca. $ 600,000. Of these proposals, only $ 46,000 was granted. Another application to the Survival and Flourishing Fund for network-building activities was rejected without comment. Founders Pledge investigated our plans and concluded SI to be promising but with an insufficient track record for a recommendation.

The main reason for worry seems to have been our limited track record in downside-risk-conscious policy engagement, given SI’s ambitious, public-facing plans. Thus, the EA Infrastructure Fund decided to fund a minimal version of SI to first gather more data for a more telling evaluation after March 2022.

So my understanding here is that EA Funds didn't really trust your judgment, and wanted to see a track record before funding you further. I'm not sure to what extent this post achieves that goal with regard to the good-judgment part.

Furthermore, as SI works at the interface between existential risk research and related policymaking, we have to frame our work differently to various sides and communicate these choices transparently to all sides. Some framings appear insufficiently technical to existential risk researchers while others appear too theoretical to our policy audiences. We are planning to conduct message testing studies to improve cross-context coherence, beyond local salience.

Yeah, this is tricky. For instance, I'm not sure to what extent you are thinking clearly but expressing your thoughts a bit too verbosely or in essentially a different dialect, or directly thinking unclearly.

Parting thoughts

I've put some specific nitpicks in a separate comment below. Perhaps surprisingly, many of these nitpicks relate to the way you word things rather than to your actual content. But I think that the formal way you word things also leads you to leave out a fair amount of detail.

This is particularly the case around the UN's "Our Common Agenda". I can imagine situations that would make it worth it for you to be fully funded, but also situations where it's not all that exciting. A year ago I spent some time trying to figure out how the Sustainable Development Goals were decided on, and I really couldn't tell. I also couldn't tell by how much the SDGs ended up affecting, for instance, US aid funding. I think you probably have a bunch of implicit models of how this could be valuable, and this post would have been a good time to write them up.

In an ideal world, you could also have an estimate of how much better $4M to you would make the OCA, and of how much funding/resources the OCA would influence, to get a reasonable estimate of impact. For instance, here is a guesstimate model that assumes that the only pathway to funding is Simon Institute -> OCA -> SDG -> Foreign aid. I'm guessing that this is wrong, but I also don't have a good sense of what the impact of the OCA is mediated through. So I'd be curious to get a less fake estimate using your own models.
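For concreteness, below is a minimal Monte Carlo sketch of what such a pathway model could look like. Every input range is a placeholder I am inventing for illustration; none of these numbers come from the post or from the linked guesstimate model.

```python
import numpy as np

# Hedged sketch of a Fermi estimate along the single assumed pathway
#   Simon Institute -> OCA -> SDG -> Foreign aid.
# All input ranges below are made-up placeholders, not real estimates.
rng = np.random.default_rng(0)
n = 100_000

def lognormal_from_90ci(low, high, size):
    """Lognormal samples whose ~90% interval is (low, high)."""
    mu = (np.log(low) + np.log(high)) / 2
    sigma = (np.log(high) - np.log(low)) / (2 * 1.645)
    return rng.lognormal(mu, sigma, size)

p_si_shifts_oca = lognormal_from_90ci(0.01, 0.2, n)    # chance SI meaningfully shifts the OCA
oca_shifts_sdgs = lognormal_from_90ci(0.001, 0.05, n)  # fraction of SDG content the OCA moves
sdg_share_of_aid = lognormal_from_90ci(0.05, 0.3, n)   # fraction of aid allocation tracking the SDGs
aid_per_year = lognormal_from_90ci(150e9, 200e9, n)    # global foreign aid, $/year

influenced = p_si_shifts_oca * oca_shifts_sdgs * sdg_share_of_aid * aid_per_year
print(f"Median $/year influenced: {np.median(influenced):,.0f}")
print(f"~90% interval: {np.percentile(influenced, 5):,.0f} to {np.percentile(influenced, 95):,.0f}")
```

The point of such a model is not the output number but making explicit which multiplicative factors one believes in; swapping in your own ranges would give the "less fake estimate" I'm asking for.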

Overall, I'm left hoping that EA would have its own Kissinger to be able to evaluate whether this kind of thing is really promising.
