NegativeNuno

458 karma · Joined Dec 2021

Bio

This is a novelty account created by Nuño Sempere for the purpose of providing perhaps-flawed frank criticism.

The problems it's intended to mitigate are that when providing criticism:

  • writing criticism that is on-point and yet emotionally thoughtful is doubly hard;
  • leaving negative criticism can give the impression that I think a project is not worth doing, whereas I think criticism is most valuable precisely for projects which are in fact worth doing;
  • conversely, it's sometimes hard to communicate politely that a project is ~essentially worthless and that the author should probably be doing something else (e.g., "your theory of impact sucks"), but someone has to do it;
  • mechanisms to signal good intent can become trite and fake to read and to produce, like shit sandwiches or the phrase "fairly good";
  • mechanisms to signal good intent vary from culture to culture, which can result in miscommunication;
  • the above points become trickier in the presence of uncertainty about whether the criticism is on-point or stems from confusion.

And yet, criticism seems worth it in expectation, particularly if it can change the actions of its recipients or of its readers. For this reason, I thought it would be worthwhile to try to signal potentially flawed and upsetting criticism as such, and maybe develop a set of standard disclaimers around it.

Past examples of the set of patterns which I intend to manifest with this account include this comment and the examples in this post.

An example interaction with individuals might look like:

  • NN: Hey, do you want to hear some negative feedback under Crocker's rules?
  • A: No, thanks.

or like:

  • NN: Hey, do you want to hear some negative feedback under Crocker's rules?
  • A: Sure, why not.
  • NN: [negative feedback]

or like: 

  • Someone else: "Hey, you should fund X!"
  • NN: Funding X sounds like a terrible idea.

I currently consider organizations, particularly large ones, to be fair game.

I am open to negative feedback requests, either here or through my main account.

Comments

It is 2AM in my timezone, and come morning I may regret writing this. By way of introduction, let me say that I dispositionally skew towards the negative, and yet I do think that OP is amongst the best if not the best foundation in its weight class. So this comment generally doesn't compare OP against the rest but against the ideal.

One way you could allow for somewhat democratic participation is through futarchy, i.e., using prediction markets for decision-making. This isn't vulnerable to brigading because it requires putting in proportionally more money the more influence you want to have, though this also makes it less democratic.

More realistically, some proposals in that broad direction which I think could actually be implementable could be:

  • allowing people to bet against particular OpenPhilanthropy grants producing successful outcomes.
  • allowing people to bet against OP's strategic decisions (e.g., against worldview diversification).
  • I'd love to see bets between OP and other organizations about whose funding is more effective, e.g., a bet between you and Jaan Tallinn on whose approach is better, where the winner gets some large amount (e.g., $200M) towards their philanthropic approach.

I'm particularly attracted to bets which have the shape of "you will change your mind about this in the future".

At various points in the past, I think I would have personally appreciated having the option to bet...

  • against hypothetically continued funding towards Just Impact beating GiveDirectly.
  • against your $8M towards INFER having been efficiently spent.
  • that the marginal $5M given out as grants in an ACX grants-type process would be better than your marginal $5M to forecasting (you are giving more than $5M/year to forecasting, cf. your $8M grant to INFER).
  • against worldview diversification being evaluated positively by a neutral third party.
  • for shorter or longer AI timelines.
  • on more abstract topics, e.g., "your forecasting grantmaking is understaffed/underrated", or "your forecasting grantmaking is too institutional", "OP finds it too hard to exercise trust and would obtain better results by having more grant officers".
  • at the odds implied by some of your public forecasts.
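To make the last bullet concrete, here is a minimal sketch (hypothetical numbers, not from any actual OP forecast) of what betting "at the odds implied by a public forecast" means: if a forecaster assigns probability p to an event, a fair bet at those odds is one where each side's expected value is zero under p.

```python
# Hypothetical sketch: stake sizes for a bet at the odds implied by a
# published forecast. If the forecaster assigns probability p to an event,
# the challenger (betting against the event) stakes (1 - p) * pot and the
# forecaster stakes p * pot, which makes the bet zero-EV at p.
def fair_stakes(p: float, pot: float) -> tuple[float, float]:
    """Return (challenger_stake, forecaster_stake) for a zero-EV bet at p."""
    if not 0 < p < 1:
        raise ValueError("p must be strictly between 0 and 1")
    return (1 - p) * pot, p * pot

# Example: a public forecast of 80% for a grant succeeding, $1,000 pot.
challenger_stake, forecaster_stake = fair_stakes(0.8, 1_000)

# Sanity check: under p = 0.8 the challenger's expected value is zero.
# They win forecaster_stake with probability 0.2 and lose their own
# stake with probability 0.8.
ev_challenger = 0.2 * forecaster_stake - 0.8 * challenger_stake
```

The point is just that a published probability pins down odds at which the forecaster should be indifferent to betting, which is what makes "bet at your own stated odds" a meaningful challenge.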

Note that individual people inside OP may agree with some of the above propositions, even though "OP as a whole" may act as if it believes the opposite.

I have not myself come up with a non-geographic strategy that doesn't seem highly vulnerable to corrupt intent or vote brigading.

You could also delegate research into a strategy for democratic participation to other researchers, rather than doing it yourself; e.g., Robin Hanson's time is probably buyable with money. It would really surprise me if he (or other researchers) couldn't come up with a few futarchy-adjacent ideas that were at least worth considering.

More broadly, I think that there is a spectrum between:

  • OpenPhilanthropy makes all decisions democratically and we all sing Kumbaya
  • Influencing OP decisions requires people to move to the Bay Area and become chummy friends with its grants officers. Karnofsky writes tens of thousands of words in blogposts but does not answer comments. At the same time, OP ultimately makes decisions which steer the EA community and reverberate across many lives.

Both extremes are caricatures, but we are closer to the second. Contrast with the Survival and Flourishing Fund, which has a number of regrantors with pots which grow proportionally to their estimated success.

I also think that the comparison with FTX's Future Fund is instructive, because it was willing to trust a larger number of regrantors much earlier, and I think it was able to produce a number of more experimental, ambitious and innovative grants as a result. For what it's worth, my impression is that Beckstead, MacAskill and the others on the Future Fund team did a great job here which was pretty much independent of FTX's fraud.

So anyways, I've brought up some mechanisms here:

  • Allowing people to bet against the success of your grants
  • Allowing people to bet against the success of your strategic decisions
  • Allowing people to bet that they are better at giving out grants than OP is
    • Or generally trying out systems other than grants officers.
  • Using a wide number of regrantors rather than a small number of grant officers.

which perhaps get some of the same benefits that democratization could produce for decision-making, namely information aggregation from a wider pool, and distribution of trust.

My sense is that OP could take these and other steps, and they could have some value of information, while perhaps not being all that risky if tried out at a small scale. It's unclear though whether the managerial effort would be worth it.

PS: I liked the idea behind the Cause Exploration Prizes, though I think they failed to produce a mechanism for addressing the above points, since the cause proposals were limited to Global Health & Wellbeing and the worldview questions were too specific, whereas I think that the most important decisions are at the strategic level.

I notice that this comment was pretty controversial (16 people voted, karma of 3). Here is how I would rewrite this comment to better fit in the EA forum:

Yes, it is true that men are more likely to be victims of non-sexual violence. However, note that most men are killed by other men, whereas a large share of the women who are killed (50% according to the UN) are killed by their partners or family. (1) (2). So "while men are more likely than women to be victims of homicide, they are even more likely to be the perpetrators."

I think that recognizing gender disparities is important for understanding what kind of violence occurs, and that this is key to ending it because [and here go a few specific pathways]. For example, if we look at the power dynamics between aggressors and victims, we can [do some example specific thing differently.] [1]

I think that for me the thing that was most missing is a pathway between noticing gender disparities and taking a different action, rather than caring about it in the abstract. I haven't really looked, but this might also be what's going on in some of your other unpopular comments.


  1. For a toy example of something you might say: "If we look at the power dynamics between aggressors and victims, we might notice that a specific cluster of violence is husbands beating up or murdering their wives, and we can do things like putting up billboards encouraging women to leave their abusive husbands. This seems like it would have a different cost-effectiveness profile than other kinds of murders, and I personally think (but can't prove/and here is a study that suggests that this is the case) that it might be pretty cost-effective." ↩︎

Here is a model that I want to share with you:

It's worded in terms of starting projects and receiving funding because that's been on my mind, but you could translate it to other domains. There should also be a third dimension, which is "well, but how good are you, really?"

I claim that knowing where you are on that grid is important, because it will lead you to better actions (in the case of "correctly depressed", it might be "attain mastery of a skill" so that you move one level up, or "being ok with being humble" [1]).

I don't know what you are claiming with regards to that grid.


  1. E.g., suppose that "project" in this grid is "starting your own organization". In many respects you'll want to be "correctly depressed" with respect to that. Maybe not the best name. ↩︎

The more I reread your post, the more I feel our differences might be mostly nuances, but I think your contrarian / playing-to-an-audience-of-cynics tone (which did amuse me) makes them seem starker?

I think that I disagree with you with regards to how people value other people, and how people should expect other people to value them, and less about where one should derive one's own self-worth from [1]. As such, I do think that we have a disagreement.


I am not sure whether you're saying "treating people better / worse depending on their success is good"; particularly in the paragraphs about success and worth. Or that you think that's just an immutable fact of life (which I disagree with). What's your take?

I think it is good in the case of, for instance, your professional life. Funders are likely to fund projects differentially for people who have previous successes under their belt. People might fire other people if they haven't been doing well at their jobs.

In the case of personal life, it's more ambiguous. As we both agree, it causes sorrow. However, I think it's hard to change, because there are traits that make someone a good friend, romantic partner, or colleague, and I think that it's a bit futile to go against that. I don't think it's literally impossible, but there are time tradeoffs, and developing existential chill is one of many things one could do with one's time.

I've also had bad experiences with situations which gave the outer impression of being high trust/high acceptance, but weren't in the end when that acceptance was pushed a bit.

I think that sometimes you can get away with a "judge once" regime, where once you are in someone's circle of care they care about you unconditionally, but I also think that people have limited spots.


How do you see "having given my honest best shot" as distinct from my point of the value in trying your hardest? I'm suspicious we'd find them mostly the same thing if we looked into it...

I'm not sure what your point of trying your hardest is, maybe:

I can donate effectively as much as I can, and work as hard as I can on what I think matters, but ultimately the odds are stacked against me, like they are for everyone

I think a difference might be that I derive some self-worth from staying true to my ideals, or "staying true to inner self", but I read you as saying that you derive self-worth from some intrinsic value. I read that paragraph as saying that "you can work as hard as you can", but not making a statement related to that as self-worth.

It's possible I'm missing what the point was.


I think that we have different things:

  • How you value yourself
  • How other people value you
  • How you value other people

Your other points/questions are more about how you value yourself ("self-worth"?), but I am mostly talking about how other people value you ("external worth"?), and I neither agree nor disagree on the points about self-worth.

Muddying the above, I do think that how other people perceive one is usually a pretty important part of one's self-worth, and while I think this might be changeable with effort, I'm not sure to what extent that is a good use of one's time.

Maybe I should have written

I've found more value in deriving worth (part of my internal self-worth) from "having given my honest best shot" and taking actions that will make me more formidable, like mastery over skills (which increases both self-worth and external worth).

I don't think that mastery over skills is incompatible with notions of internal self-worth.

I'm confident that feeling like one's worth doesn't depend on successful mastery of skills is itself a pretty good foundation for mastery of skills.

I would disagree over external worth. I think that people with more mastery over more skills are more valuable to those around them.


I don't mind if people think I'm better / worse at something and 'measure me' in that way; I don't mind if it presents fewer opportunities. But I take issue when anyone...:

  • uses that measurement to update on someone's value as a person, and treat them differently because of it, or;
  • over-updates on someone's ability; the worst of which looks like deference or writing someone off.

The "I don't mind if it presents fewer opportunities" vs "[I do mind if they] treat them differently because of it" seem incompatible.

Here is a scenario: We have a few conversations. These conversations aren't enough for me to be very sure, but I come away with the impression that you are a boring conversationalist. In the future, I tend to seek other conversations. Is this something you'd object to?

What if you change ("conversations", "boring conversationalist") to ("dancing sessions", "clumsy dancer"), ("trial tasks", "unproductive contractor"), ("dates", "probably not a potential relationship"), or ("chess matches", "vastly superior/inferior chess player")? I'm unsure what you would say here, or why.

writing someone off.

I actually do really do this: I think that writing people off quickly is necessary in contexts like dates, job opportunities with many potential applicants, bloggers to read, etc. It's possible you have some more nuanced meaning here, though.


  1. I reserve my right to take issue with that at some future point. Also, I liked the " I grace you with more sappy reasons why you're wrong, and sign you up to my life-coaching platform" sentence. ↩︎

Content warning: If you stare too much into the void, the void stares back at you.

So the title of my blog is "Measure is unceasing", partly as a reminder to myself that some of the ideas presented in this blogpost are dead wrong. In short, I think that people are judging each other all the time. In the past, pretending or wanting to believe that this isn't the case has provided me with temporary relief but ultimately led me down a path of sorrow.

I particularly take issue with:

But you'll still suffer a lot if you think that the worth others ascribe to you is pegged to your success

The problem with that form of reasoning is that the worth others ascribe to you is in fact pegged to your success. Other people will hold you in higher regard and esteem if you are in fact successful. You will get more grants, jobs or career opportunities, your ability to intervene in the world will be greater, and, perhaps most importantly for your wellbeing, you will attain more romantic success.

To be clear, I agree with your diagnosis that taking this fact to be true is emotionally hard. But I disagree that pretending that it isn't the case is a good solution. I've personally found value in learning to accept it instead and taking action to make reality come closer to what I desire it to be.

Or, in other words, I agree that having psychological safety is good. But I think this is the case for true psychological safety, which could come from a circle of close friends or family who are in fact willing to support you in hard times. So psychological safety > no psychological safety >> a veneer of psychological safety that fails when it is tested.

they are worthy as they are

So I think you can defend versions of this, but you end up with a notion of "worth" that is pretty essentialist and isn't really correlated with much of the stuff you care about (career success, influence over the world, romantic success, etc.). As such, I haven't found it valuable. I've found more value in deriving worth from "having given my honest best shot" and from taking actions that will make me more formidable, like mastery over skills.

cosmic insignificance

I don't find the yardstick of the universe useful, but I like the humans vs. nature framing.

I recently read a post which:

  • I thought was treating the reader like an idiot
  • I thought was below-par in terms of addressing the considerations of the topic it broached
  • I would nonetheless expect to be influential, because [censored]

Normally, I would just ask if they wanted to get a comment from this account. Or just downvote it and explain my reasons for doing so. Or just tear it apart. But today, I am low on energy, and I can't help but feel: what's the point? Sure, if I was more tactful, more charismatic, and glibber, I might both be able to explain the mistakes I see and provide social cover both for myself and for the author I'm criticizing. But I'm not, not today. And besides, if the author was such that they produced a pretty sloppy piece, I don't think I'm going to change their mind.

which of the categories are you putting me in?

I don't think this is an important question; it's not like "tall people" and "short people" are distinct clusters. There is going to be a spectrum, and you would be somewhere in the middle. But labels are still a convenient shorthand.

So the thing that worries me is that if someone is optimizing for something different, they might reward other people for doing the same thing. The case that has been on my mind recently is where someone is a respected member of the community, but what they are doing is not optimal, and it would be awkward to point that out. But still necessary, even if it loses one brownie points socially.

Overall, I don't really read minds, and I don't know what you would or wouldn't do.

EA should accept/reward people in proportion to (or rather, in a monotone increasing fashion of) how much good they do.

I think this would work if one actually did it, but not if impact is distributed with long tails (e.g., a power law) and people take offense at being accepted very little.
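As an aside, the long-tails point can be made quantitative. A minimal sketch (with made-up Pareto-distributed "impact" values, purely illustrative): if impact follows a power law, the top 1% accounts for a large share of the total, so rewarding people strictly in proportion to impact leaves the median person with almost nothing.

```python
# Illustrative sketch: under a Pareto (power-law) distribution of "impact",
# proportional acceptance concentrates nearly all reward in the tail.
import random

random.seed(0)  # deterministic for reproducibility

# Pareto with shape alpha = 1.2: heavy right tail, finite mean.
alpha = 1.2
impacts = sorted(random.paretovariate(alpha) for _ in range(10_000))

total = sum(impacts)
top_1_percent_share = sum(impacts[-100:]) / total  # share held by top 100 of 10,000
median_share = impacts[len(impacts) // 2] / total  # share of the median person

print(f"Top 1% share of total impact: {top_1_percent_share:.0%}")
print(f"Median person's share:        {median_share:.6%}")
```

Under these assumed parameters the top 1% holds a large fraction of total impact while the median person holds a vanishingly small one, which is the sense in which proportional acceptance would feel like "being accepted very little" to most people.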
