All of NegativeNuno's Comments + Replies

It is 2AM in my timezone, and come morning I may regret writing this. By way of introduction, let me say that I dispositionally skew towards the negative, and yet I do think that OP is among the best, if not the best, foundation in its weight class. So this comment generally doesn't compare OP against the rest, but against the ideal.

One way which you could allow for somewhat democratic participation is through futarchy, i.e., using prediction markets for decision-making. This isn't vulnerable to brigading because it requires putting proportionally more mone... (read more)
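As a sketch of why brigading is expensive under this kind of scheme (assuming an LMSR-style automated market maker, which futarchy proposals commonly use; the function below is my illustration, not something from the comment), the cost of pushing a binary market's probability up grows without bound as the price approaches 1, so manipulators pay progressively more while informed traders profit by pushing back:

```python
import math

def lmsr_move_cost(p0: float, p1: float, b: float) -> float:
    """Cost (in market currency) for a trader to push an LMSR binary
    market's probability from p0 up to p1 by buying 'yes' shares.
    b is the liquidity parameter; higher b means a deeper market."""
    return b * math.log((1 - p0) / (1 - p1))

# Moving a deep market from 50% to 60% is relatively cheap...
print(round(lmsr_move_cost(0.5, 0.6, b=1000), 1))   # ≈ 223.1
# ...but each further push costs disproportionately more.
print(round(lmsr_move_cost(0.9, 0.99, b=1000), 1))  # ≈ 2302.6
```

This is the "proportionally more money" property: a brigade that wants to distort the price badly has to subsidize everyone who later corrects it.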

4
Dawn Drescher
1y
I’d like to highlight this paragraph some more: We’re all interested in mostly agent-neutral goals, so these should be much more aligned by default than agent-relative goals such as profit. That’s a huge advantage that we’re not using sufficiently (I think). Impact markets such as ours or that of the SFF make use of the alignment with regrantors and that between funders (through the S-Process). The upshot is that there are plenty of mechanisms that promise to solve problems for funders while (almost as a side-effect) democratizing funding. With impact markets in particular, we want to enable funders to find more funding opportunities and fund more projects that would otherwise be too small for them to review. On the flip side, that means that a much more diverse set of fledgling projects gets funded. It’s a win-win.

Strongly disagree about betting and prediction markets being useful for this; strongly agree about there being a spectrum here, where at different points the question "how do we decide who's an EA" is less critical and can be experimented with.

One point on the spectrum could be, for example, that the organisation is mostly democratically run but the board still has veto power (over all decisions, or ones above some sum of money, or something).

5
MichaelStJules
1y
Why would you bet against worldview diversification? All in on one worldview? Or something more specific about the way Open Phil does it?

I notice that this comment was pretty controversial (16 people voted, karma of 3). Here is how I would rewrite this comment to better fit in the EA forum:

Yes, it is true that men are more likely to be victims of non-sexual violence. However, note that most men are killed by other men, whereas a large number of the women who are killed (50% according to the UN) are killed by their partners or family. (1) (2). So "while men are more likely than women to be victims of homicide, they are even more likely to be the perpetrators."

I think that recognizing

... (read more)

Here is a model that I want to share with you:

It's worded in terms of starting projects and receiving funding because that's been on my mind, but you could translate it to other domains. There should also be a third dimension which is "well, but how good are you, really".

I claim that knowing where you are on that grid is important, because it will lead you to better actions (in the case of "correctly depressed", it might be "attain mastery of a skill" so that you move one level up, or "being ok with being humble" [1]).

I don't know what you are claiming with r... (read more)

3
Linch
2y
I appreciate this chart! I think one thing that surprises me about a lot of these conversations is that people come from the presumption that intuitions/beliefs carry zero information and will always carry zero information, whereas I prefer to approach it from the angle of intuitions having nonzero information and it's valuable for us to align them to be more accurate. 

The more I reread your post, the more I feel our differences might be more nuances, but I think your contrarian / playing to an audience of cynics tone (which did amuse me) makes them seem starker?

I think that I disagree with you with regards to how people value other people, and how people should expect other people to value them, and less about where one should derive one's own self-worth from [1]. As such, I do think that we have a disagreement.


I am not sure whether you're saying "treating people better / worse depending on their success is good";

... (read more)

Here is a model that I want to share with you:

It's worded in terms of starting projects and receiving funding because that's been on my mind, but you could translate it to other domains. There should also be a third dimension which is "well, but how good are you, really".

I claim that knowing where you are on that grid is important, because it will lead you to better actions (in the case of "correctly depressed", it might be "attain mastery of a skill" so that you move one level up, or "being ok with being humble" [1]).

I don't know what you are claiming with r... (read more)

Content warning: If you stare too much into the void, the void stares back at you.











So the title of my blog is Measure is unceasing partly as a reminder to myself that some of the ideas which are presented in this blogpost are dead wrong. In short, I think that people are judging each other all the time. In the past, pretending or wanting to believe that this isn't the case has provided me with temporary relief but ultimately led to a path of sorrow.

I particularly take issue with:

But you'll still suffer a lot if you think that the worth others a

... (read more)
6
Denise_Melchin
2y
Thank you for writing this Nuno. Posts around self-worth, not feeling "smart enough" and related topics on the EA Forum don't resonate with me despite having had some superficially similar experiences in EA to the people who are struggling. My best guess is this is because this is true for me: I am happily married (to someone I found in the EA Community in 2014) and have a strong relationship with my parents. That said, I do think there is something wrong with the EA Community when people trying to do as much good as they can do not feel appreciated! But it's important to narrow down what exactly it is that people should be able to expect from the Community (and where it needs to change) and what not.
8
howdoyousay?
2y
The more I reread your post, the more I feel our differences might be more nuances, but I think your contrarian / playing to an audience of cynics tone (which did amuse me) makes them seem starker? Before I ~~grace you with more sappy reasons why you're wrong, and sign you up to my life-coaching platform[1]~~ counter-argue, I want to ask a few things...

* I am not sure whether you're saying "treating people better / worse depending on their success is good"; particularly in the paragraphs about success and worth. Or that you think that's just an immutable fact of life (which I disagree with). What's your take?
* How do you see "having given my honest best shot" as distinct from my point of the value in trying your hardest? I'm suspicious we'd find them mostly the same thing if we looked into it...
* Do you think that mastery over skills (as a tool to achieve goals) is incompatible with having an intrinsic sense of self-worth? I would argue that they're pretty compatible. Moreover, for people feeling terrible and sh*t-talking themselves non-stop, which makes them think badly, I'm confident that feeling like their worth doesn't depend on successful mastery of skills is itself a pretty good foundation for mastery of skills.

Honestly I'm quite surprised by you saying you haven't found 'essentialist' self-worth, or what I'd call intrinsic self-worth, very valuable. I'd be down to understand this much better. For my part...:

* I abandoned the success-oriented self-worth because of a) the hedonic treadmill, and b) the practical benefits: believing you are good enough is a much better foundation for doing well in life[2], I've found, and c) reading David Foster Wallace[3].
* I don't mind if people think I'm better / worse at something and 'measure me' in that way; I don't mind if it presents fewer opportunities. But I take issue when anyone...:
  * uses that measurement to update on someone's value as a person, and treats them differently because of it, or;
  * over-upd

I recently read a post which:

  • I thought was treating the reader like an idiot
  • I thought was below-par in terms of addressing the considerations of the topic it broached
  • I would nonetheless expect to be influential, because [censored]

Normally, I would just ask if they wanted to get a comment from this account. Or just downvote it and explain my reasons for doing so. Or just tear it apart. But today, I am low on energy, and I can't help but feel: What's the point? Sure, if I was more tactful, more charismatic, and glibber, I might both be able to explain ... (read more)

4
MaxRa
2y
Possible solution: I imagine some EAs would be happy to turn a rambly voice message about your complaints into a tactful comment now and then.

which of the categories are you putting me in?

I don't think this is an important question; it's not like "tall people" and "short people" are distinct clusters. There is going to be a spectrum, and you would be somewhere in the middle. But using labels is still a convenient shorthand.

So the thing that worries me is that if someone is optimizing for something different, they might reward other people for doing the same thing. The case has been on my mind recently where someone is a respected member of the community, but what they are doing is not optima... (read more)

EA should accept/reward people in proportion to (or rather, in a monotone increasing fashion of) how much good they do.

I think this would work if one actually did it, but not if impact is distributed with long tails (e.g., power law) and people take offense to being accepted very little.
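As a toy illustration of the long-tails point (my own sketch, assuming impact follows a Pareto distribution; none of the numbers come from the comment), rewarding people strictly in proportion to impact leaves the median person with a vanishingly small slice:

```python
import random

random.seed(0)
alpha, x_min = 1.16, 1.0  # alpha ≈ 1.16 gives the classic 80/20 shape

# Draw 10,000 "impact" values via the Pareto inverse CDF.
impacts = [x_min / (1 - random.random()) ** (1 / alpha) for _ in range(10_000)]
impacts.sort(reverse=True)

total = sum(impacts)
top_1_percent_share = sum(impacts[:100]) / total  # share held by top 100 people
median_share = impacts[5000] / total              # median person's share

print(f"top 1% share of total impact: {top_1_percent_share:.0%}")
print(f"median person's share:        {median_share:.8f}")
```

Under proportional acceptance, the median person gets roughly a hundredth of a percent of the total recognition, which is exactly the "accepted very little" that people would take offense to.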

Thanks Matthijs

One "classic internet essay" analyzing this phenomenon is Geeks, MOPs, and sociopaths in subculture evolution. A phrase commonly used in EA would be "keep EA weird". The point is that adding too many people like Eric would dilute EA, and make the social incentive gradients point to places we don't want them to point to.

I really enjoy socializing and working with other EAs, more so than with any other community I’ve found. The career outcomes that are all the way up (and pretty far to the right) are ones where I do cool work at a longtermist office space

... (read more)

I guess I have two reactions. First, which of the categories are you putting me in? My guess is you want to label me as a mop, but "contribute as little as they reasonably can in exchange" seems an inaccurate description of someone who's strongly considering devoting their career to an EA cause; also I really enjoy talking about the weird "new things" that come up (like idk actually trade between universes during the long reflection).

My second thought is that while your story about social gradients is a plausible one, I have a more straightforward story ab... (read more)

4
Andrea_Miotti
2y
As usual, it would be great to see downvotes accompanied by reasons for downvoting, especially in the case of NegativeNuno's comments, since it's an account literally created to provide frank criticism with a clear disclaimer in its bio.

Circling back to this, this report hits almost none of the notes in lukeprog's Features that make a report especially helpful to me, which might be one reason why I got the impression that the authors were speaking a different dialect.

I get the impression that some parts of CSER are fairly valuable, whereas others are essentially dead weight. E.g., if I imagine ranking in pairs all the work referenced in your presentation, my impression is that value would range 2+ orders of magnitude between the most valuable and the least valuable.

Is that also your impression? Even if not, how possible is it to fund some parts of CSER, but not others?

Thanks Nuño! I don't think I've got well thought out views on relative importance or rankings of these work streams; I'm mostly focused on understanding scenarios in which my own work might be more or less impactful  (I also should note that if some lines of research mentioned here seem much more impactful, that may be more a result of me being more familiar with them, and being able to give a more detailed account of what the research is trying to get at / what threat models and policy goals it is connected to).

On your second question, as with other ... (read more)

Specific nitpicks

These were written as I was reading the post, so some of them are addressed by points brought up later. They are also a bit too sardonic.

  • "For all key risks, humanity’s path to existential security cannot be brought about by the actions of any single country, making more effective international cooperation essential"
    • Is this actually true? Not sure. For instance, if the US, China and maybe the UK decide to not do anything too crazy like getting into an AI arms race, that seems like it might leave us in a decent position, AI policy-wise.
... (read more)
6
NegativeNuno
2y
Circling back to this, this report hits almost none of the notes in lukeprog's Features that make a report especially helpful to me, which might be one reason why I got the impression that the authors were speaking a different dialect.

Epistemic status: not too sure. See account description.

Overall thoughts

  • The first few sections of this post came across to me as a bit "fake-ish", and really put me off as a reader. Some sardonic notes on that below.
  • Depending on the details, the work on the UN's "Our Common Agenda" (OCA) and your work with the Swiss government seems fairly to very exciting! I'd be curious to get a few more details on it

What parts I'm most excited about, and how I would have structured this post

  1. Sections 2.5. We can engage and provide value to both our research and

... (read more)
2
konrad
2y
Dear Nuño, thank you very much for the very reasonable critiques! I had intended to respond in depth but it's continuously not the best use of time. I hope you understand. Your effort has been thoroughly appreciated and continues to be integrated into our communications with the EA community. We have now secured around 2 years of funding and are ramping up our capacity. Until we can bridge the inferential gap more broadly, our blog offers insight into what we're up to. However, it is written for a UN audience and non-exhaustive, thus you may understandably remain on the fence.
9
NegativeNuno
2y
Specific nitpicks

These were written as I was reading the post, so some of them are addressed by points brought up later. They are also a bit too sardonic.

* "For all key risks, humanity’s path to existential security cannot be brought about by the actions of any single country, making more effective international cooperation essential"
  * Is this actually true? Not sure. For instance, if the US, China and maybe the UK decide to not do anything too crazy like getting into an AI arms race, that seems like it might leave us in a decent position, AI policy-wise.
* "This combination of activities has granted SI mandates from UN institutions, as well as the Swiss government, to directly work on policy processes relevant to existential risk reduction"
  * Mandates but no money? "Mandates" sounds good, but not sure what it means.
  * [note: my initial impression was wrong; for instance, later you say] "We have signed a grant agreement of CHF 50,000 from the Swiss Government’s International Public Law Division for a project on existential risk governance led by SI’s board member Igor Linkov. Our collaboration with the Geneva Science-Policy Interface has yielded another grant agreement of CHF 30,000 for work on the tabletop exercise on pandemic preparedness." Nice. But is this the same "mandate"?
* "Science-policy interface" is a really neat construction, but I wouldn't call Global Priorities research a "science"
* "This is why SI could fill a gap in an information-rich but time-scarce environment". More plainly expressed sentences could also fill a gap in verbiage-rich but transparency-scarce environments. Ok, this is mean. But, for instance, I think I could get a better idea of what you are doing if you word this as: "We try to build relationships with and make recommendations to really busy bureaucrats who are nonetheless a bit altruistically inclined. Eventually, we could position ourselves so as to build international institutions for existential risk reduction

Downvoted because of the clickbait title and the terrible formatting

7
Jack R
2y
I disagree with your reasons for downvoting the post, since I generally judge posts on their content, but I do appreciate your transparency here and found it interesting to see that you disliked a post for these reasons. I’m tempted to upvote your comment, though that feels weird since I disagree with it

One of the the EA forum norms that I like to see is people explaining why they downvoted a post/comment so I'm a bit annoyed that NegativeNuno's comment that supported this norm was fairly heavily downvoted (without explanation).

Not long enough for the formatting to matter in my opinion. We can, and should, encourage people to post some low-effort posts, as long as they're an original thought.

I know this isn't the central part of the post but I'm not sure the title is really clickbait.  It seems like an accurate headline to me? I understand clickbait to be "the intentional act of over-promising or otherwise misrepresenting — in a headline, on social media, in an image, or some combination — what you’re going to find when you read a story on the web."  Source.

A real clickbait title for this would be something like "The one secret fact FTX doesn't want you to know" or "Grantmakers hate him! One weird trick to make spending transparent" 

Personally, I don't have a problem with the title. It clearly states the central point of the post. 

Do you want an overly negative and perhaps inaccurate comment from this account, under Crocker's rules?

9
konrad
2y
Yes, happily!

Hey, do you want a comment from this account (see description) on this post?

5
Jan_Kulveit
2y
Hi, as the next post in the sequence is about 'failures', I think it would be more useful after that is published.
3
Gavin
2y
(Are you in the right headspace to receive information that could possibly hurt you?)

I think that your answer to that is something like: "...But introducing people to EA is hard, so it makes sense to start with effective giving. Also, there are some better and worse ways to do earning to give, like donating to donor lotteries, donating to small projects that are legible to you but not to larger funders yet, etc."

Which is fine. But it's still surprising that the strategies which EA chose when it was relatively young would still be the best strategies now, and I'm still skeptical to the extent that is the case in your post.

7
Michael Townsend
2y
RE your pet peeve: Obviously, it'll depend on the fit for earning to give/starting a new NGO, but this sounds plausible to me in general — I'm extremely excited about people creating new NGOs through Charity Entrepreneurship (among other ways of doing direct good in global health and development, animal welfare, etc.).

The above is consistent with the idea that most people who could do highly impactful direct work should do that instead of earning to give, even if they could have extremely lucrative careers. There’s no cap on how good something can be: despite how much good you can do through effective giving, it’s possible direct work remains even better. **But in any case, I think that in general, effective giving is not in tension with pursuing direct work**. And for many people, effective giving is the best opportunity to have an impact.

The highlighted part is why I ... (read more)

Thanks for your reply. 

It seems to me your key disagreement is with my view that promoting effective giving is compatible with (even complementary to) encouraging people to do direct work. Though, I’m not exactly sure I understand your precise claim — there are two I think you might be making, and I’ll respond to each. 

One way to interpret what you’re saying is that you think that promoting effective giving actually reduces the number of people doing direct work:

Because in fact, effective giving is in tension with pursuing direct work. 

As an... (read more)

5
NegativeNuno
2y
I think that your answer to that is something like: "...But introducing people to EA is hard, so it makes sense to start with effective giving. Also, there are some better and worse ways to do earning to give, like donating to donor lotteries, donating to small projects that are legible to you but not to larger funders yet, etc." Which is fine. But it's still surprising that the strategies which EA chose when it was relatively young would still be the best strategies now, and I'm still skeptical to the extent that is the case in your post.

Hey, I think that these are all good comments, and I wouldn't call you "a dud". I agree with your thoughts around possible cofounders, though a decrease in average participant quality was the most salient explanation to me.

It was a sunny winter night, and the utilitarians had gathered in their optimal lair. At the time, they hadn't yet taken over the world, but their holdings were vast, and even vaster in expectation, because they were sure to attract the right kind of multi-billionaire in the future. So vast were their holdings, that they were most bottlenecked on projects and people to give it out to. And yet, their best estimates suggested that even though doing direct work was the optimal thing to do—and indeed the thing that all the conspirators were doing—, the optimal... (read more)

2
NunoSempere
2y
New update to the "utilitarianismverse" just dropped: <https://archiveofourown.org/works/41911392/chapters/105186876>

It was a sunny winter night and a utilitarian was walking through a park. In the middle of the park was a pond, and in the pond was a drowning child. The utilitarian considered jumping in to save them, but then remembered that they did direct work in effective altruism and it was a weekend, so they strolled on past. They felt good, because saving the child and doing direct work was in tension.

I appreciate the point of your story, Nuño, but I don't think it fairly characterises my post, and I think its dismissiveness is unwarranted. 

For one, I didn't suggest that, from a longtermist perspective, "the optimal thing to promote was earning to give." I explicitly said the opposite here:

...my personal all-things-considered view is pretty similar to Ben’s: when someone has a good personal fit for high-impact direct work, they’re likely to have more impact pursuing that than earning to give. This view is also shared by Giving What We Can leadershi

... (read more)

If only someone was working on how to evaluate hard to evaluate projects

4
Yonatan Cale
2y
Ref for others: https://forum.effectivealtruism.org/posts/3hH9NRqzGam65mgPG/five-steps-for-quantifying-speculative-interventions

Epistemic status: See profile.

tl;dr: Skeptical about measuring "connections".

Yeah, in the abstract, I'm skeptical of the way you are measuring this, because you are measuring quantity and not quality. You don't just want "more connections", you want more connections that lead somewhere, and it's not clear to me that doubling the number of (junior) participants does this. You have a higher number of potential connections, but also a dilution effect.

So in a simple model where there are only "junior" (people looking for opportunities) and "senior" (people giv... (read more)

Ollie here from CEA's events team

TL;DR: we basically agree. We think the number of connections is (one of!) our decent, measurable proxies for Good Things Happening but we could do better and we’re working on that.

Yeah, in the abstract, I'm skeptical of the way you are measuring this, because you are measuring quantity and not quality. You don't just want "more connections", you want more connections that lead somewhere

Yes, we agree. We’re working on ideas that actually capture the “lead somewhere” part. This might be impact-adjusted connections or, more c... (read more)

4
Charles He
2y
My read of your comment is that you have a well informed, personal view of many EAGs, and this is driving your skepticism. I think your perspective and experience on EAG is useful, especially if this can lead to insights that could improve it, or create new narratives that allow us to perceive EA coordination better.  I wanted to write a few questions with this motivation. I think I find the metric less relevant. (I think it might turn out to be difficult to measure match quality, especially impact, because Goodhart or something. Sometimes simple metrics as a proxy is ok.)

Easy fix, if we can link survey responses to accounts:

modify the event survey in year n to ask for a list of named connections this year, then pull this same response in year n+1 and ask what number have so far proved to be valuable.

Hey, do you want a comment from this account on this post?

8
Amy Labenz
2y
Sure! We'd be happy for some red-teaming or suggestions on how to improve our work.

This seems more likely to be worth building if you have a large organization/client who will use it (and you might know this if they offer to pay you to build it.) Otherwise I would be more skeptical.

Big fan, though the intro in this post isn't very EA-Forum-ish. 

5
Austin
2y
Yes, I got this feeling as well; I think we'll aim for more technically interesting posts in the future (e.g. explanation of DPM vs other market mechanisms, observations on asking good prediction market questions)

Whoops, senator != representative. For the House of Representatives, it's ~34 Republicans vs ~31 Democrats

I came in with a negative predisposition because I really don't like politics and particularly US politics as a cause area. But nothing you are saying seems crazy, particularly given your endorsement and personal experience. 

Historically, there have been ~24 Republicans vs ~19 Democrats as senators (and 1 independent) from Oregon, so partisan affiliation doesn't seem that important. "$1 million for an additional 2% chance of winning" seems a bit high on the probability side, but I'm not actually familiar with the money flows of US election... (read more)
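Taking the quoted figure at face value, the implied price of an expected seat is straightforward arithmetic (a back-of-the-envelope sketch of the quoted claim, not an endorsement of the 2% estimate):

```python
donation = 1_000_000   # dollars donated to the campaign
delta_p_win = 0.02     # claimed increase in probability of winning the seat

# Linear expected-value framing: dollars per expected seat won.
cost_per_expected_seat = donation / delta_p_win
print(f"${cost_per_expected_seat:,.0f} per expected seat")  # $50,000,000
```

Whether $50M per expected seat is a good deal then depends entirely on how much good one thinks a sympathetic officeholder does, which is the crux the surrounding discussion is circling.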

In lieu of a liquid real-money market, I started a pair of Manifold markets for:

Historically, there have been ~24 Republicans vs ~19 Democrats as senators (and 1 independent) from Oregon, so partisan affiliation doesn't seem that important.

A better way of looking at this is the partisan lean of his particular district. The answer is D+7, meaning that in a neutral environment (i.e. an equal number of Democratic and Republican votes nationally), a Democrat would be expected to win this district by 7 percentage points.

This year is likely to be a Republican "wave" year, i.e. Republicans are likely to outperform Democrats (the party ... (read more)

In addition to the fact that representatives aren't senators, looking to the distant past and other districts (not to mention total number of officials rather than number of elections won) is a bad way to predict elections. Based on recent elections, good election handicappers rate this seat Likely Democratic; if Carrick wins the primary, he will likely win the general election.

4
NegativeNuno
2y
Whoops, senator != representative. For the House of Representatives, it's ~34 Republicans vs ~31 Democrats

Negative thoughts on a proposal for a database of project ideas

Written: Jun 3, 2021.

Epistemic status: See NegativeNuno's profile.

Hey [person],

Essentially, I think that it is quite likely that this will fail (70% as a made up number with less than 15 mins of thought; I'm thinking of these as "1 star predictions"); I don't think that the "if I build it they will come" theory of change is likely to work. In particular, I would be fairly surprised if (amount of time people spent working of stuff from your database) / (amount of time you spend creating that dat... (read more)