It is 2AM in my timezone, and come morning I may regret writing this. By way of introduction, let me say that I dispositionally skew towards the negative, and yet I do think that OP is amongst the best if not the best foundation in its weight class. So this comment generally doesn't compare OP against the rest but against the ideal.
One way you could allow for somewhat democratic participation is through futarchy, i.e., using prediction markets for decision-making. This isn't vulnerable to brigading because it requires putting proportionally more mone...
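To make the mechanism concrete, here is a minimal sketch of a futarchy-style decision rule (hypothetical option names and numbers; real decision markets involve conditional trading, payouts, and liquidity that this toy omits):

```python
# Toy futarchy: run one conditional prediction market per option,
# then adopt the option whose market forecasts the best outcome.

def futarchy_decision(conditional_forecasts: dict) -> str:
    """Pick the option whose conditional market predicts the highest welfare."""
    return max(conditional_forecasts, key=conditional_forecasts.get)

# Hypothetical market-implied welfare, conditional on each option being adopted.
forecasts = {"fund project A": 0.62, "fund project B": 0.55}
print(futarchy_decision(forecasts))  # -> fund project A
```

The brigading-resistance point corresponds to the fact that moving a market-implied number requires staking proportionally more money, not just more votes.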
Strongly disagree about betting and prediction markets being useful for this; strongly agree about there being a spectrum here, where at different points the question "how do we decide who's an EA" is less critical and can be experimented with.
One point on the spectrum could be, for example, that the organisation is mostly democratically run but the board still has veto power (over all decisions, or ones above some sum of money, or something).
I notice that this comment was pretty controversial (16 people voted, karma of 3). Here is how I would rewrite this comment to better fit in the EA forum:
Yes, it is true that men are more likely to be victims of non-sexual violence. However, note that most men are killed by other men, whereas a large share of the women who are killed (50%, according to the UN) are killed by their partners or family. (1) (2). So "while men are more likely than women to be victims of homicide, they are even more likely to be the perpetrators."
...I think that recognizing
Here is a model that I want to share with you:
It's worded in terms of starting projects and receiving funding because that's been on my mind, but you could translate it to other domains. There should also be a third dimension, which is "well, but how good are you, really?"
I claim that knowing where you are on that grid is important, because it will lead you to better actions (in the case of "correctly depressed", it might be "attain mastery of a skill" so that you move one level up, or "being ok with being humble" [1]).
I don't know what you are claiming with r...
The more I reread your post, the more I feel our differences might be matters of nuance, but I think your contrarian / playing-to-an-audience-of-cynics tone (which did amuse me) makes them seem starker?
I think that I disagree with you with regard to how people value other people, and how people should expect other people to value them, and less about where one should derive one's own self-worth from [1]. As such, I do think that we have a disagreement.
...I am not sure whether you're saying "treating people better / worse depending on their success is good";
Content warning: If you stare too much into the void, the void stares back at you.
So the title of my blog is Measure is unceasing partly as a reminder to myself that some of the ideas presented in this blogpost are dead wrong. In short, I think that people are judging each other all the time. In the past, pretending or wanting to believe that this isn't the case has provided me with temporary relief, but it ultimately led me down a path of sorrow.
I particularly take issue with:
...But you'll still suffer a lot if you think that the worth others a
I recently read a post which:
Normally, I would just ask if they wanted to get a comment from this account. Or just downvote it and explain my reasons for doing so. Or just tear it apart. But today, I am low on energy, and I can't help but feel: what's the point? Sure, if I were more tactful, more charismatic, and more glib, I might both be able to explain ...
which of the categories are you putting me in?
I don't think this is an important question; it's not like "tall people" and "short people" are distinct clusters. There is going to be a spectrum, and you would be somewhere in the middle. But using labels is still a convenient shorthand.
So the thing that worries me is that if someone is optimizing for something different, they might reward other people for doing the same thing. A case that has been on my mind recently is where someone is a respected member of the community, but what they are doing is not optima...
EA should accept/reward people in proportion to (or rather, in a monotone increasing fashion of) how much good they do.
I think this would work if one actually did it, but not if impact is distributed with long tails (e.g., a power law) and people take offense at being accepted very little.
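To see why proportional reward is awkward under long tails, here is a toy simulation (illustrative assumptions only: a Pareto distribution with shape 1.5, which nothing in the comment pins down):

```python
import random

random.seed(0)

# Sample "impact" for 10,000 hypothetical people from a Pareto (power-law)
# distribution and check how concentrated the total is.
impacts = sorted((random.paretovariate(1.5) for _ in range(10_000)), reverse=True)
total = sum(impacts)
top_1pct_share = sum(impacts[:100]) / total
print(f"Top 1% share of total impact: {top_1pct_share:.0%}")
# With a heavy tail, a sizable chunk of total impact sits in the top 1%,
# so rewarding people strictly in proportion to impact leaves almost
# everyone with close to nothing.
```

The exact share depends on the shape parameter, but the qualitative point (most people receive very little under proportional acceptance) holds for any sufficiently heavy tail.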
One "classic internet essay" analyzing this phenomenon is Geeks, MOPs, and sociopaths in subculture evolution. A phrase commonly used in EA would be "keep EA weird". The point is that adding too many people like Eric would dilute EA, and make the social incentive gradients point to places we don't want them to point.
...I really enjoy socializing and working with other EAs, more so than with any other community I’ve found. The career outcomes that are all the way up (and pretty far to the right) are ones where I do cool work at a longtermist office space
I guess I have two reactions. First, which of the categories are you putting me in? My guess is you want to label me as a mop, but "contribute as little as they reasonably can in exchange" seems an inaccurate description of someone who's strongly considering devoting their career to an EA cause; also I really enjoy talking about the weird "new things" that come up (like idk actually trade between universes during the long reflection).
My second thought is that while your story about social gradients is a plausible one, I have a more straightforward story ab...
Circling back to this, this report hits almost none of the notes in lukeprog's "Features that make a report especially helpful to me", which might be one reason why I got the impression that the authors were speaking a different dialect.
I get the impression that some parts of CSER are fairly valuable, whereas others are essentially dead weight. E.g., if I imagine pairwise-ranking all the work referenced in your presentation, my impression is that value would range across 2+ orders of magnitude between the most valuable and the least valuable.
Is that also your impression? Even if not, how possible is it to fund some parts of CSER, but not others?
Thanks Nuño! I don't think I've got well thought out views on relative importance or rankings of these work streams; I'm mostly focused on understanding scenarios in which my own work might be more or less impactful (I also should note that if some lines of research mentioned here seem much more impactful, that may be more a result of me being more familiar with them, and being able to give a more detailed account of what the research is trying to get at / what threat models and policy goals it is connected to).
On your second question, as with other ...
These were written as I was reading the post, so some of them are addressed by points brought up later. They are also a bit too sardonic.
Epistemic status: not too sure. See account description.
One of the EA Forum norms that I like to see is people explaining why they downvoted a post/comment, so I'm a bit annoyed that NegativeNuno's comment supporting this norm was fairly heavily downvoted (without explanation).
Not long enough for the formatting to matter, in my opinion. We can, and should, encourage people to post some low-effort posts, as long as they contain an original thought.
I know this isn't the central part of the post but I'm not sure the title is really clickbait. It seems like an accurate headline to me? I understand clickbait to be "the intentional act of over-promising or otherwise misrepresenting — in a headline, on social media, in an image, or some combination — what you’re going to find when you read a story on the web." Source.
A real clickbait title for this would be something like "The one secret fact FTX doesn't want you to know" or "Grantmakers hate him! One weird trick to make spending transparent"
Personally, I don't have a problem with the title. It clearly states the central point of the post.
Do you want an overly negative and perhaps inaccurate comment from this account, under Crocker's rules?
I think that your answer to that is something like: "...But introducing people to EA is hard, so it makes sense to start with effective giving. Also, there are some better and worse ways to do earning to give, like donating to donor lotteries, donating to small projects that are legible to you but not to larger funders yet, etc."
Which is fine. But it's still surprising that the strategies which EA chose when it was relatively young would still be the best strategies now, and I'm still skeptical of the extent to which that is the case in your post.
The above is consistent with the idea that most people who could do highly impactful direct work should do that instead of earning to give, even if they could have extremely lucrative careers. There’s no cap on how good something can be: despite how much good you can do through effective giving, it’s possible direct work remains even better. **But in any case, I think that in general, effective giving is not in tension with pursuing direct work**. And for many people, effective giving is the best opportunity to have an impact.
The highlighted part is why I ...
Thanks for your reply.
It seems to me your key disagreement is with my view that promoting effective giving is compatible with (even complementary to) encouraging people to do direct work. Though I'm not exactly sure I understand your precise claim — there are two I think you might be making, and I'll respond to each.
One way to interpret what you’re saying is that you think that promoting effective giving actually reduces the number of people doing direct work:
Because in fact, effective giving is in tension with pursuing direct work.
As an...
Hey, I think that these are all good comments, and I wouldn't call you "a dud". I agree with your thoughts around possible cofounders, though a decrease in average participant quality was the most salient explanation to me.
It was a sunny winter night, and the utilitarians had gathered in their optimal lair. At the time, they hadn't yet taken over the world, but their holdings were vast, and even vaster in expectation, because they were sure to attract the right kind of multi-billionaire in the future. So vast were their holdings that they were most bottlenecked on projects and people to give the money out to. And yet, their best estimates suggested that even though doing direct work was the optimal thing to do—and indeed the thing that all the conspirators were doing—the optimal...
It was a sunny winter night and a utilitarian was walking through a park. In the middle of the park was a pond, and in the pond was a drowning child. The utilitarian considered jumping in to save them, but then remembered that they did direct work in effective altruism and it was a weekend, so they strolled on past. They felt good, because saving the child and doing direct work were in tension.
I appreciate the point of your story, Nuño, but I don't think it fairly characterises my post, and I think its dismissiveness is unwarranted.
For one, I didn't suggest that, from a longtermist perspective, "the optimal thing to promote was earning to give." I explicitly said the opposite here:
......my personal all-things-considered view is pretty similar to Ben’s: when someone has a good personal fit for high-impact direct work, they’re likely to have more impact pursuing that than earning to give. This view is also shared by Giving What We Can leadershi
Epistemic status: See profile.
tl;dr: Skeptical about measuring "connections".
Yeah, in the abstract, I'm skeptical of the way you are measuring this, because you are measuring quantity and not quality. You don't just want "more connections", you want more connections that lead somewhere, and it's not clear to me that doubling the number of (junior) participants does this. You have a higher number of potential connections, but also a dilution effect.
So in a simple model where there are only "junior" (people looking for opportunities) and "senior" (people giv...
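One way to make the dilution worry concrete is this toy model (my own illustrative assumptions, since the comment is truncated: valuable connections are junior-senior, and each senior can only follow up on a fixed number of meetings):

```python
# Hypothetical sketch of dilution: doubling juniors while seniors stay
# fixed doubles raw connection counts, but not seniors' follow-up capacity.

def expected_connections(juniors: int, seniors: int, senior_capacity: int = 30):
    """Naive counts under random mixing (illustrative assumption, not data)."""
    raw_pairs = juniors * seniors            # potential junior-senior pairs
    followups = seniors * senior_capacity    # meetings seniors can actually take
    return raw_pairs, min(raw_pairs, followups)

print(expected_connections(juniors=300, seniors=50))   # (15000, 1500)
print(expected_connections(juniors=600, seniors=50))   # (30000, 1500)
# Raw "connections" double, but the bottlenecked, follow-up-worthy ones don't.
```

Under these assumptions, a raw connection count would look twice as good after doubling junior attendance, even though the number of connections that can actually lead somewhere is unchanged.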
Ollie here from CEA's events team.
tl;dr: we basically agree. We think the number of connections is (one of!) our decent, measurable proxies for Good Things Happening, but we could do better and we're working on that.
Yeah, in the abstract, I'm skeptical of the way you are measuring this, because you are measuring quantity and not quality. You don't just want "more connections", you want more connections that lead somewhere
Yes, we agree. We’re working on ideas that actually capture the “lead somewhere” part. This might be impact-adjusted connections or, more c...
Easy fix, if we can link survey responses to accounts:
modify the event survey in year N to ask for a list of named connections, then pull this same response in year N+1 and ask what number have so far proved to be valuable.
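A minimal sketch of what that linkage could look like (hypothetical account ids and data shapes; the real survey infrastructure would differ):

```python
# Link year-N named connections to year-N+1 follow-up answers
# via a stable account id (all ids and data below are made up).

year_n = {  # account_id -> connections named at the event
    "acct_1": ["acct_7", "acct_9"],
    "acct_2": ["acct_7"],
}
year_n1 = {  # account_id -> connections they later found valuable
    "acct_1": ["acct_9"],
    "acct_2": [],
}

for acct, named in year_n.items():
    valuable = set(year_n1.get(acct, [])) & set(named)
    share = len(valuable) / len(named) if named else 0.0
    print(acct, f"{share:.0%} of named connections proved valuable")
```

This prints a per-account share of named connections that proved valuable (50% and 0% for the made-up data above), which is roughly the "lead somewhere" metric the parent comments ask for.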
This seems more likely to be worth building if you have a large organization/client who will use it (and you might know this if they offer to pay you to build it.) Otherwise I would be more skeptical.
Whoops, senator != representative. For the House of Representatives, it's ~34 Republicans vs ~31 Democrats.
I came in with a negative predisposition because I really don't like politics and particularly US politics as a cause area. But nothing you are saying seems crazy, particularly given your endorsement and personal experience.
Historically, there have been ~24 Republicans vs ~19 Democrats as senators (and 1 independent) from Oregon, so partisan affiliation doesn't seem that important. "$1 million for an additional 2% chance of winning" seems a bit high on the probability side, but I'm not actually familiar with the money flows of US election...
In lieu of a liquid real-money market, I started a pair of Manifold markets for:
Historically, there have been ~24 Republicans vs ~19 Democrats as senators (and 1 independent) from Oregon, so partisan affiliation doesn't seem that important.
A better way of looking at this is the partisan lean of his particular district. The answer is D+7, meaning that in a neutral environment (i.e. an equal number of Democratic and Republican votes nationally), a Democrat would be expected to win this district by 7 percentage points.
This year is likely to be a Republican "wave" year, i.e. Republicans are likely to outperform Democrats (the party ...
In addition to the fact that representatives aren't senators, looking to the distant past and other districts (not to mention total number of officials rather than number of elections won) is a bad way to predict elections. Based on recent elections, good election handicappers rate this seat Likely Democratic; if Carrick wins the primary, he will likely win the general election.
This market (which starts at 15%) is done in the spirit of this account: Will I find that the PIBBSS Fellowship was a success? @ Mantic Markets.
Written: Jun 3, 2021.
Epistemic status: See NegativeNuno's profile.
Hey [person],
Essentially, I think that it is quite likely that this will fail (70% as a made-up number with less than 15 mins of thought; I'm thinking of these as "1 star predictions"); I don't think that the "if I build it, they will come" theory of change is likely to work. In particular, I would be fairly surprised if (amount of time people spent working on stuff from your database) / (amount of time you spend creating that dat...
Cheers, I thought this comment was very informative