All of Ruby's Comments + Replies

Ruby
7mo

RobertM and I are having a "dialogue"[1] on LessWrong with a lot of focus on whether it was appropriate for this to be posted when it was and with the info collected so far (e.g. not waiting for Nonlinear's response).

What is the optimal frontier for due diligence?
 

Just wanted to say (without commenting on the points in the dialogue) that I appreciate you and Robert having this discussion, and I think the fact you're having it is an example of good epistemics.

I think it matters a lot to be precise with claims here. If someone believes that any case of people with power over others asking them to commit crimes is damning, then all we need to establish is that this happened. If it's understood that whether this was bad depends on the details, then we need to get into the details. Jack's comment was not precise so it felt important to disambiguate (and make the claim I think is correct).

Jack Lewars
7mo
Thanks, I agree with your clarification on the point I was trying to make
Ruby
7mo

There are a lot of dumb laws. Without saying it was right in this case, I don't think that's categorically a big red line.

Thanks, this also made me pause. I can imagine some occasions where you might encourage employees to break the law (although this still seems super ethically fraught) - for example, some direct action in e.g. animal welfare. However, the examples here are 'to gain recreational and productivity drugs' and 'to drive around doing menial tasks'.

So if you're saying "it isn't always unambiguously ethically wrong to encourage employees to commit crimes" then I guess yes, in some very limited cases I can see that.

But if you're saying "in these instances it was honest, honourable and conscientious to encourage employees to break the law" then I very strongly disagree.

JWS
7mo
Yes of course there are - I don't think anyone who has to live with them contests that! But the issue with this story (and other ones EA has dealt with) is that it shows a willingness to break laws if they're deemed "stupid" or "low value" or "woke shibboleths"[1]. There are some cases where laws are worth breaking, and depending on the regime it may be morally required to do so, but the cases involved don't seem to be like this. What Jack is pointing to, and people like myself and lilly[2], is that often the law (or norm) breaking seems to happen in a manner which is inconsistent with the integrity that people in the EA community[3] should have - especially when they're dealing with responsibilities such as employing others and being responsible for their income, being in a position of mentorship, being in a position to influence national or international policy, or trying to 'save the world'.

1. ^ not direct quotes, just my representation of an attitude in some EA/Rationalist spaces
2. ^ as far as I've interpreted her comments in this thread. Jack also feel free to say I've got your view wrong
3. ^ and people in general, to be honest

Ruby
7mo

I would think you could go through the post and list out 50 bullet points of what you plan to contest in a couple of hours.

Ruby
7mo
Or if it's majority false, pick out the things you think are actually true, implying everything else you contest!
Ruby
7mo

My guess is it was enough time to say which claims you objected to and sketch out the kind of evidence you planned to bring. And Ben judged that your response didn't indicate you were going to bring anything that would change his mind enough about whether the info he had was worth sharing. E.g. you seemed to focus on showing that Alice couldn't be trusted, but Ben felt that this would not refute enough of the other info he had collected / the kinds of refutation (e.g. only a $50 fine for driving without a license, she brought back illegal substances anyway) were not com... (read more)

Ruby
7mo

I think asking your friends to vouch for you is quite possibly okay, but that people should disclose there was a request.

It's different evidence between "people who know you who saw this felt motivated to share their perspective" vs "people showed up because it was requested". 

Yeah, this seems right.

I appreciate the frame of this post and the question it proposes; it's worth considering. The questions I'd want to address before fully buying it, though, are:
1) Are the standards of investigative journalism actually good for their purpose? Or did they get distorted along the way for the same reason lots of regulated/standardized things do (e.g. building codes)?
2) Supposing they're good for their purpose, does that really apply not in mainstream media, but rather in a smaller community?

I think in answering (2), we really do have a tricky false positive/false negative t... (read more)

I can follow that reasoning.

I think what you get with fewer, more dedicated people is the opportunity to build up a deep moderation philosophy and experience handling tricky cases. (Even after moderating for a really long time, I still find myself building those and benefitting from stronger investment.)

Quick thought after skimming, so forgive me if this was already addressed. Why is the moderator position ~3 hours a week? Why not get full-time people (or at least half-time), or at least make 3 hours the minimum? Mostly I expect fewer people spending more time doing the task will be better than more people doing it less.

Jason
1y
Although they didn't state exact numbers, it sounds like there may be ~ .5 FTE of moderator capacity right now (~ 4 mods averaging 3 hours a week, plus advisors) and they are looking to hire another fraction of an FTE worth of capacity. Expending all the available budget on 1 or 2 mods with more hours would likely make it more difficult to achieve a "broader diversity of perspectives and more capacity in times of crisis, or when there’s a sudden cascade of moderation incidents."
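(A rough check of that estimate, with the assumption of a 40-hour FTE added here: 4 mods × 3 hours/week = 12 hours/week ≈ 0.3 FTE, with the advisors plausibly making up the rest of the ~0.5.)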

I think this post falls short of arguing compellingly for the conclusion.

  • It brings 1 positive example of a successful movement that didn't schism early on, and 2 examples of large movements that did schism and then had trouble.
    • I don't think it's illegitimate to bring suggestive examples vs a systematic review of movement trajectories, but I think it should be admitted that cherry-picking isn't hard when you only need three examples.
  • There's no effort expended to establish equivalence between EA and its goals and Christianity, Islam, or Atheism at the gears level of what they
... (read more)
Ruby
1y

When I think about being part of the movement or not, I'm not asking whether I feel welcomed, valued, or respected. I want to feel confident that it's a group of people whose values, culture, models, beliefs, epistemics, etc. mean that being part of the group will help me accomplish more of my values than if I didn't join the group.

Or in other words, I'd rather push uphill to join an unwelcoming group (perhaps very insular) that I have confidence in their ability to do good, than join a group that is all open arms and validation, but I don't think w... (read more)

If you indicate to X group, directly or otherwise, that they're not welcome in your community, then most people who identify with X are probably gonna take you at your word and stop showing up. Some people might be like you and be willing to push past the unwelcomeness for the greater good, but these people are rare, and are not numerous enough to prevent a schism. 

Ultimately, you can't make a place welcoming for every single identity without sacrificing things. If the X is "neo-nazis", then trying to make the place welcoming for them is a mistake that would drive out everyone else. But if X is like, "Belgians", then all you have to do is not be racist towards Belgians.  

Ruby
1y

I think I agree with your clarification and was in fact conflating the mere act of speaking with strong emotion with speaking in a way that felt more like a display. Yeah, I do think it's a departure from naive truth-seeking.

In practice, I think it is hard, and I do think it is hard for the second-order reasons you give and others. Perhaps an ideal is that people share strong emotion when they feel it, but in some kind of format/container/manner that doesn't shut down discussion or get things heated. "NVC" style, perhaps, as you suggest.

ChanaMessinger
1y
Fwiw, I do think "has no place in the community" without being owned as "no place in my community" or "shouldn't have a place in the community" is probably too high a simulacrum level by default (though this isn't necessarily a criticism of Shakeel, I don't remember what exactly his original comment said.)
RobBensinger
1y
Cool. :) I think we broadly agree, and I don't feel confident about what the ideal way to do this is, though I'd be pretty sad and weirded out by a complete ban on expressing strong feelings in any form.
Ruby
1y

Hey Shakeel,

Thank you for making the apology – you have my approval for that! I also like your apology on the other thread – your words make me hopeful about CEA going in a good direction.

Some feedback/reaction from me that I hope is helpful. In describing your motivation for the FLI comment, you say that it was not to throw FLI under the bus, but because of your fear that some people would think EA is racist, and you wanted to correct that. To me, that is a political motivation, not much different from a PR motivation.

To gesture at the difference (in my ontology... (read more)

ChanaMessinger
1y
Really appreciated a bunch about this comment. I think it's that it:
  • flags where it comes from clearly, both emotionally and cognitively
  • expresses a pragmatism around PR, and an appreciation for where it comes from, that to my mind has been underplayed
  • does a lot of "my ideal EA", "I" language in a way that seems good for conversation
  • adds good thoughts to the "what is politics" discussion

To me, the ideal spirit is "let me add my cognition to the collective so we all arrive at true beliefs" rather than "let me tug the collective beliefs in the direction I believe is correct" or "I need to ensure people believe the correct thing." 

I like this a lot.

I'll add that you can just say out loud "I wish other people believed X" or "I think the correct collective belief here would be X", in addition to saying your personal belief Y.

(An example of a case where this might make sense: You think another person or group believes Z, and you think they... (read more)

RobBensinger
1y
Speaking locally to this point: I don't think I agree! My first-pass take is that if something's horrible, reprehensible, flawed, etc., then I think EAs should just say so. That strikes me as the default truth-seeking approach.[1]

There might be second-order reasons to be more cautious about when and how you report extreme negative evaluations (e.g., to keep forum discussions from degenerating as people emotionally trigger each other), but I would want to explicitly flag that this is us locally departing from the naive truth-seeking approach ("just say what seems true to you") in the hope that the end result will be more truth-seeky via people having an easier time keeping a cool head.

(Note that I'm explicitly responding to the 'extreme language' side of this, not the 'was this to some extent performative or strategic?' side of things.)

1. ^ With the caveat that maybe evaluative judgments in general get in the way of truth-seeking, unless they're "owned" NVC-style, because of common confusions like "thinking my own evaluations are mind-independent properties of the world". But if we're allowing mild evaluative judgments like "OK" or "fine", then I think there's less philosophical basis for banning more extreme judgments like "awesome" or "terrible".
Sharmake
1y
IMO, I think this is an area EA needs to be way better in. For better or worse, most of the world runs on persuasion, and PR matters. The nuanced truth doesn't matter that much for social reality, and EA should ideally be persuasive and control social reality.
Ruby
1y

I came to the comments here to also comment quickly on Kathy Forth's unfortunate death and her allegations. I knew her personally (she sublet in my apartment in Australia for 7 months in 2014, but more meaningfully in terms of knowing her, we also overlapped at Melbourne meetups many times, and knew many mutual people). Like Scott, I believe she was not making true accusations (though I think she genuinely thought they were true).

I would have said more, but will follow Scott's lead in not sharing more details. Feel free to DM me.

Ruby
1y

Those accusations seem of a dramatically more minor and unrelated nature and don't update me much at all that allegations of mistreatment of employees are more likely.

Also, the naming was completely on me, not them, as I explained in another comment.

pseudonym
1y
I largely agree with Ruby here, but wanted to note one comment, where one justification for "violating" (this word seems too strong) this norm was that "a descendant of Truman would have to actually learn of this prize". If the research that was eventually done had happened prior to the announcement, I think there would not be any meaningful update for me. OTOH, if this justification was a reason to not have done this research, and if it was applied more generally and not just for the naming of the prize, it would make me more suspicious that the allegations leveled against them are plausible, and it fits the "ends justify the means"-type reasoning that the OP refers to.

The couple arguments against this do not likely hold up against the vast utility discrepancies from resource allocations...



This kind of utilitarian reasoning seems not too different from the kind that would get one to commit fraud to begin with. I don't think whether it's legally required to return or not makes the difference – morality does not depend on laws. If someone else steals money from a bank and gives it to me, I won't feel good about using that money even if I don't have to give it back and would use it much better.

Sharmake
1y
More importantly though, we have a vast bias toward motivated reasoning in order to view ourselves as basically good and trustworthy, and we have no good reason to suspect anything else, so I really am not accepting that argument. From Dan Luu here: https://danluu.com/wat/
Brad West
1y
It is very different from the kind of reasoning that leads to fraud. Fraud and many other kinds of criminal behavior corrode the fabric of trust that enables our communities, large and small, to operate effectively. Thus, when you diminish the trust that members of society can place in each other, you do immense damage. So, in an EV calculation incorporating these kinds of activities, they are seldom justified because the harm risked is colossal. A retention of a benefit in these circumstances, where the grantee is not complicit and is not legally required to return it, does not cause or risk the above harm in the least. If a grant recipient's use of resources is extremely high EV, which it should be, the unnecessary defunding of it is obscenely immoral.

Sounds an awful lot like LessWrong, but competition can be healthy[1] ;) 

  1. ^

    I think this is less likely to be true of things like "places of discussion" because it splits the conversation / erodes common knowledge, but I think it's fine/maybe good to experiment here.

I didn't scrutinize, but at a high level, the new intro article is the best I've seen yet for EA. Very pleased to see it!

I think 20% might be a decent steady state, but at the start of their involvement I'd like to see new aspiring community builders do something like six months of intensive object-level work/research.

Fwiw, my role is similar to yours, and granted that LessWrong has a much stronger focus on Alignment, I currently feel that a very good candidate for the #1 reason I will fail to steer LW to massive impact is that I'm not and haven't been an Alignment researcher (and perhaps Oli hasn't been either, but he's a lot more engaged with the field than I am).

Again, thanks for taking the time to engage.

I think this post is maybe a format that the EA Forum hasn't done before, but this is intended to be a repository of advice that's crowd-sourced. This is also maybe not obvious because I "seeded" it with a lot of content I thought was worth sharing (and also to make it less sad if it didn't get many contributions – so far a few).

As I wrote:

I've seeded this post with a mix of advice, experience, and resources from myself and a few friends, plus various good content I found on LessWrong through the Relationships ta

... (read more)
Charles He
2y
Minor comment: In my tiny opinion, I thought the post was fine and I strong upvoted it, and I think it should remain. Also in my tiny opinion, I thought Larks' comment was fine and I strong upvoted it because of content. We are doing the learnings.

Hi Larks, thanks for taking the time to engage.

I'm not sure how relevant this is to the EA forum?

I personally think that for Effective Altruists to be effective, they need to be healthy/well-adjusted/flourishing humans, and therefore something as crucial as good relationship advice ought to be shared on the EA Forum (much the same as productivity, agency, or motivation advice).

I didn't mention it in the post, but part of the impetus for this post came from Julia's recent Power Dynamics between people in EA post that discusses relationships, and it seemed ... (read more)

In terms of thinking about why solutions haven't been attempted, I'll plug Inadequate Equilibria. Though it probably provides a better explanation for why problems in the broader world haven't been addressed. I don't think the EA world is yet in an equilibrium, and so things don't get done because {it's genuinely a bad idea, it seems like the thing you shouldn't be unilateral on and no one has built consensus, sheer lack of time}.

Ruby
2y

Good comment!!


Most ideas for solving problems are bad, so your prior should be that if you have an idea, and it's not being tried, probably the idea is bad;


A key thing here is to be able to accurately judge whether the idea would be harmful if tried or not. "Prior is bad idea != EV is negative". If the idea is a random research direction, probably won't hurt anyone if you try it. On the other hand, for example, certain kinds of community coordination attempts deplete a common resource and interfere with other attempts, so the fact no one else is acting is ... (read more)

For LessWrong, we've thought about some kind of "karma over views" metrics for a while. We experimented a few years ago but it proved to be a hard UI design challenge to make it work well. Recently we've thought about having another crack at it.

Emrik
2y
I have no idea how feasible it is. But I made this post because I personally would like to search for posts like that to patch the most important missing holes in my EA Forum knowledge. Thanks for all the forum work you've done, the result is already amazing! <3

Yes! This. Thank you for writing.

I often get asked why LessWrong doesn't hire contractors in the meantime while we're hiring, and this is the answer. In particular the fact that getting contractors to do good work would require all of the onboarding that getting a team member to do good work would require.

I don't mean that I necessarily expect EA Forum software to replace Swapcard for EAG itself, just that the goal is to provide similar functionality all year round.

Sarah Cheng
2y
That's right, we are planning to adapt some Swapcard-like functionality to the EA Forum. We are still in the product exploration phase so no concrete roadmap, but it's likely we will focus on user profiles / search / matching rather than features such as friending or scheduling. Swapcard is more tailored to conferences specifically, so we will not be replacing that entirely any time soon.

My understanding (which could be wrong, and I hope they don't mind me mentioning it on their behalf) is that the EA Forum dev team is working to build Swapcard functionality into the forum, including the ability to import your Swapcard data.

In the meantime, I agree with the OP.

Austin
2y
FWIW: I purchased http://www.swap.contact/ for a hackathon project -- but if EA Forum would use the domain I'd be happy to send it over~
NunoSempere
2y
Seems a bit unlikely; I created a market on this here.

I bet that if they are impressive to you (and your judgment is reasonable), you can convince grantmakers at present.

But there already is from the major funders.

Tobias Häberli
2y
It might be the combination of small funding and local knowledge about people's skills that is valuable. For example, funding a person that is (currently) not impressive to grantmakers but impressive if you know them and their career plans deeply.

Thank you for the detailed reply!

I agree that Earning to Give may make sense if you're neartermist or don't share the full moral framework. This is why my next sentence begins "if you'd be donating to longtermist/x-risk causes." I could have emphasized these caveats more.

I will say that if a path is not producing value, I very much want to demotivate people pursuing that path. They should do something else! One should only be motivated for things that deserve motivation.

I've looked at the posts you shared and I don't find them compelling. 

I think the ... (read more)

david_reinstein
2y
OK. I guess it would be better to have phrased it a little differently ... make it more like 'my belief is, and the consensus of people I've spoken with ... in the context of longtermist and x-risk causes'.

I agree with this, which is why I also said something like 'and I think ETG actually has great value'.

What about the LW post? That seems like the most compelling one to me, that 'actually you probably could use more money to hire better people into AI research etc, it just isn't being done right'.

My basic skepticism is sort of a classical-economics argument. Unless intrinsic motivation is both rare and extremely important... if 'problem X needs more talent' you should be able to hire people to consider problem X, subsidize training people to build skills to address X, fund prizes for solutions to X, etc. If the issue is 'the problems are not defined well enough', you also should be able to fund people to target these problems, maybe fund people to refocus their research on these problems.

My fear is that the 'ETG is not important' claim is coming from a sort of drop-in-the-ocean fallacy ("there's already $1 billion going into X, so my $10,000 can't make a difference").

I also think that some of the critiques about "we don't know what to do next in X-risk/S-risk that isn't being funded" probably also apply to direct work. If we don't know what to do/fund, then how do we know that an additional EA skilling up/focusing on this stuff will have a major impact?
Ruby
2y

[Speaking from LessWrong here:] based on our experiments so far, I think there's a fair amount more work to be done before we'd want to widely roll out a new voting system. Unfortunately for this feature, development is paused while we work on some other stuff.

Ruby
3y

 I also see that a lot of the issues were predictable from last year's comments but were not addressed.

This is my fault. I was the lead organizer for Petrov Day this year, though I wasn't an organizer in previous years. I recalled that there were issues with ambiguity last year, which I attempted to address (albeit quite unsuccessfully); however, I didn't go through and read/re-read all of the comments from last year. If I had done so, I might have corrected more of the design.

I'm sorry for the negative experience you had due to the poor design. I do thi... (read more)

Kirsten
3y
Thanks Ruby!

LessWrong mod speaking here. Just wanted to confirm that everything written here is correct. 

To be clear, only the identities of the account that enters a valid code will be shared. 

BrianTan
3y
Great, thanks!

There's a user setting that lets you do this. 

There is already a (clunky) feature that enables this.

If you hyperlink text with a tag URL that includes the URL parameter ?useTagName=true, the hyperlinked text will be replaced by whatever the current name of the tag is.

E.g. if the tag is called "Global dystopia" and I put the hyperlink URL /global-dystopia?useTagName=true in a post or another tag, and then it gets renamed to "Dystopia":

  1. The old URL will still work
  2. The text "Global dystopia" will be replaced with the current name "Dystopia"
     

See: https://www.lesswrong.com/posts/E6CF8JCQAWqqhg7ZA/wiki-tag-fa... (read more)
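As a minimal sketch, here's what such a link looks like written out in markdown (the slug and tag name are taken from the example above; the exact syntax depends on the editor you're using):

  [Global dystopia](/global-dystopia?useTagName=true)

After the tag is renamed, the link text renders as "Dystopia" while the old URL keeps working.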

Ah, sorry, we meant to fix that. Should have all been CET.

Ruby
3y

Collaborative calendar/schedule for the event is now live! https://docs.google.com/spreadsheets/d/1xUToQ-Wu6w-Uaow7q8Bo5s61beWWRJhIh9P-DNAvx4Q/edit?usp=sharing

Please add any events or activities you'd like to run. Comment here or in the doc if you have questions, e.g. about good places to host your session.

Upvote suggestions from others if you like them too.

The Ultra Party Radio (TM) has been constructed (bottom right of attached image). We'll be streaming tunes to the entire Garden from our own server, but the music will be optimized for the ballroom dancefloor.

What music would you like to hear? Please comment with:

  1. Genres
  2. Specific song requests
  3. Playlists you might like us to use or borrow from
Ruby
3y
Upvote suggestions from others if you like them too.

We've now got a rough map of the venue. 
 

Some images of the party locations to pump the imagination:

[Images: "Early testing of the ballroom", "Here fishy, fishy", "Meet new people in the Violet Study"]

We've now assigned activities to many different regions of the Walled Garden. If you're interested in hosting or attending a specific activity, please comment. The organizers can help you set it up and put it on the Official Party Schedule.

The following are ongoing throughout the party, but it seems great to have more specific things scheduled for like-interested people to join.

Ballroom: dancing, toasts & roasts, countdown
Violet Study: meet new people
Moloch Maze: games, e.g., poker, Among Us
Great Library (1st floor): deep philosophical conver... (read more)
