RobBensinger

Machine Intelligence Research Institute
5736 · Berkeley, CA, USA · Joined Sep 2014

Sequences: 1 (Late 2021 MIRI Conversations)
Comments: 496
Topic Contributions: 2

"I’m now associated with predatory polyamorous rationalists." doesn't explicitly assert that all poly people are predatory, but it does read to me similar to "I'm now associated with predatory gay rationalists." The implication is that it's gross to be associated with poly people, just as it's gross to be associated with predators. ("This is not what I signed up for. ") And the implication is that polyamory and predatory behavior are a sort of package deal.

Compare, for example, "I'm now associated with greedy Jewish EAs" or "I'm now associated with smelly autistic gamers". Are these explicitly asserting that all Jews are greedy, or that all autistic people are smelly? No, but I get the message loud and clear. OP is not being subtle here regarding what they think of polyamorous people.

I think it's probably good for EA discourse right now to be able to talk about scandals etc. openly but also have, like, minimum moats preventing some of the lowest effort bad-faith targeting by external parties.

What does "targeting" mean? I could see it being good to have a higher karma threshold for commenting on Community posts, so it's harder for e.g. non-EAs to pretend to be EAs, people to create multiple sockpuppet accounts to skew discussion in a Community post, etc. (Not saying this has happened already on the EA Forum, just noting this as an obvious sort of thing that could happen.)

On the other hand, if "targeting" means people on other sites discussing EA Forum stuff, I disagree. IMO the goal of moats should be to make EA Forum discussion better, not to hide particular EA Forum posts. (As a beleaguered MIRI employee, I warn you: Beware the Streisand effect!)

The important parts of the EA forum to people who are Googling us are, like, the things that we object-level care about!

No? That's the stuff that we wish were important to them. The things people actually want to know are more diverse than that, and probably do skew more toward juicy scandals than toward the things EAs wish people were more interested in.

If we want to override that preference, then we can choose to do so. But let's not lie to ourselves about why we're doing it.

I think they [community posts] make discourse norms worse and having the frontpage full of object-level takes about the world (which in fact actually tracks what most people doing direct EA work are actually focused on, instead of writing Forum posts!) is better for both discourse norms and, IDK, the health of the EA community.

I agree with this. I bid that you die on this hill instead, not on the other one. :P

Finally, I find myself instinctively censoring myself on the Forum because anything I say can be adversarially quoted by a journalist attempting to take it out of context. There's not a lot I can do about that, but we could at least make it slightly harder for discussions amongst EAs that often require context about EA principles and values and community norms to be on the public internet.

That makes total sense to me. I'd suggest that rather than hiding parts of the EA Forum, we just make a new forum that's designed from the start to be more casual and insider-y. That could include search engine pessimization, as well as "only vetted EAs in good standing can post or comment", as well as "the vetting system allows EAs to post pseudonymously, have multiple accounts, etc." Heck, even small things like not having the word "EA" in the forum's name may help people feel more comfortable being candid.

(I'm not bidding that this hypothetical new forum hide the fact that it's EA-ish. I'm just imagining it serving a similar function to something like Dank EA Memes, where it's branded as a less official and important sort of thing, so people feel more comfortable just chatting casually, shitposting, being candid, etc.)

(And I'm not bidding that this thing replace the EA Forum. I just think it can be useful to try out multiple sorts of forums and see what norms, culture, rules, etc. work best. Centralizing everything on a single forum isn't actually the best approach, IMO.)

  • Dialling down weirdness is often difficult and stressful, and in some cases effectively amounts to excluding people entirely (whether intentionally or not),
  • Cultural designation of harmless lifestyles, or beliefs, or neurodivergence, as weird or inappropriate is immoral, and we should not tolerate it lightly.

I strongly agree with this.

I do think it's important to keep in mind that there are competing access needs here: you can't fully optimize EA for feeling emotionally safe, low-stress, etc. to one group, without giving up on fully optimizing EA for feeling emotionally safe and low-stress to at least one other group.

A classic example is 'groups for male survivors of rape by women' and 'groups for female survivors of rape by men'. Rather than "safe space" being a person-invariant property, it's relative to what group you're tailoring the space around. Similar for spaces that are extra welcoming to left-wing people (and for that very reason tend to feel less comfortable to the average right-wing person), vs. spaces that are extra welcoming to right-wing people (and for that reason are less cozy on average for left-wing people).

It's possible to create social spaces that are physically safe for everyone, but it's not possible to create social spaces that feel equally emotionally "safe" for every type of well-intentioned human being. The two bullet points above are true, and also EA does have to make some tradeoffs that will make some people thrive less in EA than others do.

We don't have to be happy that we're doomed to make tradeoffs like this, and we don't have to think that the tradeoffs are currently being made optimally, but we should recognize this as part of the territory.

This comment and the OP are blurring the line between "office" and "network". I think some people want a strong taboo within EA against dating co-workers, and other people want a strong taboo within EA against dating other EAs. "Network" makes it sound like folks are proposing the latter, but most of the specifics so far are about workplace relationships. Regardless of the merits of the different views, it seems helpful to clearly distinguish those proposals and argue for them separately.

Side-note: the OP says "Wildly unusual social practices like polyamory", but I think poly is fairly common in the Bay Area outside of EA/rat circles.

I suspect it's fairly common in other young, blue-tribe, urban contexts in the US too? (Especially if we treat "polyamorous", "non-monogamous", and many "monogamish" relationship styles as more-or-less the same phenomenon.)

Historical note: If EA had emerged in the 1970s era of the gay rights movement rather than the 2010s, I can imagine an alternative history in which some EAs were utterly outraged and offended that gay or lesbian EAs had dared to invite them to a gay or lesbian event. The EA community could have leveraged the latent homophobia of the time to portray such an invitation as bizarrely unprofessional, and a big problem that needs addressing. Why are we treating polyamory and kink in 2023 with the same reactive outrage that people would have treated gay/lesbian sexuality fifty years ago?

I agree with this. Though the thing I'd want to push for isn't "treat it as an axiom that poly and BDSM are exactly as socially and psychologically healthy and good as LGBT things, and accuse people of bigotry if they ever criticize those practices".

The thing I'd push for instead is: Err on the side of treating EAs' consensual choices in their personal lives as None Of The Movement's Business. But if topics like "what are the costs and benefits of poly?" come up (either because EAs are trying to make personal decisions, or because they're trying to understand the world at large), try to make it socially safe for people to express their actual views (both pro and con), as long as they're civil, willing to provide supporting arguments and hear counter-arguments, and otherwise following good epistemic norms in the conversation.

The OP suggests that Claire's position is unusual because:

"While it is common for funders to serve on boards, it is not necessarily best practice."

Not because Wytham Abbey was a poor purchase.

The OP says: "Claire Zabel oversees significant grant-making to EVF organizations through her role at Open Phil, some of which have come under fire. While it is common for funders to serve on boards, it is not necessarily best practice."

I interpret this as saying Claire should be removed because 'funders serving on boards is not necessarily best practice', and also because the Wytham Abbey purchase was controversial and/or bad.

I think it's bad to cite the Abbey as a reason for a decision like this, while maintaining ambiguity about whether you think the purchase was a bad idea vs. merely controversial. I also think it would be unhealthy for EA to go down the road of making decisions heavily based on what seems controversial, without saying anything about whether you think the idea was also bad.

Social environments with a lot of "obviously X is suspect, everyone knows that, no need for me to say why I think that" talk tend to fall into a lot of deference cascades and miasma-based bad reputations. (Things that are perceived as bad largely because other people keep reporting that they think others perceive the thing as bad.)

it seems to me that you've used this to re-open the WA discussion here rather than discuss whether EVF should consider changing the composition/structure of its boards, and what the merits of that would be

If Wytham Abbey is going to be cited as one of the reasons to remove people from boards (as indeed it has been here), then it needs to be OK for people to say why they agree or disagree with that call.

(Which is also part of why it's helpful to say specifically what you think we should take away from the abbey case for this decision, rather than just fuzzily saying the decision has "come under fire". Many good ideas inspire disagreement! If you think this was a bad idea, then just say so, and ideally gesture at why you think so.)

For context: I was already planning to share Edward's comment somewhere, since I liked the analysis (similar to one he previously posted during the abbey discussion), and it wasn't available on the public internet. (And he'd given me permission to cross-post it.)

But I've been busy and hadn't gotten around to posting Edward's thing anywhere, hence me posting it here. If it already existed somewhere linkable, then I'd have just posted a link here instead.

I'm not convinced that it was poorly executed; I'd need to hear more details first.

they didn't disclose it publicly, so that the community mostly learned about it via Emile Torres' Twitter account

Claire said, "(This isn’t my domain but) we typically aim to publish grants within three months of when we make our initial payment, but we're currently working through a backlog of older grants."

This may reflect that Open Phil made a mistake, or it may reflect that they made the right call and de-prioritized "get announcements out ASAP" in favor of more objectively important tasks. I'd want to know more about why the backlog existed before weighing in.

I don't think Open Phil should heavily reshape their prioritization on things like this based on what they're scared Emile Torres will tweet about; that does not sound like the sort of heuristic that I'd expect to result in a functional Open Phil that is keeping its eye on the ball.

But separate from Torres, I can see arguments for it being useful to EAs for us to get faster updates from Open Phil, given what a large funder they are in this space. They of course don't morally owe EAs even 1% of the details they've provided to date, but it's a genuinely valuable community service that they share so much. So yes, faster may be better, and maybe the slow announcement in this case reflects some upstream process error I'm not aware of.

contrary to what Edward Kmett says, there are modern conference centres in Oxford, which would probably have cost substantially less

I'd be interested to see examples, and I'd be idly curious to know why they didn't pick one. If there's a need for more venue space, possibly we should purchase at least one of those too.

it doesn't sound like they did much or any external consultation, and the board have not shown any indication that they recognise the genuine risk of rationalising high-value decisions like this 

I don't know what you mean by this or why you think it. Who should Claire have talked to who you think she didn't talk to? Why is this decision at higher risk of rationalization than any other decision?

Or are you just saying "this decision was important, and it's not clear to me that Claire realized how important it is, and therefore risky to get wrong"? If so, I have no idea why you think that either. Maybe if you wrote a longer-form thing detailing what you think the evaluation process should have looked like, and how you think that differed from the actual evaluation process for the abbey, it would be clearer where the disagreement is.

Sharing a conversation between Oliver Habryka and an anonymous person from December, in response to the FTX implosion:

Anonymous: Actually, not clear if it’s obvious to you, but everyone is reluctant to say ANYTHING public right now [in the wake of FTX]. Downside risk is extremely extremely high. What’s your story of higher upside?

[Note from Anonymous: I meant from upthread context to talk about “downside risk [to an org that isn’t overtly EA]”, and not downside risk [to individual EAs] or [to EA leadership orgs] or [to the median person considering speaking up]. But we cleared that up later downthread and here Oli is replying more generally. I like what he’s saying and thanks Robbie for asking to post this.]

Oliver Habryka: I mean, it is very obvious to me, as trust in my ecosystem is collapsing because nobody is around to steer the ship and I am standing in front of an empty parliament chanting "reform" while the leadership is cowering and absent

But to be less poetic, I do think the downsides of not saying anything are huge. I think trust in most things we built is collapsing very quickly (including with me), and the actual downside of talking is maybe on the order of $1BB-$2BB, which seems much lower than the costs.

I think it's quite unlikely that saying anything right now will cost you much more than the money that you or people around you might have received from FTX. And like, I think indeed it is probably ethical to give most of that money back, so the marginal cost seems relatively low

Yep, it's definitely an important consideration that points in that direction! I'm not sure what the balance of arguments favors here, though I lean toward thinking it's good EA is a thing.

Since people have different visions of what they'd like EA to become, I think the best option is for people to articulate their visions and argue for them, and then we can try to converge; and to the extent we persistently disagree, we try to negotiate and plan some fair compromise. (Keeping in mind that it's hard to bind a huge informal community/movement to anything, no matter how much a small subset wants to negotiate a specific plan!)
