I’m Catherine from CEA’s Community Health and Special Projects Team.

I’ve been frustrated and angered by some of the experiences some women and gender minorities have had in this community, ranging from feeling uncomfortable being in an extreme minority at a meeting through to sexual harassment and much worse. And I’ve been saddened by the lost impact that resulted from these experiences. I’ve tried to make things a bit better (including via co-founding Magnify Mentoring before I started at CEA), and I hope to do more this year.

In December 2022, after a couple of very sad posts by women on the EA Forum, Anu Oak and I started working on a project to get a better understanding of the experiences of women and gender minorities in the EA community. Łukasz Grabowski is now also helping out. Hopefully this information will help us form effective strategies to improve the EA movement.

I don’t really know what we’re going to find, and I’m very uncertain about what actions we’ll want to take at the end of this. We’re open to the possibility that things are really bad and that improving the experiences of women and gender minorities should be a major priority for our team. But we’re also open to finding out that things aren’t – on the whole – all that bad, or aren’t all that tractable, and there are no significant changes we want to prioritise.

We are still in the early stages of our project. The things we are doing now are:

  • Gathering together and analysing existing data (EA Survey data, EAG(x) event feedback forms, incoming reports to the Community Health team, data from EA community subgroups, etc).
  • Talking to others in the community who are running related projects, or who have relevant expertise. 
  • Planning our next steps.

If you have existing data you think would be helpful and that you’d like to share, please get in touch by emailing Anu at anubhuti.oak@centreforeffectivealtruism.org

If you’re running a related project, feel free to get in touch if you’d like to explore coordinating in some way (but please don’t feel obligated to).
 


Consider hiring an outside firm to do an independent review.

lilly

I think collecting data is a great idea, and I'm really glad this is happening. Thank you for doing this! Because one of your goals is to "better [understand] the experiences of women and gender minorities in the EA community," I wanted to relay one reaction I had to the Community Health Team's website. 

I found some of the language off-putting because it seems to suggest that instances of (e.g.) sexual misconduct will be assessed primarily in terms of their impact on EA, rather than on the people involved. Here's an example:

"Our goal is to help the EA community and related communities have more positive impact, because we want a radically better world. A healthy community is a means to that end."

My basic reaction is: it is important to prevent sexual harassment (etc) because harassment is bad for the people who experience it, regardless of whether it affects the EA community's ability to have a positive impact.

This language is potentially alienating in and of itself, but also risks contributing to biased reporting by suggesting that the Community Health Team's response to the same kind of behavior might depend, for instance, on the perceived importance to EA of the parties involved. People are often already reluctant to disclose bad experiences, and I worry that framing the Community Health Team's work in this way will compound this, particularly in cases where accusations are being made against more established members of the community.

I read this with the knowledge that "we don't do smartass trolley problem calculations when it comes to shit like this, it never helps" is something reasonably well ingrained in the community, but this might be a good moment to make this clear to people who may be newer.

lilly

That this is reasonably well-ingrained in the community is less clear to me, especially post-FTX. If the Community Health Team does see their goal as simply “support the community by supporting community members,” why not just plainly state that?

I’d actually love the Community Health Team to clarify:

  1. Holding fixed the facts of a case, would the Community Health Team endorse a policy of considering the value of the accused/their work to EA when deciding how forcefully to respond? For example, if someone did something bad at an EAG, would “how valuable is this person’s work to the community?” be considered when deciding whether to ban them from future EAGs?

  2. If the Community Health Team does endorse (1), how much weight does the “value to the community” criterion get relative to other criteria in determining a response?

  3. If the Community Health Team does not endorse (1), are there any policies or procedures on the books to prevent (1) from happening?

This is especially important to get some clarity on, since most people's priors about how a community or community health team makes these decisions are based on their experiences in other communities they're part of, like their universities, workplaces, and social groups. If the Community Health team's values or weights in this area are different from those of non-EA communities, it is absolutely essential for people to know this.
I would go so far as to say that, depending on the difference in values and the difference in approaches to sexual harassment (etc.) policy, not offering clarity here could be considered deceptive, because it prevents people from making their own decisions based on how they value their personal safety and well-being.

I appreciate your attention to the language here. Having personal experience of not being believed or supported (outside of EA), I know how challenging it can be to try to keep going, let alone consider relative impact. I was quick to endorse the spirit of the overall message (which was, at least in part, informed by my knowledge of those involved) and should have noted my own reservations with some of the language. 

I agree that language is very off-putting. A healthy community should not be a means to an end.

Suppose, hypothetically, that every individual EA would be just as effective, do just as much good, without an EA community as with one. In that case, how many resources should CEA and other EA orgs devote to community building? My answer is exactly 0. That implies that the EA community is a means to an end, the end of making EAs more effective.

That said, I wouldn't necessarily generalize to other communities. And I agree that assessing a particular case of alleged wrongdoing should not depend on the perceived value of the accused's contributions to EA causes, and I do not read CEA's language as implying otherwise.

I agree that meta work as a whole can only be justified from an EA framework on consequentialist grounds -- any other conclusion would result in partiality, holding the interests of EAs as more weighty than the interests of others.

However, I would argue that certain non-consequentialist moral duties come into play conditioned on certain choices. For example, if CEA decides to hold conferences, that creates a duty to take reasonable steps to prevent and address harassment and other misconduct at the conference. If an EA organization chooses to give someone power, and the person uses that power to further harassment (or to retaliate against a survivor), then the EA organization has a duty to take appropriate action.

Likewise, I don't have a specific moral duty to dogs currently sitting in shelters. But having adopted my dog, I now have moral duties relating to her well-being. If I choose to drive and negligently run over someone with my car, I have a moral duty to compensate them for the harm I caused. I cannot get out of those moral duties by observing that my money would be more effectively spent on bednets than on basic care for my dog or on compensating the accident victim.

So if -- for example -- CEA knows that someone is a sufficiently bad actor, its obligation to promote a healthy community by banning that person from CEA events is not only based on consequentialist logic. It is based on CEA's obligation to take reasonable steps to protect people at its events.

Why not? In consequentialism/utilitarian philosophy basically everything except utility itself is a means to an end.

lilly

I think it would be a bad idea for the Community Health Team to view their goal as promoting the EA community's ends, rather than the well-being of community members. Here is a non-exhaustive list of reasons why:

  1. The Community Health Team can likely best promote the ends of the EA community by promoting the well-being of community members. I suspect doing more involved EV calculations will lead to worse community health, and thus a less impactful EA community. (I think the TIME story provides some evidence for this.)
  2. Harassment is intrinsically bad (i.e., it is an end we should avoid).
  3. Treating instances of harassment as bad only (or primarily) for instrumental reasons risks compounding harms experienced by victims of harassment. It is bad enough to be harassed, but worse to know that the people you are supposed to be able to turn to for support will do an EV calculation to decide what to do about it (even if they believe you).
  4. If I know that reporting bad behavior to the Community Health Team may prompt them to, e.g., assess the accused's and my relative contributions to EA, then I may be less inclined to report. Thus, instrumentalizing community health may undermine community health.
  5. Suggesting that harassment primarily matters because it may make the community less impactful is alienating to people. (I have strong consequentialist leanings, and still feel alienated by this language.)
  6. If the Community Health Team thinks that repercussions should be contingent upon, e.g., the value of the research the accused party is doing, then this renders it difficult to create clear standards of conduct. For instance, this makes it harder to create rules like: "If someone does X, the punishment will be Y" because Y will depend on who the "someone" is. In the absence of clear standards of conduct, there will be more harmful behavior.
  7. It's intuitively unjust to make the consequences of bad behavior contingent upon someone's perceived value to the community. The Community Health Team plays a role that is sort of analogous to a university's disciplinary committee, and most people think it's very bad when a university gives a lighter punishment to someone who commits rape because they are a star athlete, or their dad is a major donor, etc. The language the Community Health Team uses on their website (and here) feels worryingly close to this.

I’m not fully sold on utilitarianism myself, but it seems like your main argument here is that harassment/negative community norms are bad in themselves (ends to avoid, not just means), which again is against a strictly consequentialist framework.

I broadly agree with you, but I think this is one of those messy areas where EA’s strong commitment to utilitarian reasoning makes things complicated. As you say, from a utilitarian perspective it’s better not to treat community health instrumentally, because doing so will lead to less trust. However, if the community health team is truly utilitarian, then they would have strong reason to treat the community instrumentally but simply keep that part of their reasoning a secret.

Building trust in a utilitarian community seems extremely difficult for this reason. For instance, see Singer’s paper on secrecy in utilitarianism:

https://betonit.substack.com/p/singer-and-the-noble-lie

First of all, because you can't actually predict and quantify the aggregate effect of choices regarding community health on the movement's impact. You're better off taking it as a rule of thumb that people need to feel safe in the community, no matter what.

Second, because not everyone here is a utilitarian, and even those who partly are also want to feel safe in their own lives.

Having a healthy community is better than having an unhealthy community, all else being equal, because people being harmed is bad. This is a consequence we care about under consequentialism, even if it had zero effect on the other things we care about.

As it happens, a healthy community almost certainly has a positive effect on the other things we care about as well. But emphasizing this aspect makes it look like we don't also care about the first thing.

Sure, but then you need to make a case for why you would prioritise this over anything else that you think has good consequences. I think the community health statement tries to make that argument (though it's not fully specified), whereas a statement like "we want to stop x because x is bad" doesn't really help me understand why they want to prioritise x.

Okay, I feel like we need to rewind a bit. The problem is that people who have experienced behaviour like harassment are getting the impression from that document that the community health team might ignore their complaint depending on how "effective" the bad actor in question is, based on some naive EV calculation.

Now I'm assuming this impression is mistaken, in which case literally all they need to do is update the document to make it clear they don't tolerate bad behaviour, whoever it comes from. This costs $0.

I don't think that impression would be unfounded. In Julia Wise's post from last August, she mentioned these trade-offs (among others):

  • Encourage the sharing of research and other work, even if the people producing it have done bad stuff personally vs. Don’t let people use EA to gain social status that they’ll use to do more bad stuff
  • Take the talent bottleneck seriously; don’t hamper hiring / projects too much vs. Take culture seriously; don’t create a culture where people can predictably get away with bad stuff if they’re also producing impact

This means, on the one hand, that the team is well aware of the potential consequences of doing naive impact calculations to decide on their actions. On the other hand, it means that the impact of any complaint-handling policy on the work accused people are doing is certainly taken into account.

More generally, it seems that the team does think of their end goal as making the most positive impact (which fits what other CEA higher-ups have said about the goals of the org as a whole), and creating a safe community is indeed just a means to that end.

This all makes me somewhat distrustful of the Community Health team.

"I don’t really know what we’re going to find, and I’m very uncertain about what actions we’ll want to take at the end of this. We’re open to the possibility that things are really bad and that improving the experiences of women and gender minorities should be a major priority for our team. But we’re also open to finding out that things aren’t – on the whole – all that bad, or aren’t all that tractable, and there are no significant changes we want to prioritise."

 

I appreciated this. I really want EA to understand its problems and deal with them, but that's not going to happen if everyone is starting with an agenda. I value someone going in with a truth seeking goal to understand the situation. 

I'm happy to see the community health team analyze existing data. Will any of this be shared with the rest of the EA community in any way, e.g. after anonymizing?

I'd also love to see the community health team eventually address some of the most severe allegations that have surfaced recently, specifically this thread and its unresolved questions. While I'm happy to see Julia say she's taking a "renewed look at possible steps to take here", how this comes across is that the renewed look was in response to the case going public. If true, this does raise questions around whether similar cases (egregious and unambiguous abuse of power by senior EAs) were known to the community health team and how they were handled, and whether a similar renewed look might be warranted for these other cases, even if they haven't been made public or caused any other issues yet.

In general my impression is that while most are very grateful for the community health team's work, and happy to place their trust accordingly, the current model does require mainly taking the community health team's word at face value. However, recent events have indicated that this dependence may not be the most sustainable or accountable model going forwards. This is also true from the community health team's perspective: it becomes easier to lose the community's trust if individual team members make human errors, and the team is more susceptible to allegations that may suffer from missing information.

My current working model is that the community health team seems good at what they do once they talk to people face to face (source: mostly word of mouth; people might have other experiences), that some members are maybe temperamentally somewhat conflict-averse, and that in general they are used to ~rat-culture levels of charitability when it comes to public communications.

Regrettably, this means that people who are less temperamentally charitable or newer to the movement might find it more difficult to trust them.

It seems important to distinguish "are people happy with the results of talking to them / are they worthy of trust" from "are they good at comms".

Thank you for writing this post and for your important work, Catherine, Anu, and Łukasz.

We (me and the rest of the EA DC team) are always trying to learn and make our community more inclusive. If I can somehow support you or your work, please do let me know. 

Thanks to all the commenters asking us about whether our response is different depending on the person’s perceived value to the community and world. The community health team discussed responding to these questions when this post was first written, but we wanted all relevant team members to be able to carefully check and endorse our statements, and it was a very busy time. So we put our response on hold for a bit. Apologies for the delay.  

First, I want to say that our team cares a lot about the culture of EA. It would be a terrible loss to EA’s work if bad behaviour were tolerated here, both because of the harm that would do to individuals and because of the effect on people’s interest in getting involved and staying involved with EA projects. We care about the experience of individuals who go through any kind of harm, but there’s a reason we focus on people in EA. We do this work in EA and not in some other community because we think EA has a real chance at making a difference on very serious problems in the world, and we think it’s especially important that this community be a healthy one that doesn’t lose people because they don’t feel safe. We’ve changed some wording on our website to better reflect this.

I’ll give some examples of how this looks in practice. I don’t want to convey that we’ve developed the ideal policy here - it’s definitely a work in progress and I think it is likely that we’ve made mistakes. 

I do want to be clear on one thing: if we believed someone had committed a violent crime, then we would take serious action (including, if the victim wished, helping the victim navigate the police and justice system). It doesn’t matter how valuable the person’s work is. No one is above the law. Tolerating this kind of behaviour would erode basic norms that allow people to cooperate.

If we had good reason to think someone had committed a serious offence against another person (e.g. assault), we would not want them at CEA events, no matter how valuable their work is.

Exceptions we have made a handful of times (~5 times in thousands of conference applications over 7 years):

  • If the victim/survivor did not want action taken, for example because they believed it would increase danger to themselves.
  • If the assault happened a long time ago and there is strong reason to believe the person is not at risk of causing problems now (e.g. if the victim doesn’t believe other people are at risk and doesn’t want the person banned).

In situations where the action was minor (e.g. being quite argumentative with people, or making a person feel uncomfortable by flirting but without a significant power difference), or when we can’t get much information (i.e. the reports are hearsay/rumour), our approach has been:

  • If the grantmaker/events admissions team already think it is borderline whether the person should be given the opportunity, we might recommend the person not get the opportunity.
  • But if the grantmaker/events admissions team think there is a lot of value from this person getting the opportunity and still want to proceed knowing our concerns, we’ll aim to do harm reduction e.g. by
    • talking to the person about how their actions weren’t received well and giving them suggestions to prevent this happening. We try to do this in cases where we have reason to think the person is well intentioned but unaware of the effect they sometimes have on others.
    • suggesting some alterations to the project (e.g. by suggesting a different person working on their project does some of the tasks)
    • trying to find more information about the person or incident. For example, we might talk to the person directly or to the people who reported the concern, or ask one of their colleagues or the organiser of their EA group if they have any concerns about this person in other contexts. If we’re not able to share the identity of the person, we might just ask how their group/workplace is going and if there are any worries they have, which is something we commonly do whether or not there are concerns about a member of their group/workplace.
  • If we are only concerned about someone’s in-person actions, we generally don’t try to block remote or intellectual work like research funding.

Thanks for all the suggestions and comments on this post! I have read them and will respond. 

I know some commenters have been trying to square the uncertainty I express in this post with the allegations in TIME. I’ve added a new comment where I’ve shared the Community Health team’s understanding about most (not all) of the cases:

https://forum.effectivealtruism.org/posts/JCyX29F77Jak5gbwq/ea-sexual-harassment-and-abuse?commentId=jKJ4kLq8e6RZtTe2P 

I'm heartened to hear that this project is underway, and I'm looking forward to being able to use this information to make our communities (local and global) better. Thank you, Catherine, Anu, and Łukasz!

Please feel free to reach out to me if I can be helpful. I don't have data to share at this time, but I want to support and encourage you in this work if I can.

I’m very glad you’re undertaking this project. When collecting and analyzing the data, I hope you’ll look for ways to account for survivor bias. For example, EA Survey data will systematically exclude people who leave the community due to negative experiences. Seeking out data from people who used to answer the EA Survey but no longer do (or used to attend EAG, etc.) could be one way of addressing this bias.

The problem with this, as well as the moral issues that Lilly speaks to, is the difficulty in gathering accurate data on the rate of sexual harassment or sexual assault:

(1) 80% of sexual assault and sexual harassment is not reported. 

(2) When something is reported to an org, over 90% of the time the person who made the initial report to the organization "drops off" before it can be investigated (my rate is about 60%, as I work as a third-party reporter, but it's taken me 4-5 years to get it that "low").

(3) Unless EA wants to improve its reporting systems...how can you expect to get accurate data? Literally, startups (that I've partnered with and worked with) have raised tens of millions to solve this problem (underreporting of sexual harassment). As someone with many years of education, experience, and expertise: CEA may not be willing to seek expert or outside counsel on this, but looking inward/working only with others in the EA ecosystem is short-sighted at best.

Why is this not listed as a 'Community' post? (And thereby blocked by default?) 

Sorry to the authors, it's not their fault presumably; I'm just tired of this insular/navel-gazing stuff and was excited to have this sort of thing more out of my feed.

I've added the tag.

Currently, moderators and Forum facilitators will be applying and monitoring this tag. Other users cannot modify the tag (including the authors of this post).

Filed a feature suggestion to allow authors to add the "community" tag to their posts.

Thanks Lorenzo!

I have no idea. I couldn't work out how to list it as "Community". I'm guessing the mods haven't categorised it yet.

Users can't tag their own posts in general, but CEA can, and this is a CEA post, so that doesn't seem like it should be the answer. Perhaps CEA community posts have an exception because they are more like official announcements than discussions? (Seems bad if so).

Forum mods can add tags, not just anyone who works at CEA (and not all forum mods work at CEA).

I'd like you to do a survey and a listening exercise and see if the results differ. I guess that a big, quick survey would show most women are pretty happy with EA, but that a listening exercise might find that a small % really aren't, and perhaps that people have left EA for this reason.

I'm curious about methods that might reveal things we don't expect or don't want to find out about what women really think.

That is true, but what happens also to those who feel marginalized and socially excluded from EA, who feel that EA has noble causes but somehow it chooses who to work with and who not to work with?

"I don’t really know what we’re going to find, and I’m very uncertain about what actions we’ll want to take at the end of this. We’re open to the possibility that things are really bad and that improving the experiences of women and gender minorities should be a major priority for our team. But we’re also open to finding out that things aren’t – on the whole – all that bad, or aren’t all that tractable, and there are no significant changes we want to prioritise."

This statement is revelatory: it reveals a lot about an attitude that a little rape isn't bad enough to expend energy toward.

It's hopeless. I previously thought maybe EA could be made safer for women. I don't think it can anymore.

This may be unhelpful… I don’t think it’s possible to get to 0 instances of harassment in any human community.

I think a healthy community will do lots of prevention and also have services in place for addressing the times that prevention fails, because it will. This is a painful change of perspective from when I hoped for a 0% harm utopia.

I think EA really may have a culture problem and that we’re too passive on these issues because it’s seen as “not tractable” to fix problems like gender/power imbalances and interpersonal violence. We should still work on some of those issues because a culture of prevention is safer.

But while it’s fine to aim for 0 instances of harm as an inspiring stretch goal, it may also be honest to notice when that goal isn’t possible to achieve.
