I’m Catherine from CEA’s Community Health and Special Projects Team.
I’ve been frustrated and angered by some of the experiences some women and gender minorities have had in this community, ranging from feeling uncomfortable being in an extreme minority at a meeting through to sexual harassment and much worse. And I’ve been saddened by the lost impact that resulted from these experiences. I’ve tried to make things a bit better (including via co-founding Magnify Mentoring before I started at CEA), and I hope to do more this year.
In December 2022, after a couple of very sad posts by women on the EA Forum, Anu Oak and I started working on a project to get a better understanding of the experiences of women and gender minorities in the EA community. Łukasz Grabowski is now also helping out. Hopefully this information will help us form effective strategies to improve the EA movement.
I don’t really know what we’re going to find, and I’m very uncertain about what actions we’ll want to take at the end of this. We’re open to the possibility that things are really bad and that improving the experiences of women and gender minorities should be a major priority for our team. But we’re also open to finding out that things aren’t – on the whole – all that bad, or aren’t all that tractable, and there are no significant changes we want to prioritise.
We are still in the early stages of our project. The things we are doing now are:
- Gathering together and analysing existing data (EA Survey data, EAG(x) event feedback forms, incoming reports to the Community Health team, data from EA community subgroups, etc).
- Talking to others in the community who are running related projects, or who have relevant expertise.
- Planning our next steps.
If you have existing data you think would be helpful and that you’d like to share, please get in touch by emailing Anu at firstname.lastname@example.org.
If you’re running a related project, feel free to get in touch if you’d like to explore coordinating in some way (but please don’t feel obligated to).
Consider hiring an outside firm to do an independent review.
I think collecting data is a great idea, and I'm really glad this is happening. Thank you for doing this! Because one of your goals is to "better [understand] the experiences of women and gender minorities in the EA community," I wanted to relay one reaction I had to the Community Health Team's website.
I found some of the language offputting because it seems to suggest that instances of (e.g.) sexual misconduct will be assessed primarily in terms of their impact on EA, rather than on the people involved. Here's an example:
My basic reaction is: it is important to prevent sexual harassment (etc) because harassment is bad for the people who experience it, regardless of whether it affects the EA community's ability to have a positive impact.
This language is potentially alienating in and of itself, but also risks contributing to biased reporting by suggesting that the Community Health Team's response to the same kind of behavior might depend, for instance, on the perceived importance to EA of the parties involved. People are often already reluctant to disclose bad experiences, and I worry that framing the Community Health Team's work in this way will compound this, particularly in cases where accusations are being made against more established members of the community.
I read this with the knowledge that "we don't do smartass trolley problem calculations when it comes to shit like this, it never helps" is something reasonably well ingrained in the community, but this might be a good moment to make this clear to people who may be newer.
That this is reasonably well-ingrained in the community is less clear to me, especially post FTX. If the Community Health Team does see their goal as simply “support the community by supporting community members,” why not just plainly state that?
I’d actually love the Community Health Team to clarify:
1. Holding fixed the facts of a case, would the Community Health Team endorse a policy of considering the value of the accused/their work to EA when deciding how forcefully to respond? For example, if someone did something bad at an EAG, would “how valuable is this person’s work to the community?” be considered when deciding whether to ban them from future EAGs?
2. If the Community Health Team does endorse (1), how much weight does the “value to the community” criterion get relative to other criteria in determining a response?
3. If the Community Health Team does not endorse (1), are there any policies or procedures on the books to prevent (1) from happening?
This is especially important to get some clarity on, since most people's priors about how a community or community health team makes these decisions are based on their experiences in other communities they may be a part of, like their universities, workplaces, and social groups. If the Community Health team's values or weights in this area are different from those of non-EA communities, it is absolutely essential for people to know this.
I would go so far as to say that, depending on the difference in values and the difference in approaches to sexual harassment (etc.) policy, not offering clarity here could be considered deceptive, because it prevents people from making their own decisions based on how they value their personal safety and well-being.
I appreciate your attention to the language here. Having personal experience of not being believed or supported (outside of EA), I know how challenging it can be to try to keep going, let alone consider relative impact. I was quick to endorse the spirit of the overall message (which was, at least in part, informed by my knowledge of those involved) and should have noted my own reservations with some of the language.
I agree that language is very off-putting. A healthy community should not be a means to an end.
Suppose, hypothetically, that every individual EA would be just as effective, do just as much good, without an EA community as with one. In that case, how many resources should CEA and other EA orgs devote to community building? My answer is exactly 0. That implies that the EA community is a means to an end, the end of making EAs more effective.
That said, I wouldn't necessarily generalize to other communities. And I agree that assessing a particular case of alleged wrongdoing should not depend on the perceived value of the accused's contributions to EA causes, and I do not read CEA's language as implying otherwise.
I agree that meta work as a whole can only be justified from an EA framework on consequentialist grounds -- any other conclusion would result in partiality, holding the interests of EAs as more weighty than the interests of others.
However, I would argue that certain non-consequentialist moral duties come into play conditioned on certain choices. For example, if CEA decides to hold conferences, that creates a duty to take reasonable steps to prevent and address harassment and other misconduct at the conference. If an EA organization chooses to give someone power, and the person uses that power to further harassment (or to retaliate against a survivor), then the EA organization has a duty to take appropriate action.
Likewise, I don't have a specific moral duty to dogs currently sitting in shelters. But having adopted my dog, I now have moral duties relating to her well-being. If I choose to drive and negligently run over someone with my car, I have a moral duty to compensate them for the harm I caused. I cannot get out of those moral duties by observing that my money would be more effectively spent on bednets than on basic care for my dog or on compensating the accident victim.
So if -- for example -- CEA knows that someone is a sufficiently bad actor, its obligation to promote a healthy community by banning that person from CEA events is not only based on consequentialist logic. It is based on CEA's obligation to take reasonable steps to protect people at its events.
Why not? In consequentialism/utilitarian philosophy basically everything except utility itself is a means to an end.
I think it would be a bad idea for the Community Health Team to view their goal as promoting the EA community's ends, rather than the well-being of community members. Here is a non-exhaustive list of reasons why:
I’m not fully sold on utilitarianism myself, but it seems like your main argument here is that preventing harassment and negative community norms is an end to pursue in itself, which again goes against a strictly consequentialist framework.
I broadly agree with you, but I think this is one of those messy areas where EA’s strong commitment to utilitarian reasoning makes things complicated. As you say, from a utilitarian perspective it’s better not to treat community health instrumentally, because doing so will lead to less trust. However, if the community health team is truly utilitarian, then they would have strong reason to treat the community instrumentally but simply keep that part of their reasoning a secret.
Building trust in a utilitarian community seems extremely difficult for this reason. For instance, see Singer’s paper on secrecy in utilitarianism:
First of all, because you can't actually predict and quantify the aggregate effect of choices regarding community health on the movement's impact. You're better off taking it as a rule of thumb that people need to feel safe in the community, no matter what.
Second, because not everyone here is a utilitarian, and even those who partly are also want to feel safe in their own lives.
Having a healthy community is better than having an unhealthy community, all else being equal, because people being harmed is bad. This is a consequence we care about under consequentialism, even if it had zero effect on the other things we care about.
As it happens, a healthy community almost certainly has a positive effect on the other things we care about as well. But emphasizing this instrumental aspect makes it look like we don't care about the first thing at all.
Sure, but then you need to make a case for why you would prioritise this over anything else that you think has good consequences. I think the com health statement tries to make that argument (though it's not fully specified), whereas a statement like "we want to prevent x because x is bad" doesn't really help me understand why they want to prioritise x.
Okay, I feel like we need to rewind a bit. The problem is that people who have experienced behaviour like harassment are getting the impression from that document that the Community Health team might ignore their complaint depending on how "effective" the bad actor in question is, based on some naive EV calculation.
Now I'm assuming this impression is mistaken, in which case literally all they need to do is update the document to make it clear they don't tolerate bad behaviour, whoever it comes from. This costs $0.
I don't think that impression would be unfounded. In Julia Wise's post from last August, she mentioned these trade-offs (among others):
This means, on the one hand, that the team is well aware of the potential consequences of doing naive impact calculations to decide on their actions. On the other hand, it means that when deciding on a policy for handling complaints, the work accused people are doing is certainly taken into account.
More generally, it seems that the team does think of their end goal as making the most positive impact (which fits what other CEA higher-ups have said about the goals of the org as a whole), and creating a safe community is indeed just a means to that end.
This all makes me somewhat distrustful of the Community Health team.
I appreciated this. I really want EA to understand its problems and deal with them, but that's not going to happen if everyone is starting with an agenda. I value someone going in with a truth seeking goal to understand the situation.
I'm happy to see the community health team analyze existing data. Will any of this be shared with the rest of the EA community in any way, e.g. after anonymizing?
I'd also love to see the community health team eventually address some of the most severe allegations that have surfaced recently, specifically this thread and its unresolved questions. While I'm happy to see Julia say she's taking a "renewed look at possible steps to take here", how this comes across is that the renewed look was in response to the case going public. If true, this does raise questions around whether similar cases (egregious and unambiguous abuse of power by senior EAs) were known to the community health team and how they were handled, and whether a similar renewed look might be warranted for these other cases, even if they haven't been made public or caused any other issues yet.
In general, my impression is that while most are very grateful for the community health team's work, and happy to place their trust accordingly, the current model does require mainly taking the community health team's word at face value. Recent events have indicated that this dependence may not be the most sustainable or accountable model going forwards. This is true from the community health team's own perspective too: it becomes easier to lose the community's trust if individual team members make human errors, and the team becomes more susceptible to allegations that may rest on missing information.
My current working model: the community health team seems good at what they do once they talk to people face to face (source: mostly word of mouth; people might have other experiences), some members are maybe temperamentally somewhat conflict-averse, and in general they are used to ~rat-culture levels of charitability when it comes to public communications.
Regrettably, this means that people who are less temperamentally charitable, or newer to the movement, might find it more difficult to trust them.
It seems important to distinguish "are people happy with the results of talking to them / are they worthy of trust" from "are they good at comms".
Thank you for writing this post and for your important work, Catherine, Anu, and Łukasz.
We (me and the rest of the EA DC team) are always trying to learn and make our community more inclusive. If I can somehow support you or your work, please do let me know.
Thanks for all the suggestions and comments on this post! I have read them and will respond.
I know some commenters have been trying to square the uncertainty I express in this post with the allegations in TIME. I’ve added a new comment where I’ve shared the Community Health team’s understanding about most (not all) of the cases:
I'm heartened to hear that this project is underway, and I'm looking forward to being able to use this information to make our communities (local and global) better. Thank you, Catherine, Anu, and Łukasz!
Please feel free to reach out to me if I can be helpful. I don't have data to share at this time, but I want to support and encourage you in this work if I can.
I’m very glad you’re undertaking this project. When collecting and analyzing the data, I hope you’ll look for ways to account for survivor bias. For example, EA Survey data will systematically exclude people who leave the community due to negative experiences. Seeking out data from people who used to answer the EA Survey but no longer do (or used to attend EAG, etc.) could be one way of addressing this bias.
Would highly recommend Ozy's piece on the subject:
The problem with this, as well as the moral issues that Lilly speaks to, is the difficulty in gathering accurate data on the rate of sexual harassment or sexual assault:
(1) 80% of sexual assault and sexual harassment is not reported.
(2) When something is reported to an org, over 90% of the time, the person who made the initial reports to the organization "drops off" before it can be investigated (my rate is about 60%, as I work as a third party reporter, but it's taken me 4 - 5 years to get it that "low")
(3) Unless EA wants to improve its reporting systems...how can you expect to get accurate data? Literally, startups (I've partnered and worked with some) have raised tens of millions to solve this problem (underreporting of sexual harassment). As someone with many years of education, experience, and expertise, I think CEA not being willing to seek expert or outside counsel on this, and instead only looking inward and working with others in the EA ecosystem, is short-sighted at best.
Why is this not listed as a 'Community' post? (And thereby blocked by default?)
Sorry to the authors, it's not their fault presumably, I'm just tired of this insular/navel-gazing stuff; I'd be excited to see more of it kept out of my feed.
I've added the tag.
Currently, moderators and Forum facilitators will be applying and monitoring this tag. Other users cannot modify the tag (including the authors of this post).
Filed a feature suggestion to allow authors to add the "community" tag to their posts
I have no idea. I couldn't work out how to list it as "Community". I'm guessing the mods haven't categorised it yet.
Users can't tag their own posts in general, but CEA can, and this is a CEA post, so that doesn't seem like it should be the answer. Perhaps CEA community posts have an exception because they are more like official announcements than discussions? (Seems bad if so).
Forum mods can add tags, not anyone who works at CEA (and not all forum mods work at CEA)
I'd like for you to do a survey and a listening exercise and see if the results differ. I guess that a big quick survey would show most women are pretty happy with EA, but that a listening exercise might find that a small % really aren't and perhaps that people have left EA for this reason.
I'm curious about methods that might reveal things we don't expect or don't want to find out about what women really think.
That is true, but what about those who feel marginalized and socially excluded from EA, who feel that EA has noble causes but somehow chooses who to work with and who not to work with?
This statement is revelatory: it says a lot about an attitude of not thinking a little rape is bad enough to be worth expending energy on.
It's hopeless. I previously thought maybe EA could be made safer for women. I don't think it can anymore.
This may be unhelpful… I don’t think it’s possible to get to 0 instances of harassment in any human community.
I think a healthy community will do lots of prevention and also have services in place for addressing the times that prevention fails, because it will. This is a painful change of perspective from when I hoped for a 0% harm utopia.
I think EA really may have a culture problem and that we’re too passive on these issues because it’s seen as “not tractable” to fix problems like gender/power imbalances and interpersonal violence. We should still work on some of those issues because a culture of prevention is safer.
But while it’s fine to aim for 0 instances of harm as an inspiring stretch goal, it may also be honest to notice when that goal isn’t possible to achieve.