I’m Catherine from CEA’s Community Health and Special Projects Team.
I’ve been frustrated and angered by the experiences some women and gender minorities have had in this community, ranging from feeling uncomfortable being in an extreme minority at a meeting through to sexual harassment and much worse. And I’ve been saddened by the lost impact that resulted from these experiences. I’ve tried to make things a bit better (including by co-founding Magnify Mentoring before I started at CEA), and I hope to do more this year.
In December 2022, after a couple of very sad posts by women on the EA Forum, Anu Oak and I started working on a project to get a better understanding of the experiences of women and gender minorities in the EA community. Łukasz Grabowski is now also helping out. Hopefully this information will help us form effective strategies to improve the EA movement.
I don’t really know what we’re going to find, and I’m very uncertain about what actions we’ll want to take at the end of this. We’re open to the possibility that things are really bad and that improving the experiences of women and gender minorities should be a major priority for our team. But we’re also open to finding out that things aren’t – on the whole – all that bad, or aren’t all that tractable, and there are no significant changes we want to prioritise.
We are still in the early stages of our project. The things we are doing now are:
- Gathering and analysing existing data (EA Survey data, EAG(x) event feedback forms, incoming reports to the Community Health team, data from EA community subgroups, etc.).
- Talking to others in the community who are running related projects, or who have relevant expertise.
- Planning our next steps.
If you have existing data you think would be helpful and that you’d like to share, please get in touch by emailing Anu at anubhuti.oak@centreforeffectivealtruism.org.
If you’re running a related project, feel free to get in touch if you’d like to explore coordinating in some way (but please don’t feel obliged to).
I agree that meta work as a whole can only be justified from an EA framework on consequentialist grounds -- any other conclusion would result in partiality, holding the interests of EAs as more weighty than the interests of others.
However, I would argue that certain non-consequentialist moral duties come into play, conditional on certain choices. For example, if CEA decides to hold conferences, that creates a duty to take reasonable steps to prevent and address harassment and other misconduct at those conferences. If an EA organization chooses to give someone power, and that person uses the power to further harassment (or to retaliate against a survivor), then the EA organization has a duty to take appropriate action.
Likewise, I don't have a specific moral duty to dogs currently sitting in shelters. But having adopted my dog, I now have moral duties relating to her well-being. If I choose to drive and negligently run over someone with my car, I have a moral duty to compensate them for the harm I caused. I cannot get out of those moral duties by observing that my money would be more effectively spent on bednets than on basic care for my dog or on compensating the accident victim.
So if -- for example -- CEA knows that someone is a sufficiently bad actor, its obligation to promote a healthy community by banning that person from CEA events is not only based on consequentialist logic. It is based on CEA's obligation to take reasonable steps to protect people at its events.