
Alt. title: "If EA isn't feminist, let me out of it"

TW / CW: discussion of sexual violence, assault

Written in a state of: Hurry. Anger. A high degree of expertise. 

This post does not include a sufficient discussion of the uniqueness of gender identity, and tends to oversimplify what it means to be a woman. I would also like to see an EA community-based discussion about supporting and caring for nonbinary people, as well as one that more carefully centers trans experiences. Even more crucially, it terrifies me to think about the poor quality of discussion that might result from addressing intersectionality.

I am so, so tired. I haven't even been here that long and I am so, so tired. I can't imagine how other people feel. 

I would be extremely surprised to meet a woman who does not go through her life fearing violence from men, or fearing that violence will be perpetrated against her because of some aspect of her gender, at least some of the time.

This should not cause controversy. This should not even remotely surprise you. This should not elicit any thought that is in any way related to "but what about-".  I'm not saying it does; if it doesn't, that would be great. We (everyone) are allowed to say things that aren't surprising. And I shouldn't have to clarify that. 

But apparently, it is surprising to some people. Therein lies the problem. 

Usually, I try to make men feel better by saying things like "oh, well I know you're not like that," or "I'm sure that's not what you meant," or "I'm sure he just forgot," or by making jokes. I'm not doing that today. I don't know that you're not like that. I actually do know what you mean, because you said it. Because you forgot that half the world's population, half your family and friends and coworkers and classmates (roughly, potentially), exist in a state of constant fear. 

 

This attitude absolutely disgusts me. 

If you're not aware of the backlash against feminism by now, you have been intentionally ignoring it. If you are intentionally ignoring the fight for gender liberation, in the context of my life you are a malicious actor, and I'm tired of pretending you're not. 

Besides that, it's not about this one tweet. It's about seeing gender liberation as somehow anything other than completely integral. I don't understand it, and I don't want to. 

Believe me, I would love to be able to trust men and assume that 99.9% of them/you actually wish me no harm and move through the world relatively unencumbered by the power differential that characterizes society, but I can't. Because I'm confident someone is going to openly relate this problem to the animal rights movement and not see a problem with that. 

This is very clearly a high-pitched wail given words. You have to understand. One of you has to understand. One more person has to understand. I want to scream. I've wanted to scream for years, and I haven't, and this is as close as I've ever gotten. 

I want to cry. I want to cry every time I see a story about a missing woman, or a missing girl. I want to cry every time I hear about women impregnated with sperm from their OB/GYN and not their partner or chosen donor. I want to cry every time someone sends me a news article that a woman in the middle of a C-section has been raped by her anesthesiologist. 

I want to cry every day. 

This discussion needs to be in the open. And you need to have it right now. 

Comments (9)



I'm confused. What are you trying to say here? You linked a proposal to prioritize violence against women and girls as an EA cause area (which I assume you don't object to?) and a tweet by some person unknown to me saying that critics of EA hold it to a standard they don't apply to feminism (which probably depends a lot on what kind of critics, and on their political background in particular). What do you expect the readers to learn from this or do about it?

The link to the post on VAWG was my mistake - I intended to link to the comments specifically, which got noticeably heated after someone followed up what I thought was an incredibly well-researched and persuasive post with "but what about men's rights." What I thought were pretty charitable responses explaining how that's not actually relevant to the discussion got downvoted beyond belief. In my (limited, yet colorful) experience, EA seems to have a recurring problem with allowing gender issues to be prioritized.

Thanks for this clarification - I had the same response to the comments on that post. 

I'm very sympathetic to your feelings, but I also don't understand what you're asking for from your post.

Are all posts asking for something explicitly? (A real question.) To the extent that they are, I think the takeaway is a greater commitment to understanding gender-based violence, including its everyday forms, as a goal that is an end in itself. I was hoping someone could really help me figure out what exactly isn't clicking here, because clearly there is a recurring problem. 

It seemed like you were asking for something, some urgent action, because you ended your post by saying, "This discussion needs to be in the open. And you need to have it right now."

It made me feel like you were in distress and asking for help, or maybe demanding change, but I couldn't really tell what it was you needed. I guess from your comment you don't know either.

I have some thoughts that I think are better to share privately - DMing you.

Lots of strong downvotes for not a lot of explanation. 

[comment deleted]