This may be unhelpful… I don’t think it’s possible to get to 0 instances of harassment in any human community.
I think a healthy community will do lots of prevention and also have services in place for addressing the times that prevention fails, because it will. This is a painful change of perspective from when I hoped for a 0% harm utopia.
I think EA really may have a culture problem and that we’re too passive on these issues because it’s seen as “not tractable” to fix problems like gender/power imbalances and interpersonal violence. We should still work on...
Point of confusion/disagreement: I don’t think EA is big (15k globally?). I don’t think EA has domain-level experts in most fields to work with to find neglected solutions. EAs typically have (far) less than 15 years of work experience in any field, and in my experience, they don’t have extensive professional networks outside of EA.
We have a lot more than we did ten years ago! And I agree ITN has flaws regardless, but I wanted to point out that if those are someone’s 2 main objections to using ITN today, it might not apply.
On the Forum? Or IRL?
In real life, I’ve selected to be around very compassionate people in EA and outside EA.
On the Forum… more men who “translate” experiences into ones that other men understand and don’t feel threatened by might help. I’ve noticed Will Bradshaw does this sometimes. Ozzie too. AGB sometimes.
Kirsten, Ivy, and Julia Wise do it often too. I know that for a lot of women, it’s really frustrating to be treated so skeptically when we raise personal experiences or views that vary from men’s experiences.
When I’m 1:1 with my hyper-rational or autis...
[Edited to distinguish between “you” the individual and the general “you/us/people.”]
“People have a personal responsibility to tell others to stop what they're doing if they don't feel like they want others to do those things. Don't expect others to read your mind.”
Correction: “[I believe that] People have a personal responsibility to tell [me] to stop what [I’m] doing if they don't feel like they want [me] to do those things. Don't expect [me] to read your mind.”
You can totally take that stance. I personally even like that stance sometimes and have found ...
This comment seems willfully obtuse. The person is referring to a pattern of behavior, i.e. a series of comments and bad experiences. A comment that comes at the end of a series and culminates in someone trying to take corrective action is not “a single comment that led to” their action.
Please reflect on how much you might be mad/sad/hurt/fearful and saying foolish things. Maybe don’t say them, or at least come back and fix them later.
I’m really happy to see you asking this question and doing an investigation of a charity and a cause yourself. It makes intuitive sense to me that moving from a very dangerous place to a very safe one would have long-term benefits to well-being, and the intervention seems worth investigating further.
It’s hard to know how much risk people are facing and how much improvement people will experience by moving; migration has upsides (eg better economic opportunity) and downsides (eg isolation from family). I’m not an expert on either but would be ex...
Re: “But I read this paragraph and it seems alien to me. What % of women+nb folks have this experience in EA?
‘I could tell you how tears streamed down my face as I read through accounts of women who have been harmed by people within the Effective Altruism community.’”
In the interest of reducing alienation, here’s some anecdata and context. Maya’s reaction wasn’t alien to me at all.
Among my female friends, having this type of reaction at some point was basically a developmental milestone. It wasn’t unique to EA. I expect such a survey would be more useful i...
Thanks for writing this! I appreciate this conversation. I think if I had been aware of your assertion that dads are typically more on the fence about having kids but still happy to have them, I would have been more excited to have kids with my partner earlier, so I especially valued that point. I want to reinforce your message that it’s important to think about this and maybe weight the “have kids” option more heavily than the average EA might do by default.
Anecdata: I am a woman who planned not to have kids. I allowed for the possibility I’d change my mi...
This seems like one of those things that might be best for the movement but not best for the individual.
A uni organizer who recruits 5 excellent future performers might have just had the most impactful portion of their whole career. But the general marketing skills they gained might be less useful to them personally. Becoming an expert in some object-level issue would probably be more rewarding and open more doors over the course of their career than being a generalist in marketing, which also has lower earning potential than learning consulting, programming, or research skills.
I feel more uncertain about this if they’re actually doing project management and people management.
I really like this post. That said, I don’t think this is true: “dedicates don’t have bullshit jobs.” We might have different definitions of bullshit though.
Dedicates don’t take jobs without doing an impact analysis, agreed.
However, dedicates may choose to sacrifice the chance to work 10 hour days on interesting problems, to take strategic jobs in non-EA orgs or government agencies that involve a lot of day-to-day bullshit. They do this in the hopes that they might have a shot at impact when the time is right. I think it’s good that they’re willing to do this and wouldn’t want their sacrifice mistaken for being a non-dedicate.
I agree that for a lot of people, this won’t be a problem. A lot of EA roles are professionalizing, so people can switch over to traditional careers if they want. (As in, community building is enough like management, event planning, or outreach roles at a lot of traditional orgs that the skills may transfer).
One piece of good advice for most people:
That issue seems inconveni...
Are you hoping to appeal to people who don’t think very analytically, or just to explain clearly that this is a very analytical community and it might not be as accessible or useful or fun for them if they are not also very analytical?
I actually think that some of the offputting words might help prevent bycatch.
I, for one, am really glad you raised this.
It seems plausible that some people caught the “AI is cool” bug along with the “EA is cool and nice and well-resourced” bug, and want to work on whatever they can that is AI-related. A justification like “I’ll go work on safety eventually” could be sincere or not.
Charity norms can swing much too far.
I’d be glad to see more 80k and Forum discussion of AI careers that points to the concerns here.
And I’d be glad to endorse more people doing what Richard mentioned — telling capabilities people that he thinks their work could be harmful while still being respectful.
Are we too cocky with EA funding or EA jobs; should EAs prepare for economic instability?
EA feels flush with cash, jobs, and new projects. But we have mostly “grown up” as a movement after the Great Recession of 2008 and may not be prepared for economic instability.
Many EAs come from very economically and professionally stable families. Our donor base may be insulated from economic shocks but not all orgs or individuals will be in equally secure positions.
I think lower- to middle-performers or newer EAs may overestimate their job stability and be overly optimistic about opportunities for future funding.
If that’s true, what should we be doing differently?
You can usually relatively straightforwardly divide your monetary resources into a part that you spend on donations and a part that you spend for personal purposes.
By contrast, you don't usually spend some of your time at work for self-interested purposes and some for altruistic purposes. (That is in principle possible, but uncommon among effective altruists.) Instead you only have one job (which may serve your self-interested and altruistic motives to varying degrees). Therefore, I think that analogies with donations are often a stretch and sometimes misleading (depending on how they're used).
Throwaway account to give a vague personal anecdote. I agree this has gotten better for some, but I think this is still a problem (a) that new people have to work out for themselves, going through the stages on their own, perhaps faster than happened 5 years ago; (b) that hits people differently if they are “converted” to EA but not as successful in their pursuit of impact. These people are left in a precarious psychological position.
I experienced both. I think of myself as “EA bycatch.” By the time I went through the phases of thinking through all of thi...
I agree with you. Yet I bristle when people who I don’t know well start putting forth arguments to me about what is good/bad for me, especially in a context where I wasn’t expecting it.
I’m much more accustomed to people thinking that moral relativism is polite, at least at first.
Moral relativism can be annoying, but putting forth strong moral positions at e.g. a freshers’ fair does feel like something that missionaries do.
Appreciate your comments, Aaron.
You say: But I am confident that leaders' true desire is "find people who have great epistemics [and are somewhat aligned]", not "find people who are extremely aligned [and have okay epistemics]".
I think that’s true for a lot of hires. But does that hold equally true when you think of hiring community builders specifically?
In my experience (~5 people), leaders’ epistemic criteria seem less stringent for community building. Familiarity with EA, friendliness, and productivity seemed more salient.
Would you have this same reaction if you saw Luke and Max or GWWC/CEA as equals and peers? Maybe so! It seems like you saw this as the head of CEA talking down to the OP. Max and Luke seem to know each other though; I read Max’s comment as a quick flag between equals that there’s a disagreement here, but writing it on the forum instead of an email means the rest of us get to participate a bit more in the conversation too.
This isn’t my experience in the US anymore! Most major cities have an EA meetup or it feels inevitable to me that they soon will. EA is still small overall, but increasingly ubiquitous. It’s a credit to the success of movement growth. It’s also a bit overwhelming for me. See comment below; even Tulsa is likely to have an EA group soon!
This is true. I appreciate you taking a minute to make a supportive comment!
I got downvoted but I’m not hating on the community at all.
Even more than in my previous IRL communities, EAs are consistently kind and interesting, share my values, and offer events that I think are likely to do some actual good. I am drawn to the real-life EA community like a moth to a flame whenever there is one available. But it’s also not sustainable for me to be so involved in EA. That’s not the community’s fault. It’s a quirk of my own psychology.
I’d like to live somewhere sunny where IRL EA hangouts are not an option, so I’m incentivized to make other connections.
If you like the location you're currently in, it seems pretty worth it to try to hang out with other people in your current community first. Join a sports team or games club or something. If you're worried about incentives, then ask a friend for accountability. Say you'll pay them $20 if you don't actually go to the event and ask them to follow up on it.
I'm a bit worried you're underestimating how difficult it would be to move to an entirely different continent on your own. Life as an expat can be expensive and alienating.
I think ~99.9% of cities don't have in-person EA hangouts.
Maybe you can just find the best cities for you and only later filter out the few ones with an EA group?
You can also check https://forum.effectivealtruism.org/community for places to avoid
You asked about translation. I feel tired trying to explain this and I know that’s not your fault! But it’s why I just don’t think the Forum works well for this topic.
My guess is that talking about “women’s issues” on the Forum feels about as taxing to me as it does for most AI safety researchers to respond to people whose reaction to AGI concerns is, “ugh, tech bros are at it again,” or even a well-intentioned, “I bet being in Silicon Valley skews your perspective on this. How many non-SV people have the kinds of concerns you mention?”
Most of us are ti...