Fair.
Having run through the analogy, EA becoming more like an academic field or a profession rather than a movement seems very improbable.
I agree that “try to reduce abuses common within the church” seems a better analogy.
JWS, do you think EA could work as a professional network of “impact analysts” or “impact engineers” rather than as a “movement”?
Ryan, do you have a sense of what that would concretely look like?
If we look at other professionals, for example, engineers have in common some key ideas, values, and broad goals (like ‘build things that work’). Senior engineers recruit young engineers and go to professional conferences to advance their engineering skills and ideas. Some engineers work in policy or politics, but they clearly aren’t a political movement. They don’...
I understand the usage of “should” in this context. I was noting that it reads oddly to me, like a possible typo, and could be written to read more clearly.
For context on my own vote: I’d give the same answer for talking about monogamy.
People should clearly be able to say “my partner(s) and I are celebrating my birthday tonight” and “it’s my anniversary!” and “look at this cute picture of my metamour’s dog!” and then answer questions if a colleague says, “what’s a metamour?” Just like all colleagues should be able to talk about their families at work.
People should be aware that it’s risky to spend work time nerding out about dating, romantic issues, sex, hitting on people, etc. People should be aware tha
That seems much less good than appearing in the SwapCard list of attendees, where everyone is scheduling 1:1s already, but I agree that a cheap version of the thing here is very doable even without SwapCard.
A hard thing here: For any project where “learn to work with external partners and train them to work with us” might be a good goal, there is usually a clear, higher priority and time-sensitive outcome in play, like “Make a hire for this role.” The time trade-offs are real, so the lower-priority goal doesn’t happen.
This may be the wrong long-term play. I am inclined to agree with you that more successful external partnerships would be valuable, but I see why orgs take the more obvious win in the short-term.
I think about optimization and scale of impact for my donations, but not for my day to day work (anymore). I am most productive and useful when I’m focused on helping the people I encounter on a given day, however I can help them. When I’m looking for general opportunities to help my neighbors, friends, colleagues, and family on an individual level, by offering whatever bit of helpful energy I have to give at a given moment, I get consistently positive feedback about giving useful help, and I am energized.
When I used to let my peers or managers or myself push me to justify how I help people, optimization mindset led me to burn tons of energy trying to find “the most good” I could do, while actually doing almost nothing useful.
Seconding this: In my city, a TRO (temporary restraining order) is very easy to get:
“If the judge is convinced that a temporary restraining order is necessary*, he or she may issue the order immediately, without informing the other parties and without holding a hearing.”
*IMO, local judges are very lenient with TROs, issuing them “just in case” the complaint is valid, and reserving more conservative judgements for the actual hearing, 14+ days later.
Typo? “I believe there is a reasonable risk should EAs:”
Do you mean “a reasonable risk if EAs” or “a reasonable risk that EAs should not…”?
The wording is confusing to me
I assume this trust difference is due to perceived or real value differences among different EAs, not rampant mistrust of CH among all EAs. Trust would only be shifted around rather than “solved” by having different people in CH roles.
I was not interviewed or involved in this situation but I have asked Julia and Catherine for support on other issues and felt supported. While Chris would share more things with Ben than he would share with CH, I would share more things with the current CH team than I would share with Ben. Chris trusts Ben more; I trust CH mo...
It wouldn't surprise me if active Less Wrong members were more favourably disposed towards Ben than other people.
First, CEA definitely has access to legal counsel.
Second, I don’t think these issues are that relevant, after reading Ben’s posts.
Regardless of legal risk, the reasons for not making claims public are clear -
(A) It took Ben hundreds of hours to feel confident and clear enough to make a useful public statement while also balancing the potential harms to Alice and Chloe. This is not uncommon in such situations and I think people should not expect CH to be able to do this in most cases.
(B) CEA is not in charge of Nonlinear or most other EA orgs. Just like Be...
I share Holly’s appreciation for you all, and also the concern that Lightcone’s culture and your specific views of these problems don’t necessarily scale or translate well outside of rat spheres of influence. I agree that’s sad, but I think it’s good for people to update their own views with that in mind.
My takeaways from all this are fairly confidently the following:*
EA orgs could do with following more “common sense” in their operations.
For example,
hire “normie” staff or contractors early on who are expected to know and enforce laws, financial reg
You asked about translation. I feel tired trying to explain this and I know that’s not your fault! But it’s why I just don’t think the Forum works well for this topic.
My guess is that talking about “women’s issues” on the Forum feels about as taxing to me as it does for most AI safety researchers to respond to people whose reaction to AGI concerns is, “ugh, tech bros are at it again” or even a well-intentioned, “I bet being in Silicon Valley skews your perspective on this. How many non-SV people have the kinds of concerns you mention?”
Most of us are ti...
I’ve been away from the Forum and just saw this comment. When you say “that figure”, what are you referring to?
This may be unhelpful… I don’t think it’s possible to get to 0 instances of harassment in any human community.
I think a healthy community will do lots of prevention and also have services in place for addressing the times that prevention fails, because it will. This is a painful change of perspective from when I hoped for a 0% harm utopia.
I think EA really may have a culture problem and that we’re too passive on these issues because it’s seen as “not tractable” to fix problems like gender/power imbalances and interpersonal violence. We should still work on...
Point of confusion/disagreement: I don’t think EA is big (15k globally?). I don’t think EA has domain level experts in most fields to work with to find neglected solutions. EAs typically have (far) less than 15 years work experience in any field and in my experience, they don’t have extensive professional networks outside of EA.
We have a lot more than we did ten years ago! And I agree ITN has flaws regardless, but I wanted to point out that if those are someone’s two main objections to using ITN today, they might not apply.
+1 But also, lowering stress for community members is part of advancing the discourse, in my view.
I actually endorse the idea of polls on this but don’t want to make one. Why? I’m in several text and real life conversations with women right now and none of them are commenting here because we’re sad and annoyed and frustrated. So they’re not voting.
On the Forum? Or IRL?
In real life, I’ve selected to be around very compassionate people in EA and outside EA.
On the Forum… more men who “translate” experiences into ones that other men understand and don’t feel threatened by might help. I’ve noticed Will Bradshaw does this sometimes. Ozzie too. AGB sometimes.
Kirsten, Ivy, and Julia Wise do it often too. I know that for a lot of women, it’s really frustrating to be treated so skeptically when we raise personal experiences or views that vary from men’s experiences.
When I’m 1:1 with my hyper-rational or autis...
[Edited to distinguish between “you” the individual and the general “you/us/people.”]
“People have a personal responsibility to tell others to stop what they're doing if they don't feel like they want others to do those things. Don't expect others to read your mind.”
Correction: “[I believe that] People have a personal responsibility to tell [me] to stop what [I’m] doing if they don't feel like they want [me] to do those things. Don't expect [me] to read your mind.”
You can totally take that stance. I personally even like that stance sometimes and have found ...
This comment seems willfully obtuse. The person is referring to a pattern of behavior, ergo a series of comments and bad experiences. A comment that comes at the end of a series and culminates in someone trying to take corrective action is not “a single comment that led to” their action.
Please reflect on how much you might be mad/sad/hurt/fearful and saying foolish things. Maybe don’t say them, or at least come back and fix them later.
I’m really happy to see you asking this question and doing an investigation of a charity and a cause yourself. It makes intuitive sense to me that moving from a very dangerous place to a very safe one would have long-term benefits to well-being, and the intervention seems worth additional investigation.
It’s hard to know how much risk people are facing and how much improvement people will experience by moving; migration has upsides (eg better economic opportunity) and downsides (eg isolation from family). I’m not an expert on either but would be ex...
Re: “But I read this paragraph and it seems alien to me. What % of women+nb folks have this experience in EA?
‘I could tell you how tears streamed down my face as I read through accounts of women who have been harmed by people within the Effective Altruism community.’”
In the interest of reducing alienation, here’s some anecdata and context. Maya’s reaction wasn’t alien to me at all.
Among my female friends, having this type of reaction at some point was basically a developmental milestone. It wasn’t unique to EA. I expect such a survey would be more useful i...
Thanks for writing this! I appreciate this conversation. I think if I had been aware of your assertion that dads are typically more on the fence about having kids but still happy to have them, I would have been more excited to have kids with my partner earlier, so I especially valued that point. I want to reinforce your message that it’s important to think about this and maybe weight the “have kids” option more heavily than the average EA might do by default.
Anecdata: I am a woman who planned not to have kids. I allowed for the possibility I’d change my mi...
Cool idea. Are you working on this in a dedicated way? If this is useful, I bet you could try it at a retreat, or take 3–12 months to promote its use and see how it plays out.
This seems like one of those things that might be best for the movement but not best for the individual.
A uni organizer who recruits 5 excellent future performers might have just had the most impactful portion of their whole career. But the general marketing skills they got might be less useful to them personally. Becoming an expert in some object-level issue would probably be more rewarding and open more doors over the course of their career than being a generalist in marketing, which also has lower earning potential than consulting, programming, or research skills.
I feel more uncertain about this if they’re actually doing project management and people management.
I don’t think (3) is that bad. New members are not always better than shooting experienced members into good projects.
I wonder if 2–3 year cohort models of fellows would be better on established campuses.
I really like this post. That said, I don’t think this is true: “dedicates don’t have bullshit jobs.” We might have different definitions of bullshit though.
Dedicates don’t take jobs without doing an impact analysis, agreed.
However, dedicates may choose to sacrifice the chance to work 10 hour days on interesting problems, to take strategic jobs in non-EA orgs or government agencies that involve a lot of day-to-day bullshit. They do this in the hopes that they might have a shot at impact when the time is right. I think it’s good that they’re willing to do this and wouldn’t want their sacrifice mistaken for being a non-dedicate.
I agree that for a lot of people, this won’t be a problem. A lot of EA roles are professionalizing, so people can switch over to traditional careers if they want. (As in, community building is enough like management, event planning, or outreach roles at a lot of traditional orgs that the skills may transfer).
One piece of good advice for most people:
That issue seems inconveni...
I think conceptualizing job hunts like this for very competitive positions is often accurate and healthy fwiw
I’ve said “helping other beings” before. It sounds a bit odd to some people but is more accurate.
Are you hoping to appeal to people who don’t think very analytically, or just to explain clearly that this is a very analytical community and it might not be as accessible or useful or fun for them if they are not also very analytical?
I actually think that some of the offputting words might help prevent bycatch.
I’d check with Giving What We Can or One for the World, to see if you can take the giving pledge as a company.
I, for one, am really glad you raised this.
It seems plausible that some people caught the “AI is cool” bug along with the “EA is cool and nice and well-resourced” bug, and want to work on whatever they can that is AI-related. A justification like “I’ll go work on safety eventually” could be sincere or not.
Charity norms can swing much too far.
I’d be glad to see more 80k and forum talk about AI careers that point to the concerns here.
And I’d be glad to endorse more people doing what Richard mentioned — telling capabilities people that he thinks their work could be harmful while still being respectful.
Are we too cocky with EA funding or EA jobs; should EAs prepare for economic instability?
EA feels flush with cash, jobs, and new projects. But we have mostly “grown up” as a movement after the Great Recession of 2008 and may not be prepared for economic instability.
Many EAs come from very economically and professionally stable families. Our donor base may be insulated from economic shocks but not all orgs or individuals will be in equally secure positions.
I think lower-to-middle performers or newer EAs may overestimate their stability and be overly optimistic about their opportunities for future funding.
If that’s true, what should we be doing differently?
You can usually relatively straightforwardly divide your monetary resources into a part that you spend on donations and a part that you spend for personal purposes.
By contrast, you don't usually spend some of your time at work for self-interested purposes and some for altruistic purposes. (That is in principle possible, but uncommon among effective altruists.) Instead you only have one job (which may serve your self-interested and altruistic motives to varying degrees). Therefore, I think that analogies with donations are often a stretch and sometimes misleading (depending on how they're used).
Throwaway account to give a vague personal anecdote. I agree this has gotten better for some, but I think this is still a problem (a) that new people have to work out for themselves, going through the stages on their own, perhaps faster than happened 5 years ago; (b) that hits people differently if they are “converted” to EA but not as successful in their pursuit of impact. These people are left in a precarious psychological position.
I experienced both. I think of myself as “EA bycatch.” By the time I went through the phases of thinking through all of thi...
I agree with you. Yet I bristle when people who I don’t know well start putting forth arguments to me about what is good/bad for me, especially in a context where I wasn’t expecting it.
I’m much more accustomed to people thinking that moral relativism is polite, at least at first.
Moral relativism can be annoying, but putting forth strong moral positions at eg a fresher’s fair does feel like something that missionaries do.
Appreciate your comments, Aaron.
You say: “But I am confident that leaders' true desire is ‘find people who have great epistemics [and are somewhat aligned]’, not ‘find people who are extremely aligned [and have okay epistemics]’.”
I think that’s true for a lot of hires. But does that hold equally true when you think of hiring community builders specifically?
In my experience (~5 people), leaders’ epistemic criteria seem less stringent for community building hires. Familiarity with EA, friendliness, and productivity seemed more salient.
I agree with you, and I think this somewhat supports the OP’s concern.
Are most uni groups capable of producing or critiquing empirical work about their group, or about EA or about their cause areas of choice? Are they incentivized to do so at all?
Sometimes yes, but mostly no.
Re: “there have been cases of really great organizers springing up after just an intro fellowship.”
I definitely believe this can happen and am glad you allow for that. What makes someone seem really great — epistemics, alignment/buy-in, skill in a relevant area of study, __?
I agree and think this is an argument for investing in cause specific groups rather than generalized community building.
No problem, thanks for the wiki link