All of nananana.nananana.heyhey.anon's Comments + Replies

Fair.

Having run through the analogy, I think EA becoming more like an academic field or a profession rather than a movement seems very improbable.

I agree that “try to reduce abuses common within the church” seems a better analogy.

JWS, do you think EA could work as a professional network of “impact analysts” or “impact engineers” rather than as a “movement”?

Ryan, do you have a sense of what that would concretely look like?

If we look at other professions, engineers, for example, have in common some key ideas, values, and broad goals (like ‘build things that work’). Senior engineers recruit young engineers and go to professional conferences to advance their engineering skills and ideas. Some engineers work in policy or politics, but they clearly aren’t a political movement. They don’... (read more)

8
JWS
7mo
I guess I still don't have a clear idea of what Ryan's 'network of networks' approach would look like without the 'movement' aspect broadly defined. How different would that be, practically, from current EA but with more decentralisation of money and power and more professional norms? But would this be a set of rigid internal norms that prevent people in the philanthropy space from connecting with those in specific cause areas? Are we going to split the AI technical and governance fields strictly? Is nobody meant to notice the common philosophical ideas which underlie the similar approaches to all these cause areas? It's especially the latter I'm having trouble getting my head around. I don't think that 'field of engineering' is the right level of analogy here. I think the best analogies for EA are other movements, like 'Environmentalism' or 'Feminism' or 'The Enlightenment'. Social movements have had a lot of consequences in human history, some of them very positive and some very negative. It seems to me that you and Ryan think that there's a way to structure EA so that we can cleanly excise the negative parts of a movement and keep the positive parts without being a movement, and I'm not sure that's really possible or even a coherent idea. *** [to @RyanCarey: I think you updated your other comment as I was thinking of my response, so folding in my thoughts on that here] I'm completely with you here, but to me this is something that ends up miles away from 'winding down EA', or EA being 'not a movement'. I think abuse might be a bit strong as an analogy but directionally I think this is correct, and I'd agree we need to do things differently. But in this analogy I don't think the answer is to end 'Christianity' as a movement and set up an overlapping network of tithing, volunteering, Sunday schools etc, which is what I take you to be suggesting. I feel like we're closer to agreement here, but on reflection the details of your plan here don't sum up to 'end EA as a
4
RyanCarey
7mo
Well I'm not sure it makes sense to try to fit all EAs into one professional community that is labelled as such, since we often have quite different jobs and work in quite different fields. My model would be a patchwork of overlapping fields, and a professional network that often extends between them. It could make sense for there to be a community focused on "effective philanthropy", which would include OpenPhil, Longview, philanthropists, and grant evaluators. That would be as close to "impact analysis" as you would get, in my proposal. There would be an effective policymaking community too. And then a bevy of cause-specific research communities: evidence-based policy, AI safety research, AI governance research, global priorities research, in vitro meat, global catastrophic biorisk research, global catastrophic risk analysis, global health and development, and so on. Lab heads and organisation leaders in these research communities would still know that they ought to apply to the "effective philanthropy" orgs to fund their activities. And they would still give talks at universities to try to attract top talent. But there wouldn't be a common brand or cultural identity, and we would frown upon the risk-increasing factors that come from the social movement aspect.

I understand the usage of “should” in this context. I was noting that it reads oddly to me, like a possible typo, and could be written to read more clearly.

2
Guy Raveh
7mo
Sorry for the assumption then (I'm not a native speaker myself). But it looks fine to me.

For context on my own vote: I’d give the same answer for talking about monogamy.

  • People should clearly be able to say “my partner(s) and I are celebrating my birthday tonight” and “it’s my anniversary!” and “look at this cute picture of my metamour’s dog!” and then answer questions if a colleague says, “what’s a metamour?” Just like all colleagues should be able to talk about their families at work.

  • People should be aware that it’s risky to spend work time nerding out about dating, romantic issues, sex, hitting on people, etc. People should be aware tha

... (read more)

That seems much less good than appearing in the SwapCard list of attendees, where everyone is scheduling 1:1s already, but I agree that a cheap version of the thing here is very doable even without SwapCard.

A hard thing here: For any project where “learn to work with external partners and train them to work with us” might be a good goal, there is usually a clear, higher-priority, time-sensitive outcome in play, like “Make a hire for this role.” The time trade-offs are real, so the lower-priority goal doesn’t happen.

This may be the wrong long-term play. I am inclined to agree with you that more successful external partnerships would be valuable, but I see why orgs take the more obvious win in the short term.

I think about optimization and scale of impact for my donations, but not for my day to day work (anymore). I am most productive and useful when I’m focused on helping the people I encounter on a given day, however I can help them. When I’m looking for general opportunities to help my neighbors, friends, colleagues, and family on an individual level, by offering whatever bit of helpful energy I have to give at a given moment, I get consistently positive feedback about giving useful help, and I am energized.

When I used to let my peers or managers or myself push me to justify how I help people, optimization mindset led me to burn tons of energy trying to find “the most good” I could do while actually doing almost nothing useful.

Seconding this: In my city, a TRO (temporary restraining order) is very easy to get:

“If the judge is convinced that a temporary restraining order is necessary*, he or she may issue the order immediately, without informing the other parties and without holding a hearing.”

*IMO, local judges are very lenient with TROs, issuing them “just in case” the complaint is valid, and reserving more conservative judgements for the actual hearing, 14+ days later.

Typo? “I believe there is a reasonable risk should EAs:”

Do you mean “a reasonable risk if EAs” or “a reasonable risk that EAs should not…”

The wording is confusing to me

2
Guy Raveh
7mo
https://en.wiktionary.org/wiki/should Look at definition 3 and at the usage notes.

I assume this trust difference is due to perceived or real value differences among different EAs, not rampant mistrust of CH among all EAs. Trust would only be shifted around rather than “solved” by having different people in CH roles.

I was not interviewed or involved in this situation but I have asked Julia and Catherine for support on other issues and felt supported. While Chris would share more things with Ben than he would share with CH, I would share more things with the current CH team than I would share with Ben. Chris trusts Ben more; I trust CH mo... (read more)

It wouldn't surprise me if active Less Wrong members were more favourably disposed towards Ben than other people.

First, CEA definitely have access to legal counsel.

Second, I don’t think these issues are that relevant, after reading Ben’s posts.

Regardless of legal risk, the reasons for not making claims public are clear -

(A) It took Ben hundreds of hours to feel confident and clear enough to make a useful public statement while also balancing the potential harms to Alice and Chloe. This is not uncommon in such situations and I think people should not expect CH to be able to do this in most cases.

(B) CEA is not in charge of Nonlinear or most other EA orgs. Just like Be... (read more)

I share Holly’s appreciation for you all, and also the concern that Lightcone’s culture and your specific views of these problems don’t necessarily scale or translate well outside of rat spheres of influence. I agree that’s sad, but I think it’s good for people to update their own views with that in mind.

My takeaways from all this are fairly confidently the following:*

EA orgs could do with following more “common sense” in their operations.

For example,

  • hire “normie” staff or contractors early on who are expected to know and enforce laws, financial reg

... (read more)

You asked about translation. I feel tired trying to explain this and I know that’s not your fault! But it’s why I just don’t think the Forum works well for this topic.

My guess is that talking about “women’s issues” on the Forum feels about as taxing to me as it does for most AI safety researchers to respond to people whose reaction to AGI concerns is, “ugh, tech bros are at it again” or even a well-intentioned, “I bet being in Silicon Valley skews your perspective on this. How many non-SV people have the kinds of concerns you mention?”

Most of us are ti... (read more)

I’ve been away from the Forum and just saw this comment. When you say “that figure”, what are you referring to?

This may be unhelpful… I don’t think it’s possible to get to 0 instances of harassment in any human community.

I think a healthy community will do lots of prevention and also have services in place for addressing the times that prevention fails, because it will. This is a painful change of perspective from when I hoped for a 0% harm utopia.

I think EA really may have a culture problem and that we’re too passive on these issues because it’s seen as “not tractable” to fix problems like gender/power imbalances and interpersonal violence. We should still work on... (read more)

Point of confusion/disagreement: I don’t think EA is big (15k globally?). I don’t think EA has domain-level experts in most fields to work with to find neglected solutions. EAs typically have (far) fewer than 15 years of work experience in any field and, in my experience, they don’t have extensive professional networks outside of EA.

We have a lot more than we did ten years ago! And I agree ITN has flaws regardless, but I wanted to point out that if those are someone’s 2 main objections to using ITN today, it might not apply.

+1 But also, lowering stress for community members is part of advancing the discourse, in my view.

I actually endorse the idea of polls on this but don’t want to make one. Why? I’m in several text and real life conversations with women right now and none of them are commenting here because we’re sad and annoyed and frustrated. So they’re not voting.

On the Forum? Or IRL?

In real life, I’ve selected to be around very compassionate people in EA and outside EA.

On the Forum… more men who “translate” experiences into ones that other men understand and don’t feel threatened by might help. I’ve noticed Will Bradshaw does this sometimes. Ozzie too. AGB sometimes.

Kirsten, Ivy, and Julia Wise do it often too. I know that for a lot of women, it’s really frustrating to be treated so skeptically when we raise personal experiences or views that vary from men’s experiences.

When I’m 1:1 with my hyper-rational or autis... (read more)

1
Tristan Williams
1y
Kind of both here. Do you think that general activities aimed at building compassion also help here, given that compassion seems to be the thing you value in people for making this a more comfortable environment for you? I'm wondering if this might be an unintended effect of general compassion-building for animal welfare, and whether interventions there might overlap with the ones best suited to this. Sorry for my unfamiliarity, but could you explain what you mean by "translate experiences"? I feel as if I've probably interacted with what you're talking about, but am not sure what exactly that is mapping onto. I also hear you, and am sorry that that has been your experience of the forum. But I really think it might be worth reconsidering that stance, because I think there's a real chance it has changed in a significant way here, so maybe try it out again and see how it goes, in some small way that wouldn't be too inconvenient for you if it hasn't improved? I suggest it only because I think having as many perspectives on here as possible is a great thing, and that generally the emotional things we really care about are some of the best things to interact over. Also, don't feel the need to respond to questions (even the ones I'm asking) if it helps free up bandwidth for other things that don't drain you similarly.

[Edited to distinguish between “you” the individual and the general “you/us/people.”]

“People have a personal responsibility to tell others to stop what they're doing if they don't feel like they want others to do those things. Don't expect others to read your mind.”

Correction: “[I believe that] People have a personal responsibility to tell [me] to stop what [I’m] doing if they don't feel like they want [me] to do those things. Don't expect [me] to read your mind.”

You can totally take that stance. I personally even like that stance sometimes and have found ... (read more)

This comment seems willfully obtuse. The person is referring to a pattern of behavior, ergo a series of comments and bad experiences. A comment that comes at the end of a series and culminates in someone trying to take corrective action is not “a single comment that led to” their action.

Please reflect on how much you might be mad/sad/hurt/fearful and saying foolish things. Maybe don’t say them, or at least come back and fix them later.

I’m really happy to see you asking this question and doing an investigation of a charity and a cause yourself. It makes intuitive sense to me that moving from a very dangerous place to a very safe one would have long term benefits to well-being and seems worth doing additional investigation on the intervention.

It’s hard to know how much risk people are facing and how much improvement people will experience by moving; migration has upsides (eg better economic opportunity) and downsides (eg isolation from family). I’m not an expert on either but would be ex... (read more)

6
David D
1y
Thank you so much for this thoughtful and encouraging answer. These are good things to think about. I'll see if I can find any research on migration in general. I get the sense most of Trans Rescue's clients didn't have good family relationships before moving, which does change the equation some, but it's a starting point for which research probably exists. I'll try to post if I do any analysis that's worth posting. I'll also look deeper into EA forum and see if I can find advice for approaching small, young organizations like this. (On my to-do list is asking them if they'd benefit from a donation specifically earmarked for administrative use, a savings buffer for emergencies, or other things that would help the mission but look unappealing to the average donor.) It occurs to me that an important question I haven't yet asked, is if the organization's accounting of their funds includes everything they spend on helping people relocate, or if board members and/or volunteers are also paying out of pocket for things related to the organization's mission. I need to figure out how to ask that tactfully. I may have switched from replying to thinking out loud somewhere in there. Thank you again for your advice and for taking the time to read and offer encouragement.

Re: “But I read this paragraph and it seems alien to me. What % of women+nb folks have this experience in EA?

‘I could tell you how tears streamed down my face as I read through accounts of women who have been harmed by people within the Effective Altruism community.’”

In the interest of reducing alienation, here’s some anecdata and context. Maya’s reaction wasn’t alien to me at all.

Among my female friends, having this type of reaction at some point was basically a developmental milestone. It wasn’t unique to EA. I expect such a survey would be more useful i... (read more)

1
Tristan Williams
1y
What specific sort of things would you like to see that would make you feel like you were in a more compassionate environment?

Thanks! I’ve edited my comment substantially. I’ll have a look at these resources.

Thanks for writing this! I appreciate this conversation. I think if I had been aware of your assertion that dads are typically more on the fence about having kids but still happy to have them, I would have been more excited to have kids with my partner earlier, so I especially valued that point. I want to reinforce your message that it’s important to think about this and maybe weight the “have kids” option more heavily than the average EA might do by default.

Anecdata: I am a woman who planned not to have kids. I allowed for the possibility I’d change my mi... (read more)

3
Geoffrey Miller
2y
Thanks for a very valuable, thoughtful, and insightful comment. I agree with almost all of it, and I appreciate your effort in turning a painful personal disappointment into some specific and useful advice for others. I especially appreciated your points about the strong cultural forces (e.g. in US, UK, etc) that make the single-house nuclear family arrangement very hard to escape over the long term -- no matter how expert one is at living in EA group houses, polycules, or other coliving arrangements.  Ideally, it would be possible for EAs (or people in any like-minded subculture) to set up their own neighborhoods or streets, with a dozen or so houses, restricted to people who share their values and life-goals. But that kind of 'freedom of association' is not actually legal in most countries (it would violate  various anti-discrimination laws). And trying to do coliving on a smaller scale within a single property raises very thorny problems in terms of the home ownership, shared equity, and what happens if couples get divorced or inhabitants get into too much conflict.  Like it or not, the single-family nuclear house seems a pretty strong 'focal point' in the space of possible living arrangements, especially for parents with kids (and maybe elderly parents), and especially given the current economic, legal, and cultural context.
3
purplefern
2y
Looks like your last sentence got cut off. I mentioned it briefly in my first comment, but cohousing seems to be growing in popularity as an antidote for the lack of systemic support in the nuclear family and also for people who are just generally interested in living in a more connected and cooperative environment with a chosen family. Here are some interesting links including Atlantic articles (paywalled after reading two for free) and the website for the Foundation for Intentional Community:  https://www.ic.org  https://www.theatlantic.com/magazine/archive/2020/03/the-nuclear-family-was-a-mistake/605536/  https://www.theatlantic.com/family/archive/2020/01/generation-x-women-are-facing-caregiving-crisis/604510/  https://www.theatlantic.com/business/archive/2016/09/millennial-housing-communal-living-middle-ages/501467/

The title is Global WarNing :) I misread that too at first.

Why not also strive to be the better replacement?

1
Dvir Caspi
2y
I think this is a very valuable comment. As someone who loves sports, I think the main reason we have such incredible talent is the players' desire to replace each other all the time. Every minute they compete, another player must sit on the bench. Their desire is so deep that we end up with extremely talented leagues, which makes them so fun to watch. An "I want to be replaced" mindset might not motivate them to wake up at 6:00 to hit the gym. But what is true for professional athletes is also true for us. We also "hit the gym" all the time, to outperform others in exams, in job interviews, in dating, etc. We also strive to replace. Maybe the middle ground is striving to replace, but being willing to let go and be replaced when someone is just more fit than us for that one particular thing, like a certain job.

Cool idea. Are you working on this in a dedicated way? If this is useful, I bet you could try it at a retreat or take 3–12 months to promote its use, and see how it pans out.

2
Harrison Durland
2y
It's funny you should ask; I just finished a post on a related project: https://forum.effectivealtruism.org/posts/9RCFq976d9YXBbZyq/research-reality-graphing-to-support-ai-policy-and-more Although that is about a different project, many of the same points apply: I just haven't gotten a sufficient demand/interest signal to feel justified (let alone motivated) to work on the project.

This seems like one of those things that might be best for the movement but not best for the individual.

A uni organizer who recruits 5 excellent future performers might have just had the most impactful portion of their whole career. But the general marketing skills they gained might be less useful to them personally. Becoming an expert in some object-level issue would probably be more rewarding and open more doors over the course of their career than being a generalist in marketing, which also has lower earning potential than learning consulting, programming, or research skills.

I feel more uncertain about this if they’re actually doing project management and people management.

I don’t think (3) is that bad. Recruiting new members is not always better than launching experienced members into good projects.

I wonder if 2–3 year cohort models of fellows would be better on established campuses.

I really like this post. That said, I don’t think this is true: “dedicates don’t have bullshit jobs.” We might have different definitions of bullshit though.

Dedicates don’t take jobs without doing an impact analysis, agreed.

However, dedicates may choose to sacrifice the chance to work 10-hour days on interesting problems in order to take strategic jobs in non-EA orgs or government agencies that involve a lot of day-to-day bullshit. They do this in the hope that they might have a shot at impact when the time is right. I think it’s good that they’re willing to do this, and I wouldn’t want their sacrifice to get them mistaken for non-dedicates.

I agree that for a lot of people, this won’t be a problem. A lot of EA roles are professionalizing, so people can switch over to traditional careers if they want. (As in, community building is enough like management, event planning, or outreach roles at a lot of traditional orgs that the skills may transfer).

One piece of good advice for most people:

  • Issue-specific expertise and professional networks don’t transfer well. I’d advise that a good backup plan should include spending time networking with EA-adjacent and non-EA orgs.

That issue seems inconveni... (read more)

I think conceptualizing job hunts like this for very competitive positions is often accurate and healthy, fwiw.

“Help” sounds paternalistic or presumptuous to progressives.

I’ve said “helping other beings” before. It sounds a bit odd to some people but is more accurate.

Are you hoping to appeal to people who don’t think very analytically, or just to explain clearly that this is a very analytical community and it might not be as accessible or useful or fun for them if they are not also very analytical?

I actually think that some of the offputting words might help prevent bycatch.

I’d check with Giving What We Can or One for the World, to see if you can take the giving pledge as a company.

I, for one, am really glad you raised this.

It seems plausible that some people caught the “AI is cool” bug along with the “EA is cool and nice and well-resourced” bug, and want to work on whatever they can that is AI-related. A justification like “I’ll go work on safety eventually” could be sincere or not.

Norms of charitable interpretation can swing much too far.

I’d be glad to see more 80k and forum talk about AI careers that point to the concerns here.

And I’d be glad to endorse more people doing what Richard mentioned — telling capabilities people that he thinks their work could be harmful while still being respectful.

Are we too cocky with EA funding or EA jobs; should EAs prepare for economic instability?

EA feels flush with cash, jobs, and new projects. But we have mostly “grown up” as a movement after the Great Recession of 2008 and may not be prepared for economic instability.

Many EAs come from very economically and professionally stable families. Our donor base may be insulated from economic shocks but not all orgs or individuals will be in equally secure positions.

I think lower-to-middle performers and newer EAs may overestimate their stability and be overly optimistic about opportunities for future funding.

If that’s true, what should we be doing differently?

3
Jay Bailey
2y
I've definitely thought about this. EA is a relatively young movement. Its momentum is massive at the moment, but even so, creating a career out of something like EA community building is far from certain, even for people who can reasonably easily secure funding for a few months or years. I think that a good thing to do would be to ask "What would happen if EA ceased to exist in ten years?" when making career plans. If the answer is "Well, I would have been better off had I sought traditional career capital, but I think I'll land on my feet anyway" that's a fine answer - it would be unreasonable to expect that devoting years of your life to a niche movement has zero costs. If the answer is "I'd be completely screwed, I have no useful skills outside of this ecosystem and would still need to work for a living", I would be more concerned and suggest people alter plans accordingly. That said, I think for many or most EAs, this will not be the case. Many EA cause areas require highly valuable skills such as software engineering, research ability, or operations/management skills that can be useful in the private or public sector outside of effective altruism. I also feel like this mainly applies to very early-career individuals. For instance, I have a few years of SWE experience and want to move into AI safety. If EA disbanded in ten years...well, I'd still want to work on the problem, but what if we solved the alignment problem or proved it actually wasn't a major cause area somehow? And then EA said "Okay, thanks for all your hard work, but we don't really need AI alignment experts any more". I would be okay - I could go back to SWE work. I'd be worse off than if I'd spent ten years working for strong non-EA tech companies, but I would hardly be destitute. It's not that hard to have a backup plan in place, but we should encourage people to have one. This may also help with mental health - leaving a line of retreat from EA should it be too overwhelming for some peop

You can usually relatively straightforwardly divide your monetary resources into a part that you spend on donations and a part that you spend for personal purposes.

By contrast, you don't usually spend some of your time at work for self-interested purposes and some for altruistic purposes. (That is in principle possible, but uncommon among effective altruists.) Instead you only have one job (which may serve your self-interested and altruistic motives to varying degrees). Therefore, I think that analogies with donations are often a stretch and sometimes misleading (depending on how they're used).

Throwaway account to give a vague personal anecdote. I agree this has gotten better for some, but I think this is still a problem (a) that new people have to work out for themselves, going through the stages on their own, perhaps faster than happened 5 years ago; (b) that hits people differently if they are “converted” to EA but not as successful in their pursuit of impact. These people are left in a precarious psychological position.

I experienced both. I think of myself as “EA bycatch.” By the time I went through the phases of thinking through all of thi... (read more)

I agree with you. Yet I bristle when people who I don’t know well start putting forth arguments to me about what is good/bad for me, especially in a context where I wasn’t expecting it.

I’m much more accustomed to people thinking that moral relativism is polite, at least at first.

Moral relativism can be annoying, but putting forth strong moral positions at eg a fresher’s fair does feel like something that missionaries do.

Appreciate your comments, Aaron.

You say: “But I am confident that leaders' true desire is ‘find people who have great epistemics [and are somewhat aligned]’, not ‘find people who are extremely aligned [and have okay epistemics]’.”

I think that’s true for a lot of hires. But does that hold equally true when you think of hiring community builders specifically?

In my experience (5-ish people), leaders’ epistemic criteria seem less stringent for community building. Familiarity with EA, friendliness, and productivity seemed more salient.

8
Aaron Gertler
2y
This is a tricky question to answer, and there's some validity to your perspective here. I was speaking too broadly when I said there were "rare exceptions" when epistemics weren't the top consideration. Imagine three people applying to jobs:
  • Alice: 3/5 friendliness, 3/5 productivity, 5/5 epistemics
  • Bob: 5/5 friendliness, 3/5 productivity, 3/5 epistemics
  • Carol: 3/5 friendliness, 5/5 productivity, 3/5 epistemics
I could imagine Bob beating Alice for a "build a new group" role (though I think many CB people would prefer Alice), because friendliness is so crucial. I could imagine Carol beating Alice for an ops role. But if I were applying to a wide range of positions in EA and had to pick one trait to max out on my character sheet, I'd choose "epistemics" if my goal were to stand out in a bunch of different interview processes and end up with at least one job.
One complicating factor is that there are only a few plausible candidates (sometimes only one) for a given group leadership position. Maybe the people most likely to actually want those roles are the ones who are really sociable and gung-ho about EA, while the people who aren't as sociable (but have great epistemics) go into other positions. This state of affairs allows for "EA leaders love epistemics" and "group leaders stand out for other traits" at the same time.
Finally, you mentioned "familiarity" as a separate trait from epistemics, but I see them as conceptually similar when it comes to thinking about group leaders. Common questions I see about group leaders include "could this person explain these topics in a nuanced way?" and "could this person successfully lead a deep, thoughtful discussion on these topics?" These and other similar questions involve familiarity, but also the ability to look at something from multiple angles, engage seriously with questions (rather than just reciting a canned answer), and do other "good epistemics" things.

I agree with you, and I think this somewhat supports the OPs concern.

Are most uni groups capable of producing or critiquing empirical work about their group, or about EA or about their cause areas of choice? Are they incentivized to do so at all?

Sometimes yes, but mostly no.

Strong +1. This feels much more like the correct use of student groups to me.

Re: “there have been cases of really great organizers springing up after just an intro fellowship.”

I definitely believe this can happen and am glad you allow for that. What makes someone seem really great — epistemics, alignment/buy-in, skill in a relevant area of study, __?

I agree and think this is an argument for investing in cause-specific groups rather than generalized community building.
