I'm currently based in Wellington, New Zealand. I have a background in policy/politics, including working as a policy advisor and campaigning for two candidates in the 2015 Canadian election. I'm primarily interested in the intersection of longtermism, policy/politics, and community building.
I am open to opportunities in the above spaces, and am always keen to hear from community builders, particularly those located in Australasia.
Reach out to me if you have questions about EA community building.
We already have tons of implicit norms that ask different behaviours of men and women, and these norms are the reason why it's women coming forward to say they feel uncomfortable rather than men. There are significant differences in how men and women approach dating in professional contexts and how they perceive power dynamics, as well as in the ratio of men to women in powerful positions (and in the gender ratio in EA generally). Drawing attention to these differences and discussing new norms that ask for different behaviours from men in these contexts (and from the institutions/systems that these men interact with) is necessary to prevent these situations from happening in the future.
Something about this comment rubbed me the wrong way. EA is not meant to be a dating service, and while there are many people in the community who are open to the idea of dating someone within EA or actively searching for this, there are also many people who joined for entirely different reasons and don't consider this a priority/don't want this.
I think that viewing the relationship between men and women in EA this way - eg. men competing for attention, where lonely and desperate men will do what it takes to get with women - does a disservice to both genders. It sounds like a) an uncomfortable environment for women to join, because they don't want to be swarmed by a bunch of desperate men, and b) an uncomfortable environment for men, because to some extent it seems to justify men doing more and more to get the attention of women, often at the cost of women being made to feel uncomfortable. (And many men in EA do not want women to feel uncomfortable!)
Let's zoom out a bit. To me, it's not that important that everyone in EA gets a match. I find the gender imbalance concerning for lots of reasons, but ‘a lack of women for men to match with’ is not on my list of concerns. Even if there were a perfect 50/50 balance of men and women, I think there would still be lonely men willing to abuse their power. (Like you said, many women come into the movement already in relationships, some men/women do not want to date within the movement, and some people are unfortunately just not people others want to date.) So the problem is not a lack of women, but rather that men in powerful positions are either blind to their own power or willing to abuse it, and that there are not sufficient systems in place to prevent this from happening, or to stop it once it has happened.
I disagree-voted on this because I think it is overly accusatory and paints things in a black-and-white way.
There were versions of the above proposal which were not contentless and empty, which stake out clear and specific positions, which I would've been glad to see and enthusiastically supported and considered concrete progress for the community.
Who says we can't have both? I don't get the impression that EA NYC wants this to be the only action taken on anti-racism and anti-sexism, or that this will be the last action EA NYC takes on this topic.
But by just saying "hey, [thing] is bad! We're going to create social pressure to be vocally Anti-[thing]!" you are making the world worse, not better. Now, there is a List Of Right-Minded People Who Were Wise Enough To Sign The Thing, and all of the possible reasons to have felt hesitant to sign the thing are compressible to "oh, so you're NOT opposed to bigotry, huh?"
I don't think this is the case - I, for one, am definitely not assuming that anyone who chose not to sign is unopposed to bigotry. I can think of plenty of other reasons why people might not have wanted to sign this.
The best possible outcome from this document is that everybody recognizes it as a basically meaningless non-thing, and nobody really pays attention to it in the future, and thus having signed it means basically nothing.
I can think of better outcomes than that - the next time there is a document or initiative with a bit more substance, here's a big list of people who will probably be on board and could be contacted. The next time a journalist looks through the forum to get some content, here's a big list of people who have publicly declared their commitment to anti-racism and anti-sexism. The next time someone else makes a post delving into this topic, here's some community builders they can talk to for their stance on this. There's nothing inherently wrong with symbolic gestures as long as they aren't a substitute for more meaningful change, and I don't get the sense from this post that this will be the last we hear about this.
People choose whom they date and befriend - no-one is forcing EAs to date each other, live together, or be friends. EAs associate socially because they share values and character traits.
To an extent, but this doesn't engage with the second counterpoint you mentioned:
2. The work/social overlap means that people who are engaged with EA professionally, but not part of the social community, may miss out on opportunities.
I think it would be more accurate to say that there are subtle pressures that heavily encourage EAs to date each other, live together, and be friends (I removed the word 'force' because 'force' feels a bit strong here). For example, as you mentioned, people working/wanting to work in AI safety are aware that moving to the Bay Area will open up opportunities. Some of these opportunities are quite likely to come from living in an EA house, socialising with other EAs, and, in some cases, dating other EAs. For many people in the community, this creates 'invisible glass ceilings,' as Sonia Joseph put it. For example, a woman is likely to be more put off by the prospect of living in an EA house with 9 men than another man would be (and for good reasons, as we saw in the Times article). It is not necessarily the case that everyone's preference is living in an EA house, but that some people feel they will miss out on opportunities if they don't. Likewise, this creates barriers for people who, for religious/cultural reasons, can't or don't want to have roommates who aren't the same gender, people who struggle with social anxiety/sensory overload, or people who just don't want to share a big house with people they also work and socialise with.
If you're going to talk about the benefits of these practices, you also need to engage with the downsides that affect people who, for whatever reason, choose not to become part of the tight-knit community. I think this will disproportionately be people who don't look like the existing community.
I think the usefulness of deferring also depends on how established a given field is, how many people are experts in that field, and how certain they are of their beliefs.
If a field has 10,000+ experts who are 95%+ certain of their claims on average, then it probably makes sense to defer as a default. (This would be the case for many medical claims, such as wearing masks, vaccinations, etc.) If a field has 100 experts and they are more like 60% certain of their claims on average, then it makes sense to explore the available evidence yourself, or at least keep in mind that there is no strong expert consensus when you are sharing information.
We can't know everything about every field, and it's not reasonable to expect everyone to look deeply into the arguments for every topic. But I think EAs can have a tendency to defer on topics where there is little expert consensus, lots of robust debate among knowledgeable people, and high levels of uncertainty (eg. many areas of AI safety). While not everyone has the time to explore AI safety arguments for themselves, it's helpful to keep in mind that, for the most part, there isn't a consensus among experts (yet), and many people who are very knowledgeable about this field still carry high levels of uncertainty about their claims.
As with any social movement, people disagree about the best ways to take action. There are many critiques of EA which you should read to get a better idea of where others are coming from, for example, this post about effective altruism being an ideology, this post about someone leaving EA, this post about EA being inaccessible, or this post about blindspots in EA/rationalism communities.
Even before SBF, many people had legitimate issues with EA from a variety of standpoints. Some people find the culture unwelcoming (eg. too elitist/not enough diversity), some people take issue with longtermism (eg. too much uncertainty), others disagree with consequentialism/utilitarianism, and still others are generally on board but find more specific issues in the way that EA approaches things.
Post-SBF it's difficult to say what the full effects will be, but I think it's fair to say that SBF represents what many people fear/dislike about EA (eg. elitism, inexperience, ends-justify-the-means reasoning, tech-bro vibes, etc). I'm not saying these things are necessarily true, but most people won't spend hundreds of hours engaging with EA to find out for themselves. Instead, they'll read an article in the New York Times about how SBF committed fraud and is heavily linked to EA, and walk away with a somewhat negative impression. That isn't always fair, but it also happens to other social movements like feminism, Black Lives Matter, veganism, environmentalism, etc. EA is no exception, and FTX/SBF was a big enough deal that a lot of people will choose not to engage with EA going forward.
Should you care? I think to an extent, yes - you should engage with criticisms, think through your own perspective, decide where you agree/disagree, and work on improving things where you think they should be improved. We should all do this. Ignoring criticisms is akin to putting your fingers in your ears and refusing to listen, which isn't a particularly rational approach. Many critics of EA will have meaningful things to say about it, and if we truly want to figure out the best ways to improve the world, we need to be willing to change (see: scout mindset). That being said, not all criticisms will be useful or meaningful, and we shouldn't get so caught up in criticism that we stop standing for something.
Thinking that 'the ends justify the means' (in this case, that making more donations justifies tax evasion) is likely to lead to incorrect calculations about the trade-offs involved. It's very easy to justify almost anything with this type of logic, which means we should be very hesitant to rely on it.
As another commenter pointed out, tax money isn't 'your' money. Tax evasion (as opposed to 'tax avoidance' - which is legal) is stealing from the government. It would not be ethical to steal from your neighbour in order to donate the money, and likewise it is not ethical to steal from the government to donate money.
Thanks for posting this! I agree, and one thing I've noticed while community building is that it's very easy to give career direction to students and very early-career professionals, but much more challenging to do so for mid/late-career professionals. Early-career people seem more willing to experiment/try out a project that doesn't have great support systems, whereas mid/late-career people have much more specific ideas about what they want out of a job.
Entrepreneurship is not for everyone, and being advised to start your own project with unclear parameters and outcomes often has low appeal to people who have been working for 10+ years in professions with meaningful structure, support, and reliable pay. (It often has low appeal to students/early-career professionals too, but younger people seem more willing to try.) I would love to see EA orgs implement some of the suggestions you mentioned.