JKE (Jenny K E)
458 karma · Joined March 2022

Comments (25)
This is a big part of why I used to basically not go to official in-person EA events (I do go somewhat more often nowadays, after having gotten more involved in EA, though still not a ton). It makes sense that EA events are like this, because after all, EA is the topic that all the people there have in common, but it does seem a bit unfortunate for those of us who like hanging out with EAs but aren't interested in talking about EA all the time. Maybe community event organizers should consider occasionally hosting EA events where EA is outright banned as a discussion topic, or, if that's too extreme, events where there's at least some effort to create/amplify other discussion topics?

I think the separate Community tab is a great idea, thanks for implementing that!

Not about the current changes, just some unrelated site feedback: the "Library" button at the bottom of the mobile site leads to what seems to be a set of curated essays and sequences, which is good, but the sequences listed at the top are overwhelmingly about AI safety, which seems pretty unbalanced -- I'd like to see this tab contain a mix of curated reading recommendations on global poverty, animal welfare, biorisk, AI safety, and other cause areas.

Thank you for writing this! I've been somewhat skeptical that ATLAS is a good use of EA funding myself, but also don't know very much about it, so I appreciate someone who's more familiar with it and its fellows starting this conversation.

My fairly uninformed best guess is that the rumors listed here are a bit misleading / suggestive of problems being more extreme than they actually are, but that these problems do exist. But this is just a guess.

Thanks for writing this! I had that "eugh" feeling up until not that long ago, and it's nice to know other people have felt the same way. 

I'm particularly enthusiastic about more educational materials being created. The AGISF curriculum is good, and would have been very helpful to me if I'd encountered it at the right time. I'd be delighted to see more in that vein.

I learned about this ten months ago, personally, and (in an informal peer context) spoke to one of the people involved about it. The person in question defended the decision by saying they intended to run retreats and ask "Hamming questions". They added that the £15m was an investment, since the castle ("technically it's not a castle") wouldn't depreciate in value. Also, they opined that the EA community as a whole shouldn't have a veto on every large purchase, because consensus decision-making is infeasible on that scale and is likely to result in vetoes for tons of potentially valuable proposals.

I think I agree with the third point at least to some extent, but that's a meta-level point, and the object-level points did not seem like good arguments for buying a £15m castle. I came away from the conversation believing this was not a reasonable use of funds and with my opinion of CEA* lowered.

I didn't, and still don't, know what to do about this sort of thing. Changing how an EA org acts is hard; maybe public pressure helps, but I suspect a lot of the difficulty lies in changing organizational norms and policies, and I don't have a good sense of which policies would be useful and which wouldn't. I do have the intuition that having a larger number of distinct EA orgs would be good, so that CEA individually has less influence.

*I understand CEA to be an umbrella organization housing a number of sub-orgs, and so I remain unsure how far this negative update should propagate; I'm sure there are folks who work in other branches of CEA who had nothing to do with this and no say in it.

[ETA: Changed "their decision" to "the decision" upon receiving a reminder that the person in question was (probably) not the person who had actually made the original decision to buy the castle.]

The gender identity question includes options that aren't mutually exclusive; I believe it should either be a checkbox question or should list something along the lines of "cisgender woman, transgender woman, cisgender man, transgender man, nonbinary, other." If you have more questions, feel free to PM me and I'm happy to do my best (as an ally) to answer them.

As someone in a somewhat similar position myself (donating to GiveWell, vegetarian, exploring AI safety work), this was nice to read. Diversifying is a good way to hedge against uncertainty and to exploit differing comparative advantages in different aspects of one's life.

Kelsey clarified in a tweet that if someone asks for a conversation to be off the record and she isn't willing to keep it off the record, she explicitly tells them so.

Presumably he made some unfounded assumptions about how sympathetic she'd be and whether she'd publish the screenshots, but never asked her not to.

[ETA: Whoops, realized this is answering a different question than the one the poster actually asked -- they wanted to know what individual community members can do, which I don't address here.]

Some concrete suggestions:

-Mandatory trainings for community organizers. This idea is lifted directly from academia, which often mandates trainings of this sort. The professional versions are often quite dumb and involve really annoying unskippable videos; I think a non-dumb EA version would encourage the community organizer to read the community health guide linked in the above post and then require them to pass a quiz on its contents (if they can pass the quiz without reading the guide, that's fine; the point is to check that they understand the guide, not to make them read it). I imagine that better but higher-effort/more costly versions of this quiz would involve short-answer questions ("How would you respond to X situation?"), while less useful but lower-effort versions would involve multiple-choice questions. The short-answer version forces people to think more about their answers, but it also probably requires a team of people to read all the answers and check whether they're appropriate, which is costly.

-Some institution (the community health team? I dunno) should come up with and institute codes of conduct for EA events and make sure organizers know about them. There would presumably need to be multiple codes of conduct for different types of events. This ties into the previous bullet, since it's the sort of thing you'd want to make sure organizers understand. This is a somewhat vague and uninformed proposal -- maybe something like this already exists, but if so I don't know about it, which at minimum suggests it ought to be more widely advertised.

-Maybe a centralized page of resources for victims and allies, with advice, separate from the code of conduct? I don't know how useful this would be.

-Every medium or large EA event/group should have a designated community health point person, preferably (though not necessarily) a woman, who makes a public announcement along the lines of: if someone makes you uncomfortable, you can talk to the point person, and with your permission they'll do what's necessary to help. The point person then needs to follow through if people do report issues to them. They should also remind/inform everyone of the role of Julia Wise, and, if someone comes to them with an issue and gives permission to pass it on to her and her team, do that. (You might ask: if this point person is probably just going to pass things on to Julia Wise, why have a point person at all? The answer is that reporting is scary, and it can be easier to report to someone you know who has some context on the situation/group.)

Furthermore, making these things happen has to explicitly be someone's job, or the job of a group of someones. It's much likelier to actually happen in practice if it is someone's specific responsibility than if it's just an idea some people talk about on the Forum.

Something I don't think helps much: trying to tell all EAs that they should improve their behavior and stop being terrible. This won't work because, unfortunately, self-identifying EAs aren't all cooperative, nice individuals who personally care about not harming others. They have no incentive to change just because someone tells them to, and the worse offenders on these sorts of issues are also very likely not the sort of people who want to read posts like this one about how to do better. That said, posts on this subject are more helpful when they include lots of specific examples or advice, especially advice for bystanders.

I think the balance I'd strike here is roughly as follows: we always respect nonintervention requests by victims. That is, if the victim says "I was harmed by X, but I think the consequences of my reporting this should not include consequence Y," then we avoid intervening in ways that would cause Y. This is good practice generally, because you never want to disincentivize reporting by making it carry consequences the reporter doesn't want. Usually the unwanted consequences in question are things like "I'm afraid of backlash if someone tells X that I'm the one who reported them" or "I'm just saying this to help you establish a pattern of bad behavior by X, but I don't want to be involved, so don't do anything based on my report alone." But this sort of nonintervention request might also be made by victims whose point of view is "I think X is doing really impactful work, and I want my report to at most limit their engagement with EA in certain contexts (e.g., situations where they have significant influence over young EAs), not to limit their involvement in EA generally." In other words, leave impact considerations to the victim's own choice.

I'm not sure this is the right balance. I wrote it with one specific real example from my own life in mind, and I don't know how well it generalizes. But it does seem to me that any position less victim-friendly than this would probably be worse even from a completely consequentialist perspective, because of the likelihood of driving victims away from EA.
