Scott Alexander's response is the first time I've seen that there is someone I can contact who claims substantial evidence that Kathy Forth's accusations were false. I've heard of Kathy twice in the last month (I don't remember hearing about her at all before then), as have others in my local community. Many find Scott Alexander's response valuable, which is why it is the top comment. A large part of the EA community appears to only recently be learning about Kathy Forth.
> it makes me sad that the top comment on this post is adversarial/argumentative and showing little emotional understanding/empathy (particularly the line "getting called out in posts like this one"). I think it unfortunately demonstrates well the point the author made about EA having an emotions problem
I personally think Scott shows immense emotional maturity in responding to this, in a context where he is opening himself up to huge scrutiny, including the criticism of being told he is too adversarial and lacks empathy. He removed the sentence in question after some reflection, updating immediately and explaining his thought process, empathizing with another's perspective, and recognizing his own emotional state that led him to include that sentence. To me these seem the hallmarks of a well emotionally regulated individual. If they aren't, what would a person with emotional understanding/empathy do differently in this situation?
Before you answer that question, let's take a moment to actually highlight what the situation even is:
Kathy Forth was a human, a member of our community, who died by suicide. Given the serious implications of her accusations if they were true, it seems we should place a lot of value on anything that can confirm or deny the veracity of her story. Do you disagree?
This is the sentence of Scott's that you objected to, and which he has since removed:
> But they wouldn't do that, I'm guessing because they were all terrified of getting called out in posts like this one.
OP wrote the following words that, for lack of a better word, triggered Scott. She also has the opportunity to amend or qualify them:
> I read about Kathy Forth, a woman who was heavily involved in the Effective Altruism and Rationalist communities. She committed suicide in 2018, attributing large portions of her suffering to her experiences of sexual harassment and sexual assault in these communities. She accuses several people of harassment, at least one of whom is an incredibly prominent figure in the EA community. It is unclear to me what, if any, actions were taken in response to (some) of her claims and her suicide. What is clear is the pages and pages of tumblr posts and Reddit threads, some from prominent members of the EA and Rationalist communities, disparaging Kathy and denying her accusations.
Nowhere in what OP writes above does she even seem to entertain the possibility that at least some of Kathy's major accusations could be false (she says "It is unclear to me what, if any, actions were taken in response to (some) of her claims and her suicide," but nothing akin to "It is unclear to me whether Kathy Forth's accusations were true"). Kathy Forth's story is a really, really serious accusation, one where believing it true or false would and should significantly update our priors about how the EA community treats the concerns of women. If viewed as true, it frames the experiences of women, and the EA community's prior responses to them, in a very sinister light. If viewed as false, then concerns 1+2 lack that broader sinister context and could more optimistically be corrected with improved management and a culture shift in EA. For example...
These issues seem like something we can come together and fix. But if Kathy Forth's accusations are true, then the implication is that EA as a community is much more sinister and not interested in addressing the concerns of women. If Kathy Forth's accusations are true, then there is a deep rot within the EA community. There is no "facepalm" joke I'd be able to make light-heartedly about it, as if it were a thing we can reasonably fix. If Kathy Forth's accusations are true, OP is right to be scared.
OP's framing of Kathy Forth's experience strongly implies she views Kathy Forth's accusations as more true than false. I don't know whether she believes this because:
a) She had bad experiences in EA, and that made her update towards "Kathy Forth's story is probably true"; or

b) She thought Kathy Forth's story was probably more true than false, and that made her read her experiences in EA as worse (part of a broader sinister context) than they otherwise would be.

But the epistemic particulars are beside the point, because either way, if Scott is able to provide compelling evidence that parts of Kathy Forth's story are false, this could help OP (and everyone else) feel less sad, disappointed, and scared, and that can only be a good thing.
OP, if you are reading this, I realize the EA community can seem intimidating. I empathize strongly with your concern about how others will judge your writing style and take you less seriously if it doesn't conform to forum standards (why hello there... my still haven't-made-one-post self... something I am still embarrassed about given how long I've been a part of EA). That said, I am confident that if you had made a very short post just like "Please, what happened to Kathy Forth and why? I need to know, I can't sleep. I feel sad, disappointed, and scared." the EA community would have responded with compassion, not caring that the post didn't abide by some set of forum norms. But regardless, you did put effort into a longer post here, so I just want to say I'm glad you posted this.
Good lord... am I the only one shedding a tear? This post is one helluva heavy dose of fuzzies, and if you are a hug person I'd love to give you a virtual hug for making my morning, if you consent. Thank you so much for all the effort put into this warm post. It is appreciated. Gonna go off to do my earning-to-give coding with a smug smile on my face now.
Your response on perpetual foundations seems like a case of surprising and suspicious convergence. At some point the base rate has to matter: we cannot just keep saying, "Oh, another failed past example of a perpetual foundation. But that's okay, this is actually consistent with the conclusion that they are still a good idea."
The disanalogy with business startups is that we have clear evidence that some startups do succeed drastically enough to make up for all the failures. Granted, the lack of such evidence here could just be because there haven't been enough perpetual foundations for one to finally hit the great result that makes up for the failures.
So, although I agree your response on perpetual foundations is warranted, it still makes me raise an eyebrow.
I couldn't attend the interpretability hackathon and was hoping to get acquainted with LLM interpretability research as a software dev with no experience in interpretability or transformers. So here's a starting point following in the footsteps of this submission (see their writeup here):
Basically, I am thinking we can use the hackathon as a collaborative study session to become more familiar with transformers and interpretability, ultimately culminating in replicating the results of the linked submission (it took them three days, but since we have a starting point, we can possibly replicate their project and grok what they did much more quickly).
I'm not wedded to this idea, though. If you think there is a better avenue for using the hackathon to upskill in LLM interpretability and transformers, do share.
Count me in as someone interested in joining this project! Something I miss from the old GWWC site, before the redesign, is that different donations earned "badges," and it felt like a collectible game to donate to each EA charity at least once. Maybe something like this could be added back.
A bit of donation gamification can go a long way.
How do you pick between FWI and SWP? I'm deciding between those myself.
This post's philosophical implications, which have remained foundational to informing my donations, are explored further in the new charity research OP subsequently conducted:
So just FYI:
I've had members of my community raise the concern that the post being taken down is evidence of censorship in EA.
The message "Sorry, you don't have access to this page" should probably read something like "Sorry, this post has been removed by the author." This isn't even "only about optics"; it's simply updating a message so that it says something true rather than false.
Just want to let you know there are Georgist EAs out there. In my head I put something like 80% odds that Georgism (or maybe just LVT) will be an EA cause area within the next ten years (i.e., similar to how once-dismissed areas like mental health, wild animal suffering, or great-power conflict are now considered major cause areas).

A second reason I'm writing this comment is so I can look back on it in ten years.
I'm trying to more succinctly understand what you're saying, since your second-to-last paragraph has confusing wording. You're saying that Nonlinear can scale as EA scales (as opposed to scaling by its own ability) and thereby attract high-profile people like Emerson (since EA as a whole has become more famous, it attracts big shots), but that as an organization it doesn't yet produce enough value/output for someone like Emerson to be a good fit there? And that this plausibly has a causal relationship to why there has been conflict, e.g. Emerson being a bad fit leads to him more easily getting frustrated with other employees?

(PS: just a note that none of this would excuse Emerson mistreating employees, if he was indeed mistreating them. My comment here is just trying to understand what the comment above is saying, since it confused me, but I think clarifying it might be valuable.)