Scott Alexander

Thanks for your thoughtful response.

I'm trying to figure out how much of a response to give, and how to balance saying what I believe vs. avoiding any chance to make people feel unwelcome, or inflicting an unpleasant politicized debate on people who don't want to read it. This comment is a bad compromise between all these things and I apologize for it, but:

I think the Kathy situation is typical of how effective altruists respond to these issues and what their failure modes are. I think "everyone knows" (in Zvi's sense of the term, where it's such strong conventional wisdom that nobody ever checks if it's true) that the typical response to rape accusations is to challenge and victim-blame survivors. And that although this may be true in some times and places, the typical response in this community is the one which, in fact, actually happened - immediate belief by anyone who didn't know the situation, and a culture of fear preventing those who did know the situation from speaking out. I think it's useful to acknowledge and push back against that culture of fear.

(this is also why I stressed the existence of the amazing Community Safety team - I think "everyone knows" that EA doesn't do anything to hold men accountable for harm, whereas in fact it tries incredibly hard to do this and I'm super impressed by everyone involved)

I acknowledge that makes it sound like we have opposing cultural goals - you want to increase the degree to which people feel comfortable expressing concerns that EA's culture might be harmful to women, I want to increase the degree to which people feel comfortable pushing back against claims to that effect which aren't true. I think there is some subtle complicated sense in which we might not actually have opposing cultural goals, but I agree that to a first-order approximation they sure do seem different. And I realize this is an annoyingly stereotypical situation - I, as a cis man, coming into a thread like this and saying I'm worried about false accusations and chilling effects. My only two defenses are, first, that I only got this way because of specific real and harmful false accusations, that I tried to do an extreme amount of homework on them before calling them false, and that I only ever bring them up in the context of defending my decision there. And second, that I hope I'm possible to work with and feel safe around, despite my cultural goals, because I want to have a firm deontological commitment to promoting true things and opposing false things, in a way that doesn't refer to my broader cultural goals at any point.

Predictably, I disagree with this in the strongest possible terms.

If someone says false and horrible things to destroy other people's reputation, the story is "someone said false and horrible things to destroy other people's reputation". Not "in some other situation this could have been true". It might be true! But discussion around the false rumors isn't the time to talk about that.

Suppose the shoe was on the other foot, and some man (Bob) started some kind of false and horrible rumor about a woman (Alice). Maybe he says that she only got a good position in her organization by sleeping her way to the top. If this was false, the story isn't "we need to engage with the ways Bob felt harmed and make him feel valid." It's not "the 'Bob lied' lens is harsh and unproductive". It's "we condemn these false and damaging rumors". If the headline story is anything else, I don't trust the community involved one bit, and I would be terrified to be associated with it.

I understand that sexual assault is especially scary, and that it may seem jarring to compare it to less serious accusations like Bob's. But the original post says we need to express emotions more, and I wanted to try to convey an emotional sense of how scary this position feels to me. Sexual assault is really bad and we need strong norms about it. But we've been talking a lot about consequentialism vs. deontology lately, and where each of these is vs. isn't appropriate. And I think saying "sexual assault is so bad, that for the greater good we need to focus on supporting accusations around it, even when they're false and will destroy people's lives" is exactly the bad kind of consequentialism that never works in real life. The specific reason it never works in real life is that once you're known for throwing the occasional victim under the bus for the greater good, everyone is terrified of associating with you.

Perhaps I would feel differently if I knew of examples of the EA community publicly holding men accountable for harm to women.

This is surprising to me; I know of several cases of people being banned from EA events for harm to women. When I've tried to give grants to people, I have gotten unexpected emails from EA higher-ups involved in a monitoring system, who told me that one of those people secretly had a history of harming women and that I should reconsider the grant on that basis. I have personally, at some physical risk to myself, forced a somewhat-resistant person to leave one of my events because they had a history of harm to women (this was Giego Caleiro; I think it's valuable to name names in some of the most extreme clear-cut cases; I know most orgs have already banned him, and if your org hasn't then I recommend they do too - email me and I can explain why). I know of some other cases where men caused less severe harm or discomfort to women, there were very long discussions by (mostly female) members of EA leadership about whether they should be allowed to continue in their roles, and after some kind of semi-formal proceeding - with the agreement of the victim, and after an apology - it was decided that they should be allowed to continue in their roles, sometimes with extra supervision. There's an entire EA Community Health Team with several employees and a mid-six-figure budget, and a substantial fraction of their job is holding men accountable for harm to women. If none of this existed, maybe I'd feel differently. But right now my experience of EA is that they try really hard to prevent harm to women, so hard that the current disagreement isn't whether to ban some man accused of harming women, but whether it was okay for me to mention that a false accusation was false.

Again in honor of the original post saying we should be more open about our emotions: I'm sorry for bringing this up. I know everyone hates having to argue about these topics. Realistically I'm writing this because I'm triggered and doing it as a compulsion, and maybe you also wrote your post because you're triggered and doing it as a compulsion, and maybe Maya wrote her post because she's triggered and doing it as a compulsion. This is a terrible topic where a lot of people have been hurt and have strong feelings, and I don't know how to avoid this kind of cycle where we all argue about horrible things in circles. But I am genuinely scared of living in a community where nobody can save good people from false accusations because some kind of mis-aimed concern about the greater good has created a culture of fear around ever speaking out. I have seen something like this happen to other communities I once loved and really don't want it to happen here. I'm open to talking further by email if you want to continue this conversation in a way that would be awkward on a public forum.

EDIT: After some time to cool down, I've removed that sentence from the comment, and somewhat edited this comment which was originally defending it. 

I do think the sentence was true. By that I mean that (this is just a guess, not something I know from specifically asking them) the main reason other people were unwilling to post the information they had was that they were worried someone would write a public essay saying "X doesn't believe sexual assault victims" or "EA has a culture of doubting sexual assault victims". And they all hoped someone else would go first to mention all the evidence that these particular rumors were untrue, so that that person could be the one to get flak over this for the rest of their life (which I have, so good prediction!), instead of them. I think there's a culture of fear around these kinds of issues that it's useful to bring to the foreground if we want to model them correctly.

But I think you're gesturing at a point where if I appear to be implicitly criticizing Maya for bringing that up, fewer people will bring things like that up in the future, and even if this particular episode was false, many similar ones will be true, so her bringing it up is positive expected value, so I shouldn't sound critical in any way that discourages future people from doing things like that. 

Although it's possible that the value gained by saying this true thing is higher than the value lost by potential chilling effects, I don't want to claim to have an opinion on this, because in fact I wrote that comment feeling pretty triggered and upset, without any effective value calculations at all. Given that it did get heavily upvoted, I can see a stronger argument for the chilling effect part and will edit it out.

I read about Kathy Forth, a woman who was heavily involved in the Effective Altruism and Rationalist communities. She committed suicide in 2018, attributing large portions of her suffering to her experiences of sexual harassment and sexual assault in these communities. She accuses several people of harassment, at least one of whom is an incredibly prominent figure in the EA community. It is unclear to me what, if any, actions were taken in response to (some) of her claims and her suicide. What is clear is the pages and pages of tumblr posts and Reddit threads, some from prominent members of the EA and Rationalist communities, disparaging Kathy and denying her accusations.

 

I'm one of the people (maybe the first person?) who made a post saying that (some of) Kathy's accusations were false. I did this because those accusations were genuinely false, could have seriously damaged the lives of innocent people, and I had strong evidence of this from multiple very credible sources.

I'm extremely prepared to defend my actions here, but prefer not to do it in public in order to not further harm anyone else's reputation (including Kathy's). If you want more details, feel free to email me at scott@slatestarcodex.com and I will figure out how much information I can give you without violating anyone's trust.

I think I wrote that piece in 2010 (based on the timestamp on the version I have saved, though I'm not 100% sure that's the earliest draft). I would have been 25-26 then. I agree that's the first EA-relevant thing I wrote.

For what it's worth, I still don't feel like I understand CEA's model of how having extra people present hurts the most prestigious attendees.

If you are (say) a plant-based meat expert, you are already surrounded by many AI researchers, epidemiologists, developmental economists, biosecurity analysts, community-builders, PR people, journalists, anti-factory-farm activists, et cetera. You are probably going to have to plan your conversations pretty deliberately to stick to people within your same field, or who you are likely to have interesting things to say to. If the conference were twice as big, or three times, and filled with e.g. people who weren't quite sure yet what they wanted to do with their careers, would that be the difference between that expert having a high chance of productive serendipitous conversations vs. not?

Thanks for your response. I agree that the goal should be trying to hold the conference in a way that's best for the world and for EA's goals. If I were to frame my argument more formally, it would be something like - suppose that you reject 1000 people per year (I have no idea if this is close to the right number). 5% get either angry or discouraged and drop out of EA. Another 5% leave EA on their own for unrelated reasons, but would have stayed if they had gone to the conference because of some good experience they had there. So my totally made up Fermi estimate is that we lose 100 people from EA each time we run a closed conference. Are the benefits of the closed conference great enough to compensate for that?
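(Purely to make the shape of that back-of-the-envelope calculation explicit, here it is written out - every number in it is a placeholder I made up, not real data:)

```python
# Toy Fermi estimate with the made-up numbers from the paragraph above;
# none of these figures are real data.
rejected_per_year = 1000      # hypothetical number of rejections
p_drop_out_angry = 0.05       # share who get angry/discouraged and leave EA
p_would_have_stayed = 0.05    # share who leave anyway, but would have stayed if admitted

people_lost = rejected_per_year * (p_drop_out_angry + p_would_have_stayed)
print(people_lost)  # 100.0 - rough estimate of people lost per closed conference
```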

I'm not sure, because I still don't understand what those benefits are. I mentioned in the post that I'd be in favor of continuing to have a high admissions bar for the networking app (or maybe just sorting networkers by promise level). You write that:

Very involved and engaged EAs might be less eager to come to EAG if the event is not particularly selective. (This is a thing we sometimes get complaints about but it’s hard for people to voice this opinion publicly, because it can sound elitist). These are precisely the kinds of people we most need to come — they are the most in-demand people that attendees want to talk to (because they can offer mentorship, job opportunities, etc.).

I think maybe our crux is that I don't understand this impulse, beyond the networking thing I mentioned above. Is the concern that the unpromising people will force promising people into boring conversations and take up too much of their time? That they'll disrupt talks? 

We also have the EAGx conference series, which is more introductory and has a lower bar for admissions. If someone is excited to learn more about EA, they’d likely be better suited to an EAGx event (and they’d be more likely to get accepted, too).

My understanding is that people also sometimes get rejected from EAGx and there is no open-admission conference - is this correct?

I'm having trouble figuring out how to respond to this. I understand that it's kind of an academic exercise to see how cause prioritization might work out if you got very, very rough numbers and took utilitarianism very seriously without allowing any emotional considerations to creep in. But I feel like that potentially makes it irrelevant to any possible question.

If we're talking about how normal people should prioritize...well, the only near-term cause close to x-risk here is animal welfare. If you tell a normal person "You can either work to prevent you and everyone you love from dying, or work to give chickens bigger cages, which do you prefer?", their response is not going to depend on QALYs.

If we're talking about how the EA movement should prioritize, the EA movement currently spends more on global health than on animal welfare and AI risk combined. It clearly isn't even following near-termist ideas to their logical conclusion, let alone long-termist ones.

If we're talking about how a hypothetical perfect philosopher would prioritize, I think there would be many other things they would worry about before they get to long-termism. For example, does your estimate for the badness of AI risk include that it would end all animal suffering forever? And all animal pleasure? Doesn't that maybe flip the sign, or multiply its badness by an order of magnitude? You very reasonably didn't include that because it's an annoying question that's pretty far from our normal moral intuitions, but I think there are a dozen annoying questions like that, and that long-termism could be thought of as just one of that set, no more fundamental or crux-y than the others for most people.

I'm not even sure how to think about what these numbers imply. Should the movement put 100% of money and energy into AI risk, the cause ranked most efficient here? Or do that only up until the point where the low-hanging fruit have been picked and something else is most effective? Are we sure we're not already at that point, given how much trouble LTF charities report finding new things to fund? Does long-termism change this, because astronomical waste is so vast that we should be desperate for even the highest-hanging fruit? Is this just Pascal's Wager? These all seem like questions we have to have opinions on before concluding that long-termism and near-termism have different implications.

I find that instead of having good answers to any of these questions, my long-termism (such as it is) hinges on an idea like "I think the human race going extinct would be extra bad, even compared to many billions of deaths". If you want to go beyond this kind of intuitive reasoning into real long-termism, I feel like you need to do extra work to answer the questions above, and in general that work isn't being done.

Yes, I'm sorry, I talked to Claire about it and updated, sorry for the mixed messages and any stress this caused.

Thanks for the link, which I had previously missed and which does contain some important considerations.

I've been assuming that the people who set up the first impact market will have the opportunity to affect the "culture" around certificates, especially since many people will be learning what they are for the first time after the market starts to exist, but I agree that eventually it will depend on what buyers and sellers naturally converge to.

One way that preference could be satisfied is to give each share a number. Funders will value the first shares most, because they are fully "counterfactual", but if half of the value comes from a thing that the founder would have done anyway

I don't understand this - if you need $1 million to do the project, isn't the one millionth one-dollar share purchased doing just as much good as the first? I think I'm missing something about the scenario you're thinking of.
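(Purely to illustrate the scenario I have in mind - a toy all-or-nothing model with made-up numbers, not a claim about how the market would actually price shares:)

```python
# Toy model of an all-or-nothing project (made-up numbers): the project
# produces value V only if all N one-dollar shares are sold. Under this
# reading, removing any single share - the first or the millionth -
# drops the value by the same amount, so no share is "more counterfactual".
N = 1_000_000        # hypothetical $1M funding threshold, one share per dollar
V = 5_000_000        # hypothetical value produced if fully funded

def project_value(shares_sold: int) -> int:
    return V if shares_sold >= N else 0

fully_funded = project_value(N)
one_share_short = project_value(N - 1)   # same result whichever share is missing
print(fully_funded - one_share_short)    # 5000000 - every share is equally pivotal
```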
