This is a special post for quick takes by EricHerboso. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

I was going to write a comment on the EA Munich/Hanson incident post, but I realized that what I really wanted to say was a bit more general and had a different intended audience: those people in EA who (1) believe banning topics is wrong in general, (2) think that people are not really all that harmed by open discussion of certain topics, and (3) don’t understand why a fellow effective altruist, of all people, would ever try to ‘cancel’ another EA just because some of their discussed ideas are controversial. I don’t believe that all (or even very many) people in this forum fit all of (1), (2), and (3). But I think I may have felt that way at one point in the past, and I wanted to explain to that group why I no longer wholly feel that way. (I currently have cognitive dissonance with regard to this issue.)

(I say the following because it is how I feel. I’m not speaking on behalf of anyone else nor any organization I'm with.)

Some speech is harmful. Even speech that seems relatively harmless to you might be horribly upsetting for others. I know this firsthand because I’ve seen it myself.

I’m extremely privileged. I’m white, male, cis, well-educated, and I don’t have to work to earn a living. Sure, I have a few non-privileged parts of me (Hispanic, asexual, polyamorous), but ultimately I find that speech is rarely harmful to me. So it comes as no surprise that I have historically found no issue with the ideals of the Enlightenment: open discussion of ideas, free speech, believing ideas based on argumentation and evidence. Yet we all know that not all speech is harmless. You can’t shout ‘fire’ in a crowded theater, nor command another to cause direct harm.

Racist speech might not affect me very much, but it can really and truly hurt people of the global majority who have to deal with it constantly. I have friends whom I have watched firsthand read through a racist Facebook thread and then be unable to focus for hours afterward. The exhaustion from having to deal with even just questions about the legitimacy of systemic racism was equivalent to what I would feel if I had to dig a hole for two hours straight. It’s not because my friends are weak-willed, or that they cannot take criticism well. It’s because they live day-in and day-out in a system that actively oppresses their ability to succeed, and people who are supposedly their peers, people who have also decided to live their lives in the service of achieving effective altruism, people who otherwise claim to be dedicated to good — these people were just casually questioning whether Black people even had it that bad in today’s society.

I believe that if people were honestly trying to be rational, then open discussion and debate would eventually kill off racist memes in society. And since I’d like to believe that the EA community is trying to be both honest and rational, I naively thought that open discussion and debate in EA spaces, above all other spaces, would be the perfect way to deal with ordinarily divisive issues like the Black Lives Matter movement. But I was wrong.

I know that I was wrong because people of the global majority continuously speak in safe spaces about how they feel unsafe in EA spaces. They speak about how they feel harmed by the kinds of things discussed in EA spaces. And they speak about how there are some people — not everyone, but some people — who don’t seem to be participating in open debate just to get at the truth, but rather seem to be using the ideals of open discussion as a cloak to hide their intent to do harm.

We in the EA community need to figure out how to do better. We need a diverse set of people to at least feel safe in our community. That doesn’t mean we should quash discussion on weird issues. We need that. But we don’t need it in the places where everyone keeps doing it. It doesn’t need to be at local public EA events. It doesn’t need to be in the main Facebook group chat. It doesn’t need to be anywhere near places that are primarily or secondarily used as public ways to encourage people new to EA.

In the field of communications, a distinction is made between interactive, push, and pull communication. Push communication is exemplified by email; it’s stuff you send out to your audience. You generally don’t want the weird unattractive stuff to be in your push communications. That doesn’t mean it shouldn’t exist. To the contrary, I want there to be a space for people to legitimately debate in open, non-private discussion about even topics with Hansonian-levels of weirdness. But it needs to not be in our push or interactive-online communications. I want my friends to feel safe in public introductory EA spaces. Let diverse representative people join the movement first, and then let them choose of their own accord where they want to put in their efforts. Just as we let individual people work unmolested on longtermism, or animal suffering, or even mental health, so too should we allow people to work in spaces safe from constant racist questioning, even while others volunteer to work in less safe spaces where open debate and serious steel-manning of controversial ideas occur.

I implore others to consider the harm caused when bigotry is so casually discussed on the EA Facebook group or in a local EA meetup. This is real harm. Not just PR harm. Real harm. Let’s move that kind of discussion to pull-communications-only spaces. They needn’t be private; they should definitely be public and open (though I’d warn about trolls masquerading as devil’s advocates there). But such discussions have no place in the spaces that we use for attracting new talent. The EA movement is too white and male as it is; if we are to succeed in truly achieving effective altruism at scale, then we need representativeness, equity, and inclusion in the movement, and that means, at a minimum, that introductory EA spaces must be free from casual bigotry.

I have friends whom I have watched firsthand read through a racist Facebook thread and then be unable to focus for hours afterward.

I just read through this thread, and it doesn't sit right with me to have called it a racist thread. In fact, I would say that there are many people of colour who share the same views as the original poster in that thread.

All racists may deny systemic racism, but that doesn't mean that all those who deny systemic racism are racist. Perhaps the original poster is ignorant or misinformed (if he is in fact wrong), but I don't think there's enough in that thread to call him racist.

That's not to say we shouldn't take into account how threads like that make people feel; if they make people feel unwelcome or uncomfortable, that is of course a bad thing. However, I do think we are too quick to label things as racist.

FWIW, I am a mixed-race person. Admittedly, I haven't experienced racist discrimination in my life, so I can't properly understand what it must be like for people who do.

people of the global majority

Apologies for not having the time to engage more substantively with your post, but before this term starts spreading as the fashion of the day, can someone explain to me, given that there are countless ways to divide the global population into a majority and a minority, why it makes sense to privilege the white/non-white divide and call non-whites "the" global majority, with the implication that whites are "the" global minority? Are you basically saying that of all the possible differences between people in the world, this is the most important one, and therefore deserving of "the"? And how is anyone supposed to know that's what you've decided to do when first encountering this term?

No need to apologize. It's just a shortform, and I have enough cognitive dissonance on the topic to not be really sure what I think about it myself.

I agree with you that the phrase "people of the global majority" sounds weird and naively seems to divide people into unintuitive groups unnecessarily. But in my post I was talking about friends that I personally know who have been hurt by things the EA movement has said in some introductory social media spaces, and their preferred name as a group is "people of the global majority". By using it, I'm merely using the term they've taken for themselves, because it doesn't seem to hurt anything and it generally is nice to use the names that people have adopted for themselves.

Their reasoning for using "people of the global majority" is that:

  • "people of color" is too US-centric;
  • it centers whiteness as the norm;
  • it implies that white folks are devoid of race; &
  • many people may not identify as POC as it’s a U.S. social and cultural construct that does not translate universally.

I believe that they find it empowering to identify as a larger group. Personally, I've always felt more empowered by being considered part of a smaller group, but I believe they would say that that is another example of my privilege. Since it does no obvious harm to call them what they want to call themselves, that's what I do.

You also asked "how is anyone supposed to know that's what you've decided to do, when first encountering this term?" To that, I don't really have a good answer. I found the term BIPGM (Black, Indigenous, or person of the global majority) to be really unusual when I first came across it. It seemed to me to be another example of this new generation naming things unnecessarily differently. But it's not any worse than what LessWrong does, where decades-old concepts are renamed to whatever Yudkowsky called them in the Sequences. As weird as it may seem on first hearing the term PGM, I don't see why it shouldn't be used when talking about specific people who themselves prefer that term.

I know that I was wrong because people of the global majority continuously speak in safe spaces about how they feel unsafe in EA spaces. They speak about how they feel harmed by the kinds of things discussed in EA spaces. And they speak about how there are some people — not everyone, but some people — who don’t seem to be participating in open debate just to get at the truth, but rather seem to be using the ideals of open discussion as a cloak to hide their intent to do harm.

I'm not sure what to say to this.

Again, just because someone claims to feel harmed by some thread of discourse, that can't be sufficient grounds to establish a social rule against it. But I am most baffled by this...

And they speak about how there are some people — not everyone, but some people — who don’t seem to be participating in open debate just to get at the truth, but rather seem to be using the ideals of open discussion as a cloak to hide their intent to do harm.

Um. Yes? Of course? It's pretty rare that people are in good faith and sincerely truth-seeking. And of course there are some bad actors in every group. And of course those people will be pretending to have good intentions. Is the claim that in order to feel safe, people need to know that there are no bad actors? (I think that is not a good paraphrase of you.)

We need a diverse set of people to at least feel safe in our community.

Yeah. So the details here matter a lot, and if we operationalize, I might change my stance here. But on the face of it, I disagree. I think that we want people to be safe in our community and that we should obviously take steps to ensure that. But it seems to be asking too much to ensure that people feel safe. People can have all kinds of standards regarding what they need to feel safe, and I don't think that we are obligated to cater to them just because they are on the list of things that some segment of people need to feel safe.

Especially if one of the things on that list is "don't openly discuss some topics that are relevant to improving the world." That is what we do. That's what we're here to do. We should sacrifice pretty much none of the core point of the group to be more inclusive.

"How much systemic racism is there, what forms does it take, and how does it impact people?" are actually important questions for understanding and improving the world. We want to know if there is anything we can do about it, and how it stacks up against other interventions. Curtailing that discussion is not a small or trivial ask.

(In contrast, if using people's preferred pronouns, or serving vegan meals at events, or not swearing, or not making loud noises, etc. helped people feel safe and/or comfortable, and they are otherwise up for our discourse standards, I feel much more willing to accommodate them. Because none of those compromise the core point of the EA community.)


...Oh. I guess one thing that seems likely to be a crux:

...if we are to succeed in truly achieving effective altruism at scale...

I am not excited about scaling EA. If I thought that trying to do EA at scale was a good idea, then I would be much more interested in having different kinds of discussions in push and pull media.

Some speech is harmful. Even speech that seems relatively harmless to you might be horribly upsetting for others. I know this firsthand because I’ve seen it myself.

I want to distinguish between "harmful" and "upsetting". It seems to me that there is a big difference between shouting 'FIRE' in a crowded theater or "commanding others to do direct harm" on the one hand, and "being unable to focus for hours" after reading a Facebook thread or being exhausted from fielding questions on the other.

My intuitive grasp of these things has it that the "harm" of the first category is larger than that of the second. But even if that isn't true, and the harm of reading racist stuff is as bad as literal physical torture, there are a number of important differences.

For one thing, the speech acts in the first category have physical, externally legible bad consequences. This matters, because it means we can have rules around those kinds of consequences that can be socially enforced without those rules being extremely exploitable. If we adopt a set of discourse rules that say "we will ban any speech act that produces significant emotional harm", then anyone not acting in good faith can shut down any discourse they don't like by claiming to be emotionally harmed by it. Indeed, they don't even need to be consciously malicious (though of course there will be some explicitly manipulative bad actors); this creates a subconscious incentive to be and act more upset than you might otherwise be by some speech acts, because if you are sufficiently upset, the people saying things you don't like will stop.

Second, I note that both of the examples in the second category are much easier to avoid than those in the first. If there are Facebook threads that drain someone’s ability to focus for hours, it seems pretty reasonable for that person to avoid such Facebook threads. Most of us have some kind of political topic that we find triggering, and a lot of us find that browsing Facebook at all saps our motivation. So we have workarounds to avoid that stuff. These workarounds aren't perfect, and occasionally you'll encounter material that triggers you. But it seems way better to have that responsibility be on the individual. Hence the idea of safe spaces in the first place.

Furthermore, there are lots of things that are upsetting (for instance, that there are people dying of preventable malaria in the third world right now, and that this, in principle, could be stopped if enough people in the first world knew and cared about it, or that the extinction of humanity is plausibly imminent), which are nevertheless pretty important to talk about.

If there are Facebook threads that drain someone’s ability to focus for hours, it seems pretty reasonable for that person to avoid such Facebook threads. ... [It] seems way better to have that responsibility be on the individual.

We agree here that if something is bad for you, you can just not go into the place where that thing is. But I think this is an argument in favor of my position: that there should be EA spaces where people like that can go and discuss EA-related stuff.

For example, some people have to go to the EAA Facebook group as a part of their job. They are there to talk about animal stuff. So when people come into a thread about how to be antiracist while helping animals and decide to argue vociferously that racism doesn't exist, that is just needlessly inappropriate. It's not that the issue shouldn't ever be discussed; it's that the issue shouldn't be discussed there, in that thread.

We should allow people to be able to work on EA stuff without having to be around the kind of stuff that is bad for them. If they feel unable to discuss certain topics without feeling bad, let them not go into threads on the EA forum that discuss those topics. This we agree on. But then why say that we can't have a lesser EA space (like an EA Facebook group) for them, where they can interact without discussion of the topics that make them feel bad? Remember, some of these people are employees whose very job description may require them to be active on the EAA Facebook group. They don't have a choice here; we do.

Just to clarify, are you arguing that the Hanson thread shouldn't have been posted, because it would qualify as casual discussion of bigotry? You reference the thread at the beginning, but I'm unclear whether it's an example of what you discuss later.

No, I don't think the discussion on the Hanson thread in this forum involved casual bigotry. In general, I think discussion here on the EA forum tends to be more acceptable than in other EA social media spaces. (Maybe this is because nested threads are supported here, or maybe it's because people consider this space more prestigious and so act more respectfully.) But much of the discussion of the Hanson incident on Twitter would certainly qualify as casual bigotry, and I've witnessed a few threads on EA Facebook groups that also involved clear casual bigotry.

I should stress here that I feel very conflicted about the status of this EA forum as it relates to new EAs. On the one hand, we clearly use this as a place for new EAs to discuss things. But I almost want to say that this is somehow a more formal space than the EA Facebook group, so I'd be far more comfortable with discussing divisive issues here than on Facebook. I said in my original post that we can have a less safe pull communication space where open debate and serious steel-manning of controversial ideas occur -- I think maybe the EA forum would be a good place for that to happen. But I'm not sure of this. And this whole line of thinking is something I'm still feeling conflicted about in the first place, so I don't know how seriously others should take what I'm saying here at all.

I think this is really nicely written, but I'm not sure how many people will see it if it's just on your shortform.

I have friends whom I have watched firsthand read through a racist Facebook thread and then be unable to focus for hours afterward.

Wow, that's a shocking thread. This will definitely put off newcomers! I can understand why you might want to ban discussion of woke topics from introductory spaces if that sort of thing will be the result!

To be honest, I'm surprised the moderators didn't block Blasian Diezo for being such a bully. It seems like he is clearly violating the group rules:

1) Be civil (e.g. don't insult other advocates, especially other group members)

for responding to perfectly reasonable advocacy for a colorblind society from Joachim with this sort of nasty vitriol:

you're a part of the problem if this is your mentality
you love white supremacy like that?
that's a white supremacist goal
if the only black person you know about who worked for anti-racism movements is mlk, you're worthless. ... if what you gathered from a snippet of his quotes is that he was trying to achieve a "color blind" society, you're worthless.
if you don't like being called a white supremacist, stop saying/doing white supremacist shit.
so fuck you and stop trying to police how oppressed ppl address the shit we have to deal with from you

However, while I understand your view, I don't think I agree with it. I think it is best to tolerate Blasian-style opinions and let them be discussed rationally; we should just make sure that people are civil and reasonable, without unnecessarily insulting other people. Just because he is behaving badly doesn't mean the same conversation couldn't be beneficial otherwise.

I downvoted this because it seems pretty clear that the author was referencing other aspects of the Facebook thread, and this felt belittling instead of engaging with the author's overall post.

While I agree there is a thing going on here that's kind of messy, I think Dale is making a fine point. I would, however, pretty strongly prefer it if he wouldn't feign ignorance and would instead just say straightforwardly that he thinks possibly the biggest problem with the thread is not actually the people arguing against racism as a cause area, but the people violating various rules of civility in attacking those who argue against it, and the application of (as I think he perceives it) a highly skewed double standard in the moderation of those perspectives, which is an assessment I find overall reasonably compelling.

Like, I found Dale's comment useful, while also feeling kind of annoyed by it. Overall, that means I upvoted it, but I agree with you on the general algorithm that I prefer straightforward explicit communication over feigned ignorance, even if the feigned ignorance is obviously satirical, as it is in this case.

Thanks for sharing your thoughts! I guess part of the reason I feel more strongly that this kind of comment ought not to be upvoted is that EricHerboso seemed to bring up the Facebook thread not to open a debate on its content, but to point out that the behavior of some of the Facebook commenters harmed EAs or EA-adjacent organizations by taking an emotional toll on people, and that this kind of behavior is explicitly costing EA. That seems like a really important thing to discuss: regardless of what you think of the content of the thread, the content EricHerboso refers to in it negatively impacted the movement.

Dale's comment feels unnecessarily trollish, but also tries to turn the thread into a conversation about what I see as an unrelated topic (the rules of conduct in a random animal rights Facebook group). It vaguely tries to tie back to the post, but mostly this seems like a weak disguise for trolling EricHerboso.


Quick meta comment: Thanks for explaining your downvote; I think that's helpful practice in general.
