EricHerboso

Board secretary at Animal Charity Evaluators. Involved with the EA movement since 2011. More info at eahub.org/profile/eric-herboso. Blog at ericherboso.org.

Comments

Some thoughts on the EA Munich // Robin Hanson incident
It is a (perhaps unfortunate) fact that many true conclusions alienate a lot of people. And it is much more important that we are able to identify those conclusions than that we find more people to join our ranks, or that our ranks are more ethnically / culturally / etc. diverse.

We are agreed that truth is of paramount importance here. If a true conclusion alienates someone, I endorse not letting that alienation sway us. But I think we disagree on two points:

  1. I believe diversity is a serious benefit. Not just in terms of movement building, but in terms of arriving at truth. Homogeneity breeds blind spots in our thinking. If a supposed truth is arrived at, but only one group recognizes it as truth, shouldn’t that make us question whether we are correct? To me, good truth-seeking almost requires diversity in several different forms. Not just philosophical diversity, but diversity in how we’ve come up in the world, in how we’ve experienced things. Specifically including BIPGMs seems to me to be very important in ensuring that we arrive at true conclusions.
  2. I believe the process by which we arrive at true conclusions doesn’t need to involve Alastor Moody-levels of constant vigilance. We don’t have to rigidly enforce norms of full open debate all the time.

I think the latter disagreement between us is pretty strong, given your willingness to bite the bullet on Holocaust denial. Sure, we never know anything for sure, but when you get to a certain point, I feel like it’s okay to restrict debate on a topic to specialized places. I want to say something like “we have enough evidence that racism is real that we don’t need to discuss it here; if you want to debate that, go to this other space”, and I want to say it because discussing racism as though it doesn’t exist causes a level of harm that may rise to the equivalent of physical harm in some people. I’m not saying we have to coddle anyone, but if we can reduce that harm for almost no cost, I’m willing to. To me, restricting debate in a limited way on a specific Facebook thread is almost no cost. We already restrict debate in other, similar ways: no name-calling, no doxxing, no brigading. In the EAA Facebook group, we take as a given that animals are harmed and we should help them. We restrict debate on that point because it’s inappropriate to debate it there. That doesn’t mean it can’t be debated elsewhere. To me, restricting the denial of racism (or the denial of genocide) is just an additional rule of this type. It doesn’t mean the topic can’t be discussed elsewhere. It just isn’t appropriate there.

In what ways do people not feel safe? (Is it things like this comment?) … I want to know more about this. What kind of harm?

No, it’s not things like this comment. We are in a forum where discussing this kind of thing is expected and appropriate.

I don’t feel like I should say anything that might inadvertently out some of the people that I have seen in private groups talking about these harms. Many of these EAs are not willing to speak out about this issue because they fear being berated for having these feelings. It’s not exactly what you’re asking for, but a few such people are already public about the effects from those harms. Maybe their words will help: https://sentientmedia.org/racism-in-animal-advocacy-and-effective-altruism-hinders-our-mission

“[T]aking action to eliminate racism is critical for improving the world, regardless of the ramifications for animal advocacy. But if the EA and animal advocacy communities fail to stand for (and not simply passively against) antiracism, we will also lose valuable perspectives that can only come from having different lived experiences—not just the perspectives of people of the global majority who are excluded, but the perspective of any talented person who wants to accomplish good for animals without supporting racist systems.
I know this is true because I have almost walked away from these communities myself, disquieted by the attitudes toward racism I found within them.”
EricHerboso's Shortform
If there are Facebook threads that drain your ability to focus for hours, it seems pretty reasonable for that person to avoid such Facebook threads. ... [It] seems way better to have that responsibility be on the individual.

We agree here that if something is bad for you, you can just not go into the place where that thing is. But I think this is an argument in favor of my position: that there should be EA spaces where people like that can go and discuss EA-related stuff.

For example, some people have to participate in the EAA Facebook group as part of their job. They are there to talk about animal stuff. So when people come into a thread about how to be antiracist while helping animals and decide to argue vociferously that racism doesn't exist, that is needlessly alienating and inappropriate. It's not that the issue shouldn't ever be discussed; it's that it shouldn't be discussed there, in that thread.

We should allow people to work on EA stuff without having to be around the kind of content that is bad for them. If they feel unable to discuss certain topics without feeling bad, let them not go into threads on the EA Forum that discuss those topics. This we agree on. But then why say that we can't have a lesser EA space (like an EA Facebook group) where they can interact without discussion of the topics that make them feel bad? Remember, some of these people are employees whose very job description may require them to be active in the EAA Facebook group. They don't have a choice here; we do.

Examples of loss of jobs due to Covid in EA

Animal Charity Evaluators suspended its paid internship program for the second half of 2020, but plans to resume it in early 2021. This didn't result in anyone losing a job; rather, it meant that temporary intern positions that would likely have been filled otherwise were left unfilled because of COVID-19. There are more details in ACE's Room for More Funding blog post.

Some thoughts on the EA Munich // Robin Hanson incident

If you’re correct that the harms that come from open debate are only minor, then I think I’d agree with most of what you’ve said here (excepting your final paragraph). But the position of the BIPGMs I’ve spoken to is that allowing some types of debate really does do serious harm, and from watching them talk about and experience it, I believe them. My initial intuition was closer to your point of view — it’s just so hard to imagine how open debate on an issue could cause such harm — but, having watched how they deal with some of these issues, I cannot deny that something like a casual denial of systemic racism caused them significant harm.

On a different point, I think I disagree with your final paragraph’s premise. To me, having different moderation rules is a matter of appropriateness, not a fundamental difference. I think it would not be difficult to tell new EAs that “moderation in one space has different appropriateness rules than in another” without hiding the true nature of EA or being dishonest about it. This is relevant because one of the main EA Facebook groups is deciding right now how to implement moderation rules on this issue.

Some thoughts on the EA Munich // Robin Hanson incident

Surely there exists a line at which we agree on principle. Imagine, for example, that our EA spaces were littered with people making cogent arguments that steel-manned Holocaust denial, and we were approached by a group of Jewish people saying “We want to become effective altruists because we believe in the stated ideals, but we don’t feel safe participating in a space where so many people commonly and openly argue that the Holocaust did not happen.”

In this scenario, I hope that we’d both agree that it would be appropriate for us to tell our fellow EAs to cut it out. While it may be a useful thing to discuss (if only to show how absurd it is), we can (I argue) push future discussion of it into a smaller space so that the general EA space doesn’t have to be peppered with such arguments. This is the case even if none of the EAs talking about it actually believe it. Even if they are just steel-manning devil’s advocates, surely it is more effective for us to clean the space up so that our Jewish EA friends feel safe to come here and interact with us, at the cost of moving specific types of discussion to a smaller area.

I agree that one of the things that makes EA great is the quality of its epistemic discourse. I don’t want my words here to be construed to mean that I think we should lower it unthinkingly. But I do think a counterbalancing consideration exists: we can be so open to discussion of any kind that we completely alienate a section of people who would otherwise be participating in this space.

I strongly believe that representation, equity, and inclusiveness are important in the EA movement. I believe it so strongly that I try to look at what people are saying in the safe spaces where they feel comfortable talking about the EA norms that scare them away. I will report here that a large number of people I see talking in private Facebook groups, in private Slack channels, in PMs, in emails, and even on phone calls behind closed doors are continuously saying that they do not feel safe in EA spaces. I am not merely saying that they are “worried” about where EA is heading; I’m saying that right here, right now, they feel uncomfortable fully participating in generalized EA spaces.

You say that “If people wouldn't like the discourse norms in the central EA spaces…I would prefer that they bounce off.” In principle, I think we agree on this. Casual complaints that we are being alienating should not faze us. But there does exist a point at which I think we might agree that those complaints are sufficiently strong, as in the Holocaust denial example. The question, then, is not one of kind, but of degree. It turns on whether the harm caused by certain forms of speech outweighs the benefits accrued by discussing those things.

  • Q1: Do you agree that this is a question of degree, not kind? If not, then the rest of this comment doesn't really apply.
  • Q2: You mentioned having similar standards to academia. If it became standard for undergraduate colleges to disallow certain forms of racist speech to protect students, would you be okay with copying those norms over to EA? Or do you mean only having standards similar to how academics discuss things amongst themselves, completely setting aside how universities deal with undergraduate students' spaces?

I have significant cognitive dissonance here. I’m not at all certain about what I personally feel. But I do want to report that there are large numbers of people, in several disparate places, many of whom I doubt interact with one another in any significant way, who all keep saying in private that they do not feel safe here. I have seen people actively harmed by EAs casually making the case that systemic racism is not real, and I can report that it is not a minor harm.

I’m extremely privileged, so it’s hard for me to empathize here. I cannot imagine being harmed by mere speech in this way. But I can report, from direct experience watching the private Facebook chats and Slack threads of EAs who aren’t willing to talk about this stuff publicly, that these speech acts are causing real harm.

Is the harm small enough to warrant just having these potential EAs bounce off? Or would we benefit from pushing such speech acts to smaller portions of EA so that newer, more diverse EAs can come in and contribute to our movement? I hope that you'll agree that these are questions of degree, not of kind. After seeing the level of harm that these kinds of speech acts cause, I think my position of moving that discourse away from introductory spaces is warranted. But I also strongly agree with traditional Enlightenment ideals of open discussion and free speech, and with the principle that the best way to show an idea is wrong is to seriously discuss it. So I definitely don’t want to ban such speech everywhere. I just want there to be some way for us to maintain good epistemic standards while also benefiting from EAs who don’t feel safe in the main EA Facebook groups.

To borrow a phrase from Nora Caplan-Bricker, they’re not demanding that EA spaces be happy places where they never have to read another word of dissent. Instead, they’re asking for a level of acceptance and ownership that other EAs already have. They just want to feel safe.

Some thoughts on the EA Munich // Robin Hanson incident

I don't think this is pivotal to anyone, but just because I'm curious:

If we knew for a fact that a slippery slope wouldn't occur, and the "safe space" was limited just to the EA Facebook group, and there was no risk of this EA Forum ever becoming a "safe space", would you then be okay with this demarcation: disallowing some types of discussion in the EA Facebook group, but allowing that discussion on the EA Forum? Or do you strongly feel that EA should never disallow these types of discussion, even in the EA Facebook group?

(by "disallowing discussion", I mean Hansonian level stuff, not obviously improper things like direct threats or doxxing)

Do research organisations make theory of change diagrams? Should they?

On Q1: You mention only being aware of a few research orgs that have public ToC diagrams. I wanted to bring your attention to Animal Charity Evaluators, which uses ToC diagrams as a way of better communicating to the public how ACE thinks a given recommended organization might bring about one of its animal advocacy outcomes.

ACE also uses a ToC diagram in its strategic plan, but this might not be easily searchable because it exists publicly only in PDF documents. (The webpage hosting the strategic plan doesn't use the phrase "theory of change" at all, even though a ToC diagram does exist within the PDF linked there.)

Some thoughts on the EA Munich // Robin Hanson incident

I agree that that was definitely a step too far. But there are legitimate middle grounds that don't risk a slippery slope.

For example, allowing introductory EA spaces like the EA Facebook group or local public EA group meetups to disallow certain forms of divisive speech, while continuing to encourage serious open discussion in more advanced EA spaces, like this EA Forum.

I refuse to defend something as ridiculous as the idea of cancel culture writ large. But I sincerely worry about the lack of racial representativeness, equity, and inclusiveness in the EA movement, and I believe there needs to be some way to encourage more people to join the movement without making them feel unsafe.
