Okay, what does not tolerating actual racism look like to you? What is the specific thing you're asking for here?

Up until recently, whenever someone criticized rationality or EA for being racist or for supporting racists, I could say something like the following: 

"I don't actually know of anyone in these communities who is racist or supports racism. From what I hear, some people in the rationality community occasionally discuss group differences in intelligence, because this was discussed in writings by Scott Alexander, which a lot of people have read and so it gives them shared context. But I think this doesn't come from a bad place. I'm pretty sure people who are central to these communities (EA and rationality) would pretty much without exception speak up strongly against actual racists."

It would be nice if I could still say something like that, but it no longer seems like I can, because a surprising number of people have said things like "person x is quite racist, but [...] interesting ideas." 

I’m not that surprised we aren’t understanding one another, we have our own context and hang ups.

Yeah, I agree I probably didn't get a good sense of where you were coming from. It's interesting because, before you made the comments in this post and in the discussion here underneath, I thought you and I probably had pretty similar views. (And I still suspect that – seems like we may have talked past each other!) You said elsewhere that last year you spoke against having Hanania as a speaker. This suggested to me that even though you value truth-seeking a lot, you also seem to think there should be some other kinds of standards. I don't think my position is that different from "truth-seeking matters a ton, but there should be some other kinds of standards." That's probably the primary reason I spent a bunch of time commenting on these topics: the impression that the "pro truth-seeking" faction in my view seemed to be failing to make even some pretty small/cheap concessions. (And it seemed like you were one of the few people who did make such concessions, so, I don't know why/if it feels like we're disagreeing a lot.)

(This is unrelated, but it's probably good for me to separate timeless discussion about norms from an empirical discussion of "How likely is it that Hanania changed a lot compared to his former self?" I do have pessimistic-leaning intuitions about the latter, but they're not very robust because I really haven't looked into this topic much, and maybe I'm just prejudiced. I understand that, if someone is more informed than me and believes confidently that Hanania's current views and personality are morally unobjectionable, it obviously wouldn't be a "small concession" for them to disinvite or not platform someone they think is totally unobjectionable! I think that can be a defensible view depending on whether they have good reasons to be confident in these things. At the same time, the reason I thought that there were small/cheap concessions that people could make that they weirdly enough didn't make, was that a bunch of people explicitly said things like "yeah he's pretty racist" or "yeah he recently said things that are pretty racist" and then still proceeded to talk as though this is just normal and that excluding racists would be like excluding Peter Singer. That's where they really lost me.)

Just as a heads-up, I'm planning to get off the EA forum for a while to avoid the time-sink issues, so I may not leave more comments here anytime soon.

I feel like the controversy over the conference has become a catalyst for tensions in the involved communities at large (EA and rationality).

It has been surprisingly common for me to make what I perceive to be a totally sensible point that isn't even particularly demanding (about, e.g., maybe not tolerating actual racism), only for the "pro truth-seeking faction" to lump me together with social justice warriors and present analogies that make no sense whatsoever. It's obviously not the case that if you want to take a principled stance against racism, you're logically compelled to have also objected to things that were important to EA (like work by Singer, Bostrom/Savulescu human enhancement stuff, AI risk, animal risk [I really didn't understand why the latter two were mentioned], etc.). One of these things is not like the others. Racism is against universal compassion and equal consideration of interests (also, it typically involves hateful sentiments). By contrast, none of the other topics are like that.

To summarize, it seems concerning if the truth-seeking faction is unable to understand the difference between, say, my comments and how a social justice warrior would react to this controversy. (This isn't to say that none of the people who criticized aspects of Manifest were motivated by further-reaching social justice concerns; I readily admit that I've seen many comments that in my view go too far in the direction of cancelling/censorship/outrage.)

Ironically, I think this is very much an epistemic problem. I feel like a few people have acted a bit dumb in the discussions I've had here recently, at least if we consider it "dumb" when someone repeatedly fails at passing Ideological Turing Tests, or when they show a bit of black-and-white thinking about a topic. I get the impression that the rationality community has suffered quite a lot defending itself against cancel culture, to the point that they're now a bit traumatized (with a small t). This is understandable, but that doesn't change that it's a suboptimal state of affairs.

Offputting to whom?

If it bothers me, I can assume that some others will react similarly.

You don't have to be a member of the specific group in question to find it uncomfortable when people in your environment say things that are riling up negative sentiments against that group. For instance, twelve-year-old children are unlikely to attend EA or rationality events, but if someone there talked about how they think twelve-year-olds aren't really people and their suffering matters less, I'd be pissed off too.

All of that said, I'm overall grateful for LW's existence; I think habryka did an amazing job reviving the site, and I do think LW has overall better epistemic norms than the EA forum (even though, if I had to pick only one label, most of the people I intellectually admire most are more EAs than rationalists; they're often people who seem to fit into both communities).

I don't really think it's this. I think it is "I don't want people associating me with people or ideas like that so I'd like you to stop please".

It might be what you say for some people, but that doesn't ring true for my case (at all). (But also, compared to all the people who complained about stuff at Manifest or voiced negative opinions from the sidelines as forum users, I'm pretty sure I'm in the third that felt the least strongly and had the fewest things to pick at.)

But let's take your case, that means you think that on the margin some notion of considerateness/kindness/agreeableness is more important than truth-seeking. Is that right? 

I don't like this framing/way of thinking about it.

For one thing, I'm not sure if I want to concede the point that it is the "maximally truth-seeking" thing to risk that a community evaporatively cools itself along the lines we're discussing.

Secondly, I think the issues around Manifest I objected to weren't directly about "what topics are people allowed to talk about?"

If some person with a history of considerateness and thoughtfulness wanted to do a presentation on HBD at Manifest, or (to give an absurd example) if Sam Harris (who I think is better than average at handling delicate conversations like that) wanted to interview Douglas Murray again in the context of Manifest, I'd be like "ehh, not sure that's a good idea, but okay..." And maybe also "Well, if you're going to do this, at least think very carefully about how to communicate about why you're talking about this/what the goal of the session is." (It makes a big difference whether the framing of the session is "We know this isn't a topic most people are interested in, but we've had some people who are worried that if we cannot discuss every topic there is, we might lose what's valuable about this community, so this year, we decided to host a session on this; we took steps x, y, and z to make sure this won't become a recruiting ground for racists;" or whether the framing is "this is just like every other session.") [But maybe this example is beside the point, I'm not actually sure whether the objectionable issue was sessions on HBD/intelligence differences among groups, or whether it was more just people talking about it during group conversations.]

By contrast, if people with a history of racism or with close ties to racists attend the conference and it's them who want to talk about HBD, I'm against it. Not directly because of what's being discussed, but because of how and by whom. (But again, it's not my call to make and I'm just stating what I would do/what I think would lead to better outcomes.)

(I also thought people who aren't gay using the word "fag" sounded pretty problematic. Of course, this stuff can be moderated case-by-case and maybe a warning makes more sense than an immediate ban. Also, in fairness, it seems like the conference organizers would've agreed with that and they simply didn't hear the alleged incident when it allegedly happened.)

"Influence-seeking" doesn't quite resonate with me as a description of the virtue on the other end of "truth-seeking."

What's central in my mind when I speak out against putting "truth-seeking" above everything else is mostly a sentiment of "I really like considerate people and I think you're driving out many people who are considerate, and a community full of disagreeable people is incredibly off-putting."

Also, I think the considerateness axis is not the same as the decoupling axis. One can be very considerate and also great at decoupling; you just have to be able to couple things back together as well.

Good points! It seems good to take a break or at least move to the meta level.

I think one emotion that is probably quite common in discussions about what norms should be (at least in my own experience) is clinging. Quoting from Joe Carlsmith's post on it:

Clinging, as I think about it, is a certain mental flavor or cluster of related flavors. It feels contracted, tight, clenched, and narrow. It has a kind of hardness, a “not OK-ness,” and a (sometimes subtle) kind of desperation. It sees scarcity. It grabs. It sees threat. It pushes away. It carries seeds of resentments and complaints. [...]

Often, in my experience, clinging seems to hijack attention and agency. It makes it harder to think, weigh considerations, and respond. You are more likely to flail, or stumble around, or to “find yourself” doing something rather than choosing to do it. And you’re more likely, as well, to become pre-occupied by certain decisions — especially if both options involve things you’re clinging in relation to — or events. Indeed, clinging sometimes seems like it treats certain outcomes as “infinitely bad,” or at least bad enough that avoiding them is something like a hard constraint. This can cause consequent problems with reasoning about what costs to pay to avoid what risks.

Clinging is also, centrally, unpleasant. But it’s a particular type of unpleasant, which feels more like it grabs and restricts and distorts who you are than e.g. a headache.

In the midst of feeling like a lot is at stake and one's values are being threatened, we may often try to push the social pendulum in our desired direction as hard as possible. However, that will have an aggravating and polarizing effect on the debate, because the other side will see this attitude and think, "this person is not making any concessions whatsoever, and it seems like even though the social pendulum is already favorable to them, they'll keep pushing against us!"

So, to de-escalate these dynamics, it seems valuable to acknowledge the values that are at stake for both sides, even just to flag that you're not in favor of pushing the pendulum as far as possible. 

For instance, maybe this would already feel more relaxed if the side concerned about losing what's valuable about "truth-seeking" acknowledged that there's a bar for them, too: if they thought they were dealing with people full of hate, or with people who advocate views that predictably cause harm to others (knowing this, but advocating those views anyway out of a lack of concern for the affected others), the "truth-seeking" proponents would indeed step in and not tolerate it. Likewise, the other side could acknowledge that it's bad when people get shunned just based on superficial associations/vibes. (To give an example of something I think is superficial: saying "sounds like they're into eugenics" as though this should end the discussion, without pointing out any way in which what the person is discussing is hateful, lacks compassion, or is otherwise likely to cause harm.) This is bad not just for well-intentioned individuals who might get unfairly ostracized, but also for discourse in general, because people won't speak their minds any longer.

Well said.

I meant to say the exact same thing, but seem to have struggled at communicating.

I want to point out that my comment above was specifically reacting to the following line and phrasing in timunderwood's parent comment:

I also have a dislike for excluding people who have racist style views simply on that basis, with no further discussion needed, because it effectively is setting the prior for racism being true to 0 before we've actually looked at the data.

My point (and yours) is that this quoted passage would be clearer if it said "genetic group differences" instead of "racism."

I agree with this diagnosis of the situation. At the same time, I feel like it's the wrong approach to make it a scientific proposition whether racism is right or not. It should never be right, no matter the science. (I know this is just talking semantics, but I think it adds a bunch of moral clarity to frame it in this way: that science can never turn out to support racism.) As I said here, the problem I see with the HBD crowd is that they think their opinions on the science justify certain other things, or that it's a very important topic.

I agree the article was pretty bad and unfair, and I agree with most things you say about cancel culture.

But then you lose me when you imply that racism is no different than taking one of the inevitable counterintuitive conclusions in philosophy thought experiments. (I've previously had a lengthy discussion on this topic in this recent comment thread.)

If I were an organizer of a conference where I wanted interesting and relevant ideas to be discussed, I'd still want there to be a bar for attendees, to avoid the problem Scott Alexander pointed out (someone else recently quoted this in this same context, so hat tip to them, but I forget who it was):

The moral of the story is: if you’re against witch-hunts, and you promise to found your own little utopian community where witch-hunts will never happen, your new society will end up consisting of approximately three principled civil libertarians and seven zillion witches. It will be a terrible place to live even if witch-hunts are genuinely wrong.

I'd be in favor of having the bar be significantly lower than many outrage-prone people are going to be comfortable with, but I don't think it's a great idea to have a bar that is basically "if you're interesting, you're good, no matter what else."

In any case, that's just how I would do it. There are merits to having groups with different bars.

(In the case of going for a very low one, I think it could make sense to think about the branding and whether it's a good idea to associate forecasting in particular with a low filter.)

Basically, what I'm trying to say is that I'd like to be on your side here, because I agree with many things you're saying and see where you're coming from, but you're making it impossible for me to side with you if you think there's no difference between biting inevitable bullets in common EA thought experiments vs. "actually being racist" or "recently having made incredibly racist comments."

I don't think I'm using the adjective 'racist' here in a sense that is watered down or used in an inflationary sort of way; I think I'm trying to be pretty careful about when I use that word. FWIW, I also think that the terminology "scientific racism" that some people are using is muddying the waters here. There's a lot of racist pseudoscience going around, but it's not the case that every claim about group differences is definitely pseudoscience (it would be a strange coincidence if all groups of all kinds had no statistical differences in intelligence-associated genes). However, the relevant point is that group differences don't matter (it wouldn't make a moral difference no matter how things shake out, because policies should be about individuals and not groups), and that a lot of people who get very obsessed with these questions are actually racist, while the ones who aren't (like Scott Alexander, or Sam Harris when he interviewed Charles Murray on a podcast) take great care to distance themselves from actual racists in what they say about the topic and what conclusions they want others to draw from discussion of it. So, if someone were to call Scott Alexander and Sam Harris "scientifically racist," that seems like it's watering down racism discourse, because I don't think those people's views are morally objectionable, even though many people's views in that cluster are.

I think generally though it's easy to misunderstand people, and if people respond to clarify, you should believe what they say they meant to say, not your interpretation of what they said.

Depends on context. Not (e.g.) if someone has a pattern of using plausible deniability to get away with things (I actually don't know if this applies to Hanania) or if we have strong priors for suspecting that this is what they're doing (arguably applies here for reasons related to his history; see next paragraph).

If someone has a history of being racist, but they say they've changed, it's IMO on them to avoid making statements that are easily interpreted as incredibly racist. And if they accidentally make such an easily misinterpretable statement, it's also on them to immediately clarify what they did or didn't mean. 

Generally, in contexts where we have strong reason to believe things might be adversarial, incompetence/stupidity cannot keep being accepted as a sufficient excuse, because adversaries will always try to claim it as their excuse; if you let it go through, you give full cover to all malefactors. You need adversarial epistemology. Worst-case scenario, you'll judge harshly some people who happen to merely be incompetent in ways that, unfortunately, exactly help provide cover to bad actors. But [1] even though many people make mistakes or can seem incompetent at times, it's actually fairly rare that incompetence looks exactly the same as what a bad actor would do for more sinister, conscious reasons (and then claim incompetence as an excuse), and [2], sadly enough, a low rate of false positives seems the lesser evil here for the utilitarian calculus, because we're in an adversarial context where the harms if the suspicion is right (and we let it slide) are asymmetrically larger than the harms if it's wrong. (Of course, there's also an option like "preserve option value and gather further info," which is overall preferable, and I definitely like that you reached out to Hanania in that spirit. I'm not saying we should all have made up our minds solely based on that tweet; I'm mostly just saying that I find it pretty naive to immediately believe the guy just because he said he didn't mean it in a racist way.)
