Here is an argument:

1. Eugenics and white supremacy are bad ideas, and endorsing or even engaging with these views could lead to very bad outcomes.

2. Associating with and endorsing ideas that will lead to very bad outcomes is not a good thing to do for a community dedicated to making the world better.

3. Scott Alexander, in his blog Slate Star Codex, has openly supported eugenics and white supremacy.

C. EA should do everything in its power to distance itself from these communities, ideas, and individuals, and should seriously reflect on its relationship with them.


TL;DR: I find it very strange that people who want to make the world better continue to engage with white supremacists and associated ideas. I’ve worked in EA for a long time and have known for a long time that racism exists in the community, but the events of the last week surrounding the removal of Slate Star Codex by its author set a new low. For me, this is a good opportunity to reflect on my own complicity in these ideas, and it seems like it ought to be an inflection point where this community decides that it actually wants to try to make the world better, and to stand against ideas that probably make it worse.

EA should be wholeheartedly against white supremacy. We aren’t going to make the world any better if we aren’t. That should be obvious, and the fact that this idea isn’t obvious to many people in EA is frightening. Right now, I am frightened that engaging with EA has made me complicit in racism, because the community seems very comfortable hosting people who are openly supportive of hate and prejudice. The response to the Slate Star Codex situation has been very disappointing and disheartening. A good rule of thumb might be that when InfoWars takes your side, you probably ought to do some self-reflection on whether the path your community is on is the path to a better world.

***

Earlier this week, Slate Star Codex, a blog popular in the rationalist and EA communities, was taken offline by its author, who writes under the pen name Scott Alexander, after he claimed that the New York Times was going to “dox” him “for clicks.” In his view, the NY Times, which he believed was going to publish an article including his real name, would be doxxing him by doing so, as he is a psychiatrist whose patients might learn about his personal life and views. He requested that his followers contact the editor of the article (a woman of color).

In response, the Slate Star Codex community basically proceeded to harass and threaten to dox both the editor and journalist writing the article. Multiple individuals threatened to release their addresses, or explicitly threatened them with violence. Several prominent EAs, such as Peter Singer, Liv Boeree, Benjamin Todd, Anna Salamon, Baxter Bullock, Arden Koehler, and Anders Sandberg, have signed a petition calling on the NY Times not to “dox” Scott Alexander, or have spoken out in support of him.

I’ve worked at EA organizations for several years, have run an EA organization, and over time have become deeply concerned about how complicit in racism, sexism, and violence the community is. When I first heard of the concept, I was excited by a large community of people dedicated to improving the world and to having the most impact they could on pressing problems. But the experience of watching the EA community endorse someone who openly provided platforms for white supremacists and publicly endorsed eugenics is incredibly disturbing.

A strange and disappointing tension has existed in EA for a long time. The community is predominantly white and male. Anonymous submitters on the EA Forum have supported ideas like racial IQ differences, which are not only scientifically debunked but deeply racist. (I know someone is considering arguing with me about whether or not they are debunked. I have nothing to say to you — other people have demonstrated this point more clearly elsewhere.)

Slate Star Codex appears to have been an even greater hotbed of these ideas than is typical for EA. Scott Alexander has openly endorsed eugenics and Charles Murray, a prominent proponent of racial IQ differences (Alexander identifies with the “hereditarian left”). He was supported by alt-right provocateurs such as Emil Kirkegaard and Steve Sailer. On the associated subreddits, talk of eugenics, white supremacy, and related topics was a regular feature. These weren’t just “open discussions of ideas,” as they were often framed. These forums included explicit endorsements of literal white supremacist slogans like the 14 words (some of which have been conveniently deleted in the last few days).

The United States is in the middle of unprecedented protests against police brutality towards people of color. The effective altruist community, which claims to be dedicated to making the world better, has mostly ignored this. Few EA organizations have taken even the minimal step of speaking out in support. And when the NY Times decided to write an article including the real name of Scott Alexander, who, again, seems to both endorse and protect white supremacists and is certainly a eugenicist, the community attacked a woman of color on his word.

The fact that people are not disturbed by this turn of events is downright frightening. Notable EAs such as Rob Wiblin of 80,000 Hours and Kelsey Piper of Vox are speaking out in support of Alexander or openly celebrating him. A value in the EA community has always been open engagement with controversial ideas. That is probably a good thing in many cases. That doesn’t mean EA should be giving platforms to people whose vision for the world is downright horrible. The EA community has demonstrated through this event that our current collective vision for the world is not a good one. It’s an oppressive, unkind, and possibly violent one.

To be fully fair, Slate Star Codex is probably more associated with the rationalist community than with EA. But that should make EA very wary of rationalism and of associating with it. As a whole, this seems like a time for a lot of self-reflection on the part of the EA community. How do we ensure that our community is accessible to more people? How do we distance ourselves from white supremacy? Is EA really building something good, or reinforcing some very serious harms? This seems like a very good opportunity for the EA community to genuinely reflect and grow. I personally will be taking time to do that reflection, both on my own actions and on whether I can continue to support a community that fails to do so.

***

Comments

Let's look at some of your references. You say that Scott has endorsed eugenics; let's look up the exact phrasing (emphasis mine):

Even though I like both basic income guarantees and eugenics, I don’t think these are two things that go well together – making the income conditional upon sterilization is a little too close to coercion for my purposes. Still, probably better than what we have right now.

"I don't like this, though it would probably be better than the even worse situation that we have today" isn't exactly a strong endorsement. Note the bit about disliking coercion which should already suggest that Scott doesn't like "eugenics" in the traditional sense of involuntary sterilization, but rather non-coercive eugenics that emphasize genetic engineering and parental choice.

Simply calling this "eugenics" with no caveats is misleading; admittedly Scott himself sometimes forgets to make this clarification, so one would be excused for not knowing what he means... but not when linking to a comment where he explicitly notes that he doesn't want to have coercive forms of eugenics.

Next, you say that he has endorsed "Charles Murray, a prominent proponent of racial IQ differences". Looking up the exact phrasing again, Scott says:

The only public figure I can think of in the southeast quadrant with me is Charles Murray. Neither he nor I would dare reduce all class differences to heredity, and he in particular has some very sophisticated theories about class and culture. But he shares my skepticism that the 55 year old Kentucky trucker can be taught to code, and I don’t think he’s too sanguine about the trucker’s kids either. His solution is a basic income guarantee, and I guess that’s mine too. Not because I have great answers to all of the QZ article’s problems. But just because I don’t have any better ideas.[1][2]

What is "the southeast quadrant"? Looking at earlier in the post, it reads:

The cooperatives argue that everyone is working together to create a nice economy that enriches everybody who participates in it, but some people haven’t figured out exactly how to plug into the magic wealth-generating machine, and we should give them a helping hand (“here’s government-subsidized tuition to a school where you can learn to code!”) [...] The southeast corner is people who think that we’re all in this together, but that helping the poor is really hard.

So Scott endorses Murray's claims that... cognitive differences may have a hereditary component, that it might be hard to teach the average trucker and his kids to become programmers, and that we should probably implement a basic income so that these people will still have a reasonable income and don't need to starve. Also, the position he ascribes to both himself and Murray is that we should do our best to help everyone, and that it's basically good for everyone to try to cooperate together. Not exactly ringing endorsements of white supremacy.

Also, one of the footnotes to "I don't have any better ideas" is "obviously invent genetic engineering and create a post-scarcity society, but until then we have to deal with this stuff", which again ties into the point that, to the extent Scott endorses eugenics, he endorses liberal eugenics.

Finally, you note that Scott identifies with the "hereditarian left". Let's look at the article that Scott links to when he says that this term "seems like as close to a useful self-identifier as I’m going to get". It contains an explicit discussion of how the possibility of cognitive differences between groups does not in any sense imply that one of the groups would have more value, morally or otherwise, than the other:

I also think it’s important to stress that contemporary behavioral genetic research is — with very, very few exceptions — almost entirely focused on explaining individual differences within ancestrally homogeneous groups. Race has a lot to do with how behavioral genetic research is perceived, but almost nothing to do with what behavioral geneticists are actually studying. There are good methodological reasons for this. Twin studies are, of course, using twins, who almost always self-identify as the same race. And genome-wide association studies (GWASs) typically use a very large group of people who all have the same self-identified race (usually White), and then rigorously control for genetic ancestry differences even within that already homogeneous group. I challenge anyone to read the methods section of a contemporary GWAS and persist in thinking that this line of research is really about race differences.
Despite all this, racists keep looking for “evidence” to support racism. The embrace of genetic research by racists reached its apotheosis, of course, in Nazism and the eugenics movements in the U.S. After all, eugenics means “good genes”– ascribing value and merit to genes themselves. Daniel Kevles’ In the Name of Eugenics: Genetics and the Uses of Human Heredity should be required reading for anyone interested in both the history of genetic science and in how this research has been (mis)used in the United States. This history makes clear that the eugenic idea of conceptualizing heredity in terms of inherent superiority was woven into the fabric of early genetic science (Galton and Pearson were not, by any stretch, egalitarians) and an idea that was deliberately propagated. The idea that genetic influence on intelligence should be interpreted to mean that some people are inherently superior to other people is itself a racist invention.
Fast-forward to 2017, and nearly everyone, even people who think that they are radical egalitarians who reject racism and white supremacy and eugenic ideology in all its forms, has internalized this “genes == inherent superiority” equation so completely that it’s nearly impossible to have any conversation about genetic research that’s not tainted by it. On both the right and the left, people assume that if you say, “Gene sequence differences between people statistically account for variation in abstract reasoning ability,” what you really mean is “Some people are inherently superior to other people.” Where people disagree, mostly, is in whether they think this conclusion is totally fine or absolutely repugnant. (For the record, and this should go without saying, but unfortunately needs to be said — I fall in the latter camp.) But very few people try to peel apart those ideas. (A recent exception is this series of blog posts by Fredrik deBoer.) The space between, which says, “Gene sequence differences between people statistically account for variation in abstract reasoning ability” but also says “This observation has no bearing on how we evaluate the inherent value or worth of people” is astoundingly small. [...]
But must genetic research necessarily be interpreted in terms of superiority and inferiority? Absolutely not. To get a flavor of other possible interpretations, we can just look at how people describe genetic research on nearly any other human trait.
Take, for example, weight. Here, is a New York Times article that quotes one researcher as saying, “It is more likely that people inherit a collection of genes, each of which predisposes them to a small weight gain in the right environment.” Substitute “slight increase in intelligence” for “small weight gain” in that sentence and – voila! You have the mainstream scientific consensus on genetic influences on IQ. But no one is writing furious think pieces in reaction to scientists working to understand genetic differences in obesity. According to the New York Times, the implications of this line of genetic research is … people shouldn’t blame themselves for a lack of self-control if they are heavy, and a “one size fits all” approach to weight loss won’t be effective.
As another example, think about depression. The headline of one New York Times article is “Hunting the Genetic Signs of Postpartum Depression with an iPhone App.” Pause for a moment and consider how differently the article would be received if the headline were “Hunting the Genetic Signs of Intelligence with an iPhone App.” Yet the research they describe – a genome-wide association study – is exactly the same methodology used in recent genetic research on intelligence and educational attainment. The science isn’t any different, but there’s no talk of identifying superior or inferior mothers. Rather, the research is justified as addressing the needs of “mothers and medical providers clamoring for answers about postpartum depression.” [...]
1. The idea that some people are inferior to other people is abhorrent.
2. The mainstream scientific consensus is that genetic differences between people (within ancestrally homogeneous populations) do predict individual differences in traits and outcomes (e.g., abstract reasoning, conscientiousness, academic achievement, job performance) that are highly valued in our post-industrial, capitalist society.
3. Acknowledging the evidence for #2 is perfectly compatible with belief #1.
4. The belief that one can and should assign merit and superiority on the basis of people’s genes grew out of racist and classist ideologies that were already sorting people as inferior and superior.
5. Instead of accepting the eugenic interpretation of what genetic research means, and then pushing back against the research itself, people – especially people with egalitarian and progressive values — should stop implicitly assuming that genes==inherent merit.

So you are arguing that Scott is a white supremacist, and your pieces of evidence include:

  • A comment where Scott says that he doesn't want to have coercive eugenics
  • An essay where Scott talks about the best ways of helping people who might be cognitively disadvantaged, and suggests that we should give them a basic income guarantee
  • A post where Scott links to and endorses an article which focuses on arguing that considering some people inferior to others is abhorrent, and that we should reject the racist idea that genetic research has any bearing on how inherently valuable people are
misc

I see discussion of the "eugenics" claim. I don't see any discussion of the "openly endorses white supremacy" claim. What's the evidence for that one?

misc

A week later no one seems to have anything. It's disappointing to see such a serious accusation used with no apparent backup.

Much of this argument could be short-circuited by pulling apart what Scott means by "eugenics": it's clear from the context (missing from the OP's post) that he's referring to liberal eugenics, which argues that parents should have the right to some degree of genetic choice over their offspring (and which has almost nothing in common with the coercive "eugenics" to which the OP refers).

Liberal eugenics is already widespread, in a sense. Take embryo selection, where parents choose which embryo to bring to term depending on its genetic qualities. We've had chorionic villus sampling to check a fetus for Down syndrome for decades; it's commonplace.

Just dropping the word "eugenics" again and again with no clarification or context is very misleading.

While I think it's important to understand what Scott means when he says "eugenics", I think:

a. I'm not certain clarifying that you mean "liberal eugenics" will actually pacify the critics, depending on why they think eugenics is wrong,

b. if there are really two kinds of thing called "eugenics", and one of them has a long history of being practiced coercively by horrible, racist people to further their horrible, racist views, while the other one is just fine, then I think Scott is reckless in using the word here. I'd never heard of "liberal eugenics" before reading this post. I don't think it's unreasonable of me to hear "eugenics" and think "oh, you mean that racist, coercive thing".

I don't think Scott is racist or a white supremacist but based on stuff like this I don't get very surprised when I find people who do.

My response to (b): the word is probably beyond rehabilitation now, but I also think that people ought to be able to have discussions about bioethics without having to clarify their terms every ten seconds. I actually think it is unreasonable to skim someone’s post on something, see a word that looks objectionable, and cast aspersions on their whole worldview as a result.

Reminds me of when I saw a recipe which called for palm sugar. The comments were full of people who were outraged at the inclusion of such an exploitative, unsustainable ingredient. Of course, they were actually thinking of palm oil (palm sugar production is largely sustainable) but had just pattern-matched ‘palm’ as ‘that bad food thing’.

[anonymous]

Very disappointed to see downvotes without comments on such an important topic. I had barely heard of SSC before this recent controversy, but my cursory initial fact-check seems to support the facts of this post. I would like to hear from the downvoters what, if anything, is untrue, or where they think this argument goes wrong.

(As a meta-level point, everyone, downvoting someone for asking for clarification on why you're downvoting someone is not a good look.)

Hi Michelle. I'm sorry you're getting downvotes for this comment. There are several reasons I strong-downvoted this post, but for the sake of "brevity" I'll focus on one: I think that the OP's presentation of the current SSC/NYT controversy – and especially of the community's response to that controversy – is profoundly biased and misleading.


The NYT plans to use Scott Alexander's real name in an article about him, against his express wishes. They have routinely granted anonymous or pseudonymous status to other people in the past, including the subjects of articles, but refused this in Alexander's case. Alexander gives several reasons why this will be very damaging for him, but they plan to do it anyway.

I think that pretty clearly fits the definition of "doxing", and even if it doesn't it's still clearly bad. The post is scathing towards these concerns, scare-quoting "doxing" wherever it can and giving no indication that it thinks the Times's actions are in any way problematic.

In his takedown post, Scott made it very clear that people should be polite and civil when complaining about this:

There is no comments section for this post. The appropriate comments section is the feedback page of the New York Times. You may also want to email the New York Times technology editor Pui-Wing Tam at pui-wing.tam@nytimes.com, contact her on Twitter at @puiwingtam, or phone the New York Times at 844-NYTNEWS.

(please be polite – I don’t know if Ms. Tam was personally involved in this decision, and whoever is stuck answering feedback forms definitely wasn’t. Remember that you are representing me and the SSC community, and I will be very sad if you are a jerk to anybody. Please just explain the situation and ask them to stop doxxing random bloggers for clicks. If you are some sort of important tech person who the New York Times technology section might want to maintain good relations with, mention that.)

The response has overwhelmingly followed these instructions. People have cancelled their subscriptions, written letters, organised a petition, and generally complained to the people responsible. These are all totally appropriate things to do when you are upset about something! The petition is polite and conciliatory; so are most of the letters I've seen. Some of the public figures I've seen respond on Twitter have used strong wording ("disgraceful", "shame on you") but nothing that seems in any way out of place in a public discourse on a controversial decision.

The OP's characterisation of this? "Attack[ing] a woman of color on [Alexander's] word". Their evidence? Five tweets from random Twitter users I've never heard of, none of whom have more than a tiny number of followers. They provide no evidence of anyone prominent in EA (a high-karma Forum user, say, or a well-known public figure) doing anything that looks like harassment or ad hominem attacks on Ms Tam.

I hope it's obvious why this is bad practice: if the threshold for condemning the conduct of a group is "a few random people did something bad in support of the same position", you will never have to change your mind on anything. Somehow, I doubt the OP had much sympathy for people who were more interested in condemning the riots in Minneapolis than supporting the peaceful protesters; yet here they use a closely analogous tactic. If they want to persuade me the EA community has acted badly, they should cite bad conduct from the EA community; they do not.

The implicit claim that one shouldn't publicly criticise Pui-Wing Tam because she is a woman of colour is also profoundly problematic. Pui-Wing Tam is the technology editor of the NYT, the most powerful newspaper in the world. She is a powerful and influential person, and a public figure; more importantly, she is the powerful and influential public figure directly responsible for the thing all these people are mad about. Complaining to her about it, on Twitter and elsewhere, is entirely appropriate. Obviously personal harassment is unacceptable; if you give me a link to that kind of behaviour, I will condemn it, wherever it comes from. But implying that you can't publicly complain about the conduct of a powerful person if that person is a member of a favoured group is incredibly dangerous.


That's my position on how the OP has presented the current controversy. I think the way they have misrepresented those who disagree with them on this is sufficient by itself for a strong downvote. I also disagree with their characterisation of Scott Alexander and the SSC project, but as I said, I don't want this comment to be any longer than it already is. :-)

I think people are quite reasonably deciding that this post isn't worth taking the time to engage with. I'll just make three points even though I could make more:

"A good rule of thumb might be that when InfoWars takes your side, you probably ought to do some self-reflection on whether the path your community is on is the path to a better world." - Reversed Stupidity is Not Intelligence

"In response, the Slate Star Codex community basically proceeded to harass and threaten to dox both the editor and journalist writing the article. Multiple individuals threatened to release their addresses, or explicitly threatened them with violence." - The author is completely ignoring the fact that Scott Alexander specifically told people to be nice, not to take it out on them and didn't name the journalist. This seems to suggest that the author isn't even trying to be fair.

"I have nothing to say to you — other people have demonstrated this point more clearly elsewhere" - I'm not going to claim that such differences exist, but if the author isn't open to dialog on one claim, it's reasonable to infer that they mightn't be open to dialog on other claims even if they are completely unrelated.

Quite simply, this is a low-quality post, and "I'm going to write a low-quality post on topic X and you have to engage with me because topic X is important regardless of the quality" just gives a free pass to low-quality content. But doesn't it spur discussion? I've actually found that low-quality posts most often don't even provide the claimed benefit. They don't change people's minds, and they tend to lead to low-quality discussion.

There's also the sleight of hand where the author implies that Scott is a white supremacist, supporting this not by referencing anything that Scott said, but by referencing things that unrelated people hanging out on the SSC subreddit have said and which Scott has never shown any sign of endorsing. If Scott himself had said anything that could be interpreted as an endorsement of white supremacy, surely it would have been mentioned in this post, so its absence is telling.

As Tom Chivers recently noted:

It’s part of the SSC ethos that “if you don’t understand how someone could possibly believe something as stupid as they do”, then you should consider the possibility that that’s because you don’t understand, rather than because they’re stupid; the “principle of charity”. So that means taking ideas seriously — even ones you’re uncomfortable with. And the blog and its associated subreddit have rules of debate: that you’re not allowed to shout things down, or tell people they’re racist; you have to politely and honestly argue the facts of the issue at hand. It means that the sites are homes for lively debate, rare on the modern internet, between people who actually disagree; Left and Right, Republican and Democrat, pro-life and pro-choice, gender-critical feminists and trans-activist, MRA and feminist.
And that makes them vulnerable. Because if you’re someone who wants to do a hatchet job on them, you can easily go through the comments and find something that someone somewhere will find appalling. That’s partly a product of the disagreement and partly a function of how the internet works: there’s an old law of the internet, the “1% rule”, which says that the large majority of online comments will come from a hyperactive 1% of the community. That was true when I used to work at Telegraph Blogs — you’d get tens of thousands of readers, but you’d see the same 100 or so names cropping up every time in the comment sections.
(Those names were often things like Aelfric225 or TheUnBrainWashed, and they were usually really unhappy about immigration.)
That’s why the rationalists are paranoid. They know that if someone from a mainstream media organisation wanted to, they could go through those comments, cherry-pick an unrepresentative few, and paint the entire community as racist and/or sexist, even though surveys of the rationalist community and SSC readership found they were much more left-wing and liberal on almost every issue than the median American or Briton. And they also knew that there were people on the internet who unambiguously want to destroy them because they think they’re white supremacists.

The downvotes are probably because, indeed, the claims only make sense if you look at the level of something like "has Scott ever said anything that could be construed as X". I think a complete engagement with SSC doesn't support the argument, and it's specifically the fact that SSC is willing to address issues in full, without flinching away from topics that might make a person "guilty by association", that makes it a compelling blog.

Inda

I had written a good answer here, but it got deleted because I accidentally tapped a link. Comments should save drafts... The TL;DR of it is:

  • Censorship serves the elite and has historically been used to oppress and not empower.
  • It does not matter that people are evil [OUTGROUP HERE]. I have personally known people who openly said they were terrorists-if-opportunity-allows, nazis (literal Hitler supporters), thieves, etc. NONE OF THEM did anything out of the ordinary. Their incentives made them act just like others. See this book for a treatise on how mere capitalism mitigated apartheid racism.
  • Even if censorship worked, it is inherently wrong itself. It is a form of manipulation and oppression. I don’t say its benefits could not trump its costs, but there definitely are costs which are often neglected. Our society generally does not care about people’s intellectual integrity and dignity. That doesn’t mean those don’t matter.

I’m not advocating for censoring anyone. I’m interested in complicity with racism in the EA community.

I actually think it's true that the OP hasn't advocated for censoring anyone. They haven't said that SA or SSC should be suppressed, and if they think it's a good thing that SA has willingly chosen to take the blog down, well, I'd be lying if I said there weren't internet contributors I think we'd be better off without, even if I would strongly oppose attempts to silence them.

It's important to be able to say things are bad without saying they should be censored: that's basically the core of free-speech liberalism. "I don't think this should be censored, but I think it's bad, and I think it's worrying you don't think it's bad" is on its face a reasonable position, and it's important that it's one people can say.

I downvoted the post for several reasons, but I don't think pro-censorship is one of them. I might be wrong about this. But the horns effect is real and powerful, and we should all be wary of it.

I have heavily updated towards you being a bad-faith actor. If you seriously believe your argument is not significantly pro-censorship, I suggest studying the history of censorship in cases where it clashes with your political views. Then compare those historical cases with what you advocate. Political censorship always believes itself to be something else. As the theocracy I live in says on my textbooks, “Freedom is not to do what anyone wants. Freedom is doing what the divine leader says.” Or as famous fiction has it, “war is peace.”

Anonymous submitters on the EA Forum have supported ideas like racial IQ differences.

I found many responses to that survey odious for various reasons and share your concerns in that regard. It makes me uneasy to think that friends/fellow movement members may have said some of those things.

However, the post you linked features a survey that was reposted in quite a few different places. I wouldn't necessarily consider people who filled it out to be "submitters to the EA Forum." (For example, some of them seem to detest the EA movement in general, such that I hope they don't spend much time here for their own sake.) That said, it's impossible to tell for sure.

If the New York Times were to run a similar survey, I'd guess that many respondents would express similar views. But I don't think that would say much, if anything, about the community of people who regularly read the Times. I expect that people in the EA community overwhelmingly support racial equality and abhor white supremacy.

(Additional context: 75% of EA Survey respondents are on the political left or center-left; roughly 3% are right or center-right. That seems to make the community more politically left-leaning than the Yale student body, though the comparison is inexact.)
