(content warning: discussion of racially motivated violence and coercion)

I wanted to share that I don't think it's bad to think about the object-level question of whether there are group differences in intelligence rooted in genetic differences. This is an empirical claim, and it can be true or false.

My moral beliefs are pretty deeply rooted in egalitarianism. I think, as a matter of policy but also as a matter of moral character, it is good and important to treat the experience of strangers as equally valuable, regardless of their class or race. I do not think more intelligent people are more worthy of moral consideration than less intelligent people. I think it can be complicated at the extremes, especially when considering digital people, animals, etc., but that this has little bearing on public policy where existing humans are concerned.

I don't think genetic group differences in intelligence are likely to be that relevant, given that I have short AI timelines. If we assume longer timelines, I believe the policy areas where they would most likely matter are education and reproductive technology. Whether or not there are such differences between groups now, there could easily come to be large differences through the application of embryo selection or other intelligence-enhancing technologies. From an egalitarian moral framework, I suspect it would be important to subsidize this technology for disadvantaged groups or individuals so that they have the same options and opportunities as everyone else. Even if genes turn out not to be a major cause of inegalitarian outcomes today, they could certainly become one in the future if we don't exercise wisdom and thoughtfulness in how we wield these technologies. However, as I said, I don't expect this to be very significant in practice, given short AI timelines.

Most importantly, from my perspective, it's important to be able to think clearly about questions like this, and so I want to encourage people not to feel constrained to avoid such questions out of fear of social censure for merely thinking about them. For a reasonably well-researched (not necessarily correct) discussion of the object level, see this post:

[link deleted at the author's request; see also AnonymousCommentator's note about the racial IQ gap]

I think it's important context to keep in view that some of the worst human behaviors have involved the enslavement and subjugation of whole groups of people, or attempts to murder entire groups—racial groups, national groups, cultural groups, religious groups. The eugenics movement in the United States and elsewhere attempted to significantly curtail the reproductive freedom of many people through extremely coercive means in the not-so-distant past. Between 1907 and 1963, over 64,000 individuals were forcibly sterilized under eugenic legislation in the United States, and minority groups were especially targeted. Presently in China, tens of thousands of Uighurs are being sterilized, and while we don't have a great deal of information about it, I would predict that there is a major element of government coercion in these sterilizations.

Coercive policies like this are extremely wrong, and plainly so. I oppose and condemn them. I am aware that the advocates of these policies sometimes used genetic group differences in abilities as justification for their coercion. This does not cause me to think that I should avoid the whole subject of genetic group differences in ability. Making this subject taboo, and sanctioning anyone who speaks of it, seems like a sure way to prevent people from actually understanding the underlying problems disadvantaged groups or individuals face. This seems likely to inhibit rather than promote good policy-making. I think the best ways to resist reproductive and other forms of coercion go hand in hand with trying to understand the world, do good science, and have serious discussions about hard topics. I think strict taboos around discussing an extremely broad scientific subject hurt people's ability to understand things, especially when the fear of public punishment is enough to prevent people from thinking about a topic entirely.

Another reason people cite for not talking about genetically mediated group differences, even if they exist, is that bringing people's attention to this kind of inequality could make the disadvantaged feel terrible. I take this cost seriously, and think this is a good reason to be really careful about how we discuss this issue (the exact opposite of Bostrom's approach in the Extropians email), and a good reason to include content warnings so anyone can easily avoid this topic if they find it upsetting.

But I don't think forbidding discussion of this topic across the board is the right society-level response.

Imagine a society where knowledge of historical slavery is suppressed, because people worry it would make the descendants of enslaved people sad. I think such a society would be unethical, especially if the information suppression causes society to be unable to recognize and respond to ongoing harms caused by slavery's legacy.

Still, suppose we were in a world like that. In that world, we can imagine the information leaking out: a descendant of slaves finds out about slavery and its legacy, and is (of course) tremendously horrified and saddened to learn about all this.

If someone pointed at this to say, "Behold, this information caused harm, so we were right to suppress it," I would think they're making a serious moral mistake.

If the individual themselves didn't want to personally know about slavery, or about any of the graphic details, that's fully within their rights. This should be comparatively easy to achieve in online discussion, where content warnings, tags, and browser tools make it easier to control which topics you read about.

But society-wide suppression of the information, for the sake of protecting people's feelings even though those individuals didn't consent to being protected from the truth this way, is frankly disturbing and wrong. This is not the way to treat peers, colleagues, or friends. It isn't the way to treat people whom you view as full human beings; beyond being a terrible way to carry out scientific practice, it's infantilizing and paternalistic in the extreme.

Comments

Firstly, I will say that I'm personally not afraid to study and debate these topics, and have done so. My belief is that the data points to no evidence of significant genetic differences between races when it comes to matters such as intelligence, and I think one downside of being hush-hush about the subject is that people miss out on this conclusion, which is the one even a basic Wikipedia skim would get you to. (You're free to disagree; that's not the point of this comment.)

That being said, I think you have greatly understated the case for not debating the subject on this forum. Remember, this is a forum for doing the most good, not a debate club, and if shunting debate of certain subjects onto a different website does the most good, that's what we should do. This requires a cost/benefit analysis, and you are severely understating the costs here. 

Point 1 is that we have to acknowledge the obvious fact that when you make a group of people feel bad, some of them are going to leave your group. I do not think this is a moral failing on their part. We have a limited number of hours in the day; would you hang out in a place where people regularly discuss whether you are genetically inferior? And it doesn't just drive out minorities; it drives out other people who are uncomfortable with the discussion as well.

Driving out minorities is bad on its own, but it also has implications for cause areas. A homogeneous group is going to lack diverse viewpoints and miss things that would be obvious to people with different contexts and experiences. It also limits outreach to different countries: are we going to make inroads in India if we're constantly discussing the genetic makeup of Indians? And that's not even talking about the bad PR of being a super-white, super-male group, which costs us both credibility and funding.

Following on the PR point, I think people find it gauche to talk about the PR effects of discussions, since our opinions shouldn't be affected by public opinion. But if we are honestly discussing the costs of allowing these discussions, then PR undeniably is a cost, and a really bad one. People are already using this as an excuse to slam EA in general as racist on Twitter; if this becomes a major news story, the narrative will spread. EA is already associated with fraud thanks to SBF; do we really want to be associated with race science as well?

My last point is that while not everyone who believes in genetic group differences is far-right or a neo-Nazi, the converse is not true: pretty much every neo-Nazi believes in this stuff, and they use every opportunity they can to use it as an excuse to spread their ideology. A continuing discussion could very well encourage a flood of Nazis onto the site, which is not exactly good for the wellbeing of the forum.

Again, my point isn't that these discussions should be banned from the internet entirely. My point is merely that they shouldn't be discussed here.

I completely agree that group genetic differences should not be discussed here. It is a good thing that, as far as I can recall, I've never encountered a discussion of the topic on the EA Forum prior to this situation.

So we all agree: talking about this on the forum is a bad idea. The remaining question, then, is what attitude we should take towards Bostrom now that this email of his from the nineties has become the topic du jour.

Possibly the position you are trying to take is that the institutions of the community should distance themselves from him, because continuing to treat him as a central intellectual voice might offend and drive out minorities, and might drive away people who are very sensitive to the possibility that someone racist is accepted in the community.

I want to note that there are also huge negative consequences to the official community distancing itself from such an important figure over this. Notably, it will signal that the community is adopting an attitude that people who honestly try to figure out the truth on controversial topics, without being concerned about what is socially acceptable, should not be here. It will be saying that we care more about PR than about truth.

The sorts of people who care about arguments and will follow them wherever they lead are, and have been, very central to the EA community. They are unusual people who provide extremely important benefits, and the unique value of EA as an addition to the global portfolio of ideas has probably come from its being a place where those sorts of thinkers thought about how to do good.

I'd also note: we constantly talk about the PR effects of our decisions. The forum, at least, has become obsessed with them over the past few years.

Bostrom's email is a separate matter. My problem with Bostrom's email is not the opinions he holds on technical questions, but the lack of empathy and the astonishingly poor judgement in what he decided to include. For example, even if you agree with his two-paragraph tangent on eugenics, there was absolutely no need to include it in an apology letter. There were many, many ways he could have apologised without upsetting people or compromising his beliefs.

Imagine if I called someone's mother overweight in a vulgar manner. When they get upset, I compose a long apology email where I apologise for the language, but then note that their mother does have a BMI substantially above average, as do their sister, father, and wife. All those statements might be true, but that would not excuse the email!

I think talking about PR is entirely appropriate, given that EA is in the charity business and was just embroiled in a massive fraud scandal, and that bad PR directly translates into less money for EA causes. I think it's important that the public faces of EA be good at PR, and find it very concerning that Bostrom is so astonishingly bad at it. 

It is constantly claimed, but never actually proven, that bad PR (in the sense of being linked to things like SBF, racism, or an Emile Torres article) leads to fewer donations to EA causes.

I am not convinced this is actually true. Does bad PR actually make twenty-something people who want to do AI safety research less likely to get a grant for career development? Does it actually hurt MIRI's budget? Or the AI Safety Camp? Etc.

Does it actually make people decide not to support an organization that wants to hand out lots of anti-factory-farming pamphlets? Are AMF, GiveDirectly, and the deworming initiatives actually receiving less money because of these bad PR moments?

And if they are, how do we collectively know that?

While I agree, this grew out of the Bostrom email affair, which I found hard to avoid because EA or EA-adjacent people were saying things I disagreed with! Luckily we have a single thread where this sort of discussion can be isolated.

I absolutely agree with this view, and I see this as one of the better takes.

What follows is a tangent, but it feels like a relevant tangent. Like, I do not claim this is quite the same conversation as the above; it's slightly in a different direction, but it's not fully a non-sequitur.

Forgive the slightly-not-normal-for-this-venue language; this was originally a personal Facebook comment.

Here is a point that I don't think gets made often enough:

It doesn't matter whether other people are inferior, when it comes to talking about their fundamental dignity and the rights that a civilized society should grant them.

Like, often nazis or misogynists or whatever will try to start demonstrating that [some group] is objectively inferior on [some axis], and often the opposition will come right back with NUH-UH, [group] IS EVERY BIT AS CAPABLE—

I think there's a mistake there, and I think that mistake is *acting like that would matter,* even if true. Playing into the frame of the bigot, letting them set the terms of the debate, implicitly conceding that the question of two different groups' equality or inequality is the *crux* of the issue.

It isn't.

I happen to think that it's *false* that [race] or [gender] or whatever is inferior; my sense is that even if the bell curves for different groups peak in slightly different places and have their tails in slightly different places, they basically cover the same ground and overwhelmingly overlap anyway, so whatever.

But even if it were *demonstrably true* that [group] were inferior, that wouldn't change my sense of moral obligation toward its members, and it wouldn't change my beliefs about what kinds of treatment are fair or unfair.

I know for a fact that I have more raw intelligence than most humans! Even in nerd circles, I'm more-than-half-the-time in the upper quartile of whatever room I'm in, and guess what! Doesn't matter! Practically every human outstrips me in some domain or other anyway! I can't step to someone's unique expertise, nor can I compete with them along domains orthogonal to intelligence (e.g. physical prowess), and even if I were superior to someone along 10 out of 10 of the *most* important axes ...

... EVEN THEN, I do not think that gives me the right to dictate the terms of their existence, cut them off from opportunity, or take a larger share of the social pie.

The whole *point* of civilization is moving away from a state of base natural anarchy, where your value is tied to your capability. The whole point of building a safe, stable, cooperative society is making it so that you *don't* have to pull your whole weight every second of every day or else be abandoned to the wolves or enslaved by strongmen.

The thing we're trying to build here is a world where the absolutely inferior—

(To the extent that's even a category that exists; a lot depends on your point of view and what axes you consider relevant)

The thing we're trying to build here is a world where *even the absolutely inferior* get to have the maximum achievable amount of sovereignty, and agency, and happiness, and health, and get to participate in society to the greatest possible degree permitted by their personal limitations and the technology we have available (both literal technology and social/metaphorical tech).

IDGAF if you can "prove" some group's inferiority. It means nothing to me. It changes nothing. It was never the key hinge of the conversation for me. Superiority is not the foundation of my sense of my fellow humans' dignity.

(And that's setting *aside* the fact that even if you've proven a difference between groups at the statistical level, you've done very little to demonstrate the relevance of that statistical difference on individual members; bell curves are not their averages.)
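(To put rough numbers on that last point: here is a minimal sketch in Python, using an assumed 0.5 SD gap purely for illustration. The figure is hypothetical, not a claim about any real group.)

```python
# Minimal sketch: what a hypothetical 0.5 SD gap between two group means
# implies at the individual level. The 0.5 figure is illustrative only.
from statistics import NormalDist

a = NormalDist(mu=0.0, sigma=1.0)  # hypothetical group A
b = NormalDist(mu=0.5, sigma=1.0)  # hypothetical group B, shifted 0.5 SD up

above = 1 - a.cdf(b.mean)  # fraction of A scoring above B's mean: ~0.31
overlap = a.overlap(b)     # shared area under the two density curves: ~0.80

print(f"{above:.0%} of A exceeds B's mean; the distributions overlap {overlap:.0%}")
```

Even with a gap that large, roughly four fifths of the area under the two curves is shared, which is the sense in which a statistical difference licenses almost no inference about any given individual.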

I think it's good to push back on bigots when they are spreading straightforward falsehoods. I'm not saying "don't fire back with facts" in these conversations.

But the *fire* with which people fire back seems to me to be counterproductive and wrong, and it worries me. Acting outraged at the mere possibility that some group might be inferior to another, as if that would be morally relevant in any way whatsoever—

I kind of fear that those people are closer to the bigots than I might wish. That they're responding with such fervor because they *do* believe, on some gut level, that if the groups are different, then the moral standards must necessarily also be different. They don't want to conclude that the moral standards should be different, and so they object with *desperation* to any evidence that threatens to show actual differences between groups.

Potential competence differences between groups don't matter on a moral level. Or at least, let me-and-my-philosophy be an existence proof to you: they don't HAVE to matter.

You can build a society that doesn't give a fuck if people are fundamentally inferior, and that does its best to be fair and moral toward them anyway.

That's the society you *should* be trying to build. If for no other reason than the fact that that's going to be you one day, when you break a leg or have a stroke or just succumb to the vicissitudes of time. If for no other reason than the fact that that could be your kid, or the kid of someone you care about.

(There are other reasons, too, but that's the one that's hopefully at least a little bit persuasive even to selfish egotists.)

Competence is not the measure of worth. Fundamental equality is *not* the justification for fair and moral treatment.

Build your ethics on firmer ground, please.

I definitely agree that competence is not the measure of worth, but I am also worried that in this comment you are kind of shoving out of view a potentially pretty important question, which is the genuine moral and game-theoretic relevance of different minds (both human and artificial).

I wrote up my thoughts here in this other comment, so I will mostly quote: 

it is easy to come up with examples where within the Effective Altruism framework two people do not count equally. Indeed most QALY frameworks value young people more than older people, many discussions have been had about hypothetical utility monsters, and about how some people might have more moral patienthood due to being able to experience more happiness or more suffering, and of course the moral patienthood of artificial systems immediately makes it clear that different minds likely matter differently in moral calculus. 

Saying "all people count equally" is not a core belief of EA, and indeed I do not remember hearing it seriously argued for a single time in my almost 10 years in this community (which is not surprising, since it indeed doesn't really hold any water after even just a tiny bit of poking, and your only link for this assertion is a random article written by CEA, which doesn't argue for this claim at all and also just blindly asserts it). It is still the case that most EAs believe that the variance in the importance of different people's experience is relatively small, that variance almost certainly does not align with historical conceptions of racism, and that there are at least some decent game-theoretic arguments to ignore a good chunk of this variance, but this does not mean that "all people count equally" is a "core belief" which should clearly only be reserved for an extremely small number of values and claims. It might be a good enough approximation in almost all practical situations, but it is really not a deep philosophical assumption of any of the things that I am working on, and I am confident that if I were to bring it up at an EA meetup, someone would quite convincingly argue against it.

This might seem like a technicality, but in this context the statement is specifically made to claim that EA has a deep philosophical commitment to valuing all people equally, independently of the details about how their mind works (either because of genetics, or development environment, or education). This reassurance does not work. I (and my guess is also almost all extrapolations of the EA philosophy) value people approximately equally in impact estimates because it looks like the relative moral patienthood of different people, and the basic cognitive makeup of people, does not seem to differ much between different populations, not because I have a foundational philosophical commitment to impartiality. If it was the case that different human populations did differ on the relevant dimensions a lot, this would spell a real moral dilemma for the EA community, with no deep philosophical commitments to guard us from coming to uncomfortable conclusions (luckily, as far as I can tell, in this case almost all analyses from an EA perspective lead to the conclusion that it's probably reasonable to weigh people equally in impact estimates, which doesn't conflict with society's taboos, so this is not de-facto a problem).
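(A toy numerical aside on the QALY point above. This is a minimal sketch with made-up numbers; the flat life expectancy and the linear formula are simplifying assumptions of mine, not any real framework's methodology.)

```python
# Toy sketch: standard QALY arithmetic mechanically weights the young more,
# because averting a death "buys" the person's remaining expected life-years.
# The flat life expectancy of 80 is a made-up simplification.
LIFE_EXPECTANCY = 80

def qalys_from_averting_death(age: int, quality: float = 1.0) -> float:
    """QALYs gained by preventing a death at `age`, at constant quality."""
    return max(LIFE_EXPECTANCY - age, 0) * quality

print(qalys_from_averting_death(20))  # 60.0
print(qalys_from_averting_death(70))  # 10.0: same life saved, 6x fewer QALYs
```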

In another comment: 

In all of these situations, I think we can still say people "count" equally.

I don't think this goes through. Let's just talk about the hypothetical of humanity's evolutionary ancestors still being around.

Unless you assign equal moral weight to an ape and to a human, this means that you will almost certainly assign lower moral weight to humans or nearby species earlier in our evolutionary tree, primarily on the basis of genetic differences, since there isn't even any clean line to draw between humans and our evolutionary ancestors.

Similarly, I don't see how you can be confident that your moral concern in the present day is independent of the genetic variation in the population. That genetic variation is exactly the kind that, over time, made you care more about humans than about other animals, amplified by many rounds of selection; as such, it would be very surprising if there were absolutely no difference in moral patienthood among the present human population.

Again, I expect that variance to be quite small, since genetic variance in the human population is much smaller than the variance between different species, and also for that variance to really not align very well with classical racist tropes, but the nature of the variance is ultimately the same.

I think the conflation of capability with moral worth is indeed pretty bad in a bunch of different situations. But, like, I also think different minds probably genuinely have different moral weights. While I don't think the variance among human minds here rises to much relevance in daily decision-making, I do think the broader questions are quite important: the questions around engineering beings capable of achieving heights of much greater experience, or self-modifying in that direction, as well as the construction of artificial minds, where it's a huge open question what moral consideration we should extend to them. And something about your comment feels like it's making that conversation harder.

Like, the sentence: "Acting outraged at the mere possibility that some group might be inferior to another, as if that would be morally relevant in any way whatsoever—"

Like, I don't know, there are definitely dimensions of capacity (probably not intelligence, though honestly also not definitely not-intelligence) that play at least some role in the actual moral relevance of a person. It has to be so; otherwise I definitely no longer have a good answer to many moral questions around animal ethics and the ethics of artificial minds. And empirically, after thinking about this question a bunch, I think the variance among the human population here is de facto pretty small. But I do actually think it was worth checking and thinking about, and I also feel like if someone showed up skeptical of my position here, I wouldn't be particularly outraged or confused; it feels like a genuinely difficult question.

Yep, basically endorsed; this is like the next layer of nuance and consideration to be laid down; I suspect I was subconsciously thinking that one couldn't easily get the-audience-I-was-speaking-to across both inferential leaps at once?

There's also something about the difference between triaged and limited systems (which we are, in fact, in) and ultimate utopian ideals. I think that in the ultimate utopian ideal we do not give people less moral weight based on their capacity, but I agree that in the meantime scarce resources do indeed sometimes need dividing. 

IMO, part of the issue is that we live in the convenient world where differences do not matter so much as to make hard work irrelevant.

But I disagree with Duncan Sabien's general statement that arbitrarily large capability differentials do not matter morally.

More generally, if capability differentials mattered much more, through, say, genetic engineering, whole brain emulation, or AI, then I wouldn't support the thesis that all sentient beings should be equal.

So I heavily disagree with this quoted section:

Competence is not the measure of worth. Fundamental equality is not the justification for fair and moral treatment.

Build your ethics on firmer ground, please.

mild tangent, but ultimately not really a tangent -

The whole *point* of civilization is moving away from a state of base natural anarchy, where your value is tied to your capability

yeah, maybe; but anarchy.works. non-authoritarianism, as the word was originally meant, is about forming stable multiscale bonds of non-dominating microsolidarity. non-archy has worked very well before; in order to work well, there has to be a large cooperation bubble that prevents takeover by authority structures.

that isn't what you meant, of course - you meant destructive chaos, the meaning usually expected from the word. but I claim that it is worth understanding why the word anarchy has such strong detractors and supporters, and learning what the underlying principles of those ethics are.

Strongly agreed with the point actually being made by the word in this context, and with the entire comment to which I reply; I just wanted to comment on the word as used.
