
David Mathers

3472 karma · Joined Dec 2021


I should probably stop posting on this or reading the comments, for the sake of my mental health (I mean that literally; this is a major anxiety-disorder trigger for me). But I guess I sort of have to respond to a direct request for sources.

 

Scott's official position on this is agnosticism, rather than public endorsement*. (See here for official agnosticism: https://www.astralcodexten.com/p/book-review-the-cult-of-smart)

However, for years at SSC he put the dreaded neo-reactionaries on his blogroll, and they are definitely race/IQ guys. Meanwhile, he was telling friends privately at the time that "HBD" (i.e. "human biodiversity", which generally includes the idea that black people are genetically less intelligent) is "probably partially correct or at least very non-provably non-correct": https://twitter.com/ArsonAtDennys/status/1362153191102677001 . That technically still leaves some room for agnosticism, but it's pretty clear which way he's leaning. Meanwhile, he was also saying in private not to tell anyone he thinks this: 'NEVER TELL ANYONE I SAID THIS, not even in confidence'. (I feel like I figured out his view was something like this anyway, though? Maybe that's hindsight bias.) And he was also saying that publicly declaring himself a reactionary would be bad strategy for PR reasons ("becoming a reactionary would be both stupid and decrease my ability to spread things to non-reactionary readers"). (He also discusses how he writes about this stuff partly because it drives blog traffic. Not shameful in itself, but I think people in EA sometimes have an exaggerated sense of Scott's moral purity and integrity that this sits a little awkwardly with.) Overall, I think his private talk on this paints a picture of someone who is too cautious to be 100% sure that Black people have genetically lower IQs, but who wants other people to increase their credence in that to >50%, and who is thinking strategically (and arguably manipulatively) about how to get them to do so. (He does seem to more clearly reject the anti-democratic and the most anti-feminist parts of Neo-Reaction.)

I will say that MOST of what makes me angry about this is not the object-level race/IQ beliefs themselves, but the lack of repulsion towards the Reactionaries as a (fascist) political movement. I really feel like this is pretty damning (though obviously Scott has his good traits too). The Reactionaries are known for things like trolling about how maybe slavery was actually kind of good: https://www.unqualified-reservations.org/2009/07/why-carlyle-matters/ . Scott has never seemed sufficiently creeped out by this (or really, in my experience, at all creeped out by it). But he has been happy to get really, really angry about feminists who say mean things about nerds**, or, in one case I remember, stupid woke changes to competitive debate. (I couldn't find that one by googling, so you'll have to trust my memory about it; the changes were stupid, just not worth the emotional investment.) Personally, I think fascism should be more upsetting than woke debate! (Yes, that is melodramatic phrasing, but I am trying to shock people out of what I think is complacency on this topic.)

I think people in EA have a big blind spot about Scott's fairly egregious record on this stuff, because it's really embarrassing for the community to admit how bad it is, and because people (including me, often; I feel like I morally ought to give up ACX, but I still check it from time to time) like his writing for other reasons. And frankly, there is also a certain amount of (small-r) reactionary white male backlash in the community. Indeed, I used to enjoy some of Scott's attacks on wokeness myself; I have similar self-esteem issues around autistic masculinity to those I think many anti-woke rationalists have. My current strongly negative position is one I've come to slowly over many years of thinking about this stuff, though I was always uncomfortable with his attitude towards the Reactionaries.



*[Quoting Scott] 'Earlier this week, I objected when a journalist dishonestly spliced my words to imply I supported Charles Murray's The Bell Curve. Some people wrote me to complain that I handled this in a cowardly way - I showed that the specific thing the journalist quoted wasn’t a reference to The Bell Curve, but I never answered the broader question of what I thought of the book. They demanded I come out and give my opinion openly. Well, the most direct answer is that I've never read it. But that's kind of cowardly too - I've read papers and articles making what I assume is the same case. So what do I think of them?

This is far enough from my field that I would usually defer to expert consensus, but all the studies I can find which try to assess expert consensus seem crazy. A while ago, I freaked out upon finding a study that seemed to show most expert scientists in the field agreed with Murray's thesis in 1987 - about three times as many said the gap was due to a combination of genetics and environment as said it was just environment. Then I freaked out again when I found another study (here is the most recent version, from 2020) showing basically the same thing (about four times as many say it’s a combination of genetics and environment compared to just environment). I can't find any expert surveys giving the expected result that they all agree this is dumb and definitely 100% environment and we can move on (I'd be very relieved if anybody could find those, or if they could explain why the ones I found were fake studies or fake experts or a biased sample, or explain how I'm misreading them or that they otherwise shouldn't be trusted. If you have thoughts on this, please send me an email). I've vacillated back and forth on how to think about this question so many times, and right now my personal probability estimate is "I am still freaking out about this, go away go away go away". And I understand I have at least two potentially irresolvable biases on this question: one, I'm a white person in a country with a long history of promoting white supremacy; and two, if I lean in favor then everyone will hate me, and use it as a bludgeon against anyone I have ever associated with, and I will die alone in a ditch and maybe deserve it. So the best I can do is try to route around this issue when considering important questions. This is sometimes hard, but the basic principle is that I'm far less sure of any of it than I am sure that all human beings are morally equal and deserve to have a good life and get treated with respect regardless of academic achievement.

(Hopefully I’ve given people enough ammunition against me that they won’t have to use hallucinatory ammunition in the future. If you target me based on this, please remember that it’s entirely a me problem and other people tangentially linked to me are not at fault.)'

** Personally I hate *some* of the shit he complains about there too, although in other cases I probably agree with the angry feminist takes and might even sometimes defend the way they are expressed. I am autistic and have had great difficulty attracting romantic interest. (And obviously, as my name indicates, I am male. And straight, as it happens.) But Scott's two most extensive blog posts on this are remarkably bare of sympathetic discussion of why feminists might sometimes be a bit angry and insensitive on this issue.

I think this is too pessimistic: if the government will reliably prioritize the sort of factors you cite here to the exclusion of safety, why did one of Biden's cabinet ask for Christiano in one of the top positions at the US government's AI safety org? https://www.nist.gov/news-events/news/2024/04/us-commerce-secretary-gina-raimondo-announces-expansion-us-ai-safety

I also think that whether or not the government regulates private AI has little to do with whether it militarizes AI. It's not like there is one dial labelled "amount of government" that just gets turned up or down. The government can do very little to restrict what OpenAI/DeepMind/Anthropic do, but then also spend lots and lots of money on military AI projects. So worries about militarization are not really a reason not to want the government to restrict OpenAI/DeepMind/Anthropic.

Not to mention that, insofar as the basic science here is getting done for commercial reasons, any regulations which slow down the commercial development of frontier models will also slow down the progress of AI for military applications, whether or not that is what the US government intends, and regardless of whether those regulations are intended to reduce X-risk or to protect the jobs of voice actors in cartoons facing AI replacement.

I trust EV more than the Charity Commission about many things, but whether EV behaved badly over SBF is definitely not one of them. One party's judgment here is incredibly liable to distortion through self-interest and ego preservation, and it's not the Charity Commission's. (That's not a prediction that the Charity Commission will in fact harshly criticize EV; I wouldn't be surprised either way on that.)

'also on not "some moral view we've never thought of".'

Oh, actually, that's right. That does change things a bit. 

People don't reject this stuff, I suspect, because, frankly, there is a decently large minority of the community who think "black people have lower IQs for genetic reasons" is suppressed forbidden knowledge. Scott Alexander has done a lot, entirely deliberately in my view, to spread that view over the years (although he is probably not the only reason it spread), and Scott is generally highly respected within EA.

Now, unlike the people who spend all their time doing race/IQ stuff, I don't think more than a tiny, insignificant fraction of the people in the community who think this actually are Nazis/White Nationalists. White Nationalism/Nazism are (abhorrent) political views about what should be done, not just empirical doctrines about racial intelligence, even if the latter are also part of a Nazi/White Nationalist worldview. (Scott Alexander individually is obviously not a "Nazi", since he is Jewish, but I think he is rather more sympathetic (i.e. more than zero) to white nationalists than I personally consider morally acceptable, although I would not personally call him one, largely because I think he isn't a political authoritarian who wants to abolish democracy.) Rather, I think most of them have a view something like "it is unfortunate this stuff is true, because it helps out bad people, but you should never lie for political reasons".

Several things lie behind this:

-Lots of people in the community like the idea of improving humanity through genetic engineering. While that absolutely can be completely disconnected from racism, and is indeed a fairly mainstream position in analytic bioethics as far as I can tell, in practice it tends to make people more suspicious of condemning actual racists, because you end up with many of the same enemies as them, since most people who consider anti-racism a big part of their identity are horrified by anything eugenic. This makes them more sympathetic to complaints from actual, political racists that they are being treated unfairly.

-As I say, being pro genetic enhancement, or even "liberal eugenics"*, is not that far outside the mainstream in academic bioethics: you can publish it in leading journals, etc. EA has deep roots in analytic philosophy, and inherits its sense of what is reasonable.

-Many people in the rationalist community are, for various reasons, strongly polarized against "wokeness", which, again, makes them sympathetic to the claims of actual political racists that they are being smeared.

-Often, the arguments people encounter against the race/IQ stuff are transparently terrible. Normal liberals are indeed terrified of this stuff, but most lack the expertise to discuss it, so they just claim it has been totally debunked and then clam up. This makes it look like there must be a dark truth being suppressed, when really it is just that almost no one has expertise on this stuff, and in any case, because the causation of human traits is so complex, whenever some demographic group appears to score worse on some trait, you can always claim the difference could have genetic causes, and in practice that is very hard to disprove. But of course that is not itself proof that there IS a genetic cause of the differences. The result of all this can make it seem like you have to either endorse unproven race/IQ stuff or take the side of "bad arguers", something EAs and rationalists hate the thought of doing. See what Turkheimer said about this here: https://www.vox.com/the-big-idea/2017/6/15/15797120/race-black-white-iq-response-critics:

'There is not a single example of a group difference in any complex human behavioral trait that has been shown to be environmental or genetic, in any proportion, on the basis of scientific evidence. Ethically, in the absence of a valid scientific methodology, speculations about innate differences between the complex behavior of groups remain just that, inseparable from the legacy of unsupported views about race and behavior that are as old as human history. The scientific futility and dubious ethical status of the enterprise are two sides of the same coin.

To convince the reader that there is no scientifically valid or ethically defensible foundation for the project of assigning group differences in complex behavior to genetic and environmental causes, I have to move the discussion in an even more uncomfortable direction. Consider the assertion that Jews are more materialistic than non-Jews. (I am Jewish, I have used a version of this example before, and I am not accusing anyone involved in this discussion of anti-Semitism. My point is to interrogate the scientific difference between assertions about blacks and assertions about Jews.)

One could try to avoid the question by hoping that materialism isn’t a measurable trait like IQ, except that it is; or that materialism might not be heritable in individuals, except that it is nearly certain it would be if someone bothered to check; or perhaps that Jews aren’t really a race, although they certainly differ ancestrally from non-Jews; or that one wouldn’t actually find an average difference in materialism, but it seems perfectly plausible that one might. (In case anyone is interested, a biological theory of Jewish behavior, by the white nationalist psychologist Kevin MacDonald, actually exists [have removed link here because I don't want to give MacDonald web traffic - David].)

If you were persuaded by Murray and Harris’s conclusion that the black-white IQ gap is partially genetic, but uncomfortable with the idea that the same kind of thinking might apply to the personality traits of Jews, I have one question: Why? Couldn’t there just as easily be a science of whether Jews are genetically “tuned to” (Harris’s phrase) different levels of materialism than gentiles?

On the other hand, if you no longer believe this old anti-Semitic trope, is it because some scientific study has been conducted showing that it is false? And if the problem is simply that we haven’t run the studies, why shouldn’t we? Materialism is an important trait in individuals, and plausibly could be an important difference between groups. (Certainly the history of the Jewish people attests to the fact that it has been considered important in groups!) But the horrific recent history of false hypotheses about innate Jewish behavior helps us see how scientifically empty and morally bankrupt such ideas really are.' 


All this tends sadly to distract people from the fact that when white nationalists like Lynn talk about race/IQ stuff, they are trying to push a political agenda to strip non-whites of their rights, end anti-discrimination measures of any kind, and slash immigration, all on the basis of the fact that, basically, they just really don't like black people. In fact, given the actual history of Nazism, it is reasonable to suspect that at least some and probably a lot of these people would go further and advocate genocide against blacks or other non-whites if they thought they could get away with it. 




*See https://plato.stanford.edu/entries/eugenics/#ArguForLibeEuge

I find it easy to believe there was a heated argument but no threats, because it is easy for things to get exaggerated, and the line between telling someone you no longer trust them because of a disagreement and threatening them is unclear when you are a powerful person who might employ them. But I find Will's claim that the conversation wasn't even about whether Sam was trustworthy, or anything related to that, really quite hard to believe. It would be weird for someone to be mistaken or to exaggerate about that, and I feel like a lie is unlikely, simply because I don't see what anyone would gain from lying to TIME about this.

Nathan's comment here is one case where I really want to know what the people giving agree/disagree votes intended to express. Agreement/disagreement that the behaviour "doesn't sound like Will"? Agreement/disagreement that Naia would be unlikely to be lying? General approval/disapproval of the comment?

Yes, but not at great length. 

From my memory, which definitely could be faulty since I only listened once: 

He admits people did tell him Sam was untrustworthy. He says that his impression was something like "there was a big fight and I can't really tell what happened or who is right" (not a direct quote!). He stresses that many of the people who warned him about Sam continued to keep large amounts of money on FTX later, so they didn't expect the scale of fraud we actually saw either. (They all seem to have told TIME that originally as well.) He says Sam wrote a lot of reflections (10k words) on what had gone wrong at early Alameda and how to avoid similar mistakes again, and that he (Will) now understands Sam was actually omitting stuff that made him look bad, but that at the time Sam's desire to learn from his mistakes seemed convincing.

He denies threatening Tara, and says he spoke to Tara and she agreed that while their conversation got heated, he did not threaten her.

Will's expressed public view on that sort of double-or-nothing gamble is hard to actually figure out, but it is clearly not as robustly anti as common sense would require, though it is also clearly a lot LESS positive than SBF's view that you should obviously take it: https://conversationswithtyler.com/episodes/william-macaskill/

(I haven't quoted from the interview, because there is no single clear quote expressing Will's position; text search for "double" and you'll find the relevant stuff.)

Actually, I have a lot of sympathy with what you are saying here. I am ultimately somewhat inclined to endorse "in principle, the ends justify the means, just not in practice" over at least a fairly wide range of cases. I (probably) think in theory you should usually kill one innocent person to save five, even though in practice anything that looks like doing that is almost certainly a bad idea, outside artificial philosophical thought experiments and maybe some weird but not too implausible scenarios involving war or natural disaster. But at the same time, I do worry a bit about bad effects from utilitarianism because I worry about bad effects from anything. I don't worry too much, but that's because I think those effects are small, and anyway there will be good effects of utilitarianism too. But I don't think utilitarians should be able to react with outrage when people say plausible things about the consequences of utilitarianism. And I think people who worry about this more than I do on this forum are generally acting in good faith. And yeah, I agree utilitarians shouldn't (in any normal context) lie about their opinions. 
