David Mathers
3377 karma · Joined Dec 2021
Comments (338)

I trust EV more than the Charity Commission about many things, but whether EV behaved badly over SBF is definitely not one of them. One party's judgment here is incredibly liable to distortion through self-interest and ego preservation, and it's not the Charity Commission's. (That's not a prediction that the Charity Commission will in fact harshly criticize EV. I wouldn't be surprised either way on that.) 

'also on not "some moral view we've never thought of".'

Oh, actually, that's right. That does change things a bit. 

People don't reject this stuff, I suspect, because there is, frankly, a decently large minority of the community who think that "black people have lower IQs for genetic reasons" is suppressed forbidden knowledge. Scott Alexander has done a lot, entirely deliberately in my view, to spread that view over the years (although this is probably not the only reason), and Scott is generally highly respected within EA. 

Now, unlike the people who spend all their time doing race/IQ stuff, I don't think more than a tiny, insignificant fraction of the people in the community who think this are actually Nazis/White Nationalists. White Nationalism/Nazism are (abhorrent) political views about what should be done, not just empirical doctrines about racial intelligence, even if the latter are also part of a Nazi/White Nationalist worldview. (Scott Alexander individually is obviously not a "Nazi", since he is Jewish, but I think he is rather more sympathetic (i.e. more than zero) to white nationalists than I personally consider morally acceptable, although I would not personally call him one, largely because I think he isn't a political authoritarian who wants to abolish democracy.) Rather, I think most of them have a view something like "it is unfortunate this stuff is true, because it helps out bad people, but you should never lie for political reasons".  

Several things lie behind this:

-Lots of people in the community like the idea of improving humanity through genetic engineering. While that absolutely can be completely disconnected from racism, and is indeed a fairly mainstream position in analytic bioethics as far as I can tell, in practice it tends to make people more suspicious of condemning actual racists, because you end up with many of the same enemies as them: most people who consider anti-racism a big part of their identity are horrified by anything eugenic. This makes them more sympathetic to complaints from actual, political racists that they are being treated unfairly.

-As I say, being pro genetic enhancement, or even "liberal eugenics"*, is not that far outside the mainstream in academic bioethics: you can publish it in leading journals etc. EA has deep roots in analytic philosophy, and inherits its sense of what is reasonable.

-Many people in the rationalist community are, for various reasons, strongly polarized against "wokeness", which again makes them sympathetic to the claims of actual political racists that they are being smeared.

-Often, the arguments people encounter against the race/IQ stuff are transparently terrible. Normal liberals are indeed terrified of this stuff, but most lack the expertise to discuss it, so they just claim it has been totally debunked and then clam up. This makes it look like there must be a dark truth being suppressed, when really almost no one has expertise on this stuff, and in any case, because the causation of human traits is so complex, whenever some demographic group appears to score worse on some trait, you can always claim the difference could have genetic causes, and in practice this is very hard to disprove. But of course that is not itself proof that there IS a genetic cause of the differences. The result of all this can make it seem like you have to either endorse unproven race/IQ stuff or take the side of "bad arguers", something EAs and rationalists hate the thought of doing. See what Turkheimer said about this here: https://www.vox.com/the-big-idea/2017/6/15/15797120/race-black-white-iq-response-critics 

'There is not a single example of a group difference in any complex human behavioral trait that has been shown to be environmental or genetic, in any proportion, on the basis of scientific evidence. Ethically, in the absence of a valid scientific methodology, speculations about innate differences between the complex behavior of groups remain just that, inseparable from the legacy of unsupported views about race and behavior that are as old as human history. The scientific futility and dubious ethical status of the enterprise are two sides of the same coin.

To convince the reader that there is no scientifically valid or ethically defensible foundation for the project of assigning group differences in complex behavior to genetic and environmental causes, I have to move the discussion in an even more uncomfortable direction. Consider the assertion that Jews are more materialistic than non-Jews. (I am Jewish, I have used a version of this example before, and I am not accusing anyone involved in this discussion of anti-Semitism. My point is to interrogate the scientific difference between assertions about blacks and assertions about Jews.)

One could try to avoid the question by hoping that materialism isn’t a measurable trait like IQ, except that it is; or that materialism might not be heritable in individuals, except that it is nearly certain it would be if someone bothered to check; or perhaps that Jews aren’t really a race, although they certainly differ ancestrally from non-Jews; or that one wouldn’t actually find an average difference in materialism, but it seems perfectly plausible that one might. (In case anyone is interested, a biological theory of Jewish behavior, by the white nationalist psychologist Kevin MacDonald, actually exists [I have removed the link here because I don't want to give MacDonald web traffic - David].)

If you were persuaded by Murray and Harris’s conclusion that the black-white IQ gap is partially genetic, but uncomfortable with the idea that the same kind of thinking might apply to the personality traits of Jews, I have one question: Why? Couldn’t there just as easily be a science of whether Jews are genetically “tuned to” (Harris’s phrase) different levels of materialism than gentiles?

On the other hand, if you no longer believe this old anti-Semitic trope, is it because some scientific study has been conducted showing that it is false? And if the problem is simply that we haven’t run the studies, why shouldn’t we? Materialism is an important trait in individuals, and plausibly could be an important difference between groups. (Certainly the history of the Jewish people attests to the fact that it has been considered important in groups!) But the horrific recent history of false hypotheses about innate Jewish behavior helps us see how scientifically empty and morally bankrupt such ideas really are.' 


All this sadly tends to distract people from the fact that when white nationalists like Lynn talk about race/IQ stuff, they are trying to push a political agenda to strip non-whites of their rights, end anti-discrimination measures of any kind, and slash immigration, all, basically, because they just really don't like black people. In fact, given the actual history of Nazism, it is reasonable to suspect that at least some, and probably a lot, of these people would go further and advocate genocide against blacks or other non-whites if they thought they could get away with it. 




*See https://plato.stanford.edu/entries/eugenics/#ArguForLibeEuge

I find it easy to believe there was a heated argument but no threats, because it is easy for things to get exaggerated, and the line between telling someone you no longer trust them because of a disagreement and threatening them is unclear when you are a powerful person who might employ them. But I find Will's claim that the conversation wasn't even about whether Sam was trustworthy, or anything related to that, really quite hard to believe. It would be weird for someone to be mistaken or to exaggerate about that, and a lie seems unlikely, simply because I don't see what anyone would gain from lying to TIME about this.

Nathan's comment here is one case where I really want to know what the people giving agree/disagree votes intended to express. Agreement/disagreement that the behaviour "doesn't sound like Will"? Agreement/disagreement that Naia would be unlikely to be lying? General approval/disapproval of the comment? 

Yes, but not at great length. 

From my memory, which definitely could be faulty since I only listened once: 

He admits people did tell him Sam was untrustworthy. He says that his impression was something like "there was a big fight and I can't really tell what happened or who is right" (not a direct quote!). He stresses that many of the people who warned him about Sam continued to keep large amounts of money on FTX later, so they didn't expect the scale of fraud we actually saw either. (They all seem to have told TIME that originally as well.) He says Sam wrote a lot of reflections (10k words) on what had gone wrong at early Alameda and how to avoid similar mistakes again, and that he (Will) now understands that Sam was actually omitting stuff that made him look bad, but that at the time Sam's desire to learn from his mistakes seemed convincing. 

He denies threatening Tara, and says he spoke to Tara and she agreed that while their conversation got heated, he did not threaten her.

Will's expressed public view on that sort of double-or-nothing gamble is hard to actually figure out, but it is clearly not as robustly anti as common sense would require, though it is also clearly a lot LESS positive than SBF's view that you should obviously take it: https://conversationswithtyler.com/episodes/william-macaskill/

(I haven't quoted from the interview because there is no single clear quote expressing Will's position; text-search for "double" and you'll find the relevant material.) 

Actually, I have a lot of sympathy with what you are saying here. I am ultimately somewhat inclined to endorse "in principle, the ends justify the means, just not in practice" over at least a fairly wide range of cases. I (probably) think in theory you should usually kill one innocent person to save five, even though in practice anything that looks like doing that is almost certainly a bad idea, outside artificial philosophical thought experiments and maybe some weird but not too implausible scenarios involving war or natural disaster. But at the same time, I do worry a bit about bad effects from utilitarianism because I worry about bad effects from anything. I don't worry too much, but that's because I think those effects are small, and anyway there will be good effects of utilitarianism too. But I don't think utilitarians should be able to react with outrage when people say plausible things about the consequences of utilitarianism. And I think people who worry about this more than I do on this forum are generally acting in good faith. And yeah, I agree utilitarians shouldn't (in any normal context) lie about their opinions. 

I don't necessarily disagree with most of that, but I think it is ultimately still plausible that people who endorse a theory which obviously says that, in principle, the ends can justify bad means are somewhat (though plausibly not very much) more likely to actually do bad things with an ends-justify-the-means vibe. Note that this is an empirical claim about what sort of behaviour actually co-occurs with endorsing utilitarianism or consequentialism in actual human beings. So it's not refuted by "the correct understanding of consequentialism mostly bars things with an ends-justify-the-means vibe in practice" or "actually, any sane view allows that sometimes it's permissible to do very harmful things to prevent a many-orders-of-magnitude greater harm". And by "somewhat plausible" I mean just that: I wouldn't be THAT shocked to discover this was false; my credence is maybe 95%. (1-in-20 things happen all the time.) And the claim is correlational, not causal (maybe endorsement of utilitarianism and ends-justify-the-means behaviour are both caused partly by a prior intuitive attraction to ends-justify-the-means reasoning, and adopting utilitarianism doesn't actually make any difference, although I doubt that is entirely true). 

The 3% figure for utilitarianism strikes me as a bit misleading on its own, given what else Will said. (I'm not accusing Will of intent to mislead here; he said something very precise that I, as a philosopher, entirely followed, it was just a bit complicated for lay people.) Firstly, he said a lot of the probability space was taken up by error theory, the view that there is no true morality. So to get at what Will himself endorses, whether or not there is a true morality, you basically have to subtract his (unknown but large) credence in error theory from 1, and then renormalize his other credences so that they add up to 1 on their own. Secondly, there's the difference between utilitarianism, where only the consequences of your actions matter morally, and only consequences for (total or average) pain and pleasure and/or fulfilled preferences count as consequences, and consequentialism, where only the consequences of your actions matter morally, but it's left open what those consequences are. My memory of the podcast (could be wrong, only listened once!) is that Will said that, conditional on error theory being false, his credence in consequentialism is about 0.5. This really matters in the current context, because many non-utilitarian forms of consequentialism can also promote maximizing in a dangerous way; they just disagree with utilitarianism about exactly what you are maximizing. So really, Will's credence in a view that, interpreted naively, recommends dangerous maximizing is functionally (i.e. ignoring error theory in practice) more like 0.5 than 0.03, as I understood him in the podcast. Of course, he isn't actually recommending dangerous maximizing regardless of his credence in consequentialism (at least in most contexts*), because he warns against naivety.  
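To make the renormalization step concrete, here is a toy calculation. The 0.5 credence in error theory below is an invented illustrative number, not anything Will stated; only the 0.03 figure comes from the podcast.

```latex
% Toy renormalization with an invented figure for the error-theory credence.
% Let E = "error theory is true" and U = "utilitarianism is true" (U entails not-E).
% Suppose P(U) = 0.03 (Will's stated figure) and, purely for illustration, P(E) = 0.5.
\[
P(U \mid \neg E) \;=\; \frac{P(U)}{P(\neg E)} \;=\; \frac{0.03}{1 - 0.5} \;=\; 0.06
\]
% The larger P(E) is, the more the conditional credence exceeds the headline 3%.
```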

(Actually, my personal suspicion is that 'consequentialism' on its own is basically vacuous, because any view gives a moral preferability ordering over choices in situations, and really all that the numbers in consequentialism do is help us represent such orderings in a quick and easily manipulable manner, but that's a separate debate.)
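(A minimal formal gloss of that representational point, using the standard utility-representation idea rather than anything from the podcast:

```latex
% A moral view induces a preferability ordering \succeq over options; the "value"
% numbers u of consequentialism merely represent that ordering:
\[
a \succeq b \iff u(a) \ge u(b)
\]
% Under mild conditions (e.g. a countable set of options, or a suitable
% continuity assumption for larger ones), any such ordering can be represented
% by some numerical function u, which is why the numbers alone may not
% distinguish consequentialism from rival views.
```
)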

*Presumably sometimes dangerous, unethical-looking maximizing actually is best from a consequentialist point of view, because the dangers of not doing so, or the upside of doing so if you are right about the consequences of your options, outweigh the risk that you are wrong about the consequences of different options, even when you take into account higher-order evidence that people who think intuitively bad actions maximize utility are nearly always wrong. 
