Kaj_Sotala

Comments

Some thoughts on the EA Munich // Robin Hanson incident

Thanks. It looks to me like much of what's being described at these links is about the atmosphere among students at American universities, which then also starts affecting the professors there. That would explain my confusion, since a large fraction of my academic friends are European and thus largely unaffected by these developments.

there could be a number of explanations aside from cancel culture not being that bad in academia.

I do hear them complain about various other things though, and I also have friends privately complaining about cancel culture in non-academic contexts, so I'd generally expect this to come up if it were an issue. But I could still ask, of course.

"Disappointing Futures" Might Be As Important As Existential Risks

We also discussed some possible reasons why there might be a disappointing future in the sense of having a lot of suffering, in sections 4-5 of Superintelligence as a Cause or Cure for Risks of Astronomical Suffering. A few excerpts:

4.1 Are suffering outcomes likely?

Bostrom (2003a) argues that given a technologically mature civilization capable of space colonization on a massive scale, this civilization "would likely also have the ability to establish at least the minimally favorable conditions required for future lives to be worth living", and that it could thus be assumed that all of these lives would be worth living. Moreover, we can reasonably assume that outcomes which are optimized for everything that is valuable are more likely than outcomes optimized for things that are disvaluable. While people want the future to be valuable both for altruistic and self-oriented reasons, no one intrinsically wants things to go badly.

However, Bostrom has himself later argued that technological advancement combined with evolutionary forces could "lead to the gradual elimination of all forms of being worth caring about" (Bostrom 2004), admitting the possibility that there could be technologically advanced civilizations with very little of anything that we would consider valuable. The technological potential to create a civilization that had positive value does not automatically translate to that potential being used, so a very advanced civilization could still be one of no value or even negative value.

Examples of technology’s potential being unevenly applied can be found throughout history. Wealth remains unevenly distributed today, with an estimated 795 million people suffering from hunger even as one third of all produced food goes to waste (World Food Programme, 2017). Technological advancement has helped prevent many sources of suffering, but it has also created new ones, such as factory-farming practices under which large numbers of animals are maltreated in ways which maximize their production: in 2012, the number of animals slaughtered for food was estimated at 68 billion worldwide (Food and Agriculture Organization of the United Nations 2012). Industrialization has also contributed to anthropogenic climate change, which may lead to considerable global destruction. Earlier in history, advances in seafaring enabled the transatlantic slave trade, with close to 12 million Africans being sent in ships to live in slavery (Manning 1992).

Technological advancement does not automatically lead to positive results (Häggström 2016). Persson & Savulescu (2012) argue that human tendencies such as “the bias towards the near future, our numbness to the suffering of great numbers, and our weak sense of responsibility for our omissions and collective contributions”, which are a result of the environment humanity evolved in, are no longer sufficient for dealing with novel technological problems such as climate change and it becoming easier for small groups to cause widespread destruction. Supporting this case, Greene (2013) draws on research from moral psychology to argue that morality has evolved to enable mutual cooperation and collaboration within a select group (“us”), and to enable groups to fight off everyone else (“them”). Such an evolved morality is badly equipped to deal with collective action problems requiring global compromises, and also increases the risk of conflict and generally negative-sum dynamics as more different groups get in contact with each other.

As an opposing perspective, West (2017) argues that while people are often willing to engage in cruelty if this is the easiest way of achieving their desires, they are generally “not evil, just lazy”. Practices such as factory farming are widespread not because of some deep-seated desire to cause suffering, but rather because they are the most efficient way of producing meat and other animal source foods. If technologies such as growing meat from cell cultures became more efficient than factory farming, then the desire for efficiency could lead to the elimination of suffering. Similarly, industrialization has reduced the demand for slaves and forced labor as machine labor has become more effective. At the same time, West acknowledges that this is not a knockdown argument against the possibility of massive future suffering, and that the desire for efficiency could still lead to suffering outcomes such as simulated game worlds filled with sentient non-player characters (see section on cruelty-enabling technologies below). [...]

4.2 Suffering outcome: dystopian scenarios created by non-value-aligned incentives.

Bostrom (2004, 2014) discusses the possibility of technological development and evolutionary and competitive pressures leading to various scenarios where everything of value has been lost, and where the overall value of the world may even be negative. Considering the possibility of a world where most minds are brain uploads doing constant work, Bostrom (2014) points out that we cannot know for sure that happy minds are the most productive under all conditions: it could turn out that anxious or unhappy minds would be more productive. [...]

More generally, Alexander (2014) discusses examples such as tragedies of the commons, Malthusian traps, arms races, and races to the bottom as cases where people are forced to choose between sacrificing some of their values and getting outcompeted. Alexander also notes the existence of changes to the world that nearly everyone would agree to be net improvements - such as every country reducing its military by 50%, with the savings going to infrastructure - which nonetheless do not happen because nobody has the incentive to carry them out. As such, even if the prevention of various kinds of suffering outcomes would be in everyone’s interest, the world might nonetheless end up in them if the incentives are sufficiently badly aligned and new technologies enable their creation.

An additional reason why such dynamics might lead to various suffering outcomes is the so-called Anna Karenina principle (Diamond 1997, Zaneveld et al. 2017), named after the opening line of Tolstoy’s novel Anna Karenina: "all happy families are alike; each unhappy family is unhappy in its own way". The general form of the principle is that for a range of endeavors or processes, from animal domestication (Diamond 1997) to the stability of animal microbiomes (Zaneveld et al. 2017), there are many different factors that all need to go right, with even a single mismatch being liable to cause failure.

Within the domain of psychology, Baumeister et al. (2001) review a range of research areas to argue that “bad is stronger than good”: while sufficiently many good events can overcome the effects of bad experiences, bad experiences have a bigger effect on the mind than good ones do. The effect of positive changes to well-being also tends to decline faster than the impact of negative changes: on average, people’s well-being suffers and never fully recovers from events such as disability, widowhood, and divorce, whereas the improved well-being that results from events such as marriage or a job change dissipates almost completely given enough time (Lyubomirsky 2010).

To recap, various evolutionary and game-theoretical forces may push civilization in directions that are effectively random, random changes are likely to be bad for the things that humans value, and the effects of bad events are likely to linger disproportionately on the human psyche. Putting these considerations together suggests (though does not guarantee) that freewheeling development could eventually come to produce massive amounts of suffering.
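To put a rough number on the conjunctive structure behind the Anna Karenina principle (an illustrative back-of-the-envelope calculation of mine, not a figure from the paper): if a broadly good outcome requires each of k largely independent factors to go right, the probability that all of them do shrinks multiplicatively:

```latex
P(\text{everything goes right}) \;=\; \prod_{i=1}^{k} p_i
\qquad \text{e.g.} \qquad 0.9^{10} \approx 0.35
```

So even if each individual factor is 90% likely to go right on its own, ten such factors leave only about a one-in-three chance that nothing goes wrong.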
Some thoughts on the EA Munich // Robin Hanson incident
yet academia is now the top example of cancel culture

I'm a little surprised by this wording? Certainly cancel culture is starting to affect academia as well, but I don't think that e.g. most researchers think about the risk of getting cancelled when figuring out the wording for their papers, unless they are working on some exceptionally controversial topic?

I have lots of friends in academia and follow academic blogs etc., and basically don't hear any of them talking about cancel culture within that context. I did recently see a philosopher post a controversial paper and get backlash for it on Twitter, but then he seemed to basically shrug it off, since people complaining on Twitter didn't really affect him. This fits my general model that most of the cancel culture influence on academia comes from people outside academia trying to affect it, with varying success.

I don't doubt that there are individual pockets within academia that are more cancely, but the rest of academia seems to me mostly unaffected by them.

Some thoughts on the EA Munich // Robin Hanson incident

On the positive side, a recent attempt to bring cancel culture to EA was very resoundingly rejected, with 111 downvotes and strongly upvoted rebuttals.

Shifts in subjective well-being scales?

I don't know, but I get the impression that SWB questions are susceptible to framing effects in general: for example, Biswas-Diener & Diener (2001) found that when people in Calcutta were asked for their life satisfaction in general, and also for their satisfaction in 12 subdomains (material resources, friendship, morality, intelligence, food, romantic relationship, family, physical appearance, self, income, housing, and social life), they gave on average a slightly negative rating for the global satisfaction, while also giving positive ratings for all the subdomains. (This result was replicated at least by Cox 2011 in Nicaragua.)

Biswas-Diener & Diener 2001 (scale of 1-3):

The mean score for the three groups on global life satisfaction was 1.93 (on the negative side just under the neutral point of 2). [...] The mean ratings for all twelve ratings of domain satisfaction fell on the positive (satisfied) side, with morality being the highest (2.58) and the lowest being satisfaction with income (2.12).

Cox 2011 (scale of 1-7):

The sample level mean on global life satisfaction was 3.8 (SD = 1.7). Four is the mid-point of the scale and has been interpreted as a neutral score. Thus this sample had an overall mean just below neutral. [...] The specific domain satisfactions (housing, family, income, physical appearance, intelligence, friends, romantic relationships, morality, and food) have means ranging from 3.9 to 5.8, and a total mean of 4.9. Thus all nine specific domains are higher than global life satisfaction. For satisfaction with the broader domains (self, possessions, and social life) the means ranged from 4.4 to 5.2, with a mean of 4.8. Again, all broader domain satisfactions are higher than global life satisfaction. It is thought that global judgments of life satisfaction are more susceptible to positivity bias and that domain satisfaction might be more constrained by the concrete realities of an individual’s life (Diener et al. 2000)
A New X-Risk Factor: Brain-Computer Interfaces
In particular, Elon Musk claims that BCIs may allow us to integrate with AI such that AI will not need to outcompete us (Young, 2019). It is unclear at present by what exact mechanism a BCI would assist here, how it would help, whether it would actually decrease risk from AI, or if it is a valid claim at all. Such a ‘solution’ to AGI may also be entirely compatible with global totalitarianism, and may not be desirable. The mechanism by which integrating with AI would lessen AI risk is currently undiscussed; and at present, no serious academic work has been done on the topic.

We have a bit of discussion about this (predating Musk's proposal) in section 3.4 of Responses to Catastrophic AGI Risk; we're also skeptical, see e.g. this excerpt from our discussion:

De Garis [82] argues that a computer could have far more processing power than a human brain, making it pointless to merge computers and humans. The biological component of the resulting hybrid would be insignificant compared to the electronic component, creating a mind that was negligibly different from a 'pure' AGI. Kurzweil [168] makes the same argument, saying that although he supports intelligence enhancement by directly connecting brains and computers, this would only keep pace with AGIs for a couple of additional decades.
The truth of this claim seems to depend on exactly how human brains are augmented. In principle, it seems possible to create a prosthetic extension of a human brain that uses the same basic architecture as the original brain and gradually integrates with it [254]. A human extending their intelligence using such a method might remain roughly human-like and maintain their original values. However, it could also be possible to connect brains with computer programs that are very unlike human brains and which would substantially change the way the original brain worked. Even smaller differences could conceivably lead to the adoption of 'cyborg values' distinct from ordinary human values [290].
Bostrom [49] speculates that humans might outsource many of their skills to non-conscious external modules and would cease to experience anything as a result. The value-altering modules would provide substantial advantages to their users, to the point that they could outcompete uploaded minds who did not adopt the modules. [...]
Moravec [194] notes that the human mind has evolved to function in an environment which is drastically different from a purely digital environment and that the only way to remain competitive with AGIs would be to transform into something that was very different from a human.
Slate Star Codex, EA, and self-reflection

Let's look at some of your references. You say that Scott has endorsed eugenics; let's look up the exact phrasing (emphasis mine):

Even though I like both basic income guarantees and eugenics, I don’t think these are two things that go well together – making the income conditional upon sterilization is a little too close to coercion for my purposes. Still, probably better than what we have right now.

"I don't like this, though it would probably be better than the even worse situation that we have today" isn't exactly a strong endorsement. Note the bit about disliking coercion which should already suggest that Scott doesn't like "eugenics" in the traditional sense of involuntary sterilization, but rather non-coercive eugenics that emphasize genetic engineering and parental choice.

Simply calling this "eugenics" with no caveats is misleading; admittedly Scott himself sometimes forgets to make this clarification, so one would be excused for not knowing what he means... but not when linking to a comment where he explicitly notes that he doesn't want to have coercive forms of eugenics.

Next, you say that he has endorsed "Charles Murray, a prominent proponent of racial IQ differences". Looking up the exact phrasing again, Scott says:

The only public figure I can think of in the southeast quadrant with me is Charles Murray. Neither he nor I would dare reduce all class differences to heredity, and he in particular has some very sophisticated theories about class and culture. But he shares my skepticism that the 55 year old Kentucky trucker can be taught to code, and I don’t think he’s too sanguine about the trucker’s kids either. His solution is a basic income guarantee, and I guess that’s mine too. Not because I have great answers to all of the QZ article’s problems. But just because I don’t have any better ideas.[1][2]

What is "the southeast quadrant"? Looking at earlier in the post, it reads:

The cooperatives argue that everyone is working together to create a nice economy that enriches everybody who participates in it, but some people haven’t figured out exactly how to plug into the magic wealth-generating machine, and we should give them a helping hand (“here’s government-subsidized tuition to a school where you can learn to code!”) [...] The southeast corner is people who think that we’re all in this together, but that helping the poor is really hard.

So Scott endorses Murray's claims that... cognitive differences may have a hereditary component, that it might be hard to teach the average trucker and his kids to become programmers, and that we should probably implement a basic income so that these people will still have a reasonable income and won't need to starve. Also, the position that he ascribes to both himself and Murray is the attitude that we should do our best to help everyone, and that it's basically good for everyone to try to cooperate. Not exactly ringing endorsements of white supremacy.

Also, one of the footnotes to "I don't have any better ideas" is "obviously invent genetic engineering and create a post-scarcity society, but until then we have to deal with this stuff", which again ties back to the point that, to the extent Scott endorses eugenics at all, it's liberal eugenics.

Finally, you note that Scott identifies with the "hereditarian left". Let's look at the article that Scott links to when he says that this term "seems like as close to a useful self-identifier as I’m going to get". It contains an explicit discussion of how the possibility of cognitive differences between groups does not in any sense imply that one of the groups would have more value, morally or otherwise, than the other:

I also think it’s important to stress that contemporary behavioral genetic research is — with very, very few exceptions — almost entirely focused on explaining individual differences within ancestrally homogeneous groups. Race has a lot to do with how behavioral genetic research is perceived, but almost nothing to do with what behavioral geneticists are actually studying. There are good methodological reasons for this. Twin studies are, of course, using twins, who almost always self-identify as the same race. And genome-wide association studies (GWASs) typically use a very large group of people who all have the same self-identified race (usually White), and then rigorously control for genetic ancestry differences even within that already homogeneous group. I challenge anyone to read the methods section of a contemporary GWAS and persist in thinking that this line of research is really about race differences.
Despite all this, racists keep looking for “evidence” to support racism. The embrace of genetic research by racists reached its apotheosis, of course, in Nazism and the eugenics movements in the U.S. After all, eugenics means “good genes”– ascribing value and merit to genes themselves. Daniel Kevles’ In the Name of Eugenics: Genetics and the Uses of Human Heredity should be required reading for anyone interested in both the history of genetic science and in how this research has been (mis)used in the United States. This history makes clear that the eugenic idea of conceptualizing heredity in terms of inherent superiority was woven into the fabric of early genetic science (Galton and Pearson were not, by any stretch, egalitarians) and an idea that was deliberately propagated. The idea that genetic influence on intelligence should be interpreted to mean that some people are inherently superior to other people is itself a racist invention.
Fast-forward to 2017, and nearly everyone, even people who think that they are radical egalitarians who reject racism and white supremacy and eugenic ideology in all its forms, has internalized this “genes == inherent superiority” equation so completely that it’s nearly impossible to have any conversation about genetic research that’s not tainted by it. On both the right and the left, people assume that if you say, “Gene sequence differences between people statistically account for variation in abstract reasoning ability,” what you really mean is “Some people are inherently superior to other people.” Where people disagree, mostly, is in whether they think this conclusion is totally fine or absolutely repugnant. (For the record, and this should go without saying, but unfortunately needs to be said — I fall in the latter camp.) But very few people try to peel apart those ideas. (A recent exception is this series of blog posts by Fredrik deBoer.) The space between, which says, “Gene sequence differences between people statistically account for variation in abstract reasoning ability” but also says “This observation has no bearing on how we evaluate the inherent value or worth of people” is astoundingly small. [...]
But must genetic research necessarily be interpreted in terms of superiority and inferiority? Absolutely not. To get a flavor of other possible interpretations, we can just look at how people describe genetic research on nearly any other human trait.
Take, for example, weight. Here, is a New York Times article that quotes one researcher as saying, “It is more likely that people inherit a collection of genes, each of which predisposes them to a small weight gain in the right environment.” Substitute “slight increase in intelligence” for “small weight gain” in that sentence and – voila! You have the mainstream scientific consensus on genetic influences on IQ. But no one is writing furious think pieces in reaction to scientists working to understand genetic differences in obesity. According to the New York Times, the implications of this line of genetic research is … people shouldn’t blame themselves for a lack of self-control if they are heavy, and a “one size fits all” approach to weight loss won’t be effective.
As another example, think about depression. The headline of one New York Times article is “Hunting the Genetic Signs of Postpartum Depression with an iPhone App.” Pause for a moment and consider how differently the article would be received if the headline were “Hunting the Genetic Signs of Intelligence with an iPhone App.” Yet the research they describe – a genome-wide association study – is exactly the same methodology used in recent genetic research on intelligence and educational attainment. The science isn’t any different, but there’s no talk of identifying superior or inferior mothers. Rather, the research is justified as addressing the needs of “mothers and medical providers clamoring for answers about postpartum depression.” [...]
1. The idea that some people are inferior to other people is abhorrent.
2. The mainstream scientific consensus is that genetic differences between people (within ancestrally homogeneous populations) do predict individual differences in traits and outcomes (e.g., abstract reasoning, conscientiousness, academic achievement, job performance) that are highly valued in our post-industrial, capitalist society.
3. Acknowledging the evidence for #2 is perfectly compatible with belief #1.
4. The belief that one can and should assign merit and superiority on the basis of people’s genes grew out of racist and classist ideologies that were already sorting people as inferior and superior.
5. Instead of accepting the eugenic interpretation of what genetic research means, and then pushing back against the research itself, people – especially people with egalitarian and progressive values — should stop implicitly assuming that genes==inherent merit.

So you are arguing that Scott is a white supremacist, and your pieces of evidence include:

  • A comment where Scott says that he doesn't want to have coercive eugenics
  • An essay where Scott talks about the best ways of helping people who might be cognitively disadvantaged, and suggests that we should give them a basic income guarantee
  • A post where Scott links to and endorses an article which focuses on arguing that considering some people as inferior to others is abhorrent, and that we should reject the racist idea of genetics research having any bearing on how inherently valuable people are
Slate Star Codex, EA, and self-reflection

Also the sleight of hand where the author implies that Scott is a white supremacist, and supports this not by referencing anything that Scott said, but by referencing things that unrelated people hanging out on the SSC subreddit have said and which Scott has never shown any signs of endorsing. If Scott himself had said anything that could be interpreted as an endorsement of white supremacy, surely it would have been mentioned in this post, so its absence is telling.

As Tom Chivers recently noted:

It’s part of the SSC ethos that “if you don’t understand how someone could possibly believe something as stupid as they do”, then you should consider the possibility that that’s because you don’t understand, rather than because they’re stupid; the “principle of charity”. So that means taking ideas seriously — even ones you’re uncomfortable with. And the blog and its associated subreddit have rules of debate: that you’re not allowed to shout things down, or tell people they’re racist; you have to politely and honestly argue the facts of the issue at hand. It means that the sites are homes for lively debate, rare on the modern internet, between people who actually disagree; Left and Right, Republican and Democrat, pro-life and pro-choice, gender-critical feminists and trans-activist, MRA and feminist.
And that makes them vulnerable. Because if you’re someone who wants to do a hatchet job on them, you can easily go through the comments and find something that someone somewhere will find appalling. That’s partly a product of the disagreement and partly a function of how the internet works: there’s an old law of the internet, the “1% rule”, which says that the large majority of online comments will come from a hyperactive 1% of the community. That was true when I used to work at Telegraph Blogs — you’d get tens of thousands of readers, but you’d see the same 100 or so names cropping up every time in the comment sections.
(Those names were often things like Aelfric225 or TheUnBrainWashed, and they were usually really unhappy about immigration.)
That’s why the rationalists are paranoid. They know that if someone from a mainstream media organisation wanted to, they could go through those comments, cherry-pick an unrepresentative few, and paint the entire community as racist and/or sexist, even though surveys of the rationalist community and SSC readership found they were much more left-wing and liberal on almost every issue than the median American or Briton. And they also knew that there were people on the internet who unambiguously want to destroy them because they think they’re white supremacists.
Slate Star Codex, EA, and self-reflection
Not to be rude, but what context do you recommend would help for interpreting the statement, "I like both basic income guarantees and eugenics," or describing requiring poor people to be sterilized to receive basic income as "probably better than what we have right now?"

The part from the middle of that excerpt that you left out certainly seems like relevant context: "Even though I like both basic income guarantees and eugenics, I don’t think these are two things that go well together – making the income conditional upon sterilization is a little too close to coercion for my purposes. Still, probably better than what we have right now." (see my top-level comment)

Reducing long-term risks from malevolent actors
Malevolent humans with access to advanced technology—such as whole brain emulation or other forms of transformative AI—could cause serious existential risks and suffering risks.

Possibly relevant: Machiavellians Approve of Mind Upload Technology Directly and Through Utilitarianism (Laakasuo et al. 2020), though it mainly tested whether Machiavellians express moral condemnation of mind uploading, rather than their interest in it directly.

In this preregistered study, we have two novel findings: 1) Utilitarian moral preferences are strongly and psychopathy is mildly associated with positive approval of MindUpload; and 2) that Machiavellianism – essentially a calculative self-interest related trait – is strongly associated with positive approval of Mind Upload, even after controlling for Utilitarianism and the previously known predictor of Sexual Disgust (and conservatism). In our preregistration, we had assumed that the effect would be dependent on Psychopathy (another Dark Triad personality dimension), rather than Machiavellianism. However, given how closely related Machiavellianism and Psychopathy are, we argue that the results match our hypothesis closely. Our results suggest that the perceived risk of callous and selfish individuals preferring Mind Upload should be taken seriously, as previously speculated by Sotala & Yampolskiy (2015)