Researcher at the Center on Long-Term Risk.


EA considerations regarding increasing political polarization

Great post, thanks for writing this!

Aside from the interventions you and Tobias list, promoting (participation in) forecasting tournaments might be another way to reduce excessive polarization.

Mellers, Tetlock, and Arkes (2018) found that "[...] participants who actively engaged in predicting US domestic events were less polarized in their policy preferences than were non-forecasters. Self-reported political attitudes were more moderate among those who forecasted than those who did not. We also found evidence that forecasters attributed more moderate political attitudes to the opposing side."

However, people who are willing to participate in such a tournament for many months are presumably quite unrepresentative of the general population. Generally, my hunch is that it would be very difficult to convince many people to participate in such tournaments, especially if this requires active participation for a considerable amount of time. Still, promoting the institution of forecasting tournaments would have several other benefits.

(To be clear, I'm not sure that this intervention is particularly promising, I'm mostly brainstorming.)

Reducing long-term risks from malevolent actors

Thank you, great points.

It seems you think that one of the essential things is developing and using manipulation-proof measures of malevolence. If you were very confident we couldn't do this, how much of an issue would that be?

I wouldn't say it's "essential"—influencing genetic enhancement would still be feasible—though it would certainly be a big problem.

Regarding objective measures, there will be 'Minority Report' style objections to actually using them in advance, even if they have high predictive power.

Yes, that would be an issue.

The area where I see this sort of stuff working best is in large organisations, such as civil services, where the organisations have control over who gets promoted. I'm less optimistic this could work for the most important cases, political elections, where there is not a system that can enforce the use of such measures.

That seems true though it seems at least conceivable that voters will demand such measures in the future. (As an aside, you mention large organisations but it seems such measures could also be valuable when used in smaller (non-profit) organizations?)

But it's not clear to me how much of an innovation malevolence tests are over the normal feedback processes used in large organisations.

Yeah, true. I guess it's also a matter of how much (negative) weight you put on malevolent traits, how much of an effort you make to detect them, and how attentive you are to potential signs of malevolence—most people seem to overestimate their ability to detect (strategic) malevolence (at least I did so before reality taught me a lesson).

It might be worth adding that the reason Myers-Briggs-style personality tests are, so I hear, more popular in large organisations than the (more predictive) "Big 5" personality test is that Myers-Briggs has no ostensibly negative dimensions.

Interesting, that seems plausible! I've always been somewhat bewildered by its popularity.

If this is the case, which seems likely, I find it hard to imagine that e.g. Google will insist that staff take a test they know will assess them on their malevolence!

True. I guess measures of malevolence would work best as part of the hiring process (i.e., before one has formed close relationships).

As a test for the plausibility of introducing and using malevolence tests, notice that we could already test for psychopathy but we don't. That suggests there are strong barriers to overcome.

I agree that there are probably substantial barriers to be overcome. On the other hand, it seems that many companies are using "integrity tests" which go in a similar direction. According to Sackett and Harris (1984), at least 5,000 companies used "honesty tests" in 1984. Companies were also often using polygraph examinations—in 1985, for example, about 1.7 million such tests were administered to (prospective) employees (Dalton & Metzger, 1993, p. 149)—until they became illegal in 1988. And this despite the fact that polygraph tests and integrity tests (as well as psychopathy tests) can be gamed rather easily.

I could thus imagine that at least some companies and organizations would start using manipulation-proof measures of malevolence (which is somewhat similar to the inverse of integrity) if it was common knowledge that such tests actually had high predictive validity and could not be gamed.

Reducing long-term risks from malevolent actors

Thanks, that's a good example.

my impression is that corporations have rewarded malevolence less over time.

Yeah, I think that's probably true.

Just to push back a little bit, the pessimistic take would be that corporate executives have simply become better at signalling and public relations. Maybe also partly because the downsides of having bad PR are worse today compared to, say, the 1920s—back then, people were poorer and consumers didn't have the luxury to boycott companies whose bosses said something egregious; workers often didn't have the option to look for another job if they hated their boss, et cetera. Generally, it seems plausible to me that "humans seem to have evolved to emphasize signaling more in good times than in bad" (Hanson, 2009).

I wonder if one could find more credible signals of things like "caring for your employees", ideally in statistical form. Money invested in worker safety might be one such metric. The salary discrepancy between employees and corporate executives might be another (it seems to have gotten much worse since at least the 1970s), though there are obviously many confounders here.

The decline in child labor might be another example of how corporations have rewarded malevolence less over time. In the 19th century, when child labor was common, some amount of malevolence (or at least indifference) was arguably beneficial if you wanted to run a profitable company. Companies run by people who refused to employ children for ethical reasons presumably went bankrupt more often given that they could not compete with companies that used such cheap labor. (On the other hand, it's not super clear what an altruistic company owner should have done. Many children also needed jobs in order to be able to buy various necessities—I don't know.)

Maybe this is simply an example of a more general pattern: Periods of history marked by poverty, scarcity, instability, conflict, and inadequate norms & laws will tend to reward or even require more malicious behavior, and the least ruthless will tend to be outcompeted (compare again Hanson's "This is the Dream Time", especially point 4).

Reducing long-term risks from malevolent actors

Thanks, these are valid concerns.

  1. It seems that any research on manipulation-proof measures for detecting malevolence would help the development of tools that would be useful for a totalitarian state.

My guess is that not all research on manipulation-proof measures of malevolence would pose such dangers but it’s certainly a risk to be aware of, I agree.

  1. I'm sceptical of further research on malevolence being helpful in stopping these people from reaching positions of power. At first glance, I don't think a really well-developed literature on malevolence would have changed which leaders came to power in the 20th century.

In itself, a better scientific understanding of malevolence would not have helped, agreed. However, more reliable and objective ways to detect malevolence might have helped iff there had also existed relevant norms to use such measures and to place at least some weight on them.

I think it really matters more who is in charge. I doubt bio-ethicists saying dark triad traits are bad will have much of an effect.

Bioethicists sometimes influence policy though I generally agree with your sentiment. This is also why we have emphasized the value of acquiring career capital in fields like bioinformatics.

In terms of further GWAS studies, I suspect by the time this becomes feasible more GWAS on desirable personality traits will have been undertaken.

I agree that this is plausible—though also far from certain. I’d also like to note that (very rudimentary) forms of embryo selection are already feasible, so the issue might be a bit time-sensitive (especially if you take into account that it might take decades to acquire the necessary expertise and career capital to influence the relevant decision makers).

Reducing long-term risks from malevolent actors

Thank you! I agree that the distinction between affective and cognitive empathy is relevant, and that low affective empathy (especially combined with high cognitive empathy) seems particularly concerning. I should have mentioned this, at least in the footnote you quote.

And I remember being told during my psych undergrad that "psychopaths" have low levels of affective empathy but roughly average levels of cognitive empathy, while people on the autism spectrum have low levels of cognitive empathy but roughly average levels of affective empathy. (I haven't fact-checked this, though, or at least not for years.)

That sounds right. According to my cursory reading of the literature, psychopathy and all other Dark Tetrad traits are characterized by low affective empathy. While all Dark Tetrad traits except for narcissism also seem to correlate with low cognitive empathy, the correlation with diminished affective empathy seems substantially more pronounced (Pajevic et al., 2017, Table 1; Wai & Tiliopoulos, 2012, Table 1).[1] As you write, people on the autism spectrum basically show the opposite pattern: normal affective empathy, lower cognitive empathy (Rogers et al., 2006; Rueda et al., 2015).

We focused on the Dark Tetrad traits because they overall seem to better capture the personality characteristics we find most worrisome. Low affective empathy seems a bit too broad of a category as there are several other psychiatric disorders which don’t seem to pose any substantial dangers to others but which apparently involve lower affective empathy: schizophrenia (Bonfils et al., 2016), schizotypal personality disorder (Henry et al., 2007, Table 2), and ADHD (Groen et al., 2018, Table 2).[2]

Of course, the simplicity of a unidimensional construct has its advantages. My tentative conclusion is that the D-factor (Moshagen et al., 2018) captures the most dangerous personalities a bit better than low affective empathy—though this probably depends on the precise operationalizations of these constructs. In any case, more research on (diminished) affective empathy definitely seems valuable as well.

  1. Though Jonason and Krause (2013) found that narcissism actually correlates with lower cognitive empathy and shows no correlation with affective empathy. ↩︎

  2. This list is not necessarily exhaustive. ↩︎

Reducing long-term risks from malevolent actors

A lot of Nazis were interested in the occult, and Mao wrote poetry.

Good point, my comment was worded too strongly. I’d still guess that malevolent individuals are, on average, less interested in things like Buddhism, meditation, or psychedelics.

Do you know where this worry [that psychedelics sometimes seem to decrease people’s epistemic and instrumental rationality] comes from?

Gabay et al. (2019) found that MDMA boosted people's cooperation with trustworthy players in an iterated prisoner's dilemma, but not with untrustworthy players. I take that as some evidence that MDMA doesn't acutely harm one's rationality.

Interesting paper! Though I didn’t have MDMA in mind; with “psychedelics” I meant substances like LSD, DMT, and psilocybin. I also had long-term effects in mind, not immediate effects. Sorry about the misunderstanding.

One reason for my worry is that people who take psychedelics seem more likely to believe in paranormal phenomena (Luke, 2008, pp. 79-82). Of course, correlation is not causation. However, it seems plausible that at least some of this correlation is due to the fact that consuming psychedelics occasionally induces paranormal experiences (Luke, 2008, p. 82 ff.), which presumably makes one more likely to believe in the paranormal. This would also be in line with my personal experience.

Coming back to MDMA: I agree that the immediate, short-term effects of MDMA are usually extremely positive—potentially enormous increases in compassion, empathy, and self-reflection. However, MDMA's long-term effects on those variables seem much weaker, though potentially still positive (see Carlyle et al., 2019, p. 15).

Overall, my sense is that MDMA and psychedelics might have a chance to substantially decrease malevolent traits if these substances are taken with the right intentions and in a good setting—ideally in a therapeutic setting with an experienced guide. The biggest problem I see is that most malevolent people likely won’t be interested in taking MDMA and psychedelics in this way.

Reducing long-term risks from malevolent actors

Thank you, excellent points!

I will probably add some of your intervention ideas to the article (I'll let you know in that case).

I felt that this article could have said more about possible policy interventions and that it dismisses policy and political interventions as crowded too quickly.

Sorry about that. It certainly wasn't our intention to dismiss political interventions out of hand. The main reason for not writing more was our lack of knowledge in this space, which is why our discussion ends with "We nevertheless encourage interested readers to further explore these topics". In fact, a comment like yours—containing novel intervention ideas written by someone with experience in policy—is pretty much what we were hoping to see when writing that sentence.

Better mechanisms for judging individuals. Eg ensuring 360 feedback mechanisms are used routinely to guide hiring and promotion decisions as people climb political ladders. (I may do work on this in the not too distant future)

Very cool! This is partly what we had in mind when discussing manipulation-proof measures to prevent malevolent humans from rising to power (where we also briefly mention 360 degree assessments).

For what it's worth, Babiak et al. (2010) seemed to have some success with using 360 degree assessments to measure psychopathic traits in a corporate setting. See also Mathieu et al. (2013).

Reducing long-term risks from malevolent actors

It seems plausible that institutional mechanisms that prevent the malevolent use of power may work well today in democracies.

I agree that they probably work well but there still seems to be room for improvement. For example, Trump doesn't seem like a beacon of kindness and humility, to put it mildly. Nevertheless, he got elected President. On top of that, he wasn't even required to release his tax returns—one of the more basic ways to detect malevolence.

Of course, I agree that stable and well-functioning democracies with good cultural norms would benefit substantially less from many of our suggested interventions.

Also, the major alternative to reducing the influence of malevolent actors may be in the institutional decision making itself, or some structural interventions. AI Governance as a field seems to mostly go in that route, for example.

Just to be clear, I'm very much in favor of such "structural interventions". In fact, they overall seem more promising to me. However, it might not be everyone's comparative advantage to contribute to them, which is why I thought it valuable to explore potentially more neglected alternatives where lower-hanging fruits are still to be picked.

That said, I think that efforts going into your suggested interventions are largely orthogonal to these alternatives (and might actually be supportive of one another).

Yes, my sense is that they should be mutually supportive—I don't see why they shouldn't. I'm glad you share this impression (at least to some extent)!
