Follow-up to “Political Debiasing and the Political Bias Test”. Connected to “Effective altruism is a Question (not an Ideology)”.

There are a number of psychological effects which contribute to political bias. These include the halo effect, wishful thinking, and confirmation bias, all of which can cause people’s political values to colour their factual beliefs. The result is a correlation between values and factual beliefs. For instance, those who believe that the market is just will also tend to think it’s efficient, and vice versa.

All of these psychological mechanisms operate, so to speak, at the individual level: they cause individuals’ interpretation of evidence to be influenced by their own political values, not by other people. What makes things worse, however, is that there are also social structures which underwrite bias. Once a correlation between values and factual beliefs is established within a group, social biases kick in.

To see this, suppose that political group X is, from the start, defined solely by its values – say, “the market is just” (this is rather unusual, but let’s grant it for the sake of the argument). Because of the aforementioned biases, most X-ers soon start believing that the market is also efficient. This becomes the normal belief in the group, and in political groups, what is normal usually becomes the norm. Hence the definition of X changes: believing that the market is efficient becomes part of what it means to be an X-er. Perhaps it doesn’t become completely impossible to remain an X-er if you reject this belief, but doing so is definitely frowned upon by your in-group.

In fact, the normal state of affairs is that membership in a political group is defined both by certain political values and by certain factual beliefs. (Also, values and factual beliefs typically aren’t clearly distinguished.) This makes it hard for group members to evaluate their factual beliefs objectively: if they do so and come to the wrong conclusion, they may have to leave their political group. Vox.com’s Ezra Klein quotes political psychologist Dan Kahan:

[I]f [an ordinary member of the public] forms the wrong position on climate change relative to the one that people with whom she has a close affinity – and on whose high regard and support she depends on in myriad ways in her daily life – she could suffer extremely unpleasant consequences, from shunning to the loss of employment.

Kahan calls this theory Identity-Protective Cognition: “As a way of avoiding dissonance and estrangement from valued groups, individuals subconsciously resist factual information that threatens their defining values.”

Clearly, Identity-Protective Cognition – which is a social bias, unlike, e.g., the halo effect – is a major cause of political bias. How could we reduce it?

I think that if we retain the definition of Effective Altruism as a social movement that tries to do good as effectively as possible, the EA movement has a good chance of reducing this particular type of political bias. This is because, unlike standard political ideologies, the EA movement is not defined by any factual beliefs (beyond extremely general ones, such as “reason and evidence are good ways of finding out about the world”). The EA movement is not defined by, e.g., any particular conception of the market economy, or by any other factual view. Thus you can give up more or less all of your factual beliefs and still remain an effective altruist. This means that EAs should feel much freer to evaluate factual claims objectively.

EA members are not immune to individual biases like the halo effect and confirmation bias, however. (To reduce those, we have to use means other than the ones discussed in this post.) Hence, they are likely to end up with factual beliefs which match their political values to some extent, just like other people. There is therefore a risk that these factual beliefs start becoming part of the definition of Effective altruism (this is precisely what happened with our hypothetical ideology X). To avoid this, it is important that we make a conscious effort to retain the original definition of the EA movement.

Here I think explicitness helps. Most ideologies aren’t very explicitly defined (at least not beyond the groves of academe), which facilitates drift. Witness, e.g., the drift of the term “liberal” (which denotes the left wing in the US, but the right wing in Scandinavia). To avoid such drift, we need to discuss and emphasize the meaning of “Effective altruism” over and over again.

Could Effective altruism also contribute to reducing political bias in society at large? One can only speculate on this. I certainly think that working to reduce political bias is a worthwhile cause for effective altruists, as I made clear in my last post on the topic. When effective altruists do so, they could presumably point to the EA movement’s impartial attitude to factual issues and suggest that non-EAs adopt it. More generally, the EA movement could do a great deal of good by getting non-EA groups to adopt parts of its message in this way.

Let me finish with a general historical reflection. Fundamentalist religions and oppressive political ideologies – such as Nazism or Soviet communism – prohibit people from expressing certain factual beliefs – e.g., the belief that God does not exist. Thankfully, they have little power in the Western world today. The democratic political ideologies that dominate in the West do allow you to question factual beliefs.

However, most of these ideologies still discourage you from questioning certain factual beliefs which are integral to them. In other words, they are partial on certain factual issues. This is not enough – we need to go all the way and create a level playing field for all factual views. Effective altruism, with its refusal to commit to any factual belief, is uniquely well-suited for that.

Comments (1)

It's true for me, and others, that we got much more interested in the "rationality" project when we came to understand it as improving our altruism. Learning about the most common biases quickly reveals how abysmal the process of selecting interventions can be, as evidenced by more learned peers correcting others in ways that seem obvious in hindsight. Many of us gained motivation to understand rationality as an instrumental tool necessary for doing as much good as possible. I think the influence of CFAR and LessWrong on effective altruism is remarkable, considering that virtually every metacharity and supporter I can think of uses tools from Bayesian epistemology learned on LessWrong to explain the reasoning behind choices that are at odds with the conclusions of LessWrong's parent organization, the Machine Intelligence Research Institute.

Your suggestions near the end of this article are some of the first showing how effective altruism may enhance rationality. As effective altruism reaches out to the world at large, discussing awareness of bias may become integral to how the movement spreads its message(s).

Mere awareness of common biases is insufficient to reduce them, and can even induce people to rationalize their choices, since once they know about biases they figure they'll no longer fall prey to them. Further, one can be motivated to accuse others of specific named biases while ceasing to check one's own thought for errors. I believe on LessWrong this is referred to as "the valley of bad rationality", the metaphor being that you must make it through a stretch of low rationality for quite some time before reaching the peak(s) of clear thinking, as if journeying up a mountain. I think lots of us are beyond this; I believe I am. Since dedicated altruists are so passionate, we're willing to debate quite fervently for what we believe in, trying to win an argument with a predetermined conclusion in an adversarial way, rather than engaging in collaborative truth-seeking. This has been the biggest problem in effective altruism thus far.

However, many of the hundreds of people who entered effective altruism passionate about a preselected policy sought out the community because they perceived its great potential, and were willing to adopt more epistemic humility, or at least mimic it in public, and to learn more about other causes. Of the friends I've observed who haven't changed their minds about the cause they came into effective altruism with, most now seem able to engage with disagreement at a higher level, with a greater grasp of the facts, and without strawmanning positions counter to their own as often. In some ways, doing all this seems like an imperative personal responsibility for effective altruists, to keep the movement from collapsing into disparate factions that, each alone, would lose the ability to continue raising the profile of effectiveness in do-gooding. While there always are, and perhaps always will be, debates with too much vitriol in effective altruism, I think that as long as veterans of zero-sum debates continue imploring fresher community members to temper their overconfidence, on pain of driving apart such a fragile but potent alliance, effective altruism will sustain itself.

Has all this enhanced the ability of some effective altruists to temper their own biases in domains or causes unrelated to effective altruism? I think so. However, I think we overcame initial ignorance, actually got worse in our uncalibrated passion, as many young intellects do, and then came back to the zero level as we realized that more information without stricter habits of thought biased us more. Coming out of the valley of bad rationality leaves us at the base of the mountain. I form beliefs relating to politics with more lightness than before, putting less confidence in them and being willing to ditch them faster when faced with opposing evidence. I feel like I now know how to better avoid the worst ideas, but not how to find good ones. I don't have policy prescriptions, I don't know who to vote for, and I don't have anything like a model which would derive, from what I want to happen, what I think societies should actually do, other than avoiding practices history has shown to be anti-effective, e.g., totalitarianism, as mentioned above.