This is a special post for quick takes by SamiM. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

In Debiasing Decisions: Improved Decision Making with a Single Training Intervention, the authors found that a 30-minute video reduced confirmation bias, fundamental attribution error, and bias blind spot by about 19%.
The video is super cheesy, and that makes me suspicious.
It should be noted that playing a 60-minute "debiasing" game debiased people more than the video.

The rest of this short form is random thoughts about debiasing.
I tried finding tests for these biases so that I could run the intervention myself, but I didn't find any. This made me worry that we don't have standardized tests for biases, which strikes me as bad, although I didn't spend too much time looking into it. (More on this here)


I don't think training people to reduce three biases at a time is a good way to go, since we have hundreds of biases. If we use a taxonomy of biases like Arkes (1991) (strategy-based, association-based, and psychophysical errors), maybe we could have three interventions, one for each type of bias? But it's not clear how you would teach people to avoid, say, association-based biases by lecturing about them.
You could nudge them in small ways. From Arkes (1991):
For example, subjects in their third study were presented with the story of David, a high school senior who had to choose between a small liberal arts college and an Ivy League university. Several of David's friends who were attending one of the two schools provided information that seemed to favor quite strongly the liberal arts college. However, a visit by David to each school provided him with contrary information. Should David rely on the advice of his many friends (a large sample) or on his own 1-day impressions of each school (a very small sample)? Other subjects were given the same scenario with the addition of a paragraph that made them "explicitly aware of the role of chance in determining the impression one may get from a small sample" (Nisbett et al., 1983, p. 353). Namely, David drew up a list for each school of the classes and activities that might interest him during his visit there, and then he blindly dropped a pencil on the list, choosing to do those things on the list where the pencil point landed. These authors found that if the chance factors that influenced David's personal evidence base were made salient in this way, subjects would be more likely to answer questions about the scenario in a probabilistic manner (i.e., rely on the large sample provided by many friends) than if the chance factors were not made salient. Such hints, rather than blatant instruction, can provide routes to a debiasing behavior in some problems.

In Sedlmeier & Gigerenzer (2001), they taught people Bayesian reasoning using natural frequencies rather than probabilities. E.g., instead of saying "1% of people use drugs, users test positive 80% of the time, and non-users 5% of the time," you say "out of 1000 people, 10 use drugs; 8 of the drug users test positive, while 50 of the non-users test positive."
It seems to work.
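To see that the two formats answer the same question, here is a minimal sketch; the function names are mine, and only the numbers (1% base rate, 80% hit rate, 5% false-positive rate, population of 1000) come from the example above. The small gap between the two answers is just the rounding of 49.5 non-user positives up to 50 in the frequency version.

```python
def posterior_probabilities(base_rate, sensitivity, false_pos_rate):
    """P(user | positive test) via Bayes' rule in probability format."""
    true_pos = base_rate * sensitivity
    false_pos = (1 - base_rate) * false_pos_rate
    return true_pos / (true_pos + false_pos)

def posterior_frequencies():
    """The same question in natural frequencies: count people, then divide."""
    users_positive = 8      # 80% of the 10 users in a population of 1000
    nonusers_positive = 50  # ~5% of the 990 non-users (49.5, rounded)
    return users_positive / (users_positive + nonusers_positive)

print(round(posterior_probabilities(0.01, 0.80, 0.05), 3))  # 0.139
print(round(posterior_frequencies(), 3))                    # 0.138
```

The frequency version reduces the whole computation to "8 out of 58 positives are users," which is plausibly why it is easier to teach.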


If debiasing in general turns out to be really hard, we should target the most harmful biases.
From here

...many of the known predictors of conspiracy belief are alterable. One of these predictors is the tendency to make errors in logical and probabilistic reasoning (Brotherton & French, 2014), and another is the tendency toward magical thinking (e.g., Darwin et al., 2011; Newheiser et al., 2011; Stieger et al., 2013; Swami et al., 2011). It is not clear whether these tendencies can be corrected (Eckblad & Chapman, 1983; Peltzer, 2003), but evidence suggests that they can be reduced by training in logic and in probability specifically (e.g., Agnoli & Krantz, 1989; Sedlmeier & Gigerenzer, 2001). The current findings suggest that interventions targeting the automatic attribution of intentionality may be effective in reducing the tendency to believe in conspiracy theories.

Perhaps finding out which biases are the worst, and what the best interventions for them are, would be useful. But increasing the effectiveness of changing beliefs is potentially dangerous, so maybe not.

I think you're wise to point out the potential risk of increasing the effectiveness of changing others' beliefs. Like any technology or technique, when we consider whether to contribute to its development, we have to consider both the potential harm it could do in the wrong hands and the potential good it could do in the right ones. I'm not sure enough people in the education and debiasing communities realize that.
