Hey! Re #4, I found this, but I'm not sure if they're still active. If this doesn't work, you're very welcome to join us at EA Anywhere.
Hi, nice post. I found this article, which had some criticism of the effect of the radio broadcasts. It would be interesting to estimate the effect of genocide on future outcomes, like economic growth or political stability. For example, Somaliland, where the Isaaqs were killed, is currently more stable than other parts of Somalia, but that could be due to factors completely orthogonal to the genocide.
Hey Marcus, good job on taking the initiative! I think we should keep in mind that if someone (an athlete in this example) was donating 5% to an average charity, and then was prompted by the pledge to merely give 2%, the difference in impact between charities might be enough to offset that.

Edit: I also endorse the option of giving more than 10%; it doesn't seem to have many downsides, and the benefits were highlighted by Benjamin.
I came across this playlist about the end of the world, might be of interest.
I don't think music affects my behavior a lot, but I really like this song.
Hi Akash, great post. The link to the Nanda post isn't working (it links back to your post).
For more about the benefits of blogging, see this post by Neel Nanda.
Yes, it's free.
I just googled your question.
Here are the top 3 links.
In Debiasing Decisions: Improved Decision Making With A Single Training Intervention, they found that a 30-minute video reduced confirmation bias, fundamental attribution error, and bias blind spot by 19%. The video is super cheesy, which makes me suspicious. It should be noted that playing a 60-minute "debiasing" game debiased people more than the video did.
The rest of this short form is random thoughts about debiasing.
I tried finding tests for these biases so that I could test myself, but I didn't find any. This made me worry that we don't have standardized tests for biases, which strikes me as bad, although I didn't spend much time looking into it.
I don't think training people to reduce three biases at a time is a good way to go, since we have hundreds of biases. If we use a taxonomy like Arkes's (1991) (strategy-based, association-based, and psychophysical errors), maybe we could have three interventions, one for each type of bias? But it's not clear how you would teach people to avoid, say, association-based biases by lecturing about them. You could nudge them in small ways. From Arkes (1991):
For example, subjects in their third study were presented with the story of David, a high school senior who had to choose between a small liberal arts college and an Ivy League university. Several of David's friends who were attending one of the two schools provided information that seemed to favor quite strongly the liberal arts college. However, a visit by David to each school provided him with contrary information. Should David rely on the advice of his many friends (a large sample) or on his own 1-day impressions of each school (a very small sample)? Other subjects were given the same scenario with the addition of a paragraph that made them "explicitly aware of the role of chance in determining the impression one may get from a small sample" (Nisbett et al., 1983, p. 353). Namely, David drew up a list for each school of the classes and activities that might interest him during his visit there, and then he blindly dropped a pencil on the list, choosing to do those things on the list where the pencil point landed. These authors found that if the chance factors that influenced David's personal evidence base were made salient in this way, subjects would be more likely to answer questions about the scenario in a probabilistic manner (i.e., rely on the large sample provided by many friends) than if the chance factors were not made salient. Such hints, rather than blatant instruction, can provide routes to a debiasing behavior in some problems.
In Sedlmeier & Gigerenzer (2001), they taught people Bayesian reasoning by using frequencies rather than probabilities. E.g., instead of saying "1% of people use drugs, users test positive 80% of the time, and non-users 5% of the time", you say "Out of 1000 people, 10 use drugs; 8 of the drug users test positive, while 50 of the non-users test positive". It seems to work. If debiasing in general is really hard, we should target the really bad, really harmful biases. From here:
...many of the known predictors of conspiracy belief are alterable. One of these predictors is the tendency to make errors in logical and probabilistic reasoning (Brotherton & French, 2014), and another is the tendency toward magical thinking (e.g., Darwin et al., 2011; Newheiser et al., 2011; Stieger et al., 2013; Swami et al., 2011). It is not clear whether these tendencies can be corrected (Eckblad & Chapman, 1983; Peltzer, 2003), but evidence suggests that they can be reduced by training in logic and in probability specifically (e.g., Agnoli & Krantz, 1989; Sedlmeier & Gigerenzer, 2001). The current findings suggest that interventions targeting the automatic attribution of intentionality may be effective in reducing the tendency to believe in conspiracy theories.
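The frequency-format trick from the Sedlmeier & Gigerenzer example above can be sketched numerically. This is just a minimal illustration using the numbers from the comment (1% base rate, 80% hit rate, 5% false-positive rate); both formats give the same posterior, the frequency version is just easier for people to follow.

```python
# Drug-test example in both formats; the numbers come from the comment above.

# Probability format: Bayes' rule directly.
base_rate = 0.01     # P(user)
hit_rate = 0.80      # P(positive | user)
false_alarm = 0.05   # P(positive | non-user)

p_positive = base_rate * hit_rate + (1 - base_rate) * false_alarm
p_user_given_positive = base_rate * hit_rate / p_positive

# Frequency format: imagine 1000 people.
users = 10            # 1% of 1000
true_positives = 8    # 80% of the 10 users
false_positives = 50  # 5% of the 990 non-users, rounded

freq_answer = true_positives / (true_positives + false_positives)

print(f"probability format: {p_user_given_positive:.3f}")
print(f"frequency format:   {freq_answer:.3f}")
```

Both come out to roughly 14%: despite a positive test, the person is probably not a drug user, which is exactly the base-rate point the frequency framing makes vivid ("8 true positives out of 58 positives total").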
Perhaps it would be useful to find out which biases are the worst and what the best interventions for them are. But increasing our effectiveness at changing beliefs is potentially dangerous, so maybe not.
This is helpful, thanks.
The information is probably here somewhere, but is that the probability of getting tenure given that you finish your Ph.D.? I.e., does this account for dropping out?
Somewhat tangential, but I think accounting for the chance of working on AI safety (or something comparably effective) outside of academia would help. I think this is more common in economics (e.g., the World Bank). But I guess OpenAI and similar institutions hire CS PhDs, and working there possibly has a similar impact to working in academia.