
Mindfulness meditation seems to be quite popular in the EA community and in adjacent communities. Many EAs I meet seem to be interested in meditation or actively practicing it, and there has been some written discussion of its value by EAs.[1]

I personally find mindfulness useful for reducing rumination. Its main value is revealing that your mind is basically going completely bananas all the time, and that patterns of thought emerge which are difficult to control. I also find that loving-kindness meditation improves my mood in the short term. However, I think the strength of the evidence on the benefits of meditation is often overstated by EAs and quasi-fellow travellers like Sam Harris.

Here, I discuss the overall strength of the evidence on meditation as a treatment for anxiety and depression.

1. What is mindfulness?

Mindfulness has its roots in Buddhism, and attention towards it has grown enormously since the early 2000s.[2] Two commonly studied forms of mindfulness are Mindfulness-Based Stress Reduction and Mindfulness-Based Cognitive Therapy. Mindfulness-Based Stress Reduction is an 8-week group-based program consisting of:

● 20-26 hours of formal meditation practice, including:

○ Weekly 2 to 2.5 hour sessions

○ A whole-day retreat (6 hours)

○ Home practice ~45 mins per day for 6 days a week.[3]

Mindfulness-Based Cognitive Therapy incorporates cognitive therapy into the sessions. Mindfulness-Based Stress Reduction can be led by laypeople, whereas Mindfulness-Based Cognitive Therapy must be led by a licensed health care provider.

In my experience, most people practising mindfulness use an app such as Headspace or the Sam Harris Waking Up app. I personally find the Waking Up app far superior to the Headspace app.

2. How over-optimistic should we expect the evidence to be?

Mindfulness has many features that should make us suspect that the strength of the evidence claimed in the literature is overstated:

● A form of psychology/social psychology research

● Most outcome metrics are subjective

● Many of those researching it seem to be true believers[4]

● Hints of religion, alternative medicine, and woo

Research fields with these features are ripe for replication crisis trauma. We should expect inflated claims of impact which are then brought back to Earth by replications or further analysis of the existing research.

3. Main problems with the evidence

3.1. Reporting bias

Reporting bias includes:

  1. Study publication bias, in which studies with significant results are published and studies with non-significant results are not.
  2. Selective outcome reporting bias, in which the outcomes reported are chosen on the basis of statistical significance, with non-significant outcomes left unpublished.
  3. Selective analysis reporting bias, in which data are analyzed with multiple methods but results are reported only for the methods that produce positive results.
  4. Other biases, such as the relegation of non-significant primary outcomes to secondary status when results are published.

There is good evidence of reporting bias in mindfulness research.

Coronado-Montoya et al (2016) test for reporting bias by estimating the expected number of positive trials that mindfulness-based therapy would have produced if its effect size were the same as individual psychotherapy for depression, d = 0.55.[5] As we will see below, this is very likely an overestimate of the true effect of mindfulness-based therapy, and therefore the method used understates reporting bias in mindfulness studies.

Of the 124 RCTs included in Coronado-Montoya et al’s (2016) study, 108 (87%) were classified as positive and 16 (13%) as negative. If the true effect size of mindfulness-based therapy was d = 0.55, then we would expect 68 of 124 studies to be positive, rather than 108, meaning that the ratio of observed to expected positive studies was 1.6.[6] This is clear evidence of reporting bias.
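To make the logic concrete, here is a minimal sketch of this kind of calculation, assuming a two-sample design and hypothetical per-group sample sizes (the actual study uses each trial's own design and sample size):

```python
# Sketch of an expected-vs-observed positive-trial calculation.
# The per-group sample sizes below are hypothetical, not the actual
# trials from Coronado-Montoya et al (2016).
from statsmodels.stats.power import TTestIndPower

BENCHMARK_D = 0.55   # assumed true effect (individual psychotherapy for depression)
ALPHA = 0.05

per_group_n = [20, 35, 50, 28, 60, 45, 33, 70]   # hypothetical trials

power_calc = TTestIndPower()
# Power of each trial to detect d = 0.55; summing powers gives the
# expected number of statistically significant ("positive") trials.
powers = [power_calc.power(effect_size=BENCHMARK_D, nobs1=n, alpha=ALPHA)
          for n in per_group_n]
expected_positive = sum(powers)

observed_positive = 7   # hypothetical observed count of positive trials
print(f"expected positive trials: {expected_positive:.1f}")
print(f"observed positive trials: {observed_positive}")
print(f"observed/expected ratio:  {observed_positive / expected_positive:.2f}")

# In the actual study: 108 observed vs ~68 expected across 124 trials,
# an observed/expected ratio of about 1.6.
```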

Moreover, Coronado-Montoya et al (2016) also looked at 21 trials that were pre-registered. Of these, none specified which variable would be used to determine success, and 13 (62%) were still unpublished 30 months after the trial was completed.[7]

A recent paper in Nature Human Behaviour found that, due to selective reporting, meta-analyses in psychology produce significantly different effect sizes from large-scale pre-registered replications in 12 out of 15 cases. Where there was a difference, the effect size in the meta-analysis was on average about 3 times as large as in the replications.[8] This suggests that reporting bias is usually not adequately corrected for in meta-analyses.

3.2. Effect size of meditation compared to other interventions

Goyal et al conducted a meta-analysis of the effect of mindfulness-based therapy on well-being.[9] The key facts are:

Effect size

○ Cohen’s d ranging from 0.22 to 0.38 for anxiety symptoms.[10]

○ 0.23 to 0.30 for depressive symptoms.[11]

○ These estimates were usually relative to a nonspecific active control.

○ However, neither estimate corrects for reporting bias.[12] I think it is plausible that this biases the estimate of the effect size upwards by a factor of 2 to 3.

Comparison to alternative treatments

○ In the 20 RCTs examining comparative effectiveness, mindfulness and mantra programs did not show significant effects when the comparator was a known treatment or therapy.[13]

○ Sample sizes in the comparative effectiveness trials were small (mean size of 37 per group), and none was adequately powered to assess noninferiority or equivalence.[14]

Antidepressants

○ According to a recent meta-analysis, antidepressants have an effect size of 0.3 for depression vs placebo.

CBT

○ According to one meta-analysis, compared to wait-list controls, CBT has a Cohen’s d of 0.88 on depression.

○ Compared to care as usual or non-specific controls, it has a Cohen’s d of 0.38.[15]

Goyal et al’s assessment of the strength of evidence

○ Only 10 of the 47 included studies had a study quality rating of ‘good’, with the remainder having a rating of only ‘fair’ or ‘poor’.[16]

○ Goyal et al state that “none of our conclusions yielded a high strength of evidence grade for a positive or null effect.”[17]

This suggests that the evidence on meditation is weak, but that there is some evidence of a small to moderate positive effect on anxiety and depression. However, the evidence seems to be much weaker than the evidence for CBT and antidepressants, and CBT and antidepressants seem to have a greater effect on depression.

We should beware the man of one study, but also beware the man of a meta-analysis that doesn’t correct for reporting bias or other sources of bias. Indeed, as argued in the section on reporting bias, there is good reason to think that a pre-registered high-powered replication would cut the estimated meta-analytic effect size for meditation by a factor of 2 to 3.
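As a rough back-of-the-envelope illustration, dividing the Goyal et al ranges quoted above by an assumed reporting-bias inflation factor of 2 to 3 gives:

```python
# Applying an assumed 2-3x reporting-bias correction to the Goyal et al ranges.
ranges = {"anxiety": (0.22, 0.38), "depression": (0.23, 0.30)}
correction = (2, 3)   # assumed inflation factors

for outcome, (low, high) in ranges.items():
    corrected_low = low / max(correction)
    corrected_high = high / min(correction)
    print(f"{outcome}: corrected d roughly {corrected_low:.2f} to {corrected_high:.2f}")

# anxiety:    ~0.07 to 0.19
# depression: ~0.08 to 0.15
```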

3.3. Large variation among mindfulness-based interventions

Most mindfulness-based therapy is based on mindfulness-based stress reduction. However, there is large variation among the mindfulness-based interventions that have been studied.

● Time commitment[18]

○ The practice hours of the interventions included in the Goyal et al meta-analysis range from 7.5 hours to 78 hours.

○ Homework hours are often not specified; where they are, they exceed 30 hours in many studies and reach 1,310 hours in one study.

● Methods for teaching and practicing mindful states.[19]

Van Dam et al (2017) contend that there is far greater heterogeneity among mindfulness interventions than among other intervention types such as CBT.[20] This heterogeneity among mindfulness interventions means that we should be cautious about broad claims about the efficacy of mindfulness for depression and anxiety.

It is especially important to consider this heterogeneity given that most people who meditate do so for 10-20 minutes per day using an app, making their experience very different from a full mindfulness-based stress reduction course.

3.4. Shaky fMRI evidence

Many studies assess the impact of meditation on brain states using fMRI. These methods are highly suspect. There are numerous potential confounds in fMRI studies, such as head movement, pace of breathing, and heart rate.[21] These factors can confound a posited relationship between meditation and changes in activity in the amygdala. Moreover, calculating valid estimates of effect sizes across groups in neuroimaging data is very difficult. Consequently, the practical import of such studies remains unclear.

Nonetheless, according to Van Dam et al (2017), meta-analyses of neuroimaging data suggest modest changes in brain structure as a result of practicing mindfulness. Some concomitant modest changes have also been observed in neural function. However, similar changes have been observed following other forms of mental and physical skill acquisition, such as learning to play musical instruments and learning to reason, suggesting that they may not be unique to mindfulness.[22]

It would be interesting to compare the effects of meditation on the brain with the effects of other activities such as reading, exercise, sport, or having a conversation with friends. I suspect that the effects on fMRI scans would be quite similar for many mundane activities, though I have not looked into this.

4. Overall judgement on effectiveness

In light of the above discussion, my best guesses for downward adjustments to the effect size estimated in the Goyal et al meta-analysis (which found an effect size of 0.3 on depression) are:

  • Reporting bias biases the estimate upwards by a factor of 2.
  • Time commitment: meditating with an app for around 140 minutes per week versus around 390 minutes per week in MBSR (a ratio of roughly 0.35:1) merits a further discount by a factor of ~3.

I estimate that the true effect size on depression of daily mindfulness meditation for 20 minutes with an app is around 0.05. This is very small: if an intervention increased men's height with an effect size of 0.05, it would increase their height by less than half a centimetre. Mindfulness is not the game-changer it is often painted to be.
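A minimal sketch of the arithmetic behind this estimate, assuming a standard deviation of roughly 7 cm for adult male height:

```python
# The adjustment chain behind the ~0.05 estimate.
goyal_depression_d = 0.30          # Goyal et al meta-analytic estimate
reporting_bias_factor = 2          # assumed inflation from reporting bias
app_minutes_per_week = 140         # 20 minutes per day with an app
mbsr_minutes_per_week = 390        # rough weekly MBSR commitment
dose_factor = mbsr_minutes_per_week / app_minutes_per_week   # ~2.8

adjusted_d = goyal_depression_d / reporting_bias_factor / dose_factor
print(f"adjusted effect size: {adjusted_d:.3f}")             # ~0.054

# Height analogy: assume an SD of roughly 7 cm for adult male height.
height_sd_cm = 7
print(f"equivalent height gain: {adjusted_d * height_sd_cm:.2f} cm")   # ~0.4 cm
```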

5. Useful resources and reading

● Coronado-Montoya et al, ‘Reporting of Positive Results in Randomized Controlled Trials of Mindfulness-Based Mental Health Interventions’, Plos One (2016)

○ A high-quality analysis of reporting bias in mindfulness research.

● Goyal et al, ‘Meditation Programs for Psychological Stress and Well-being: A Systematic Review and Meta-analysis’ JAMA Internal Medicine 2014.

○ A review commissioned by the U.S. Agency for Healthcare Research and Quality. Provides a good overview of the quality of the evidence and estimates of effect size but crucially does not correct for reporting bias.

● Van Dam et al ‘Mind the Hype: A Critical Evaluation and Prescriptive Agenda for Research on Mindfulness and Meditation’, Perspectives on Psychological Science (2017)

○ A very critical review of the evidence on mindfulness, which raises several problems with the evidence. However, I think the tone is overall too critical given the evidence presented.

● Sam Harris - Waking Up book

○ In my opinion, an overly rosy review of the evidence on meditation, especially in chapter 4.

● The Waking Up meditation app.

○ The best meditation app I have tried.


References

[1] See, for example, this piece by Rob Mather and this one by Louis Dixon.


[2] Nicholas T. Van Dam et al., ‘Mind the Hype: A Critical Evaluation and Prescriptive Agenda for Research on Mindfulness and Meditation’, Perspectives on Psychological Science 13, no. 1 (2018): 36–37.


[3] Stephanie Coronado-Montoya et al., ‘Reporting of Positive Results in Randomized Controlled Trials of Mindfulness-Based Mental Health Interventions’, PLOS ONE 11, no. 4 (8 April 2016): 1, https://doi.org/10.1371/journal.pone.0153220.


[4] For example, a review of RCT evidence on mindfulness by Creswell opens with a quote from Jon Kabat-Zinn, a leading figure in mindfulness studies: “There are few people I know on the planet who couldn’t benefit more from a greater dose of awareness”. [creswell ref]


[5] Coronado-Montoya et al., ‘Reporting of Positive Results in Randomized Controlled Trials of Mindfulness-Based Mental Health Interventions’, 5.


[6] Coronado-Montoya et al., 9.


[7] Coronado-Montoya et al., 10.


[8] Amanda Kvarven, Eirik Strømland, and Magnus Johannesson, ‘Comparing Meta-Analyses and Preregistered Multiple-Laboratory Replication Projects’, Nature Human Behaviour, 23 December 2019, 1–12, https://doi.org/10.1038/s41562-019-0787-z.


[9] Madhav Goyal et al., ‘Meditation Programs for Psychological Stress and Well-Being: A Systematic Review and Meta-Analysis’, JAMA Internal Medicine 174, no. 3 (1 March 2014): 357–68, https://doi.org/10.1001/jamainternmed.2013.13018.


[10] Goyal et al., 364.


[11] Goyal et al., 364.


[12] Goyal et al., 361.


[13] Goyal et al., 364.


[14] Goyal et al., 365.


[15] Ellen Driessen and Steven D. Hollon, ‘Cognitive Behavioral Therapy for Mood Disorders: Efficacy, Moderators and Mediators’, Psychiatric Clinics 33, no. 3 (1 September 2010): 2, https://doi.org/10.1016/j.psc.2010.04.005.


[16] Goyal et al., ‘Meditation Programs for Psychological Stress and Well-Being’, Table 2.


[17] Goyal et al., 365.


[18] Goyal et al., Table 2.


[19] Van Dam et al., ‘Mind the Hype’, 40.


[20] Van Dam et al., 46.


[21] Van Dam et al., 49–51.


[22] Van Dam et al., 50–51.

Comments

I briefly and informally looked into this several years ago and, at the time, had a few additional concerns. (Can't promise I'm remembering this perfectly and the research may have progressed since then).

1) Many of the best studies on mindfulness's effect on depression and anxiety were specifically on populations where people had other medical conditions (especially, I think, chronic pain or chronic illness) in addition to mental illness. But, most people I know who are interested in mindfulness aren't specifically interested in this population.

My impression is that Jon Kabat-Zinn initially developed Mindfulness-Based Stress Reduction (MBSR) for people with other conditions and my intuition from my experience with it is that it might be especially helpful for things like chronic pain. So I had some external validity concerns.

2) There were few studies of long-term effects and it seems pretty plausible the effects would fade over time. This is especially true if we care about intention-to-treat effects. The fixed cost of an MBSR course might only be justified if it can be amortized over a fairly long period. But it wouldn't be surprising if there are short-to-medium term benefits that fade over time as people stop practicing.

By contrast, getting a prescription for anti-depressants or anti-anxiety medication has a much lower fixed cost, and it's less costly and easier to take a pill every day (or as needed) than to keep up a meditation practice. (On the other hand, some meds have side effects for many people.)

3) You already mention that "many of those researching it seem to be true believers" but it seems worth reemphasizing this. When I looked over the studies included in a meta-analysis (I think it was the relevant Cochrane Review), I think a significant proportion of them literally had Jon Kabat-Zinn (the founder of MBSR) as a coauthor.

---

All that said, my personal subjective experience is that meditating has had a moderate but positive effect on my anxiety and possibly my depression when I've managed to keep it up.


[anonymous]:

On the true believers point, I have also heard second-hand stories from people who went to mindfulness conferences and found that they were full of people who really wanted mindfulness to have a big effect.

Have you come across the book Altered Traits? It tries to sum up the existing evidence for meditation, and in the latter half of the book, each chapter looks at the evidence for and against a proposed benefit. At the start, they talk about their criteria for which studies to include, and seem to have fairly strict standards.

One significant weakness is that it's written by two fans of meditation, so it's probably too positive. However, to their credit, the authors exclude some of their own early studies for not being well designed enough.

One advantage is that they try to bring together multiple forms of evidence, including theory, studies of extreme meditators, and neuroscience as well as RCTs of specific outcomes – though the neuroscience is pretty basic. They also do a good job of distinguishing how there are many different types of meditation that seem to have different benefits; and also distinguishing between beginners, intermediates and experts.

[anonymous]:

I haven't read it, no, thanks for the tip

This is very useful – thanks for writing it up.

“This heterogeneity among mindfulness interventions means that we should be cautious about broad claims about the efficacy of mindfulness for depression and anxiety.”

True, but that applies equally to claims of null or small effect sizes, e.g. some forms of mindfulness could be very effective even if 'on average' it's not. Did any of the meta-analyses contain useful subgroup analyses?

(For what it's worth, a few years ago I used the Headspace app ~5x/week for 3 months and found it to be actively detrimental to my mood. Anecdotally, this seems fairly common: https://www.theguardian.com/lifeandstyle/2016/jan/23/is-mindfulness-making-us-ill)

Hello John, thanks very much for doing this careful investigation. What makes you think there isn't also an overestimate for the effect sizes of CBT and antidepressants? I was wondering if the meta-analyses on those had controlled for such biases, but you didn't mention that.

[anonymous]:

Good point! I haven't looked into this. My impression is that these are much better studied than mindfulness and the quality of the evidence is better, so the estimates might be less upwardly biased. But yes they could also be upwardly biased.

The main point here was that this meta-analysis doesn't correct for reporting bias because the evidence is so weak.

I was quite unaware of these biases in meta-analyses, thanks for this! I confess to being rosy myself on the topic, and will need to be more qualified in future.

It becomes clear that the scope of the post is evaluating the evidence for mindfulness meditation as a therapeutic tool. Totally fair. However, as the intro (and the title) seem much broader in scope perhaps it’s worth mentioning the non-trivial point that insight practices offer a lot more than that. Although I do not pray, if pondering “is prayer good for you?” I can appreciate a host of benefits beyond therapy.

While its recent popularity in developed non-Asian countries is as an instrumentally useful tool, I’d posit the predominant way it has been regarded globally (and historically) is as one component (along with others like concentration, joy etc.) of a rich contemplative tradition. I’d personally guess mindfulness is still presented as such in most of the world. Kabat-Zinn and Harris do convey as much in their books.

(These aspects are certainly fair game for critiques, to be clear.)

Totally agree. I think the analysis in the post is useful but proponents of meditation claim that it has many benefits beyond what is examined here. I think that this short lesson (6min 39s) from Sam Harris is a good answer to this post. And what he says corresponds to my personal experience.

Great post, thank you for doing this. I very, very briefly (2-4 hours) looked into this a few months ago myself, and came away with broadly similar concerns.

In particular, the point that MBSR and MBCT are at first glance very different from "meditate every day for 10 minutes" seems worth emphasizing to me. Anecdotally, most people I know who hope for beneficial mental health effects do something more like the latter.

(My own experience from when I used to meditate daily for 10-30 minutes, which I did mostly out of curiosity rather than hoping for specific benefits: no clear effect on depression, extremely short-term [a few sec to min] positive effects on alertness and maybe mood, some changes in mental habits e.g. spontaneously switching to a 'mindful'/aware state more often. Interestingly, for me the positive effects noticeably diminished over time, which is also why I stopped.)

Do people here have thoughts on the quality of the studies of meditation for cognition?

Particularly interested in

1. Focus

2. Processing speed

3. Working memory

Thanks for the post John! Very informative. I know some people thinking of doing another RCT on this and will definitely point them to it.

Also agree that heterogeneities in the actual intervention as well as in the population under study are major challenges here in generalizing the effects (and they are common in studies of social science interventions, which probably leads to lower generalizability than in medical trials).


One minor and meta comment on section 2: "How over-optimistic should we expect the evidence to be?" I'm not sure how I feel about having a section on this in a post like yours. It's totally reasonable as a way to form your prior before examining the literature, but after you do that (motivated by your skepticism based on these reasons) your learning from examining the literature "screens off" the factors that made you skeptical in the first place. (E.g. it may well be that the studies turn out to have super rigorous methodology, even though they are psychological studies conducted by "true believers" etc., and the former should be the main factor influencing your posterior on the impact of meditation -- unless the reasons that gave you a skeptical prior make you think they may have fabricated data etc.)

So while what you said in that section is true in terms of forming a prior (before looking at the papers), I would have put it in a less prominent place in this post (perhaps at the end on "what made me particularly skeptical and hence more interested in examining the literature"). (It's totally fine if readers feel what's in section 3 mostly "screens off" what's in section 2, but if not it may unfairly bias their perception against the studies.)

(Digression: in a completely different situation, if one didn't examine the literature at all but just put out a skeptical prior based on these reasons -- I would say that is the correct way of forming a prior, but it feels slightly unfair or irresponsible. But I probably would feel it's okay if people highly qualify their statement, e.g. "I have a skeptical prior due to X, Y, and Z, but I really haven't looked at the actual studies" and perhaps even "if I did look, things like A, B, and C would convince me the studies are actually reliable / unreliable". I'm not sure about this point and curious for others' thoughts, since this is probably how a lot of people talk about studies that they haven't fully read on social media.)


Also a minor and concrete point on section 2: the 2nd bullet point "Most outcome metrics are subjective". Here are some reasons we may or may not think (ex ante) the results may be overestimated.

  • If there's a lot of noise in self-reported outcomes, that alone doesn't lead to bias (though in a case where the outcome variable is censored, as many psychological outcomes are, and outcomes are bunched near one end, it could; see the sketch at the end of this comment).
  • Some relevant sources of bias are:
    • Social desirability bias (respondents saying what they consider socially desirable; this should affect treatment and control respondents equally and applies to other psychological studies looking at the same outcome)
    • Courtesy bias (applies to treatment respondents, who may feel obligated to report positive impact)

And since these are self-reported outcomes that can't be verified, 1) people may be less deterred from lying, 2) we will never find out the truth -- so the two biases are potentially more severe (compared to a case where outcomes can be verified).

(Please correct me if I'm wrong here!)
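A minimal simulation sketch of the point in the first bullet, with made-up numbers: mean-zero noise alone leaves the estimated treatment effect unbiased, while a ceiling on the scale distorts (here, attenuates) it:

```python
# Toy simulation: noise vs ceiling censoring in a self-reported outcome.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
true_effect = 0.5

control = rng.normal(0.0, 1.0, n)
treatment = rng.normal(true_effect, 1.0, n)

# Add mean-zero reporting noise to both groups: the difference in means
# still recovers the true effect (no bias, just more variance).
noisy_control = control + rng.normal(0.0, 1.0, n)
noisy_treatment = treatment + rng.normal(0.0, 1.0, n)
print("noisy estimate:   ", round(float(noisy_treatment.mean() - noisy_control.mean()), 3))

# Now impose a ceiling at +1, so outcomes bunch near the top of the scale:
# the estimated effect is attenuated relative to the true effect of 0.5.
ceiling = 1.0
censored_control = np.minimum(noisy_control, ceiling)
censored_treatment = np.minimum(noisy_treatment, ceiling)
print("censored estimate:", round(float(censored_treatment.mean() - censored_control.mean()), 3))
```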
