
There has been increased focus on mental health as an area where effective interventions could create a lot of good. Much of this work has focused on scaling interventions such as psychiatric medications and psychotherapy. Often this work thinks creatively about how to promote mental health in under-resourced regions, for example through peer-led psychotherapy interventions. But my feeling is that there is a lot of room for thinking more broadly about what kinds of interventions might be useful.

To some this will suggest distinctively 21st-century interventions, such as the use of AI as a kind of adjunctive psychotherapy. However, this post is about a distinctively older form of technology that is, I think, inexpensive and underused, both in the United States and internationally: the use of books as adjuncts to or supplements for traditional psychotherapy, often called bibliotherapy.

I have written on bibliotherapy in various forums, including Psychology Today and my Substack. I am posting on it here because my sense is that this is a rich area for people in EA – it is an area with a lot of low-hanging fruit, where relatively small interventions can yield substantive benefits. I am by training a philosopher and a clinical social worker, and many of the particular interests and skills of people within EA are less familiar to me, so I would very much welcome input on how to frame these issues in a way that will be useful to the EA community (or whether, indeed, that is something worth doing at all).

This post has four parts: (i) a brief summary of the evidence for the effectiveness of bibliotherapy, (ii) an estimate of what a large-scale bibliotherapy intervention might cost, (iii) the potential effects of such an intervention, and (iv) responses to some questions and objections.

The Evidence

One of the better attested results in the literature is that giving a person a book on a mental health topic – Feeling Good, the best-selling primer on applying Cognitive Behavioral Therapy, is a paradigm of the kind of book to be thinking of – is as effective as having them meet regularly with a therapist. 

A 1995 meta-analysis, for example, shows significant effect sizes across a broad range of mental health concerns and crucially finds "There was no significant differences between the effects of bibliotherapy and therapist-administered treatments, as well as no significant erosion of effect sizes at follow-up."

This finding has been pretty robust – there are studies of bibliotherapy not just for depression and anxiety but also for sexual dysfunction and disordered eating, as well as studies of bibliotherapy for Chinese speakers, older adults, and many other populations. To my knowledge, every major study tends to find that books perform more or less as well as therapists (and, incidentally, that therapists and books both have meaningful positive effects).

To my mind this is one of the most important findings in 20th-century psychology, and most people don't know about it. It's an interesting theoretical question how bibliotherapy works – what exactly is the mechanism, why does the therapist contribution seem to be epiphenomenal, etc. – but the main point for this post is that it does.

The Cost

What would it cost to implement a large-scale bibliotherapy intervention? Let's consider a back-of-the-envelope calculation for addressing depression in the United States.

The National Institute of Mental Health reports that, in 2021, 21.0 million American adults experienced at least one depressive episode. This number is drawn from the National Survey on Drug Use and Health, and more recent years indicate similar or increased numbers. Nonetheless, 21.0 million can serve as a reasonable baseline.

What is the cost of intervening on this population? Well, on Amazon, a paperback copy of Feeling Good costs $14.93. If we wanted to implement bibliotherapy at scale, there would be additional costs (such as shipping) as well as countervailing discounts (such as bulk orders). But $15 per person is a reasonable estimate for providing bibliotherapy to a given individual in the United States.

How much then would it cost to provide bibliotherapy to every person in the United States with depression? There are 21 million people, and a book costs $15, so the cost is roughly 21 million × $15 = $315 million.

That is a lot of money. It is useful, however, to contrast it with the cost of some other major mental health interventions. It is roughly on the order of magnitude of running a nationwide suicide hotline. One estimate puts the cost of a suicide hotline at approximately $82 per call (call-center costs only, excluding referrals to emergency services). Estimates vary, but there are plausibly at least 5 million calls to suicide hotlines per year. So the operational cost of suicide hotlines in the United States is at least $410 million per year.
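For transparency, here is the arithmetic behind those two figures as a minimal Python sketch; the per-book and per-call costs are the rough assumptions stated above, not precise estimates:

```python
# Back-of-the-envelope costs, using the rough figures cited above.

# Bibliotherapy: one book per adult with a past-year depressive episode.
adults_with_depression = 21_000_000   # NIMH / NSDUH 2021 estimate
cost_per_book = 15                    # approximate paperback price, USD

bibliotherapy_cost = adults_with_depression * cost_per_book
print(f"Bibliotherapy: ${bibliotherapy_cost:,}")   # ~$315,000,000

# Suicide hotlines: operational (call-center) costs only.
calls_per_year = 5_000_000            # plausible lower bound
cost_per_call = 82                    # estimated call-center cost, USD

hotline_cost = calls_per_year * cost_per_call
print(f"Hotlines:      ${hotline_cost:,}")         # ~$410,000,000
```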

Suicide hotlines are of course very important and this is not an argument against them. Rather it is an argument that even this very extreme form of bibliotherapy – simply sending a book to everyone with depression – is well within the range of costs for mental health interventions that we are doing already. 

The Effects

What would the effects of such an intervention be? This part of the case is necessarily more speculative, but there is a reasonable case that this intervention would (i) have a meaningful impact on depression and (ii), in virtue of (i), have a substantive effect on overall well-being – indeed, a larger effect per dollar than an unconditional cash transfer.

The case for (i) involves a few steps. First, as indicated above, multiple studies show that bibliotherapy has a significant effect on various mental health conditions. For depression specifically, a 2004 meta-analysis indicates an effect size of d = 0.77, and other studies have tended to find similar values.

What does that mean in terms of actual impact on depression? Since d is defined in terms of standard deviations, this means a 0.77 standard deviation improvement in depression. What exactly this looks like will depend on which depression scale we use. If we use the PHQ-9 – not the definitive depression scale, but one that many will be familiar with from primary care and other settings – this translates to about a 3 to 5 point decrease in PHQ-9 score (since the standard deviation of the PHQ-9 appears to range from around 4 to 7, depending on the population in question).

What effect would this have on overall well-being? I'm not aware of a specific answer to this question, and this gets us into some philosophical questions about the relationship between mental health and well-being. Studies do confirm the intuitive result that your mental health makes a big difference to your overall well-being. As a first approximation, let's assume that a 1 standard deviation change in mental health is associated with a 0.5 standard deviation change in overall well-being. (If that is too high, then all the results below can be scaled down accordingly – as we'll see, the impact is large enough that it remains significant under much weaker assumptions.)

On this assumption, we might expect a 0.77 SD improvement in mental health to yield a roughly 0.39 SD improvement in overall well-being. What this looks like will depend on our measure of well-being. If we use a standard 10-point scale of subjective well-being, which is known to have an SD of around 2, this will be about a 0.77-point change in reported life satisfaction. Very roughly, someone who rated their life around a 4/10 would, under this intervention, be expected to rate it around a 5/10.
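Here is the same chain of conversions as a short sketch; the 0.5 mental-health-to-well-being coefficient and the SD values are the assumptions flagged above, not established facts:

```python
# Converting a bibliotherapy effect size into more concrete units.
effect_size_d = 0.77        # meta-analytic effect on depression, in SDs

# PHQ-9: SD varies by population, roughly 4 to 7 points.
phq9_sds = (4, 7)
low, high = (round(effect_size_d * sd, 1) for sd in phq9_sds)
print(f"PHQ-9 improvement: ~{low} to {high} points")

# ASSUMPTION: 1 SD of mental health ~ 0.5 SD of overall well-being.
mh_to_wb = 0.5
wb_sd_change = effect_size_d * mh_to_wb        # ~0.39 SD
ls_scale_sd = 2                                # SD of a 0-10 life-satisfaction scale
ls_points = wb_sd_change * ls_scale_sd         # ~0.77 points
print(f"Life satisfaction: ~{wb_sd_change:.2f} SD, ~{ls_points:.2f} points")
```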

That is a huge change. It is in the neighborhood of the changes that have been reported, for example, in basic income experiments. One notable experiment in Germany provided a basic income to individuals and found a 0.417 SD improvement in life satisfaction. But that program sent €1,200 per month to individuals for 36 months, at a cost of approximately $50,000 per person. In contrast, as we have said, the present intervention costs approximately $15 per person. Which is to say that the bibliotherapy intervention is roughly 3,000x more effective on a per-dollar basis.
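The per-dollar comparison, under the same assumptions (the German pilot figures are approximate), looks roughly like this:

```python
# Well-being gained per dollar: bibliotherapy vs. a basic-income pilot.
biblio_sd_gain, biblio_cost = 0.39, 15                 # SD of well-being, USD per person
basic_income_sd_gain, basic_income_cost = 0.417, 50_000

ratio = (biblio_sd_gain / biblio_cost) / (basic_income_sd_gain / basic_income_cost)
print(f"Bibliotherapy is ~{ratio:,.0f}x more effective per dollar")   # roughly 3,000x
```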

There are enough conjectures here that we should have low confidence in any particular number. In particular, this reasoning relies on an assumption about the relationship between mental health and well-being, though this assumption is, I think, plausible and directionally correct. But these considerations tell in favor of the view that a large-scale bibliotherapy intervention could make a meaningful difference in the lives of many people, and that it could do so at a level of cost-effectiveness that is basically unprecedented in this cause area.

Questions/Objections

Wait, how would you know who does or doesn't have depression?

You don't, and indeed shouldn't – that is private information. The program sketched above is one that cannot be implemented as described, and shouldn't be, for general ethical reasons. It should really be thought of as a thought experiment: if this could be done, it would make a massive difference. So we should think about more selective and ethically constrained versions of doing the same thing.

What would this look like? At first pass, one intervention would allow people to opt in. Anyone who self-identifies as needing bibliotherapy would be sent a free book. This would lead to some false positives, as well as some opportunities for graft (e.g. reselling books), but in general it does not seem that these would swamp the significant benefits outlined above.

What if people don't read the books?

This is a fair concern. Bibliotherapy is a low-monitoring intervention generally, but this would be super low monitoring. Many people would probably just throw the books away, or be puzzled. So this would significantly reduce the impact of the program.

A couple of points, however.

First, the books will be effective enough for the people who do read them that low uptake would still be worth it. If, for example, 10% of recipients actually read the books, this would still be a hugely impactful program. The strategy of simply giving people things and letting them use them, or not, as they will is familiar in other areas of development. People are, after all, generally interested in using resources that will help them, so simply providing those resources unconditionally is often the most reasonable strategy.

Second, the more real-world version of this program – the opt-in version – would be subject to this problem to a significantly reduced degree, on the supposition that people are more likely to read books that they asked for.

How does this compare to other interventions, especially other mental health interventions?

That's a great question. The comparison to basic income interventions above is just one case. There are lots of interventions, and it is reasonable to ask how bibliotherapy compares to these.

There is a large philosophical question here – how do we compare mental health interventions (such as psychotherapy or bibliotherapy) to non-mental health interventions (such as bednets or unconditional cash transfers)? The Happier Lives Institute has done a lot of work on this question. There are also empirical questions, which are briefly but not fully addressed in the cost section above.

In lieu of an answer, here is a conjecture: a targeted bibliotherapy program would be at least as cost-effective as any other mental health intervention. The reasoning behind this is the extremely low cost of bibliotherapy compared to other mental health interventions (which often involve training and other resource-intensive activities), and its demonstrated high effectiveness. But, as I say, this remains a conjecture.

What about people who can't read? What about people outside of the US?

These are obviously separate questions. Let us take them in turn.

First, literacy. It is easy to exaggerate the scale of the problem. The global adult literacy rate is about 86%, and even the region with the lowest literacy rate (sub-Saharan Africa) has an adult literacy rate of about 68%. While many people cannot read, and that is a major development concern, bibliotherapy would still be an appropriate intervention for a significant majority of the adult population anywhere in the world.

It is true that bibliotherapy might take different forms in different places. Feeling Good has a distinctively developed-world frame of reference, and it might well be that other texts would be more appropriate for other places. But literacy per se does not pose an obstacle to the widescale dissemination of bibliotherapy, even in places with relatively low literacy rates.

That brings us to the question of place. My initial example focuses on the United States, partly because we have good data and relatively efficient modes of book delivery (such as Amazon). But ultimately I think the case for bibliotherapy is strongest and most urgent in other places.

Consider, for example, India. The WHO estimates that 1 in 20 people in India (roughly 70 million people) require treatment for depression. Yet India's mental health facilities are significantly under-resourced: there are 7.5 psychiatrists for every million people in India, as opposed to well over 100 psychiatrists per million in the United States. There are amazing NGOs, such as Sangath, working to address these facts, but bibliotherapy is potentially a meaningful addition to this work. (The adult literacy rate in India, incidentally, is estimated to be around 75%.) Similar considerations apply to many countries in South Asia and sub-Saharan Africa. Though my thought experiment concerned the US, I think it is these countries where bibliotherapy could have the most significant impact.

(Image Credit: Eugenio Mazzone, Unsplash)


Comments

G’day, welcome to the Forum!

I help lead a highly cost-effective EA bibliotherapy charity in India. I agree with most of your points, and in fact, Charity Entrepreneurship’s original report into cost-effective psychotherapy interventions recommended buying physical books and distributing in much the same way you suggest. My charity, Kaya Guides, was incubated from this idea by CE in 2022. We have learned a lot since then, that might add some colour to your post:

  1. Apps are much cheaper than physical books: We deliver our intervention over WhatsApp, which, at scale, will cost us a maximum of like, 20c per participant. The only reason not to do this is in internet-poor countries, but India has a high and rapidly accelerating rate of internet adoption among their under-resourced depressed population.

  2. You don’t need to distribute randomly, just use digital ads: We recruit new participants for ~$0.60 via Instagram ads that bounce users directly to WhatsApp. Meta are a trillion dollar company because they are extremely good at identifying people who might want to use your product; ~70% of users who make it to our depression questionnaire screen for depression.

  3. Adding weekly calls likely doubles to quadruples your adherence: The bibliotherapy you’re referring to, as far as I can infer from your cost model, is also known as ‘unguided self-help’. I should note that the meta-analyses you link are mostly of ‘guided self-help’, which is when the books are paired with regular contact with a therapist. Guided self-help has indistinguishable effect sizes from individual and group therapy. You may be interested in this 2019 meta-analysis which looks at 155 RCTs of different delivery formats for CBT, which finds that, relative to care as usual, guided self-help has double the effect size of unguided self-help. The reasons why aren’t perfectly understood, but the general belief is that the therapists don’t provide much in the way of extra help, but just provide a sense of accountability that helps participants make their way through the entire curriculum. See this study, which found that ‘human encouragement’ had significant effects on retention, but no unique-to-human element had a significant effect on effect sizes directly.

FWIW, I wrote about the difference between unguided and guided self-help previously on the Forum; the main things that have changed in my thinking since is that (a) you can get the cost of counsellors down further than previously thought, including if you augment them with AI, and (b) the adherence of unguided interventions seems lower than I thought. Anyway, we’ll be doing an RCT on this soon hopefully to put the issue to bed a bit more thoroughly :)

I also have updated estimates of the cost-effectiveness relative to cash transfers. Using the Happier Lives Institute’s methodology, we probably create around 1.28 wellbeing-years of value per participant (including spillovers to other members of their household), which is a bit less than top group therapy charities such as StrongMinds and Friendship Bench. But we currently treat people for around $50 per person, should reach ~$20 per person next year, and ~$5–7 per person at scale (I intend to write a post on this soon). Cash transfers, meanwhile, create about 9.22 wellbeing-years when administered by GiveDirectly, and cost about $1,200.
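To make the comparison concrete, here is a rough sketch of the implied wellbeing-years per $1,000, using the figures above (the at-scale cost is a projection, not a measured number):

```python
# Wellbeing-years (WELLBYs) per $1,000, rough figures from the comment above.
interventions = {
    "Kaya Guides (current, ~$50/person)": (1.28, 50),
    "Kaya Guides (at scale, ~$6/person)": (1.28, 6),     # projected, not measured
    "GiveDirectly cash transfer":         (9.22, 1_200),
}

for name, (wellbys, cost) in interventions.items():
    print(f"{name}: {wellbys / cost * 1_000:.1f} WELLBYs per $1,000")
```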

Hi Huw,

This is very helpful! A couple of quick notes:

Yes, I elided the distinction between guided and unguided. The intervention I described was maximally unguided, while most studies involve more guidance. It seems to me the 2019 meta-analysis you cite is unusually pessimistic about the unguided case (compare e.g. this 2024 meta-analysis), but I agree that most of the evidence supports some form of guidance. Really this is a spectrum, and the question is what the maximally cost-effective point on the unguided-to-guided spectrum is. Your earlier post on this makes a lot of sense to me.

I am also interested in your preference for apps over books. They are cheaper. You say "the only reason not to do this" is internet connectivity issues. But some research indicates user preference for print media over digital media, e.g. this 2022 study in a US population. Do you think switching from print to apps lowers effectiveness (but is justified by cost savings) or that it is of the same (or greater) effectiveness?

Tong et al. (2024)

The devil’s in the details here. The meta-analysis you cite includes an overall estimate for unguided self-help which aggregates over different control condition types (waitlist, care-as-usual, others). When breaking down by control condition, and adding Karyotaki et al. (2021), which looks at guided vs unguided internet-based studies:

  • Waitlist control
    • Cuijpers et al. (2019): 0.52 (Guided: 0.87)
    • Karyotaki et al. (2021): 0.6 (Guided: 0.8)
    • Tong et al. (2024): 0.71
  • Care as usual control
    • Cuijpers et al. (2019): 0.13 (Guided: 0.47)
    • Karyotaki et al. (2021): 0.2 (Guided: 0.4)
    • Tong et al. (2024): 0.35

Now, Tong et al. (2024) does seem to find higher effects in general, but 32% of the interventions studied included regular human encouragement. These conditions found effect sizes of 0.62, compared to 0.47 for no support (I wish these would disaggregate against control conditions, but alas).

Tong has significantly more self-guided studies than Cuijpers. When both limit to just low risk-of-bias studies, Tong reports an effect size of 0.43 (not disaggregated across controls, unfortunately). Cuijpers reports 0.44 for waitlist controls, and 0.13 for care as usual, for the same restriction. So Tong has included more high risk-of-bias studies, which is quite dramatically lifting their effect size.

Now, as Cuijpers and Karyotaki are both co-authors on the Tong analysis, I’m sure that there’s value in including those extra studies, and Tong probably makes me slightly update on the effectiveness of unguided self-help. But I would be wary about concluding that Tong is ‘normal’ and Cuijpers is ‘unusually pessimistic’; probably the inverse is true.

(All up, though, I think that it’s quite likely that unguided interventions could end up being more optimally cost-effective than guided ones, and I’m excited to see how this space develops. I’d definitely encourage people to try and pursue this more! I don’t think the case for unguided interventions rests on their relative effectiveness, but much more on their relative costs.)

Apps vs books

I don’t have a strong sense here. The Jesus-Romero study is good and persuasive, but to be convinced, I’d want to see a study of revealed preferences rather than stated ones. To illustrate, one reason we think apps might be better is because our digital ads reach users while they’re on media apps like Instagram, which is a behaviour people are probably likely to engage in when depressed. I think there’s probably a much lower activity threshold to click on an ad and book a call with us, than there is to remember you ordered/were sent a book and to do an exercise from it.

Regardless, it’s likely that digital interventions are much cheaper (again, probably about $1 per participant in engineering vs. ~$5 (??) for book printing and delivery, assuming both interventions spend $1 on social media), and can scale much faster (printing and delivery requires a lot of extra coordination and volume prediction). There’s a good reason many for-profit industries have digitised; it’s just much cheaper and more flexible.

On the meta-analyses: that seems fair. I think my initial thought was just that the Cuijpers seemed very low relative to my priors, and the Tong seemed more in line with them. But maybe my priors are wrong! I take your point that the Tong may be too high because of how widely it casts the "unguided" net, though it still does find some meaningful difference. But on the main point I think we're in agreement: guided > unguided, and the case for unguidedness, if there is one, will depend on its relative cost-effectiveness.

On apps v. books: I think there are so many potentially countervailing effects here it's hard to trust my intuitive judgments. I see the consideration you cite, but on the other hand (I would guess) someone on a phone is more likely to defect away from self-help and to use their phone for all the other things that phones can be used for. It would be great to have more studies here. There are a few RCTs comparing print with courses delivered via the internet on a desktop/laptop, which seem to find little difference either way, but these studies are very sparse, and they're at some remove from the question of comparing self-help delivered via a printed book with self-help delivered via WhatsApp.

I take the point about cost-effectiveness. Certainly the tendency in the for-profit space has been digitization. But here too there's a countervailing consideration. Digitized self-help is a natural fit for the for-profit space, since a product that can be monetized in various ways (subscriptions, advertising) and produced at zero marginal cost offers an attractive business model. But books do not fit that model. So perhaps one role for NGOs in this space may be supporting interventions which are known to be effective but whose financials are less promising, and perhaps self-help books are a case of this.

I suspect the synthesis here is that unguided is very effective when adhered to, but the main challenge is adherence. The reason to believe this is that there is usually a strong dosage effect in psychotherapy studies, and that the Furukawa study I posted in the first comment found that the only value humans provided was for adherence, not effect size.

Unfortunately, this would then cause big problems, because there is likely a trial bias affecting adherence, potentially inflating estimates by 4× against real-world data. I’m surprised that this isn’t covered in the literature, and my surprise is probably good evidence that I have something wrong here. This is one of the reasons I’m keen to study our intervention’s real-world data in a comparative RCT.

You make a strong point about the for-profit space and relative incentives, which is partly why, when I had to make a decision between founding a for-profit unguided app and joining Kaya Guides, I chose the guided option. As you note, the way the incentives seem to work is that large for-profits can serve LMICs only when profit margins are competitive with expanding further in HICs. This is the case for unguided apps, because translation and adaptation is a cheap fixed cost. But as soon as you have marginal costs, like hiring humans (or buying books, or possibly, paying for AI compute), it stops making sense. This is why BetterHelp have only now begun to expand beyond the U.S. to other rich countries.

But I think you implicitly note—if one intervention has zero marginal cost, then surely it’s going to be more cost-effective and therefore more attractive to funders? One model I’ve wondered about for an unguided for-profit is essentially licensing its core technology and brand to a non-profit at cost, which would then receive donations, do translations, and distribute in other markets.

Executive summary: This exploratory post argues that bibliotherapy—using self-help books like Feeling Good to treat mental health conditions—is a cost-effective, evidence-supported, and underutilized intervention that could significantly improve well-being at scale, especially in low-resource settings.

Key points:

  1. Robust evidence base: Meta-analyses and studies across populations show that bibliotherapy can be as effective as therapist-administered treatments for conditions like depression and anxiety, with lasting effects.
  2. Extremely low cost: A back-of-the-envelope estimate suggests that sending a $15 book to every depressed adult in the U.S. would cost ~$315 million—comparable to the annual cost of suicide hotlines, but potentially far more impactful per dollar.
  3. Potential for high impact on well-being: Assuming standard effect sizes, bibliotherapy could improve life satisfaction nearly as much as costly interventions like basic income—at a fraction of the cost, potentially making it 3,000x more cost-effective.
  4. Scalable, even globally: Literacy rates are sufficiently high worldwide (e.g., 75% in India), making bibliotherapy a plausible intervention in many low-income countries where mental health services are scarce.
  5. Design and ethical considerations: While mass unsolicited book distribution may be impractical, opt-in models could retain much of the benefit with fewer downsides like waste or misuse.
  6. Conjectured comparative advantage: Though more empirical and philosophical work is needed, the author tentatively suggests bibliotherapy could rival or outperform other mental health interventions due to its unique combination of low cost and proven effectiveness.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

I agree with Huw's assessment re: books vs digital vs digital + guide. Here are a few less-discussed reasons why, hastily scribbled:

  1. Recruitment and retention costs: The cost of delivering a very cost-effective therapy is often lower than the cost of convincing someone to seriously give it a go. People don’t really want to just read a book or just use an app; they overwhelmingly want to talk to a real person. It can therefore be cheaper to recruit and retain people when a person is involved.
  2. Misinterpretation of non-significance: Psychologists often present their findings as though statistically non-significant differences should be ignored. Sometimes this results in treating effect sizes of 0.3 and 0.6 as if they’re identical, leading to conclusions like “we found no significant differences between guided and unguided…”. Nobody has time to read the whole literature, so people skim—and can come away thinking there’s no real difference, when in practice it may be more like a 30–100% difference in effectiveness.
  3. Greater publication bias in unguided RCTs: It’s insanely cheap to do RCTs on unguided interventions because the cost of delivery is near zero and logistics are simple. Since it’s usually the researcher or funder who developed the treatment, they’re unlikely to publish the mediocre results. What gets published instead are lots and lots of positive findings, creating a skewed picture where unguided looks consistently effective.
  4. Retention IRL: Despite most mental health apps showing >50% completion in RCTs, they retain only ~1–3% of real users that long. Guided self-help interventions retain an order of magnitude more. You thus need to recruit an order of magnitude more users to treat the same number of people. This not only undermines their cost-effectiveness, but also drives up recruitment costs for everyone else. Plus, a lot of people try something that doesn’t work for them, have their time and effort wasted, become more jaded, and are harder to convince to try again later.

All that being said, I think we focus far too much on differences between treatments and far too little on differences between clients. The latter explains roughly 4× more variance than the former, yet accounts for <1% of the research published.

Thanks for this reply! In general I agree on the effectiveness of guidance, as in my response to Huw above. The publication bias issue (3) is one that I hadn't thought about enough and may well distort our evidence on some of these questions.
