I am a researcher at the Happier Lives Institute. In my work, I assess the cost-effectiveness of interventions in terms of subjective wellbeing.
The updated operationalization of psychotherapy we use in our new report (page 12) is:
"For the purposes of this review, we defined psychotherapy as an intervention with a structured, face-to-face talk format, grounded in an accepted and plausible psychological theory, and delivered by someone with some level of training. We excluded interventions where psychotherapy was one of several components in a programme."
So basically this is "psychotherapy delivered to groups or individuals by anyone with some amount of training".
Does that clarify things?
Also, you should be able to use our new model to calculate the WELLBYs of more traditional 1-to-1 psychotherapy, since we include studies with 1-to-1 delivery in our model. Friendship Bench, for instance, uses that model (albeit with lay mental health workers with relatively brief training). Note that in this update our findings about group versus individual therapy have reversed, and we now find 1-to-1 is more effective than group delivery (page 33). This is a bit of a puzzle since it disagrees somewhat with the broader literature, but we haven't had time to look into this further.
They only include costs to the legal entity of StrongMinds. To my understanding, this includes a relatively generous stipend they provide to the community health workers and teachers who are "volunteering" to deliver StrongMinds programmes, as well as grants StrongMinds makes to NGOs to support their delivery of StrongMinds programmes.
Note that 61% of their partnership treatments are through these volunteer+ arrangements with community health workers and teachers. I'm not too worried about this since I'm pretty sure there aren't meaningful additional costs to consider. These partnership treatments appear to be based on individual CHWs and teachers opting in. I also don't think that the delivery of psychotherapy is meaningfully leading them to do less of their core health or educational work.
I'd be more concerned if these treatments were happening because a higher authority (say, school administrators) was saying "Instead of teaching, you'll be delivering therapy". The costs to deliver therapy could then reasonably be seen to include the teacher's time and the decrease in teaching they'd do.
But what about the remaining 39% of partnerships (representing 24% of total treatments)? These are through NGOs. I think that 40% of these are delivered because StrongMinds is giving grants to these NGOs to deliver therapy in areas that StrongMinds can't reach for various reasons. The other 60% of NGO cases appear to be instances where the NGO is paying StrongMinds to train it to deliver psychotherapy. The case for causally attributing these treatments to StrongMinds seems more dubious, and I haven't gotten all the information I'd like, so to be conservative I assumed that none of the cases StrongMinds claims as its own are attributable to it. This increases the costs by around 14% because it reduces the total number treated by around 14%.
Some preemptive hedging: I think my approach so far is reasonable, but my world wouldn't be rocked if I was later convinced this isn't quite the way to think about incorporating costs in a situation with more decentralized delivery and more unclear causal attribution for treatment.
But 1.14 * 59 is 67, not 63! Indeed. The cost we report is lower than $67 because we include an offsetting 7.4% discount to harmonize the cost figures of StrongMinds (which are more stringent about who counts as treated -- more than half of sessions must be completed) with Friendship Bench (who count anyone receiving at least 1 session as treated). So 59 * (1 - 0.074) * 1.14 is about $63. See page 69 of the report for the section where we discuss this.
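As a quick sanity check, the adjustment chain can be sketched in a few lines. These use the rounded figures quoted in this thread, so the result lands near, not exactly on, the reported $63:

```python
# Sketch of the cost adjustments described above, using the rounded
# figures quoted in this thread (so the result is approximate).
base_cost = 59.0       # reported cost per person treated, in USD
harmonization = 0.074  # discount aligning StrongMinds' stricter "treated" definition with Friendship Bench's
attribution = 0.14     # increase from conservatively excluding the NGO-delivered treatments

adjusted = base_cost * (1 - harmonization) * (1 + attribution)
print(f"${adjusted:.2f}")  # roughly $62-63
```

The small gap from $63 comes from rounding in the inputs; the unrounded figures in the report's calculations close it.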
Good question. I haven't dug into this in depth, so consider this primarily my understanding of the story. I haven't gone through an itemized breakdown of StrongMinds costs on a year by year basis to investigate this further.
It is a big drop from our previous costs. But I originally did the research in Spring 2021, when 2020 was the last full year. That was a year with unusually high costs. I didn't use those costs because I assumed this was mostly a pandemic-related aberration, but I wasn't sure how long they'd keep the more expensive practices, like teletherapy, that they started during COVID (programmes can be sticky). But they paused their expensive teletherapy programme this year because of cost concerns (p. 5).
So $63 is a big change from $170, but a smaller change from $109 -- their pre-COVID costs.
What else accounts for the drop though? I think "scale" seems like a plausible explanation. The first part of the story is fixed/overhead costs being spread over a larger number of people treated, with variable (per person) costs remaining stable. StrongMinds spends at least $1 million on overhead costs (office, salaries, etc.). The more people are treated, the lower the per person costs (all else equal). The second part of the story is that I think it's plausible that variable costs (i.e., training and supporting the person delivering the therapy) are also decreasing. They've also shifted away from a staff-centric delivery model towards using more volunteers (e.g., community health workers), which likely depresses costs somewhat further. We discuss their scaling strategy and the complexities it introduces into our analysis a bit more around page 70 of the report.
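The fixed-cost part of that story can be made concrete with a toy calculation. The numbers here are hypothetical, not StrongMinds' actual figures, apart from the roughly $1 million overhead mentioned above:

```python
# Toy illustration of overhead dilution: the same fixed costs spread
# over more people treated lowers cost per person, holding the
# variable (per-person delivery) cost constant.
def cost_per_person(fixed_costs, variable_cost, n_treated):
    """Overhead share per person plus per-person delivery cost."""
    return fixed_costs / n_treated + variable_cost

# Hypothetical: $1m overhead, $40 variable cost per person.
print(cost_per_person(1_000_000, 40, 20_000))   # 90.0 at 20k treated
print(cost_per_person(1_000_000, 40, 100_000))  # 50.0 at 100k treated
```

If variable costs also fall as the volunteer-heavy model matures, the drop compounds.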
Below I've attached StrongMinds' most recent reporting about their number treated and cost per person treated, which gives a decent overall picture of how the costs and the number treated have changed over time.
The bars for AMF in Figure 2 should represent the range of cost-effectiveness estimates that come from inputting different neutral points, and for TRIA the age of connectedness.
This differs from the values given in Table 25 on page 80 because, as we note below that table, the values there are based on assuming a neutral point of 2 and a TRIA age of connectedness of 15.
The bar also differs from the range given in Figure 13 on page 83 because the lowest TRIA value there has an age of connectedness of 5 years, whereas in Figure 2 (here) we allow it to go as low as 2 years, I believe.
I see that the footnote explaining this is broken, I'll fix that.
When I plug (deprivationism, neutral point = 0; TRIA, neutral point = 0, age of connectedness = 2) into our calculator, it spits out a cost-effectiveness of 91 WELLBYs per $1,000 for deprivationism and 78 for TRIA -- this appears to match the upper end of the bar.
Neat work. I wouldn't be surprised if this ends up positively updating my view on the cost-effectiveness of advocacy work.
What's your take on the possibility that someone could empirically tackle a related issue we also tend to do a lot of guessing at -- the likelihood of $X million spent on advocacy in a certain domain leading to reform?
The prospect of a nuclear conflict is so terrifying I sometimes think we should be willing to pay almost any price to prevent such a possibility.
But when I think of withdrawing support for Ukraine or Taiwan to reduce the likelihood of nuclear war, that doesn't seem right either -- as it'd signal that we could be threatened into any concession if nuclear threats were sufficiently credible.
How would you suggest policymakers navigate such terrible tradeoffs?
How much do you think the risk of nuclear war would increase over the century if Iran acquired nuclear weapons? And what measures, if any, do you think are appropriate to attempt to prevent this or other examples of nuclear proliferation?
Note that a large portion of Somaliland appears occupied by rebels at the moment. But other than that it has indeed been much more peaceful.
“Thank you for the comment. There’s a lot here. Could you highlight what you think the main takeaway is? I don’t have time to dig into this at present, so any condensing would be appreciated. Thanks again for the time and effort.” ??
I believe that large tech companies are, on average, more efficient at converting talent into market cap value than small companies or startups are. They typically offer higher salaries, for one.
This may be true for market cap, but let's be careful when translating this to do-goodery. E.g., wages don't necessarily relate to productivity. Higher wages could also reflect higher rents, which seems plausibly self-reinforcing by drawing (and shelving) innovative talent from smaller firms. A quote from a recent paper by Akcigit and Goldschlag (2023) is suggestive:
"when an inventor is hired by an incumbent, compared to a young firm, their earnings increase by 12.6 percent and their innovative output declines by 6 to 11 percent."
I don't have a good grasp of the literature. Still, the impression I got hanging around economists interested in innovation during my PhD led me to believe the opposite: that smaller firms were more innovative than larger firms, and the increasing size of firms over the past few decades is a leading candidate for explaining the decline in productivity and innovation.
Speaking from my own experience working in a tiny research organisation, I wish I could have started as a researcher with the structure and guidance of a larger organization, but I doubt I'd have pursued research as important if we hadn't tried to challenge other, larger organizations. Do you feel differently with QURI?