I am a researcher at the Happier Lives Institute. In my work, I assess the cost-effectiveness of life-improving interventions in terms of subjective well-being; right now I'm comparing psychotherapy, cataract surgery, and cash transfers. Hopefully we can improve institutional decision-making by increasing our confidence in which measures of well-being are most accurate, without triggering hyphen-inflation.



What questions relevant to EA could be answered by surveying the public?

I agree that moral uncertainty implies it's a good idea to know what people's moral views are. 

Related to your last point: 

given uncertainty about what promotes this / measurement error in our measures of it (and perhaps also conceptual uncertainty about what it consists in, though this may collapse into moral uncertainty), I think it's plausible we should assign some weight to what people say they prefer in judging what is likely to promote their wellbeing.

Many EAs want to maximize wellbeing, and many pursue that aim using evidence. Given that, I'd be curious to know how views differ among experts, EAs, and the public on what wellbeing is and how we can measure it. I wrote a very rough example of the type of questions I could imagine asking in this document.  

What questions relevant to EA could be answered by surveying the public?

I would be interested to know the results of such a survey on these topics.  

Similarly, if experimental philosophy hasn’t already answered these questions, then I’d like to know if the public has any coherent views on “what wellbeing is, and what’s the badness of death?” I haven’t found anything in the literature that I could use, but I’m not very familiar with the research in this space. I think David has mentioned there being some extant literature surveying views on the badness of death, but I was not able to find it.

I know, David, that you said: 

I mostly intend to rule out things like surveying effective altruists or elite policy-makers.

But if we survey the public's moral views, I'd like to know how much they differ from EAs' views, at the very least for communications purposes.

How big are the intra-household spillovers for cash transfers and psychotherapy? Contribute your prediction for our analysis.

Hi there,

Thank you for bringing this to my attention. I should have edited the form to allow only one answer per column and multiple answers per row.

Comments for shorter Cold Takes pieces

I would be interested in reading a summary of real utopias if one is available.

Donating money, buying happiness: new meta-analyses comparing the cost-effectiveness of cash transfers and psychotherapy in terms of subjective well-being

Hi Derek, thank you for your comment and for clarifying a few things.

  1. Time discounting: We will revisit time discounting when looking at interventions with longer timescales. To be clear, we plan to keep these analyses up to date as we refine our models and analyse new interventions. 
  2. Costs: You’re right, expenses in an organisation can be lumpy over time. If costs were high in all previous years but low in 2019, and we only used the 2019 figures, we'd probably mispredict future costs. I think a reasonable way to account for this is to treat an organisation's cost as an average over previous years, giving progressively more weight to years closer to the present. 
  3. Depression data: Thanks for the clarification; I think I understand better now. We make a critical assumption that a one-unit improvement in depression scales corresponds to the same improvement in well-being as a one-unit change in subjective well-being scales. If SWB is our gold standard, we can ask if depression scale changes predict SWB scale changes. Our preliminary analyses suggest that the difference here would, in any case, be pretty small. For cash transfers, we found the 'SWB only' effect would be about 13% larger than the pooled 'SWB-and-MH' effect (see page 10, footnote 16). To assess therapy, we looked at some psychological interventions that had outcome measures in SWB and MH and found the SWB effect was 11% smaller (see p27-8). We'd like to dig further into this in the future. But these are not result-reversing differences.
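To illustrate point 2, the weighting could look something like the sketch below. The cost figures and the decay weight are invented placeholders for illustration, not StrongMinds' actual costs or any number from our reports:

```python
# Sketch: estimate an organisation's "typical" annual cost as a weighted
# average of past years, with weights growing toward the present.
# All figures below are illustrative placeholders, not real data.

def weighted_cost(costs_by_year, decay=0.7):
    """costs_by_year: dict of {year: cost in USD}.

    A year's weight is decay**(years before the most recent year),
    so decay=0.7 means each step back in time counts 30% less.
    """
    latest = max(costs_by_year)
    weights = {year: decay ** (latest - year) for year in costs_by_year}
    total_weight = sum(weights.values())
    return sum(costs_by_year[y] * weights[y] for y in costs_by_year) / total_weight

# Hypothetical lumpy costs: high in earlier years, low in the latest year.
costs = {2016: 900_000, 2017: 850_000, 2018: 800_000, 2019: 400_000}
print(round(weighted_cost(costs)))
```

The result sits between the 2019 figure and the earlier years' figures, pulled toward the recent, lower number, which is the behaviour we'd want from such an average.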

Donating money, buying happiness: new meta-analyses comparing the cost-effectiveness of cash transfers and psychotherapy in terms of subjective well-being

Hi Derek, it’s good to hear from you, and I appreciate your detailed comments. You suggest several features we should consider in the next version of these analyses and in our next intervention comparison. I think trying to test the robustness of our results to more fundamental assumptions is where we are likeliest to see our uncertainty expand. But I moderately disagree that adapting our model to do so is straightforward. I’ll address your points in turn.   

  • Time discounting: We omitted time discounting because we only look at effects lasting ten years or less. Given our limited time, adding a section discussing time discounting did not seem worth the effort. It’s worth noting that adding time discounting would only make psychotherapy look better, because cash transfers’ benefits last longer.
  • Cost of StrongMinds: We include all costs StrongMinds incurs. The cost is "total expenditure of StrongMinds" / "number of people treated". We don't record any monetary cost to the beneficiary. If an expense to a beneficiary is bad because it decreases their wellbeing, we expect subjective well-being to account for that.
  • Only depression data? We have subjective well-being and mental health measures for cash transfers, but only the latter for psychotherapy. We discuss why we don’t think differences between MH and SWB measures will make much difference in section 3.1 of the CT CEA and Appendix A of the psychotherapy report. Section 4.4 of the psychotherapy report discusses the literature on social desirability/experimenter demand (which I take to be what you’re pointing to with your concern about “loading the dice”). The limited evidence suggests, perhaps surprisingly, that people don’t seem very responsive to the perceived demands of the experimenter, either in general or in LMIC settings.
  • Spillovers: We are working on updating our analysis to include household spillovers. We discuss the intra-village spillovers in the cost-effectiveness analysis and the meta-analysis. I think we agree that the community spillovers do not appear likely to be influential.
  • Sensitivity / robustness: You are correct that we haven't run as many robustness tests as we could have. These seem like reasonable candidates to consider in an updated version of the CEA comparison. Adding these tests can be conceptually straightforward and sometimes quick to run. I especially think it’d be good to add another frame of the cost-effectiveness analysis that outputs the likelihood of surpassing the 5x-8x bar.
    • On the other hand, adding robustness checks for model-level assumptions seems like it could take a decent amount of time. In my view, it doesn't seem straightforward to operationalise, for example, moral views, the value of information, reasonable bounds for discount rates, or the differences in “conversion rates” between MH and SWB data. But maybe we should be more willing to make semi-uninformed guesses at the ranges of these values and include them in our robustness tests.  
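The "likelihood of surpassing the bar" frame mentioned above could be implemented as a simple Monte Carlo check along these lines. Every distribution and number below is an invented placeholder, not an estimate from our reports:

```python
# Sketch of a Monte Carlo robustness check: draw the cost-effectiveness of
# an intervention and a cash-transfer benchmark from uncertain distributions,
# then report the share of draws in which the ratio beats a bar (here, 5x).
# All distributions below are invented placeholders, not HLI's estimates.
import random

random.seed(0)

def draw_ratio():
    # Well-being units per $1000 for each option; placeholder distributions.
    therapy = random.gauss(10.0, 3.0)
    cash = random.gauss(1.7, 0.4)
    return therapy / max(cash, 0.1)  # clip the denominator to stay positive

draws = [draw_ratio() for _ in range(10_000)]
p_beats_5x = sum(r > 5 for r in draws) / len(draws)
print(f"P(ratio > 5x) ≈ {p_beats_5x:.2f}")
```

The appeal of this frame is that it reports a probability of clearing the funder's bar rather than a single point estimate, which makes the decision-relevant uncertainty explicit.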
How can we make Our World in Data more useful to the EA community?

Hi Ed,

Here are some imaginary fruit:

1. At the Happier Lives Institute, we would be very interested to see something like the Global Burden of Disease, but for suffering. What are the largest sources of unhappiness across the world? 

2. It would be useful if OWID summarized the results of studies on important topics. That is, if it collected and visualized meta-analytic information from databases like AidGrade or MetaPsy.

Donating money, buying happiness: new meta-analyses comparing the cost-effectiveness of cash transfers and psychotherapy in terms of subjective well-being

Hi Michael, 

I try to avoid the problem by discounting the average effect of psychotherapy. The point isn’t to find the “true effect”. The goal is to adjust for the risk of bias in psychotherapy’s evidence base relative to the evidence base of cash transfers. We judge the CT evidence to be higher quality. Psychotherapy has smaller sample sizes on average and fewer unpublished studies, both of which are associated with larger effect sizes in meta-analyses (MetaPsy, 2020; Vivalt, 2020; Dechartres et al., 2018; Slavin et al., 2016). FWIW, I discuss this more in Appendix C of the psychotherapy report.
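Mechanically, the adjustment amounts to multiplying the pooled effect by a discount factor. The numbers below are made-up placeholders, not the figures used in the report:

```python
# Sketch: discount psychotherapy's pooled effect for its higher risk of bias
# relative to the cash-transfer evidence base. Both numbers are placeholders.
raw_effect = 0.50       # pooled post-treatment effect (in SDs), placeholder
bias_discount = 0.89    # multiplier < 1 reflecting extra risk of bias, placeholder
adjusted_effect = raw_effect * bias_discount
print(adjusted_effect)
```

The substantive work, of course, is in justifying the size of the discount, which is what the appendix discussion is about.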

I should note that the tool I use needs further development. Detecting and adjusting for the bias present in a study is a more general problem in social science.   

I do worry about the effect sizes decreasing, but the hope is that the cost will drop to a greater degree as StrongMinds scales up. 

We say "post-treatment effect" because it makes clear which time point we are discussing. "Treatment effect" could refer either to the post-treatment effect or to the total effect of psychotherapy, where the total effect is the decision-relevant one.

Donating money, buying happiness: new meta-analyses comparing the cost-effectiveness of cash transfers and psychotherapy in terms of subjective well-being

Brian, I am glad to see your interest in our work! 

1.) We have discussed our work with GiveWell. But we will let them respond :).

2.) We're also excited to wade deeper into deworming. The analysis has opened up a lot of interesting questions. 

3.) I’m excited about your search for new charities! Very cool. I would be interested to discuss this further and learn more about this project. 

4.) You’re right that in the case of both CTs and psychotherapy, we estimate that the effects eventually decline to zero. We show the trajectory of StrongMinds’ effects over time in Figure 5. I think you’re asking if we could interpret this as an eventual tendency towards depression relapse. If so, I think you’re correct, since most individuals in the studies we summarize are depressed, and relapse seems very common in longitudinal studies. However, it’s worth noting that this is an average: some people may never relapse after treatment, and some may simply experience no effect. 

5.) I'll message you privately about this for the time being. 

6.) In general we hope to get more people to make decisions using SWB. 

7.) I am going to pass the buck on commenting on this :P. The decision will depend heavily on your view of the badness of death for the person dying and on whether the world is over- or underpopulated. We discuss this a bit more in our moral weights piece. In my (admittedly limited) understanding, the goodness of improving the wellbeing of presently existing people is less sensitive to the philosophical view you take. 

