I am a senior researcher at the Happier Lives Institute. I completed my PhD in Social Psychology at the University of British Columbia.
Psychology, happiness and wellbeing, cash transfers
Our (HLI's) comment was in reference to these quotes:
The literature on PT in LMICs is a complete mess.
Trying to correct the results of a compromised literature is known to be a nightmare.
I think it is valid to describe these as saying the literature is compromised and (probably) uninformative. I can understand your complaint about the word “bunk”. Apologies to Gregory if this is a mischaracterization.
Regarding our comment:
If one insisted only on using charity evaluations that had every choice pre-registered, there would be none to choose from.
And your comment:
I don't think anyone has claimed lack of certain choices being pre-registered is somehow fatal, only a factor to consider.
Yeah, I think this is a valid point, and the post should have quoted Gregory directly. The point we were hoping to make here is that we’ve attempted to provide a wide range of sensitivity analyses throughout our report, to an extent that we think goes beyond most charity evaluations. It’s not surprising that we’ve missed some in this draft that others would like to see. Gregory’s comment that “Even if you didn't pre-specify, presenting your first cut as the primary analysis helps for nothing up my sleeve reasons” seemed to us to imply that we were deliberately hiding something, but in retrospect our interpretation was overly pessimistic.
Cheers for keeping the discourse civil.
Thanks for the suggestions. Yes, we have been in touch with Effective Thesis, and they have listed our agenda on their website here.
We have 4 researchers working on these projects internally full-time. I can think of at least 5 external collaborators we are actively working with, but the number may be slightly higher.
Our takeaway from these data is that there is no evidence of an effect (positive or negative).
We take these data to be our best guess because there are no prior studies of the effect of deworming on SWB, and the evidence of impact on other outcomes is very uncertain. However, all the effects are non-significant. We don’t have a theory of action because we think the overall evidence points to there being no effect (or at least just a very small one).
We ran the cost-effectiveness analysis as an exercise to see how deworming would look if we took the data at face value. The point estimate was negative, but the confidence interval was so wide that the results were essentially uninformative, which converges with our conclusion that there is no substantial effect of deworming on long-term wellbeing.
That being said, we can make assumptions that are favorable to deworming, such as assuming the effect cannot be negative. This, of course, involves overriding the data with prior beliefs — prior beliefs that we lack strong reasons to hold. In any case, we explore the results under these favorable assumptions in Appendix A2. In all plausible cases, deworming is still less cost-effective than StrongMinds, so even these exploratory analyses —which, again, we don’t endorse— don’t change our conclusion to not recommend deworming over StrongMinds.
Trying to draw conclusions from such a dramatically underpowered study (with regard to this question) strikes me as absurd.
It is unclear what evidence you are using to claim the study is underpowered. As Joel mentioned in his comment to MichaelStJules (reposted below), we had 98% power to detect effect sizes of 0.08 SDs, the effect size that would make deworming more cost-effective than StrongMinds.
Assuming (generously) that the effects of deworming couldn't become negative, the initial effect would need to be 0.08 SDs for deworming to beat StrongMinds, which we had 98% power to detect.
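For readers who want to sanity-check the power figure, here is a minimal sketch of the arithmetic. The design details (a two-sided, two-sample comparison of means at α = 0.05, approximated with normal quantiles) are my assumptions for illustration, not specifics taken from the report; it solves for the sample size implied by 98% power at 0.08 SDs rather than reproducing the actual analysis.

```python
# Sketch: per-group sample size implied by 98% power to detect
# d = 0.08 SDs, assuming a two-sided two-sample test at alpha = 0.05
# with a normal approximation. These design details are assumptions,
# not figures from the HLI report.
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.98):
    """Per-group sample size for a two-sample comparison of means."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = z.inv_cdf(power)           # quantile for desired power
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

print(n_per_group(0.08))  # roughly 5,000 participants per arm
```

Under these assumed design parameters, 98% power at 0.08 SDs corresponds to several thousand participants per arm, which is plausible for a large long-term follow-up study.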