AdamSalisbury

Hi Nick & David,

I wrote this piece and wanted to offer my $0.02 on Hawthorne effects driving these consumption spillover results, since this isn't covered in the report. I don't think Hawthorne effects are likely to be a key driver of the large spillovers reported, for two reasons:

  • To measure consumption spillovers, Egger et al. essentially compares consumption in nearby non-recipient households (e.g. <2km away) to consumption in non-recipient households further away (e.g. 10km). For this to produce biased results, you’d have to think the nearer non-recipients are gaming their answers in a way that the further-away non-recipients aren’t. That seems plausible to me – but it also seems plausible that the further-away non-recipients will still be aware of the program (and so might face similar, counterbalancing incentives).
  • Even if you didn’t buy this, I’m not convinced the bias would run in the direction you’re implying. The program studied in Egger et al. was means-tested – cash transfers were only given to households with thatched roofs. If nearby non-recipients are more likely to be gaming the system, it seems plausible to me that they’d infer poorer households are more likely to get cash, so it would make sense for them to understate their consumption. This would bias the results downward (the sketch after this list illustrates the point with hypothetical numbers).
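
To make the direction-of-bias point concrete, here’s a toy simulation of the near-minus-far comparison – a minimal sketch in which all numbers are hypothetical and not taken from Egger et al. It contrasts the spillover estimate when nearby non-recipients report truthfully with the estimate when they understate their consumption to appear eligible for the means-tested transfer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" monthly consumption (USD) for non-recipient households;
# none of these numbers come from Egger et al.
true_spillover = 5.0                                     # assumed true spillover onto nearby non-recipients
near_true = rng.normal(105 + true_spillover, 15, 5_000)  # non-recipients <2km from recipient villages
far_true = rng.normal(105, 15, 5_000)                    # non-recipients ~10km away

# Spillover estimate under truthful reporting: nearby minus far-away mean consumption.
truthful_estimate = near_true.mean() - far_true.mean()

# Now suppose nearby non-recipients understate consumption by 10% because they
# believe poorer-looking households are more likely to receive the transfer.
near_reported = near_true * 0.9
biased_estimate = near_reported.mean() - far_true.mean()

print(f"Estimate, truthful reporting:  {truthful_estimate:+.2f}")
print(f"Estimate, nearby understating: {biased_estimate:+.2f}")  # pushed downward, possibly negative
```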

Hawthorne effects for recipient consumption gains seem more intuitively concerning to me, and I’ve been wondering whether they could be part of the story behind the large recipient consumption gains at 5-7 years we’ve been sent. We’re not putting much weight on these results at the moment, as they haven’t been externally scrutinized, but it’s something I plan to think more about if/when we revisit them.

Hi Benjamin,

I don't think you're misinterpreting anything on the SMC page.

Children recently discharged after a malaria episode are being used to proxy "children susceptible to malaria death" -- i.e. a specific kind of sickliness. The reason we're not using the entire post-discharge population is that we think that would take us further away from our target population of interest. For instance, a child hospitalized with asthma might have much better survival prospects, so including these children in our sample might lead us to underestimate the persistence of mortality risk through time.

While we use malaria events to zoom in on the population we're interested in, we then look at that population's probability of dying from any cause over the next 6-12 months (i.e. all-cause mortality estimates). This is to capture the idea that children saved from malaria death might succumb to another cause of death (e.g. diarrhoea) soon after -- we think we ought to account for this, and all-cause mortality estimates allow us to do so.
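
As a rough illustration of how this feeds through -- a minimal sketch with made-up inputs, not our actual model parameters -- if some fraction of the children counterfactually saved from malaria death would die of another cause within the following 6-12 months, the durable number of deaths averted gets discounted by that fraction.

```python
# Hypothetical competing-mortality adjustment; both inputs below are assumptions
# for illustration only, not GiveWell's actual figures.
deaths_averted_in_season = 1_000   # children assumed saved from malaria death by SMC
post_discharge_mortality = 0.04    # assumed all-cause mortality over the next 6-12 months
                                   # among recently discharged, malaria-susceptible children

# Some children saved from a malaria death would have died of another cause
# (e.g. diarrhoea) soon after, so the durable benefit is smaller than the
# in-season count.
durable_deaths_averted = deaths_averted_in_season * (1 - post_discharge_mortality)

print(f"In-season deaths averted:          {deaths_averted_in_season}")
print(f"Adjusted for subsequent mortality: {durable_deaths_averted:.0f}")
```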

It's worth reiterating that post-discharge children certainly aren't a perfect proxy for children counterfactually saved by SMC. For example, children might be discharged before they’ve fully recovered (which might overstate the persistence of risk), or they might have gained some natural immunity through their severe episode (which might understate it). In general, I view the model as just one lens on this problem, to sit alongside other lenses such as: i) looking for rebound effects when prophylactic malaria treatment is discontinued; and ii) looking at the long-run survival benefits of bed nets (another preventative malaria tool). The fact that these other two lenses point in a similar direction gives me some reassurance.

Hi Ben,

Yes, that sounds right. If we assume that lives saved across years are completely uncorrelated, the probability of saving the same life in consecutive years is so small (0.25% × 0.25% ≈ 0.0006%) that we can effectively assume 'no overlap' in lives saved. The uncorrelated scenario isn't our best guess, but I think it's a helpful benchmark for thinking through this problem.
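
Spelling out the arithmetic behind that figure, under the independence benchmark:

```python
# 0.25% chance a given life is saved in a given year (the figure from the comment above).
p_saved_per_year = 0.0025

# Under the independence benchmark, the chance the *same* life is saved in two
# consecutive years is the product of the two probabilities.
p_same_life_twice = p_saved_per_year ** 2

print(f"{p_same_life_twice:.8f}")  # 0.00000625
print(f"{p_same_life_twice:.4%}")  # 0.0006%
```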

Thanks for your kind words!