I work for a nonprofit focused on building $1B+ philanthropic initiatives/megaprojects. I previously ran some RCTs in East Africa.
Love the clarity of the post, but I agree with Geoffrey that the $ impact/household seems extremely low, and I also don't follow how you get to $1k+/HH (which would be roughly doubling household income).
Back calculating to estimate benefits/household:
I'd guess that's at least part of why you don't see more bean soaking already: the savings are just so modest, unless I've missed something in my calculation.
As you note, behaviour change around cooking practices is also super hard. When I worked at One Acre Fund Tanzania, our 2 biggest failures were introducing clean cookstoves and high-iron beans, both of which people just didn't want to use because of how they conflicted with existing norms, e.g. the colour of the new bean variety "bled" into ugali, making it look dirty.
So the $ benefits would make me skeptical of this as promising but I'm hoping I missed something big in my calculation!
Thanks Chris, that's a cool idea. I will give it a go (in a few days, I have an EAG to recover from...)
One thing I should note is that other comments on this post are suggesting this is well known and applied, which doesn't knock the idea but would reduce the value of doing more promotion. Conversely, my super quick, low-N look into cash RCTs (in my reply below to David Reinstein) suggests it is not so common. Since the approach you suggest would partly involve listing a bunch of RCTs and their treatment/control sizes (so we can see whether they are cost-optimised), it could also serve as a nice check of just how often this adjustment is/isn't applied in RCTs.
For bio, that's way outside of my field; I defer to Joshua's comment here on limited participant numbers, which makes sense. Though in a situation like early COVID vaccine trials, where perhaps you had limited treatment doses and potentially lots of willing volunteers, perhaps it would be more applicable? I guess pharma companies are heavily incentivised to optimise trial costs though; if they don't do it, there'll be a reason!
As a quick data point I just checked the 6 RCTs GiveDirectly list on their website. I figure cash is pretty expensive so it's the kind of intervention where this makes sense.
It looks like most cash studies, certainly with just 1 treatment arm, aren't optimising for cost:
This suggests either 1) there's some value in sharing this idea more, or 2) there's a good reason these economists aren't making this adjustment. Someone on Twitter suggested "problems caused by unbalanced samples and heteroskedasticity", but that was beyond my poor epidemiologist's understanding and they didn't clarify further.
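For anyone who hasn't seen the underlying result, here's a minimal sketch of the cost-optimisation being discussed. Under a standard two-arm RCT with equal outcome variance in both arms, holding precision fixed, total cost is minimised when the ratio of arm sizes equals the square root of the inverse cost ratio. All dollar figures and sample sizes below are invented for illustration, not from any real trial:

```python
import math

def optimal_allocation(n_equal, cost_treat, cost_control):
    """Given a planned equal-split trial (n_equal per arm), return
    (n_treat, n_control) with the same precision at lower cost.

    Precision of the difference in means scales with 1/n_t + 1/n_c,
    so we hold that sum at 2/n_equal while choosing the cost-minimising
    ratio n_t / n_c = sqrt(cost_control / cost_treat)."""
    ratio = math.sqrt(cost_control / cost_treat)  # n_t / n_c
    # Solve 1/n_t + 1/n_c = 2/n_equal with n_t = ratio * n_c:
    n_c = (1 + 1 / ratio) * n_equal / 2
    n_t = ratio * n_c
    return n_t, n_c

# Hypothetical cash RCT: $1,000 per treated household (transfer + survey),
# $50 per control household (survey only), originally 500 per arm.
n_t, n_c = optimal_allocation(n_equal=500, cost_treat=1000, cost_control=50)
cost_equal = 500 * 1000 + 500 * 50
cost_optimal = n_t * 1000 + n_c * 50
print(f"optimal split: {n_t:.0f} treatment, {n_c:.0f} control")
print(f"cost: ${cost_optimal:,.0f} vs ${cost_equal:,.0f} for the equal split")
```

In this made-up example the optimal design uses roughly 300 treatment and 1,400 control households and cuts the budget by about a quarter for the same precision, which is why expensive interventions like cash are where the adjustment matters most.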
Hi Christian-- agreed, but my argument here is really for fewer treatment participants, not smaller treatment doses.
Ah, that's helpful data. My experience in RCTs mostly comes from One Acre Fund, where we ran lots of RCTs internally on experimental programs, or just A/B tests, but that might not be very typical!
Hey Aidan-- that's a good point. I think it will probably apply to different extents for different cases, but probably not to all cases. Some scenarios I can imagine:
Overall, I think cases 2/3/4 benefit from the cheaper study. Scenario 1 seems more like what you have in mind and is a good point, I just think there will be enough scenarios where the cheaper trial is useful, and in those cases the charity might consider this treatment/control optimisation.
Hi Nick-- thanks for the thoughtful post!
I think cash arms make a lot of intuitive sense, my main pushback would be a practical one: cash and intervention X will likely have different impact timelines (e.g. psychotherapy takes a few months to work but delivers sustained benefits, perhaps cash has massive welfare benefits immediately but they diminish quickly over time). This makes the timing of your endline study super important, to the point that when you run the endline is really what determines which intervention comes out on top, rather than the actual differences in the interventions. I have a post on this here with a bit more detail.
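To make the timing point concrete, here's a toy sketch. The effect curves, magnitudes, and half-lives below are all invented for illustration; the only claim is structural, that a slow-ramping, persistent intervention and a fast-fading one can swap rankings depending on when you measure:

```python
import math

def therapy_effect(months):
    # Hypothetical: benefit ramps up slowly, then persists near 1.0 units.
    return 1.0 * (1 - math.exp(-months / 6))

def cash_effect(months):
    # Hypothetical: large immediate boost of 3.0 units, 4-month half-life.
    return 3.0 * 0.5 ** (months / 4)

for endline in (3, 24):
    c, t = cash_effect(endline), therapy_effect(endline)
    winner = "cash" if c > t else "therapy"
    print(f"endline at {endline:>2} months: cash={c:.2f}, "
          f"therapy={t:.2f} -> {winner} looks better")
```

With these made-up curves, cash dominates at a 3-month endline while therapy dominates at 24 months, even though neither intervention changed; only the measurement date did.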
Your point on the ethics here is an interesting one; I agree that medical ethics might suggest "control" groups should still receive some kind of intervention. Part of the distinction could be that medical trials give sick patients placebos that they accurately believe might be medicine, which feels somewhat deceptive, whereas control groups in development RCTs are well aware that they aren't receiving any intervention (i.e. they know they haven't received psychotherapy or cash), which feels more honest?
The downside is this changes the research question from "What is the impact of X?" to "How much better is X than cash?", and there are lots of cases where the counterfactual really would be inaction. A way around this might be to give control groups an intervention that we know to be "good" but that doesn't affect the specific outcome of interest. E.g. I've worked on an agriculture RCT that gave control groups water/sanitation products that had no plausible way to affect their maize yield but at least meant they weren't losing out. This might not apply to broad measures like WELLBYs.
I'm honestly not sure about the ethical side here though, interested to explore further.
I really appreciated this short, clear post. Thank you!
LEEP is indeed working on this -- I mentioned them in my original comment but I have no connection to them. I was thinking of a campaign on the $100M/year scale, comparable to Bloomberg's work on tobacco. That could definitely be LEEP; my sense (from quick Googling and based purely on the small size of their reported team) is that they would have to grow a lot to take on that kind of funding, so there could also be a place for a large existing advocacy org pivoting to lead elimination. I have not at all thought through the implementation side of things here.
How does the time and monetary cost of buying these products compare to the time and monetary cost of giving cash?
The total value of the bundle ($120) includes all staffing (modelled at scale with 100k recipients), including procurement staff, shipping, etc. This trial was a part of a very large nonprofit, which has very accurate costs for those kinds of things.
But obviously the researchers didn't know beforehand that the programs would fail. So this isn't an argument against cash benchmarking.
That's true, and I don't think I made my point clearly with that paragraph. I was trying to say something like, "The Vox article points to how useful the cash comparison study had been, but the usefulness (learning that USAID shouldn't fund the program) wasn't actually due to the cash arm". That wasn't really an important point and didn't add much to the post.