Glad you found this interesting, and you have my sympathies as another walking phone writer.
"Can't we square your intuition that the second child's wellbeing is better with preference satisfaction by noting that people often have a preference to have the option to do things they don't currently prefer?"
A few people have raised similar questions about preference structures. I can perhaps give a sharper example that addresses this specific point. I left a lot out of my original post in the interest of brevity, so I'm happy to expand more in the comments.
Probably the sharpest example I can give of a place where a capability approach separates from preference satisfaction is the issue of adaptive preferences. This is an extended discussion, but the gist is that it is not hard to come up with situations where people do not seem upset by some X even though, upon reflection (or with fuller/better information), they might well be upset. There is ample space for this in the capability approach, but there is not in subjective preference satisfaction. This point is similar in spirit to my women-in-the-1970s example, and to where I noted in the text that "Using subjective measures to allocate aid means that targeting will depend in part on people’s ability to imagine a better future (and thus feel dissatisfaction with the present)." The chapter linked above gives lots of nice examples and good discussion.
If you want a quick example: consider a case where women are unhappy because they lack the right to vote. In the capability approach, this can only be addressed in one way: by expanding their capability to vote. In preference satisfaction or happiness approaches, one could also do that, or one could shape the information environment so that women no longer care about voting, which would "fix" the problems of "I have an unmet preference to vote" and "I'm unhappy because I can't vote." I prefer how the capability approach handles this. The downside is that even if the women were happy about not voting, the capability approach would still say "they lack the capability to vote" and would suggest extending it (though of course the women could still personally not exercise that option, and so not realize the functioning of voting).
Hope that helps to make some of these distinctions sharper. Cheers.
I will probably have longer comments later, but just on the fixed effects point, I feel it’s important to clarify that they are sometimes used in this kind of situation (when one fears publication bias or small study-type effects). For example, here is a slide deck from a paper presentation with three *highly* qualified co-authors. Slide 8 reads:
This is basically also my takeaway. In the presence of publication bias or these small-study-type effects, random effects "are much more biased" while fixed effects are "also biased [...] but less so." Perhaps there are some disciplinary differences going on here, but what I'm saying is a reasonable position in political science, and Stanley and Doucouliagos are economists and Ioannidis is in medicine, so using fixed effects in this context is not some weird fringe position.
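For intuition on why fixed effects can be the less-bad choice here, a toy simulation (my own sketch, not taken from any of the papers discussed): the true effect is zero, a crude "only significant positive results get published" filter selects the studies, and the random-effects estimate ends up further from the truth than the fixed-effect one because it gives relatively more weight to the small studies where selection bites hardest.

```python
import numpy as np

rng = np.random.default_rng(0)
true_effect = 0.0

# Simulate a literature with a crude publication filter:
# only statistically significant positive estimates get "published".
effects, ses = [], []
while len(effects) < 40:
    n = rng.integers(20, 2000)         # study sample size
    se = 1 / np.sqrt(n)                # precision improves with sample size
    est = rng.normal(true_effect, se)
    if est / se > 1.96:                # the publication filter
        effects.append(est)
        ses.append(se)
effects, ses = np.array(effects), np.array(ses)

# Fixed-effect (inverse-variance weighted) pooled estimate.
w_fe = 1 / ses**2
fe = np.sum(w_fe * effects) / np.sum(w_fe)

# DerSimonian-Laird random-effects pooled estimate.
q = np.sum(w_fe * (effects - fe) ** 2)
c = np.sum(w_fe) - np.sum(w_fe**2) / np.sum(w_fe)
tau2 = max(0.0, (q - (len(effects) - 1)) / c)
w_re = 1 / (ses**2 + tau2)
re = np.sum(w_re * effects) / np.sum(w_re)

# Both are biased upward (the true effect is 0), but the random-effects
# estimate upweights the small, most-selected studies.
print(f"fixed effect: {fe:.3f}   random effects: {re:.3f}   (true effect: 0)")
```

This is of course a cartoon of the selection process, but it shows the mechanism behind the quoted slide: under small-study effects, the random-effects estimate drifts further from the truth than the fixed-effect one.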
(disclosure: I have a paper under review where Stanley and Doucouliagos are co-authors)
Glad the post was useful.
Thank you for this writeup. I enjoyed reading it.
As someone who is pretty convinced by the capability approach, one thing I feel is somewhat missing from this (well done) exercise is a consideration of the options that a child marriage forecloses for the girl. Even if the girl grows up happy and doesn't have the problems you sought out measures for (schooling, experiencing violence), her life options may have been quite massively curtailed by being entered into this hard-to-break legal and social arrangement before she was an adult. I think that alone makes it bad.
I'm not saying I have a way to cost this out in terms of capabilities, but I think this consideration merits attention. My guess is that when a lot of us think about what is wrong with child marriage, we start with intuitions around "losing options for life," but then our training or norms guide us to things that are easier to measure, like "school completion." That's not necessarily bad, but I think it would be a mistake to see no effects on the latter and conclude that the former didn't happen.
Again, thanks for posting this and thanks for the work that went into it.
Thank you for this excellent summary! I can try to add a little extra information around some of the questions. I might miss some questions or comments, so do feel free to respond if I missed something or wrote something that was confusing.
On alignment with intuitions being "slightly iffy as an argument": I basically agree, but all of these theories necessarily bottom out somewhere, and I think they all basically bottom out in the same way (e.g., no one is a "pain maximizer" because of our intuitions around pain being bad). I think we want to be careful about extrapolation, which may have been your point in the comment, because that is where we can either be overly conservative or overly "crazy" (in the spirit of the "crazy train"). Best I can tell, where one stops is mostly a matter of taste, even if we don't like to admit that or state it bluntly. I wish it were not so.
"...I'm worried that once you add the value-weighting for the capabilities, you're imposing your biases and your views on what matters in a similar way to other approaches to trying to compare different states of the world."
I understand what you're saying. As was noted in a comment, but not in my post, Sen in particular would advocate for a process where relatively small communities worked out for themselves which capabilities they cared most about and the ordering of the sets. This would not aggregate up into a global ordered list, but it would allow for prioritization within practical situations. If one wants to depart from Sen but still try to respect the approach when doing this kind of weighting, one can draw on survey evidence (which is doable and done in practice).
I don't think I have too much to add to 3bi or the questions around "does this collapse into preference satisfaction?" I agree that in many places this approach will recommend things that look like normal welfarism. However, I think it's very useful to remember that the reason we're doing these things is not that we're trying to maximize happiness or utility or whatnot. For example, if you think maximizing happiness is the actual goal, then it makes sense to benchmark lots of interventions on how effectively they do this per dollar (and this is done). To me, this is a mistake born of confusing the map for the territory. Someone inspired by the capability approach would likely track some uncontroversially important capabilities (life, health, happiness, at least basic education, poverty), see how various interventions affect them, and try to draw on evidence from the people affected about what they prioritize (this sort of thing is done).
Something I didn't mention in the post that will also be different from normal welfarism is that the capability approach naturally builds in the idea that one's endowments (wealth, but also social position, gender, physical fitness, etc.) interact with the commodities one can access to produce capabilities. So if we care about basic mobility (e.g., the capability to get to a store or market to buy food), then someone who is paraplegic, poor, and remote will need a larger transfer than someone who is able-bodied but poor and remote in order to get the same capability. This idea that we care about comparisons across people "in the capability space" rather than "in the money space" or "in the happiness space" can be important (e.g., it can inform how we draw poverty lines or compare interventions), and it is another place where the capability approach differs from others.
All that said, I agree that in practice the stuff capability-inspired people do will often not look very different from what normal welfarism would recommend.
Related: you asked "If we just claim that people value having freedoms (or freedoms that will help them achieve well-being), is this structurally similar to preference satisfaction?"
I think this idea is similar to this comment and I think it will break for similar meta-level reasons. Also, it feels a bit odd to me to put myself in a preference satisfaction mindset and then assert someone's preferences. To me, a huge part of the value of preference satisfaction approaches is that they respect individual preferences.
Re: the paradox of choice: if more choices are bad for happiness, then this would be another place where the capability approach differs from a "max happiness" approach, at least in theory. In practice, one might think that the practical results of limiting choices are likely to be bad (who gets to set the limits, and how?), and so this won't matter. I personally would bet against most of those empirical results mattering. I have large doubts that they would replicate in their original consumer-choice context, and even if they do replicate, I doubt they would apply to the "big" things in life that the capability approach would usually focus on. But all that said, I'm very comfortable with the idea that this approach may not max happiness (or any other single functioning).
On the particular example of: "Sometimes I actually really want to be told, "we're going jogging tonight," instead of being asked, "So, what do you want to do?""
Yeah, I'm with you on being told to exercise. I'm guessing you like this because you're being told to do it, but you know that you have the option to refuse. I think that there are lots of cases where we like this sort of thing, and they often seem to exist around base appetites or body-related drives (e.g. sex, food, exercise). To me, this really speaks to the power of capabilities. My hunch is you like being told "you have to go jogging" when you know that you can refuse but you'd hate it if you were genuinely forced to go jogging (if you genuinely lacked the option to say no).
Again, thank you for such a careful read and for offering such a nice summary. In a lot of places you expressed these ideas better than I did. It was fun to read.
To answer your question directly: yes, but I did not when I was young. I'm pretty steeped in Abrahamic cultural influences. That said, I do not think the post presumes anything about universal religious experiences or anything like that.
However, I'd probably express these ideas a little differently if I had to do it again now. Mainly, I'd try harder to separate two ideas. While I'm as convinced as ever that "messianic AI"-type claims are very likely wrong, I think the fact that lots of people make claims of that form may just show that they come from cultures that are Abrahamic, or otherwise strongly influenced by that thinking, and so when they grasp for ways to express their hopes and fears around AI, they latch onto that framing. So to the extent that people are offering those kinds of claims about AI, I remain very skeptical of those specific claims. However, I do not think one should jump from that to complacency about AI. Hopefully that helps to clear things up.
That's so reasonable.
I think that we can all agree that the analysis was done in an atypical way (perhaps for good reason), that it was not as rigorous as many people expected, and that it had a series of omissions or made atypical analytical moves that (perhaps inadvertently) made SM look better than it will look once that stuff is addressed. I don't think anyone can speak yet to the magnitude of the adjustment when the analysis is done better or in a standard way.
But I'd welcome especially Joel's response to this question. It's a critical question and it's worth hearing his take.
Fair re: Egger. I just eyeballed the figure.
Thank you for sharing these, Joel. You've got a lot going on in the comments here, so I'm only going to make a few brief specific comments and one larger one. The larger one relates to something you've noted elsewhere in the thread, which is:
"That the quality of this analysis was an attempt to be more rigorous than most shallow EA analyses, but definitely less rigorous than a quality peer reviewed academic paper. I think this [...] is not something we clearly communicated."
This work forms part of the evidence base behind some strong claims from HLI about where to give money, so I did expect it to be more rigorous. I wondered if I was alone in being surprised here, so I ran a very informal (n = 23!) Twitter poll in the EA group asking what people expected re: the rigor of evidence for charity recommendations. (I fixed my stupid Our World in Data autocorrect glitch in a follow-up tweet.)
I don't want to lean on this too much, but I do think it suggests I'm not alone in expecting a higher degree of rigor when it comes to where to put charity dollars. This is perhaps mostly a communication issue, but I also think that as the quality of analysis and evidence becomes less rigorous, claims should be toned down, or at least the uncertainty (in the broad sense) needs to be more strongly expressed.
On the specifics, first, I appreciate you noting the apparent publication bias. That's both important and not great.
Second, I think comparing the cash transfer funnel plot to the other one is informative. The cash transfer one looks "right": it has the expected shape, and it's comforting to see the Egger regression line is basically flat. This is definitely not the case with the StrongMinds meta-analysis. That funnel plot looks incredibly weird. This could be heterogeneity that we can model, but it should regardless make everyone skeptical, because doing that kind of modelling well is very hard. It's also rough to see that if we extrapolate the Egger regression line back to a standard error of zero, the predicted effect is basically zero. In other words, unwinding publication bias in this way would lead us to guess at a true effect of around nothing. Do I believe that? I'm not sure. There are good reasons to be skeptical of Egger-type regressions, but all of this definitely increases my skepticism of the results. While I'm glad it's public now, I don't feel great that this wasn't part of the very public first cut of the results.
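For readers unfamiliar with the "project the line back to SE = 0" idea: this is essentially the PET (precision-effect test) form of an Egger regression, where effect sizes are regressed on their standard errors with precision weights, and the intercept is the predicted effect of a hypothetical infinitely precise study. A minimal sketch with made-up numbers (not the StrongMinds data):

```python
import numpy as np

# Hypothetical effect sizes and standard errors, constructed so that
# smaller (noisier) studies report larger effects -- the classic
# small-study / publication-bias pattern. These are NOT real data.
effects = np.array([0.20, 0.25, 0.30, 0.45, 0.50, 0.60, 0.70, 0.90])
ses     = np.array([0.08, 0.10, 0.12, 0.20, 0.24, 0.28, 0.32, 0.40])

# Egger/PET regression: effect_i = b0 + b1 * se_i, weighted by 1/se^2.
# A slope b1 far from 0 signals small-study effects; the intercept b0
# is the predicted effect at SE = 0.
w = 1 / ses**2
X = np.column_stack([np.ones_like(ses), ses])
b0, b1 = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * effects))

print(f"slope (small-study effect): {b1:.2f}")
print(f"predicted effect at SE = 0: {b0:.2f}")  # near zero for these numbers
```

With numbers like these, the observed effects are driven almost entirely by the slope on the standard error, so the bias-adjusted estimate at SE = 0 lands near zero; that is the shape of the concern in the paragraph above.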
Again, I appreciate you responding. I do think going forward it would be worth taking seriously community expectations about what underlies charity recommendations, and if something is tentative or rough then I hope that it gets clearly communicated as such, both originally and in downstream uses.
Thank you for responding, Jason. That makes sense. The analysis in question was done in October 2021, so I do think there was enough time to check a funnel plot for publication bias or odd heterogeneity. I really do think it's a bad look if no one checked for this, and a worse look if people checked and didn't report it. This is why I hope the issue is something like data entry.
Your core point is still fair though: There might be other explanations for this that I'm not considering, so while waiting for clarification from HLI I should be clear that I'm agnostic on motives or anything else. Everyone here is trying.