
See below for the executive summary; the full report can be found on our website.

Comments are welcome, and feel free to share the report wherever you think it could be helpful!

Update 06-06-2019:

As announced below, we have now published a new version of the women's empowerment report on our website. This takes into account some of the feedback in the comments below. For instance, it makes clearer that equivalent or potentially even better outcomes for women might be achieved by donating to some of the other charities FP (and GW) recommend, which do not explicitly focus on women's empowerment.

Thanks again for all your comments! And keep them coming :).


Update 14-02-2019 12.12 pm BST:

This is to thank you all once more for all your comments here, and to let you know they have been useful and we have incorporated some changes to account for them in a new version of the report, which will be published in March or April. They were also useful in our internal discussion on how to frame our research, and we plan to keep improving our communication around this throughout the rest of the year, e.g. by publishing a blog post / brief on cause prioritisation for our members.

One thing I still want to stress here to avoid misconceptions: FP generally chooses the areas we research through cause prioritisation / in a cause neutral way, and we do try to fully answer the question 'how can we achieve the most good' in the areas we investigate, not (even) shying away from harder-to-measure impact. In fact, we are moving more and more in the latter direction, and are developing research methodology to do so (see e.g. our recently published methodology brief on policy interventions). Some of our reports so far, including this one, have been an exception to these rules for pragmatic (though impact-motivated) reasons, mainly:

  • We quickly needed to build a large enough 'basic' portfolio of relatively high-impact charities, so that we could make good recommendations to our members.
  • There are some causes our members ask lots of questions about / are extra interested in, and we want to be able to say something about those areas, even if we ultimately recommend that they focus on other areas instead when we find better opportunities there.

But there are definitely ways in which we can improve the framing of these exceptions, and the comments provided here have already been helpful in that way.



Update 21-12-2018 1.18 pm BST:

To provide some context (thanks SiebeRozendal for the comment):

  • We chose to work on this particular cause at least partially because of strong interest within our member community.
    • This does not mean that the choice of writing this report was a cause-partial choice: for Founders Pledge to do the most good we obviously need to take our community's preferences into account.
    • Neither does it mean that one couldn't arrive at women's empowerment as a high-potential cause area through cause prioritisation, given certain values.
  • The report isn't written specifically for an EA target audience. However, its findings should be of interest to a cause-neutral reader. In particular, our research shows:
    • Bandhan and Village Enterprise are both likely more cost-effective than GiveDirectly, but not more cost-effective than GiveWell's other recommended charities, using GiveWell's value inputs;
    • StrongMinds is likely more cost-effective than GiveDirectly if looked at in terms of DALYs averted and with GiveWell's value inputs, though again not more cost-effective than GiveWell's other recommended charities. (If looked at through the lens of improving subjective well-being, it could arguably be at least as cost-effective as many, if not all, of GiveWell's recommendations.);
    • No Means No Worldwide is a harder case, as it requires value and empirical judgements with very large uncertainties, plus sexual violence is obviously a very sensitive area to discuss. My current personal best guess is that they are in the same ballpark as GiveDirectly, but I could be wrong by a large margin in either direction.

EXECUTIVE SUMMARY

The cause area

One hundred four countries still have laws preventing women from working in specific jobs; only 56% of women giving birth in Africa deliver in a health facility; and at least 35% of women worldwide have experienced some form of physical or sexual violence. These are just some of the challenges that women and girls around the globe face today.

In this report, we focus on women’s empowerment, by which we mean improving the lives of women and girls. We researched charity programmes to find those that most cost-effectively improve the lives of women and girls. As a heuristic for finding the most cost-effective interventions, we chose to focus on programmes aimed at low- and middle-income countries.

Our process

We used a top-down approach to select charities. First, we categorised women’s empowerment in low- and middle-income countries into twelve subfields. We then reviewed literature and interviewed twenty experts in these subfields. This yielded a shortlist of eleven promising interventions across subfields, including the graduation approach to combat extreme poverty, empowerment self-defence courses to prevent sexual violence, and interpersonal group therapy to treat depression.

With this shortlist, we began evaluating charities. We started with a longlist of 163 women’s-empowerment charities, and narrowed it down to a shortlist of 15 charities based on our intervention research and a quick scan of organisational strength. We then compared the shortlisted organisations using more detailed information on both cost-effectiveness and strength of evidence. By our criteria, four charities especially stood out. For each of those, we investigated organisational strength and plans, which led us to recommend three and provisionally recommend the fourth.

We also recommend charities that are highly cost-effective in improving women’s lives but do not focus exclusively on women’s empowerment. We discuss these organisations, including those recommended by our research partner GiveWell, in other research reports on our website.

Charity recommendations

StrongMinds

What do they do? StrongMinds implement Interpersonal Group Psychotherapy (IPT-G), training laypeople to treat women suffering from depression in Uganda.

Does the intervention work? Evidence for the efficacy of IPT-G in low-resource settings comes from two randomised controlled trials (RCTs) and StrongMinds’s own quasi-experimental impact assessment.

Is the intervention cost-effective? We estimate that StrongMinds prevent the equivalent of one year of severe major depressive disorder for a woman at a cost of $200–$299, with a best guess estimate of $248.

What are the wider benefits? There are indications of improvements in employment, nutrition, physical health, housing, and children’s education.

Are they a strong organisation? They have a good track record and a strong focus on generating evidence. They are transparent about their mistakes and lessons, and are committed to continuous improvement.

Is there room for funding? StrongMinds could productively use an extra $5.1 million in funding through 2020.

Bandhan’s Targeting the Hardcore Poor programme

What do they do? As part of their Targeting the Hardcore Poor (THP) programme, Bandhan provide women living in extreme poverty in India with a productive asset, a savings account, business training, mentoring, consumption support, and information on education and health. They also work with the Indian government and other NGOs to scale up their model.

Does the intervention work? A high-quality long-term RCT supports the effectiveness of Bandhan’s THP programme. Additional evidence gathered in different contexts suggests that the ‘graduation approach’ adopted by Bandhan can effectively address extreme poverty.

Is the intervention cost-effective? We estimate that Bandhan’s THP programme doubles a participant’s consumption for one year at a cost of $41–$134, with a best guess estimate of $62. This suggests that Bandhan’s programme can bring about nominal gains in consumption of about $1.77 for each $1.00 donated. Adjusting for purchasing power, this is equivalent to gains of $7.27 for each $1.00 donated.
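To make explicit how these figures relate to each other, here is a minimal back-of-envelope sketch in Python. The "implied" quantities it prints (the nominal consumption gain per participant-year and the purchasing-power multiplier) are not stated in the report; they simply follow from the published cost and per-dollar figures, and the helper name is ours.

```python
def implied_quantities(cost_to_double_usd, nominal_per_dollar, ppp_per_dollar):
    """Illustrative quantities implied by the published cost-effectiveness figures.

    cost_to_double_usd: best-guess cost of doubling one participant's
        consumption for one year.
    nominal_per_dollar: nominal consumption gain per $1.00 donated.
    ppp_per_dollar: purchasing-power-adjusted gain per $1.00 donated.
    """
    implied_annual_gain = cost_to_double_usd * nominal_per_dollar
    implied_ppp_factor = ppp_per_dollar / nominal_per_dollar
    return implied_annual_gain, implied_ppp_factor

# Bandhan THP: $62 best-guess cost, $1.77 nominal and $7.27 PPP-adjusted per $1.00.
gain, factor = implied_quantities(62, 1.77, 7.27)
print(f"~${gain:.0f} nominal consumption gain per participant-year")   # ~$110
print(f"~{factor:.1f}x implied purchasing-power multiplier")           # ~4.1x
```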

What are the wider benefits? There is some evidence that the programme improves food security, physical health, and subjective well-being.

Are they a strong organisation? Bandhan is a specialised organisation with a good track record. They are careful to maintain high-quality delivery of their programme; they are committed to evidence; and they have been transparent throughout our analysis of their programme. One point for improvement, however, is that their website lacks up-to-date information.

Is there room for funding? The key impediment preventing Bandhan from scaling up is funding, as they have all the required infrastructure and capacity in place. Another $24 million would allow them to reach an additional 60,000 households over the coming six years. For efficiency, we recommend a minimum donation to Bandhan’s THP programme of $320,000.

Village Enterprise

What do they do? Village Enterprise provide business and financial-literacy training, seed funding, mentoring, and access to business savings groups to people living in extreme poverty in Sub-Saharan Africa.

Does the intervention work? A recent high-quality RCT provides evidence that supports Village Enterprise’s programme. There is also some external evidence that the ‘graduation approach’ on which Village Enterprise’s model is based effectively addresses extreme poverty.

Is the intervention cost-effective? We estimate that Village Enterprise double a participant’s consumption for one year at a cost of $157–$367, with a best guess estimate of $250. This suggests that Village Enterprise’s programme can bring about nominal gains in consumption of about $0.99 for each $1.00 donated. Adjusting for purchasing power, this is equivalent to gains of $2.18 for each $1.00 donated.
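The same illustrative arithmetic (reusing the hypothetical implied_quantities helper sketched under Bandhan above) applied to Village Enterprise's published figures:

```python
# Village Enterprise: $250 best-guess cost, $0.99 nominal and $2.18 PPP-adjusted per $1.00.
gain, factor = implied_quantities(250, 0.99, 2.18)
print(f"~${gain:.0f} nominal consumption gain per participant-year")   # ~$248
print(f"~{factor:.1f}x implied purchasing-power multiplier")           # ~2.2x
```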

What are the wider benefits? There is some evidence that the programme improves subjective well-being.

Are they a strong organisation? Village Enterprise are a strong organisation, and routinely account for evidence and cost-effectiveness in decision-making. They have strong monitoring and learning processes and are outstandingly transparent and accountable.

Is there room for funding? They could productively use an extra $28 million in funding through 2021.

No Means No Worldwide [provisional]

What do they do? No Means No Worldwide (NMNW) train instructors to teach their ‘IMpower’ courses to both boys and girls, to help prevent sexual assault. They also work with large NGOs and governments to scale these courses up.

Does the intervention work? Evidence suggests that NMNW’s IMpower intervention reduces the incidence of sexual violence in several settings and for girls at different ages. This evidence comes mostly from two RCTs and two quasi-RCTs.

Is the intervention cost-effective? We estimate that NMNW prevent a sexual assault for $9–$757, with a best guess estimate of $62 per case averted.

What are the wider benefits? There is evidence that NMNW’s programme decreases negative gender attitudes among boys and reduces rates of pregnancy-related school dropouts.

Are they a strong organisation? NMNW are exceptionally committed to generating evidence; are transparent about their performance and motivations; and have a good track record supporting IMpower implementation.

Is there room for funding? NMNW could productively use an additional $7 million in funding through 2021.

Why is our recommendation provisional? Based on the current evidence, we feel confident recommending NMNW to donors with a specific interest in averting sexual assault. Depending on the results of an independent evaluation of NMNW’s IMpower programme, which are due at the end of 2018, we may either recommend NMNW more generally to donors interested in women’s empowerment; keep recommending them only to donors interested in averting sexual assault; or decide not to recommend them.

Comments

"at least 35% of women worldwide have experienced some form of physical or sexual violence."

The article uses this statistic to try to motivate why we might be interested in charities that focus specifically on women. However, we cannot evaluate this statistic in isolation: to draw this conclusion we need to compare against assault rates for men.

I wasn't able to immediately find a comparable stat for men - the source for the stat appears to be a women-specific WHO report - but I was able to find homicide data. This data is often regarded as especially reliable, because there are fewer issues about underreporting when there is a dead body. (I apologize in advance if the authors did in fact compare assault rates between sexes and just omitted this from the report).

So what does the data say? According to the UN Office on Drugs and Crime, men are dramatically more likely to be victims of homicide in virtually every country. Almost 80% of global homicide victims are male. And the small number of countries where this is not the case tend to be in the developed world, which is not where the charities in this post focus, or very small countries where I suspect there was only one homicide that year.

So a neutral observer would conclude this was a reason to support charities that reduced violence against men, not women, if one were inclined to choose one or the other.

The fact that this article does not seem to even investigate this makes me sceptical of the quality of the rest of the work. If EAs are going to write non-cause-neutral reports, we should at least be clear at the very beginning of the report that other causes are likely to be better - rather than presenting misleading evidence to the contrary. Otherwise we are in danger of sacrificing a very important part of what makes EA distinctive.

Source: http://www.unodc.org/gsh/en/data.html

Very quick reply as I don't have much time now: note that this statistic is about intimate partner violence and sexual violence (where there is a clear difference between men and women), not about violence as a whole. This is clear in the body of the report; the statistic was shortened (though still correct) for the executive summary. Of course this doesn't fully change your point, but it does influence it a little bit. (I agree that when looking at violence generally we should compare the two) Note also, as noted in the edit to the main post, that this report was not arrived at through cause prioritization, and that is not what the introduction tries to do; it merely gives an overview of the problems one could solve in this area. The introduction/overview is hence not what should be most interesting to a cause-neutral reader; that should be the charity evaluations, as they can be compared to charities in other areas.

Hi - I'm a little late to your comment, but I'm not sure the other replies address this. Though 80% of homicide victims are male, this doesn't mean anything like 80% of men experience homicide. However, 35% of women experience intimate partner violence or sexual violence. It seems to me that the homicide statistic you give doesn't take the scale of homicide into account, which is much smaller than 35% of the male population. I would accept your point that the comparable rate of intimate partner violence of any kind for men isn't given; while my prior is that this rate is lower, that isn't easily evidenced, as you point out.
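For a rough, purely illustrative sense of that scale gap: the sketch below assumes a global homicide rate on the order of 6 per 100,000 per year and a 70-year horizon (neither figure comes from the report or the comments above; the 80% male victim share does).

```python
# Illustrative assumptions only; see the note above.
homicide_rate = 6 / 100_000   # assumed global homicides per person per year
male_victim_share = 0.8       # share of homicide victims who are male (cited above)
male_pop_share = 0.5          # roughly half the population is male
years = 70                    # rough lifetime horizon (assumption)

lifetime_risk_men = homicide_rate * male_victim_share / male_pop_share * years
print(f"~{lifetime_risk_men:.1%} lifetime homicide risk for men, "
      "versus ~35% of women experiencing physical or sexual violence")
```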

It seems reasonable to use homicides as a proxy for physical violence, especially if no other data is available, but very odd to use homicides as a proxy for sexual violence.

Thanks for posting this. Some comments and questions:

I echo Habryka's reluctance about cause-partial research, and I would have appreciated it if you'd shared a little more context for EAs, given that this is the EA Forum (for example: why Founders Pledge decided to research this, how it's relevant from an EA perspective and how it's not, and which assumptions it makes).

Some specific questions:

1. Why haven't you compared charities on a comparable metric, such as the DALY or Life-Satisfaction Points? Is this because the report assumes that what is (intrinsically or robustly) important is empowerment, and that is hard to measure?

2. Why do you recommend both Village Enterprise and Bandhan's 'Targeting the Hardcore Poor' programme? The latter appears 3.5 times as cost-effective (in terms of increasing purchasing power), and the organisations appear to have similar strength, room for more funding, and wider benefits.

3. Do you have a publicly available cost-effectiveness model for the 15 charities considered? By how much did these charities differ on comparable measures?

Thanks for these questions Siebe! And I take your point on sharing context; I'll edit in some points in the main post.

1. We have internally compared these charities on something close to a DALY-equivalent to aid our decisions (similar to what GiveWell does in their cost-effectiveness analyses), but have not included this in the report. This is not because of any assumptions the report makes on empowerment (note that it defines empowerment simply as 'improving lives'). It's mainly because of time constraints: we didn't think it was worth putting in the time to present our estimates in a polished way, given the aims we have with this report (making high-quality recommendations to our members). This is also because internally we are still in the process of developing our views on how to compare across causes and outcome metrics.

2. In terms of cost-effectiveness estimates both do better than GiveDirectly (which we also recommend), and there is obviously large uncertainty in such estimates. Furthermore, Bandhan only accepts donations over $320,000 at this point. Last but not least, the organisations differ in marked ways (where they work, programme focus, target group, type of evidence) and might appeal to different people in our community.

3. We do have rough cost-effectiveness models on almost all of the other charities, but unfortunately I cannot make those public. This is partially for reasons of information sharing (I'd have to check with the charities that provided extra info), but also because these models aren't as worked out as the ones in the report, and a one-to-one comparison would in many cases be confusing rather than valuable. In fact, most initial cost-effectiveness estimates of the other charities are higher than the final estimates of the recommended charities, and we had to deprioritise them to a large extent because the evidence was weaker. Moreover, we find that as we do a more extensive cost-effectiveness analysis of a charity (as we did for our recommended charities), the numbers often go down rather than up, so it's likely that our 'final' estimates of the other charities would be much lower than the initial, rough estimates we have now.

I remember Founders Pledge saying something about this before: they work with a lot of startup founders, so they often take the existing priorities of people peripheral to EA as given. They have other cause reports like this.

Just to clarify, at FP we don't take existing priorities/preferences as a given, but we of course do take them into account to some extent when making recommendations (if only because otherwise nobody would follow those recommendations!). We currently use something called the value-discovery approach, which is about asking members about the underlying values driving their preferences (e.g. do you care about people living in the future?), and then making cause/charity recommendations based on those rather than on cause/charity preferences themselves. We also spend quite some time on educating our community on EA/effective giving principles, e.g. this is a main focus of our Programmes team.

Thanks for sharing this research! Women's empowerment may not be a standard EA cause area, but I'm almost always interested to see good evaluations of charities working in the global development space. I especially liked Founders Pledge's evaluation of J-PAL's GPI program for scaling proven interventions.

I see that several people downvoted the post: If you did this, and see this comment, would you mind explaining why?

Even if you disagree with the importance of an area that Founders Pledge chooses to evaluate, it would be helpful to share why you think the content doesn't meet the standards or goals of the Forum. I've personally found their evaluations to be pretty strong; not as thorough as GiveWell, but certainly adding solid information to discussions around EA topics.

(Keeping this brief, and don't have super much time to justify my full perspective, so not sure how much I will respond to comments)

I glanced at the methodology, which seemed relatively weak to me, and I looked at the recommendations, which were all denominated in impact measures that I don't care about and that seem arbitrary when looked at from a cause-neutral view.

I am generally not very excited about non-cause-neutral research on the forum, and this topic in particular seems like it would likely only have been analyzed because it's a topic that's popular, not because there is any a priori reason to assume that the interventions in this area are particularly effective from a cause-neutral view.

I think it's key that research in EA stays in a cause-neutral frame and tries to justify itself from that perspective. The state of the broader charity world suggests that there is a strong attractor in people choosing cause-areas that they are personally invested in, and then sticking to those, while justifying their decision to do so with post-hoc justifications. This research seems to mostly provide that post-hoc justification, which seems overall net-negative.

I'd distinguish between two ways in which a report can 'be' cause-neutral:

1. Whether its domain of focus/cause area was chosen purely through cause prioritisation

2. Whether its contents are of value from a cause-neutral perspective

Now I agree that this report is not cause-neutral on (1): it was written at least partially because many of FP's community members are interested in women's empowerment.*

However, note that cause prioritisation is just a heuristic to restrict our domain of search: what you want to compare in the end are the (donation) opportunities themselves, not which cause/domain they happen to be in by some categorisation.

Maybe you don't think women's empowerment should be the first domain to check when you are looking for the highest-impact charities overall, but you should at least agree that it is valuable from a cause-neutral perspective to know what the best charities within this particular domain of search are. You might then be surprised that they are actually better than you thought, or you might find that your intuition of other areas having better opportunities is confirmed.

As the methodology of this report allows you to compare the charities to those in other areas (we don't use outcome measures that are restricted to women's empowerment/the analysis is done in a cause-neutral frame), I consider it to be cause-neutral on (2). And I hence think it's very much worth discussing (from a cause-neutral perspective of course!) its contents on the EA forum, e.g. how do the recommended charities compare to other near-term welfare opportunities, such as those recommended by GiveWell?

Lastly, I don't think this research provides a post-hoc justification for women's empowerment: in my view it could have as much provided a justification not to donate in that area (if the best charities turn out to be worse than in other areas) as a justification to donate in that area. At FP we do research into areas not to justify our members' initial preferences, but to recommend high-impact opportunities tailored to those preferences (if high-impact opportunities are available), as well as to be able to make a solid, justified argument to focus on other areas (if higher-impact opportunities are available in those other areas).

*This does not mean that the choice of writing this report was a non-cause-neutral choice: for FP to do the most good we obviously need to take our community's preferences into account. Neither does it mean that one couldn't arrive at women's empowerment as a high-potential cause area through cause prioritisation.

Maybe you don't think women's empowerment should be the first domain to check when you are looking for the highest-impact charities overall, but you should at least agree that it is valuable from a cause-neutral perspective to know what the best charities within this particular domain of search are. You might then be surprised that they are actually better than you thought, or you might find that your intuition of other areas having better opportunities is confirmed.

I agree that comparisons of that type are valuable, but I don't think that this report helps me much in doing that kind of comparison. This report did no comparative analysis of the interventions against other near-term welfare interventions, and you used denominations that make that comparison quite difficult (as SiebeRozendal pointed out in another comment).

See for example this:

This suggests that Village Enterprise’s programme can bring about nominal gains in consumption of about $0.99 for each $1.00 donated. Adjusting for purchasing power, this is equivalent to gains of $2.18 for each $1.00 donated.

I don't know how to compare an increase in consumption with other near-term interventions, so as long as this number isn't shockingly high or low, it's quite hard for me to judge whether this is a good intervention. So while your analysis helps me a bit in comparing Village Enterprise to other near-term welfare charities, it really doesn't help me much, and I still need to put in the vast majority of the work, which consists of building models of how things like increases in consumption compare against direct reductions in disease burden (and then how those compare against increasing or decreasing the speed of technological progress, and other major methods of impact). The analysis has some use, but I think it's relatively minor for the cases I am interested in.

Lastly, I don't think this research provides a post-hoc justification for women's empowerment: in my view it could have as much provided a justification to not donate in that area (if the best charities turn out to be worse than in other areas) as a justification to donate in that area.

I think the current framing of the post and report does not allow for the possibility of a negative recommendation, and I expect the casual reader to walk away with a mistaken sense that this has been chosen as a promising cause area comparable to other top cause areas. De facto, even though the numbers seem at first glance a lot worse than other top GiveWell recommendations, the post does not give a negative recommendation. I recognize that the report was written for a different audience than the core EA community, but I think that's what makes it lose most of its value to me.

Hi Habryka, just wanted to draw your attention to the update above, which is in part referring to some of your comments that have been incorporated in the new version of the report. Thanks for those!

Thanks for writing this out, Habryka!

These are all important considerations, and while I disagree about the strength of the methodology (it seems stronger than that of many posts I've seen be popular on the Forum), I agree that having a more comparison-friendly impact measure would have been good, as well as a justification for why we should care about this subfield within global development.

----

I'm not sure how the Forum should generally regard "research into the best X charity" for values of "X" that don't return organizations with metrics comparable to the best charities we know of.

On the one hand, it can be genuinely useful for the community to be able to reach people who care about X by saying "with our tools, here's what we might tell you, but if you trust this work, maybe also look at Y".

On the other hand, it may drain time and energy from research into causes that are more promising, or dilute the overall message of EA.

I guess I'll keep taking posts like this on a case-by-case basis for now, and I thought this particular case was worth a (non-strong) upvote. But I have a better understanding of why one might come to the opposite conclusion.

I think this was the part of the report that made me distrust the methodology the most:

Our research partner GiveWell[69] was an expert in the subfield and/or was building further expertise, and we thought it unlikely that we would find donation opportunities better than or equivalent to their current or near-future top charities within our timeframe for this research project (in the case of maternal health, family planning, HIV and other STDs, and health (other)).

Even in the specific cause area, it seemed from the beginning likely that existing GiveWell top charities outperform the ones that this report might find (and from a casual glance at the actual impact values, this has been confirmed, with the impact from GiveWell top charities being at least 2x the impact of the top recommended charities here, such that even if you only care about women's health you will probably get more value per dollar).

It seems clear to me that in that case, the correct choice would have been to suggest GiveWell top charities as good interventions in this space, even if they are not explicitly targeting women's empowerment. The fact that no single existing top-GiveWell charity was chosen suggests to me that a major filter that was applied to the prioritization was whether the charity explicitly branded itself as a charity dedicated to women's empowerment, which I think should clearly be completely irrelevant, and made me highly suspicious of the broader process.

Habryka: Did you see this line in the introduction of this post?

We also recommend charities that are highly cost-effective in improving women’s lives but do not focus exclusively on women’s empowerment. We discuss these organisations, including those recommended by our research partner GiveWell, in other research reports on our website.

On the other hand, it does seem like a specific GiveWell charity or two should have shown up on this list, or that FP should have explicitly noted GiveWell's higher overall impact (if the impact actually was higher; it seems like GiveDirectly isn't clearly better than Village Enterprise or Bandhan at boosting consumption, at least based on my reading of p. 50 of the 2018 GD study, which showed a boost of roughly 0.3 standard deviations in monthly consumption vs. 0.2-0.4 SDs for Bandhan's major RCT, though there are lots of other factors in play).

I think I've come halfway around to your view, and would need to read GiveWell and FP studies much more carefully to figure out how I feel about the other half (that is, whether GiveWell charities really do dominate FP's selections).

I'd also have to think more about whether second-order effects of the FP recommendations might be important enough to offset differences in the benefits GiveWell measures (e.g. systemic change in norms around sexual assault in some areas -- I don't think I'd end up being convinced without more data, though).

Finally, I'll point out that this post had some good features worth learning from, even if the language around recommending organizations wasn't great:

  • The "why is our recommendation provisional" section around NMNW, which helped me better understand the purpose and audience of FP's evaluation, and also seems like a useful idea in general ("if your values are X, this seems really good; if Y, maybe not good enough").
  • The discussion of how organizations were chosen, and the ways in which they were whittled down (found in the full report).

On the other hand, I didn't like the introduction, which used a set of unrelated facts to make a general point about "challenges" without making an argument for focusing on "women's empowerment" over "human empowerment". I can imagine such an argument being possible (e.g. women are an easy group to target within a population to find people who are especially badly-off, and for whom marginal resources are especially useful), but I can't tell what FP thinks of it.

Note that GiveDirectly in general is a bit of a weird outlier in terms of GiveWell top recommendations, because it's a lot less cost-effective than the other charities, but is very useful as a "standard candle" for evaluating whether an intervention is potentially a good target for donations. I think being better than GiveDirectly is not sufficient to be a top recommendation for a cause area.

Methodologically, I do think there are a variety of reasons why you should expect regression to the mean in these impact estimates, more so than for GiveDirectly, in large part because the number of studies in the space is a lot lower, and the method of impact is a lot more complicated in a way that allows for selective reporting.

I did not see that line! I apologize for not reading thoroughly enough.

I do think that makes a pretty big difference, and I retract at least part of my critique, though basically agree with the points you made.

No problem, thanks for your comments anyway and please let me know if any part of your critique remains that I haven't engaged with. (Please see edit in main post which should have cleared most up)

I think most of my critique still stands, and I am still confused why the report does not actually recommend any GiveWell top charities. The fact that the report is limiting itself to charities that exclusively focus on women's empowerment seems like a major constraint that makes the investigation a lot less valuable from a broad cause-prioritization perspective (and also for donors who actually care about advancing women's empowerment, since it seems very likely that the best charities that achieve that aim do not aim to achieve that target exclusively).

Habryka: Did you see this line in the introduction of this post?

Thanks for pointing this out, Aaron! Happy that's cleared up.

On the other hand, it does seem like a specific GiveWell charity or two should have shown up on this list, or that FP should have explicitly noted GiveWell's higher overall impact (if the impact actually was higher; it seems like GiveDirectly isn't clearly better than Village Enterprise or Bandhan at boosting consumption, at least based on my reading of p. 50 of the 2018 GD study, which showed a boost of roughly 0.3 standard deviations in monthly consumption vs. 0.2-0.4 SDs for Bandhan's major RCT, though there are lots of other factors in play).
I think I've come halfway around to your view, and would need to read GiveWell and FP studies much more carefully to figure out how I feel about the other half (that is, whether GiveWell charities really do dominate FP's selections).

Please see my updates in the main post and let me know if you still have questions about this. (Do you now understand why we didn't recommend any other specific GW- or FP-recommended charity in this report, but referred to them as a group?)

On the other hand, I didn't like the introduction, which used a set of unrelated facts to make a general point about "challenges" without making an argument for focusing on "women's empowerment" over "human empowerment". I can imagine such an argument being possible (e.g. women are an easy group to target within a population to find people who are especially badly-off, and for whom marginal resources are especially useful), but I can't tell what FP thinks of it.

I hope the reason for this is now also clearer, given the purpose of the report.

Please see my updates in the main post and let me know if you still have questions about this. (Do you now understand why we didn't recommend any other specific GW- or FP-recommended charity in this report, but referred to them as a group?)

As I mentioned in the other comment, I am still not sure why you do not recommend any GW top charities directly. It seems like your report should answer the question "what charities improve women's health the most?" not the question "what charities that exclusively focus on women's health are most effective?". The second one is a much narrower question and its answer will probably not overlap much with the answer to the first question.

You mention them, but only in a single paragraph. It seems that even from the narrow value perspective of "I only care about women's empowerment" the question of "are women helped more by GiveWell charities or the charities recommended here?" is a really key question that your report should try to answer.

The top of your report also says the following:

We researched charity programmes to find those that most cost-effectively improve the lives of women and girls.

This however does not actually seem to be the question you are answering, as I mentioned above. I expect the best interventions for women's empowerment to not exclusively focus on doing so (because there are many many more charities trying to improve overall health, because women's empowerment seems like it would overlap a lot with general health goals, etc). I even expect them to not overlap that much with GiveWell's recommendations, though that's a critique on a higher level that I think we can ignore for now.

To be transparent about my criticism here, the feeling that I've gotten from this report is that the goal of the report was not to answer the question of "how can we best achieve the most good for the value of women's empowerment?" but was instead focusing on the question "what set of charity recommendations will most satisfy our potential donors, by being rigorous and seeming to cover most of the areas we are supposed to check".

To be clear, I think the vast majority of organizations fall into this space, even in EA, and I have roughly similar (though weaker) criticisms for GiveWell itself, which focuses on global development charities in a pretty unprincipled way that I think has a lot to do with global development being transparent in a way that more speculative interventions are not (though most of the key staff has switched from GiveWell to OpenPhil now, I think in part because of the problems of that approach that I am criticizing here).

I think focusing on that transparency can sometimes be worth it for an individual organization in the long run by demonstrating good judgement and therefore attracting additional resources (as it did in the case of GiveWell), but generally results in the work not being particularly useful for answering the real question of "how can we do the most good?".

And on the margin I think that that kind of research is net-harmful for the overall quality of research and discussion on general cause-prioritization by spreading a methodology that is badly suited for answering the much more difficult questions of that domain (similarly to how p-testing has had a negative effect on psychology research, by it being a methodology that is badly suited for the actual complexity of the domain, while still being well-suited to answer questions in a much narrower domain).

I think overall this report is pretty high-quality by the standards of global development research, but a large number of small things (the choice of focus area, limiting yourself to charities exclusively focused on women's empowerment, the narrow methodological focus, and I guess my priors for orgs working in this space) give me the sense that this report was not primarily written with the goal of answering the question "what interventions will actually improve women's lives?" but was instead more trying to do a broad thing, a large part of which was to look rigorous and principled, conform to what your potential donors expect from a rigorous report, be broadly defensible, and fit with the skills and methodologies that your current team has (because those are the skills that are prevalent in the global development community).

And I think all of those aims are reasonable aims for the goal of FP, I just think they together make me expect that EAs with a different set of aims will not benefit much from engaging with this research, and because you can't be fully transparent about those aims (because doing so would confuse your primary audience or be perceived as deceptive), it will inevitably confuse at least some of the people trying to do something that is more aligned with my aims and detract from what I consider key cause-prioritization work.

This overall leaves me in a place where I am happy about this research and FP existing, and think it will cause valuable resources to be allocated towards important projects, but where I don't really want a lot more of it to show up on the EA Forum. I respect your work and think what you are doing is broadly good (though I obviously always have recommendations for things I would do differently).

Hi Habryka,

This is to thank you (and others) once more for all your comments here, and to let you know they have been useful and we have incorporated some changes to account for them in a new version of the report, which will be published in March or April. They were also useful in our internal discussion on how to frame our research, and we plan to keep improving our communication around this throughout the rest of the year, e.g. by publishing a blog post / brief on cause prioritisation for our members.

I also largely agree with the views you express in your last post above, insofar as they pertain to the contents of this report specifically. However, very importantly, I should stress that your comments do not apply to FP research generally: we generally choose the areas we research through cause prioritisation / in a cause neutral way, and we do try to answer the question 'how can we achieve the most good' in the areas we investigate, not (even) shying away from harder-to-measure impact. In fact, we are moving more and more in the latter direction, and are developing research methodology to do so (see e.g. our recently published methodology brief on policy interventions).

Some of our reports so far have been an exception to these rules for pragmatic (though impact-motivated) reasons, mainly:

  • We quickly needed to build a large enough 'basic' portfolio of relatively high-impact charities, so that we could make good recommendations to our members.
  • There are some causes our members ask lots of questions about / are extra interested in, and we want to be able to say something about those areas, even if we ultimately recommend that they focus on other areas instead when we find better opportunities there.

But there are definitely ways in which we can improve the framing of these exceptions, and the comments you provided have already been helpful in that way.

Good point, though what about the $60/sexual assault one? That impact even seems better than AMF for combined impact.