CEO of Fortify Health, Mulago and Jacobs Fellow, Ex-IDinsight and Management Consulting.
I lead Fortify Health, a GiveWell-, Coefficient Giving-, and Founders Pledge-supported non-profit dedicated to reducing and preventing iron-deficiency anaemia. I love thinking about how to scale impactful, evidence-based, cost-effective interventions to alleviate poverty.
Thanks for sharing this post. I appreciated the honest behind-the-scenes look at what is involved in being a grant maker.
As someone not working in the AI safety space, I'd be intrigued to hear your opinion on the extent to which grantmaking within AI safety is similar to or different from grantmaking within other cause priority areas, for example animal advocacy or global health and development.
My sense from reading the post is that those areas may be relatively less neglected, with fewer chances to identify opportunities with outsized returns on investment. Do you think that is a reasonable assumption to be making?
Many exciting ideas here - thanks very much for sharing.
From an initial read, there seem to be a lot of similarities to the approach taken by Ambitious Impact's Charity Entrepreneurship Program. Would you be able to share a little bit of context about how this program compares and contrasts to Ambitious Impact, aside from, of course, the geographic focus on Latin America?
Aside from financial donations, are there other ways to support these organizations? I'm thinking in particular in terms of providing advisory support or connections.
Finally, would it be possible to share a little context about the meta organization leading this incubation program, and in particular any details on the track record of prior cohorts, if any have been run?
Thanks for raising this. I agree strongly with your sentiment.
Sharing some of my quick opinions on the topic: The forum is like any other marketplace, requiring both sufficient demand and supply to drive meaningful engagement and discourse. To drive interest, there need to be incentives on both sides. Here I consider demand to be those reading and engaging with material, and supply to be those seeding new ideas and pushing the discourse forward.
My sense is that GHD has seen systematic shifts over the past couple of years on both the supply and demand side, as the data-driven or evidence-backed component of global health and development has become more mainstream. On the supply side, many organizations that would formerly have been considered EA global health and development organizations are becoming more mainstream. Also from a supply-side perspective, my anecdotal experience is that a lot of engagement is associated with building eminence, often related to seeking funds or finding jobs, and both of these purposes have found newer, more targeted spaces elsewhere, as evidenced by the plethora of organizations supporting impact-focused job searches and by new, more professionalized funding mechanisms.
Finally, the level of philosophical discourse around GHD has shifted in the past couple of years, especially as the philanthropic sector has seen massive changes, combined with a relative increase in marginal EA funding being directed towards other cause priority areas, particularly AI safety.
All this being said, I personally hope to engage more with GHD on this forum, both on the demand and supply side. I find it an incredibly intellectually stimulating area with values-aligned individuals, and I hope that this kind of seeding, as you've suggested, can drive more engagement with global health and development within EA.
Thanks for sharing - I think that the idea of making GiveWell's CEAs more interactive and approachable is fantastic. Right now, I think many of us implicitly trust GiveWell's transparent process and the mighty individuals who are willing to engage in depth and red-team critical CEAs. However, I do believe that in order for cost-effectiveness to become a norm in the philanthropic sector, approachability is critical.
Would you be open to piloting an extension of this approach with other organisations and interventions that may not be on GiveWell's top recommended list but for which rigorous CEAs have been prepared? I lead an organisation called Fortify Health that falls into this category, and I would love to see if there is a way we could collaborate on an extension of this methodology and tool.
This is a fantastic list, and as the leader of a rapidly growing organisation I strongly resonate with some of the challenges and fixes that you have shared.
One half-baked thought I have is that the skills that make individuals great at entrepreneurship are often orthogonal to the skills that make someone good at building the operational backbone of an organisation. For example, entrepreneurs need to be comfortable with rapid and often ad hoc decision-making, failing fast, risk and uncertainty, and generally a cowboy / cowgirl mindset. However, building long-term, sustainable operational structures for a growing organisation requires systematic thinking, SOPs and organised policies, risk aversion or at least a risk-mitigation mindset, certainty, and more of a 'city-planner' mindset.
Therefore, one thing that I have observed is that it can often be helpful for organisations to explicitly recognise when they are shifting from R&D / pilot mode to growth / scaling mode and to invest early in operational capacity.
Fair points, all well made, and as is so often the case, I think we are actually in vast agreement.
Sorry for not being clear on my analogy to overheads. The discussion of 'overheads' often feels to me like a proxy for a larger, ethical discussion about whether intrinsic motivation for charity work should be priced into the finances of an organisation. I agree wholeheartedly with you and disdain the 'overheads' framing for evaluating the financial health and financial models of organisations. Similarly, I don't think intrinsic motivation should be priced into team member pay, and I understand you feel the same way.
Upon reflection on this discussion, I am trying to think through why we observe different labor dynamics in the NGO space in parts of Africa compared with what I have observed in South Asia and the West. Perhaps the labour markets in these African contexts have been distorted by international funding with significantly greater purchasing power. In India, we observe that most NGOs are significantly funded by local sources, meaning that the funding dynamics are somewhat calibrated to existing, domestic market forces. I would be interested in your experiences and in why you believe you see these salary multiples in your local context.
@NickLaing As always, love to hear your opinions.
I would like to share one agreement and one disagreement, and would love to hear your thoughts.
Agreement: Fund solutions not projects - I strongly agree with this framing, and I fear that the project-centric approach to fundraising has caused NGOs to adopt a project-focused mindset in pursuing their objectives. The wicked problems of global health and development require long-term, systemic and integrated solutions across a range of private, public and civil society actors, and singular projects are unlikely to drive long-term meaningful change. While I believe that we can make real impact through marginal, cost-effective, low-hanging interventions (perhaps fortification of food staples :P), a solution mindset helps align thinking and incentives.
Disagreement: Lower salaries - I find similarities in your argument to the more common "funding overheads" argument which is where my initial disagreement stems from. In reading beyond the headline, I would actually agree that if NGOs are paying multiples more than comparable private or public sector roles then this can have a fundamental distorting effect on the labor market and may be attracting talent for extrinsic, short-term motivations which could be very problematic.
However, at least from my experience in the Indian NGO space, I have not seen the scenario you lay out of NGO workers being paid so much more. I wonder if this is specific to the Ugandan context or to other narrower contexts.
Personally, I am in favor of seeking to align NGO salaries with overall labor market salaries - whether that means increasing or decreasing the benchmarks. I don't think that the pricing in of intrinsic motivation or altruism is an effective salary-structure approach, especially once organisations pass beyond the "R&D" mode and start meaningfully growing and seeking general talent from the labour market.
Thanks for the very reasonable question. In short, our budget for the next financial year (through June 2026) is currently earmarked for existing programmatic obligations. Additional marginal funding would allow us to bring in support to start building out this solution, and we would leverage existing team members for the on-ground validation exercises.
This is a thoughtful framework, and I broadly find the approach reasonable. One dimension I'd like to see explored further, though, is the risks embedded in using collective user preference as the mechanism for determining what counts as "prosocial."
The post rightly flags the challenge of identifying uncontroversial prosocial actions, and grounding this in aggregated user preferences is an intuitive starting point. But collective preference carries well-documented risks, including majoritarian bias, and what users collectively want may not align with what is genuinely beneficial for minority groups or for society in the long run. The history of democratic theory gives us good reason to be cautious here.
This raises a question I'd genuinely like to hear views on: to what extent should formal governance structures, including governments and their regulatory capacity, play a role in defining the boundaries of prosocial AI behaviour? I recognise the practical complexity here, particularly given the current fragmented state of AI governance globally. But philosophically, democratically accountable institutions offer something that collective user preference alone cannot: legitimacy derived from deliberative processes, legal accountability, and explicit protections for minority interests.
I'm not arguing that regulation is a clean solution. But it might serve as a useful complementary layer to user preference aggregation, providing a check against the most significant failure modes of purely preference-based approaches.