Reposting from my personal Medium blog; all views are personal.
I am increasingly concerned about the future of global health.
This week, the global technology and development-sector ‘elites’ are converging on Delhi for the India AI Impact Summit and its dozens of associated side-events. With that, AI is emerging as THE central solution to the wicked problems of global health.
What concerns me is how AI-based conversations have come to dominate the sector’s mindspace and, increasingly, its power structures. The debates around decolonisation, political economy, systems building, social determinants of health and inequality have lost institutional backing. AI narratives have stepped in, bringing with them funding, legitimacy, and a small set of highly amplified voices; just have a look at the names attending the Summit.
These framings are not inherently wrong. But they risk blinding us. We are in danger of building an echo chamber focused on shiny, sexy AI-enabled tools while sidelining the nuanced, historical and, frankly, morally painful discussions that sit at the heart of our challenges.
If global health is not anchored first in its moral purpose of improving people’s physical, mental, social, and environmental wellbeing, we risk allowing AI, and the funding currents around it, to determine our priorities. In doing so, we may build a sector shaped by tools and incentives rather than by the futures we are ethically obligated to create. This is why I believe it is imperative to shift the conversation away from how to use AI, and back to a more fundamental question: why do we want to use it at all?
To be explicit about where I am coming from: I lead an Indian nonprofit, ‘Fortify Health’, that actively uses data and evidence to amplify impact, and we are seriously exploring how AI might help us do that better. I worked at IDinsight, including supporting Educate Girls on one of the earliest tangible uses of artificial intelligence to improve real-world outcomes on the ground. Heck, for full transparency, I even used artificial intelligence to help structure this article (though the views and words are my own). My concern is not whether AI should be used in global health, but rather how it is fundamentally changing the conversation and creating a new ‘elite’.
Losing Sight of the Purpose
Powerful tools like AI do not merely help solve problems; they shape which problems are visible, fundable, and legitimate. If we continue running around with a hammer, all we will see are nails.
AI incentivises the pursuit of a particular class of problems, namely those that are measurable, optimisable, and amenable to rapid iteration. Let me share three examples of where I believe AI falls short:
- Policy change: If we are serious about scale, government will often be the payer and doer (especially within our new funding paradigm). But policy change is not a prediction problem. It requires aligning incentives across ministries, navigating historical norms, reading power structures, and building legitimacy over years. These are deeply human processes. They depend on trust, reputation, and judgment. AI can model scenarios or summarise consultation inputs. It cannot build coalitions or negotiate political feasibility. And without adoption, even the most elegant technical solution remains irrelevant.
- Behaviour Change: We are seeing a wave of AI-powered interventions embedded in platforms like WhatsApp, optimising the delivery of health information. There is real efficiency here. But awareness alone does not drive sustained behaviour change. Capability is only one piece. Opportunity and motivation — social norms, economic constraints, emotional drivers — matter just as much. If we reduce public health to an information transmission problem, we risk confusing communication efficiency with actual transformation.
- Social Structures and Trust: Many AI tools attach themselves to high-trust actors, especially frontline workers like nurses and teachers. Yet these individuals are not merely conduits for information. They carry contextual knowledge, relational equity, and legitimacy. If we convert their interactions into auto-generated outputs, we risk eroding the very reason why we rely on these invaluable (and highly underappreciated) individuals — their unique local relationships and trust! In complex systems, success often depends less on optimal design and more on socially implementable solutions. And implementability rests on trust.
Our challenges are not primarily technical; they are political, economic, social, and behavioural at their core. There is nothing inherent about artificial intelligence that makes it uniquely suited to addressing such problems. If anything, given its lack of humanity, AI is often poorly positioned to grapple with issues whose solutions ultimately depend on trust, legitimacy, and human judgment.
The New AI Echo Chambers and Rise of the New Elite
The AI-centric framing that is coming to dominate current discourse is narrowing the global health conversation. It is shaping which ideas, organisations, and approaches are given space, power, and legitimacy, and which quietly fall away.
The myriad ‘AI-adjacent’ summits, accelerators, panels and conversations are backed by some of the world’s largest technology institutions, or by philanthropic actors closely linked to them, whether through funding, partnerships, or shared intellectual ecosystems. The major organisations at the forefront of AI algorithms, such as Microsoft, Meta, OpenAI, and Google, have all played visible roles in shaping this terrain.
For those of us operating on the front lines, the message feels implicit but clear: get on board, or risk being left behind. If you are not sitting on these panels, being invited to these closed-door roundtables, or able to eloquently describe your technology strategy, then you do not have access to key decision-makers. I have experienced first-hand at least a dozen instances where I was explicitly asked, in a funding conversation or grant application, how our team is planning to use technology and AI to amplify impact!
We spent much of the early 2020s grappling with power asymmetries rooted in colonial histories. I am concerned that, without deliberate course correction, we may now be constructing new forms of power asymmetry, this time backed not by empire, but by algorithms.
Is AI the new RCT?
The randomized controlled trial era of the 2010s demonstrated how a dominant tool can simultaneously strengthen rigour and narrow ambition. I worry that our current fixation on artificial intelligence risks repeating this pattern: elevating a powerful method in ways that constrain how global health problems are framed, funded, and addressed.
Poor Economics by Prof. Esther Duflo and Prof. Abhijit Banerjee was the book that first drew me into development economics and global health. I was inspired by the idea that pragmatic, scientific, data-driven tools could meaningfully amplify impact. Today, I lead a nonprofit built on high-quality randomized controlled trial evidence, and I remain convinced these tools have improved the lives of hundreds of millions of people.
Yet experience has also revealed the limits of RCTs. As development economics has matured, it has become increasingly clear that many of global health’s most pressing challenges, such as malnutrition, health system resilience, pandemic preparedness, non-communicable diseases, antimicrobial resistance, and climate change, are wicked problems. They are political, economic, social, and institutional at their core, and rarely yield to optimisation alone. Durable impact comes from people changing behaviours, institutions shifting incentives, and systems evolving over time — see this fantastic article on Scaling Global Health Interventions in SSIR.
RCTs remain an extraordinarily powerful tool, but they are not a solution in themselves. In much the same way, artificial intelligence will not solve global health challenges on its own. Overemphasis on any single method risks incentivising narrow, highly quantifiable interventions, often expressed in the language preferred by large funders, while diverting attention from learning by doing, systems building, and the hard work of sustained behaviour and institutional change.
The lesson from the randomista era is not to abandon tools, but to resist allowing them to define our ambitions. It would be a profound mistake to repeat that narrowing of perspective now that a new and powerful tool is once again in our hands.
A Way Forward: Putting Purpose Back at the Centre
The solution to our challenges is not to reject tools, but to subordinate them, placing clarity of purpose and goals first, and only then selecting the right tool for the right questions at the right time. Undoubtedly, artificial intelligence can be the right tool for many problems. But what we urgently need is a recalibration of the core questions we are asking ourselves in global health:
● How are we seeking to reduce inequalities?
● How do we balance preventative action with curative responses?
● What role does systems building play in achieving durable impact?
● How should responsibility be divided between the private and public sectors when it comes to medical technologies, data, and information sharing?
● How do we build food systems that are both effective and equitable?
● What is the role of the public sector in regulating markets while still enabling innovation?
● How do governments and businesses respectively function as doers, payers, and scalers of global health interventions?
● Critically, what role do local and ground-level voices play in shaping these decisions, and how do we ensure dignity and agency are preserved?
These are the purpose-driven discussions that I believe we should be having, anchored in the defining health challenges of our time. By reasserting teleology, we can move toward conversations and interventions focused on outcomes rather than tools, away from spaces dominated by the technologically elite and towards those grounded in lived realities and experiential knowledge. In my opinion, it is only through these perspectives that we can identify the right tools at the right moments and, in doing so, make meaningful and lasting improvements to global health outcomes.
