I recently participated in a hackathon with the theme "AI for Good." Yet throughout the event, looking at the organizers' promotional materials and the other competing projects, I felt that something was amiss: a certain kind of voice was missing.
Our lives are filled with countless "overlooked details," and I saw many projects aimed at addressing these details, such as those designed to help the elderly, children, and people with disabilities. These included intelligent recipe assistants for patients, voice models for autistic children, diagnostic systems for rural doctors, as well as projects supporting agriculture, psychological interventions, and more. These are undoubtedly wonderful designs, and they certainly improve the lives of some. Yet, seeing them, I couldn't shake the feeling that another kind of voice was absent—a voice that isn't so comfortable to hear.
When we talk about "AI for Good" and how to use artificial intelligence to make the world a better place, we tend to think in patterns, falling into a kind of mental inertia and focusing on groups that have already been labeled. Beneath this grand narrative, I can't help but wonder: are we being selective in what we "see"?
When we think about what to do, we always start from a grand perspective. We are always eager to help those who have been labeled as "vulnerable groups"—the elderly who need companionship, the children who need better education. But what about the Swing Kids defying the Third Reich, the Zoot Suiters in 1940s America, the hippies of the counter-culture, or the Shamate, the ostracized migrant-worker youth of China's factory towns? These groups are often stigmatized, simplified, misunderstood, and relegated to the "margins," forgotten, or even shunned. Their existence, their culture, their struggles, and their rebellion against or alienation from the mainstream order seem to have never been illuminated by the light of "AI for Good." Are they really "problem youth," or are they just seeking a shred of dignity amidst loneliness, childhood trauma, and a world that doesn't understand them?
For example:
When we talk about "caring for the elderly," a man who has just come from the gym, muscular and with a generous pension, is undoubtedly "elderly." But is he the "vulnerable" person we need to prioritize helping the most?
Conversely, when we see a fifteen- or sixteen-year-old with colorful hair and a body full of tattoos, our first reaction might be "delinquent." But do we ever stop to think that he might come from a poor farming family with critically ill parents, and that he works alone in a factory, using this external "armor" to protect himself? Can a learning gadget that combines play and study solve his problems?
And what about the silent, struggling majority? The office workers living in old apartments, worrying daily about tuition and their parents' medical bills? The young people crushed by structural pressure, forced to choose between caring for their aging parents and their own children? Who is the "vulnerable" group here? Is their exhaustion and despair drowned out by the daily hustle, untouched by the grand narrative of "AI for Good"?
Beneath the glossy surface of society, who is truly bearing the structural pressure? Whose dignity is precariously eroding day by day, yet struggles to receive effective support and attention? Does the "respect" and "care" we take for granted sometimes become superficial, or even a new form of moral high ground that conceals deeper injustices?
The pain of individuals trapped in systemic predicaments, deprived of a voice, for whom even "being seen" is a luxury, is often diffuse, difficult to attribute to a single cause, and may even challenge the existing social order and our comfortable perceptions. Consequently, their needs and plights can, paradoxically, be marginalized in the mainstream "ethical agenda," or simplified into individual problems requiring "psychological counseling," rather than systemic issues demanding fundamental changes to the social structure.
In our current systems of evaluation and resource allocation, whose difficulties and needs are most often underestimated or even ignored? How can we empower those who are truly crushed at the bottom, systematically stripped of their dignity and hope, and struggling to survive outside the mainstream view, to regain control of their own destinies?
That extreme individual suffering, that despair born from being pushed to the brink of survival by poverty, discrimination, oppression, and institutional injustice, and the fundamental questioning of life's meaning that follows—do these most direct and heart-wrenching predicaments always become the core, priority issues in discussions of AI ethics? Do they receive an equal and urgent level of attention and response?
I see my friend giving up on himself. When encouraged to study, his only response is "I'm lazy." He finds his sense of existence by attacking others, lives in constant anxiety, complains about politics and reality to a chatbot every day, and wallows in self-abandonment, thinking he'll just end it all when he can't go on anymore.
I see my friend in daily agony because of her marriage. I see my friend lost, with no idea what to do with his future. I see so many girls whose only goal is to marry a rich man. I see so many people who constantly distract themselves with all kinds of entertainment but dare not face reality.
When an individual feels that no amount of effort will allow them to meet societal expectations or improve their situation, "giving up on oneself" can become a form of "self-preservation" or "silent protest." And the thought that "I'll just die when I can't take it anymore" is the most extreme manifestation of this despair: a signal that must be taken with the utmost seriousness and vigilance.
These are not isolated cases. To varying degrees, they reflect the pressure, confusion, and struggles that many people, especially the young, may be experiencing in modern society. When individuals feel immense pressure, injustice, powerlessness, and a lack of hope in their real lives, they may adopt negative coping mechanisms—whether it's disillusionment with relationships, confusion about the future, fantasies of "shortcuts," or immersion in the virtual world. Behind these behaviors often lies a profound longing for dignity, a sense of worth, security, understanding, love, and a "meaningful life," coupled with the immense disappointment that these desires cannot be met in reality.
When the pressures of reality are too great, the sense of frustration too strong, or the future feels hopeless, the instant gratification, sense of control, and temporary oblivion offered by the virtual world become an incredibly tempting "sanctuary." However, while this escapism can temporarily alleviate anxiety, in the long run, it often exacerbates the individual's disconnect from reality, eroding their will and ability to change their situation, thus creating a vicious cycle.
This is the true picture of those "beaten down by life": the real, widespread, individual pain and collective anxiety that is overlooked or simplified by the mainstream narrative, hidden beneath the daily clamor. Merely providing "treatment" solutions like "early education machines" or "psychotherapy" may not touch the fundamental predicaments arising from one's "fate": the deeper social structures, economic pressures, unequal opportunities, and the resulting loss of hope.
AI for Good may not be able to cure poverty, discrimination, oppression, or institutional injustice, nor solve problems of justice, survival, dignity, and a future without hope. But it is precisely because we "see" all of this, the real pain and struggle hidden beneath the daily clamor, that perhaps we should try, in a different way, to touch and heal these "wounded souls," rather than choosing to ignore them.

I'm not sure I agree with the premise of this argument: that the concept of AI for good is faulty because it can't solve all the problems.
I don't think "AI for good" claims to solve all the problems. Absolutely let's take issue with the idea that AI is going to resolve everything, but that doesn't mean it can't help with anything.
But I'm not worried that AI won't touch the fundamental problems of "social structures, economic pressures, and unequal opportunities". I'm worried that it already is touching them, and is moving the dial in the wrong direction. Automation moves wealth and power away from individuals and towards companies. The concentration of wealth and power in the hands of an ever smaller number of individuals and companies is exactly what drives economic inequality and social problems.
Unless AI is governed and managed appropriately, it's going to be more a part of the problem than a part of the solution.
I think this op-ed sets out some of these issues really well: https://nathanlawkc.substack.com/p/its-time-to-build-a-democracy-ai
You've absolutely nailed it. Thank you for this incredibly insightful comment.
I want to wholeheartedly agree with your core point: my deepest fear isn't just that 'AI for Good' won't solve these fundamental problems, but that mainstream AI development, as it currently stands, is actively exacerbating them. You've perfectly articulated the mechanism behind this: the automation-driven concentration of wealth and power.
To clarify the premise of my original post: I don't believe the concept of 'AI for Good' is inherently flawed, nor is my critique that 'AI for Good is deficient because it can't solve every problem.' My critique is aimed at the narrative's focus. I am concerned that the "AI for Good" movement often directs our attention and resources towards more palatable, surface-level issues. Meanwhile, the far more powerful, fundamental engine of commercial AI development relentlessly fuels the very structural inequalities we claim to be fighting.
This is exactly what I see in some of the projects I've encountered.
Your point and mine are two sides of the same coin, and together they paint a grim picture:
My argument is that the "good" side of AI often has a focus that is too narrow, neglecting the deepest forms of suffering.
Your argument is that the dominant, commercial side of AI is actively making the root causes of this suffering worse.
This leads to a terrifying conclusion: our "AI for Good" efforts, however well-intentioned, risk becoming a rounding error—a fig leaf hiding a much larger, systemic trend towards greater inequality.
This brings me to a follow-up question that I'd love to hear your (and others') thoughts on:
Given this reality, what is the most effective role for the "AI for Good" community? Should we continue to focus on niche applications? Or should our primary focus shift towards advocacy, governance, and creating "counter-power" AI systems—tools designed specifically to challenge the concentration of wealth and power you described? How do we stop applying bandages and start treating the disease itself?
Yes, we are in total agreement. https://gradual-disempowerment.ai/ is a scary and relevant description of the concentration of wealth and power.
I think it's about the framing of AI for good. The "AI for good" narrative is mostly asking "what can AI do?", and as you say, this just leads to sticking plasters; at worst, it's technical people designing solutions to problems they don't really understand.
I think the question in AI for good instead needs to be "How do we do AI?". This means looking at how the public are involved in development of AI, how people can have a stake, how the public can help to oversee and benefit from AI, rather than corporations.
https://publicai.network/ are making headway on some of this thinking.
Personally, I don't think that there's a tension between niche applications of AI and governance/counter power AI systems. I think the answer is to create the niche applications with the public, and in ways that empower the public. For example, how can the public have greater control over their data and share in the profits from its use in AI?
I really appreciate how this post highlights the real, tangible suffering that often remains invisible beneath grand narratives like "AI for Good." It's crucial that we recognize the everyday struggles of people who are exhausted, economically strained, and emotionally burned out—struggles that tech-focused solutions frequently overlook.
Your critique resonates deeply with my recent work on the Time × Scope framework, where I suggest explicitly structuring ethics around two core parameters: how far into the future we look (Time, δ) and how broadly we extend our moral concern (Scope, w). One of the strengths of this framework is precisely its flexibility—it can prioritize both systemic, long-term challenges and deeply personal, immediate suffering.
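To make the two parameters a little more concrete, here is a deliberately toy sketch of one possible reading: ethical value as a time-discounted (δ), scope-weighted (w) sum of welfare. This is purely illustrative rather than the framework's formal definition, and all group names and numbers below are hypothetical.

```python
def ethical_value(utilities, delta, w):
    """Toy reading of Time x Scope.

    utilities[t][g]: benefit to group g at time step t (hypothetical).
    delta in (0, 1]: how much the future counts (Time).
    w[g] in [0, 1]: how far moral concern extends to group g (Scope).
    """
    total = 0.0
    for t, per_group in enumerate(utilities):
        for group, benefit in per_group.items():
            total += (delta ** t) * w.get(group, 0.0) * benefit
    return total

# Hypothetical benefits over two time steps for two kinds of groups.
utilities = [
    {"labeled_vulnerable": 1.0, "stigmatized": 1.0},
    {"labeled_vulnerable": 1.0, "stigmatized": 2.0},
]

# A narrow Scope (w near zero for stigmatized groups) barely registers
# benefits to them; widening w changes the assessed value.
narrow = ethical_value(utilities, delta=0.9,
                       w={"labeled_vulnerable": 1.0, "stigmatized": 0.1})
wide = ethical_value(utilities, delta=0.9,
                     w={"labeled_vulnerable": 1.0, "stigmatized": 1.0})
print(narrow < wide)  # widening Scope raises the assessed value
```

The point of the toy model is simply that the same intervention scores very differently depending on how w is set, which is exactly where the flexibility you describe comes in.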
It would be insightful to explore how this structured ethical approach might help ensure AI interventions truly reflect and address both the broad systemic goals and the immediate, tangible needs you highlight. For instance, how might we use such frameworks to ensure AI-driven initiatives genuinely ease the chronic burnout of individuals worrying about rent or basic well-being today, rather than merely amplifying abstract ideals?
I’d be very interested to hear your thoughts on balancing these two scales—macro-level visions and micro-level realities—without losing sight of either.
Thank you so much for this insightful comment and for introducing me to your work. The "Time × Scope" framework is a powerful lens for analysis, and it gives me a new, structured language to articulate the core problems I was trying to describe.
If I'm understanding it correctly, your framework provides a crucial map for ethical deliberation. My essay, in essence, is a real-world exploration of what happens when we get the parameters on that map wrong. I would argue that the "AI for Good" narrative I critiqued often sets its Scope (w) far too narrowly, precisely because it relies on a limited, intuitive empathy that only extends to neatly labeled, "palatable" groups, while ignoring the stigmatized and the structurally oppressed.
This brings me to what I believe is the core psychological variable that your framework can help us address: empathy. It feels like the fundamental engine that drives the Scope (w) parameter. The true power of your framework might lie not just in setting these parameters top-down, but in inspiring us to ask how AI itself could be used to cultivate and expand the very empathy we need.
This could become a new, constructive direction for "AI for Good."
This connects directly to your excellent question about balancing macro-level visions and micro-level realities.
I believe the answer lies in using the micro to constantly ground and validate the macro. The tangible well-being of the individual—which we can only truly appreciate through empathy—must be the ultimate "ground truth" for any grand, systemic AI initiative.
In the context of your framework, the balance can be achieved by stipulating that no matter how far the Time (δ) horizon is, its implementation must demonstrably improve the "flourishing" of individuals within our immediate Scope (w). If a grand vision for the future is built upon a failure of empathy for the silent suffering of the present, the framework would tell us that our ethical equation is fundamentally flawed. The micro-reality isn't something to be balanced against the macro-vision; it's the foundation upon which that vision must be built.
Thank you again for providing such a clarifying and productive framework. It's a perfect bridge between a humanistic critique and a structured, actionable ethical approach.
You've absolutely captured the essence of the Time × Scope framework, and your interpretation is spot-on.
I particularly appreciate how you highlighted empathy as a key driver of the Scope parameter (w)—this perspective significantly enriches my initial formulation. Empathy indeed shapes our moral boundaries, and the idea of actively using AI to expand our empathic circles is compelling and aligns perfectly with the deeper aspirations of the framework.
Your insight about the interplay between macro-vision and micro-reality also resonates strongly with me. You're completely right that the micro-level—individual flourishing and immediate well-being—must anchor any broader, long-term vision. The ultimate goal, after all, is not abstract perfection but tangible improvement in real lives.
Your response helps bring clarity and practical relevance to the Time × Scope concept. I'm genuinely excited to explore these dimensions further, especially how AI might actively foster empathy and directly enhance human flourishing.
Thanks again for your insightful and inspiring contributions—it's truly rewarding to see these ideas resonate and evolve through such engaging dialogue!
Thank you for your incredibly generous and thoughtful response. I'm genuinely moved and inspired by this exchange.
This dialogue has been a profound learning experience for me as well. You've provided a powerful, structured framework that has given clarity and language to intuitions I've struggled to articulate. Seeing how these humanistic concerns can be integrated with such a rigorous model has been incredibly rewarding.
I am truly excited by this shared vision we've landed on—the idea of shifting the "AI for Good" focus from mere problem-solving to actively cultivating empathy and human flourishing. That feels like a genuinely hopeful and meaningful direction for our collective future.
I look forward with great anticipation to following your work on the Time × Scope framework and seeing how these ideas evolve.
Thank you again for one of the most stimulating and rewarding conversations I've ever had.
I'm genuinely delighted that our dialogue proved so inspiring for you too! Your insights about empathy as the driving force behind Scope (w) and the fundamental role of micro-level realities for macro-level vision were incredibly valuable to me and profoundly enriched my understanding. Thank you for your openness, depth of thought, and this truly stimulating conversation. I look forward to crossing paths in future discussions!