Hey everyone, I’ve been going through the EA Introductory Program, and I have to admit some of these ideas make sense, but others leave me with more questions than answers. I’m trying to wrap my head around certain core EA principles, and the more I think about them, the more I wonder: Am I misunderstanding, or are there blind spots in EA’s approach?
I’d really love to hear what others think. Maybe you can help me clarify some of my doubts. Or maybe you share the same reservations? Let’s talk.
Cause Prioritization: Does It Ignore Political and Social Reality?
EA focuses on doing the most good per dollar, which makes sense in theory. But does it hold up when applied to real-world contexts, especially in countries like Uganda?
Take malaria prevention. It’s a top EA cause because it’s highly cost-effective: $5,000 can save a life through bed nets (GiveWell, 2023). But what happens when government corruption or instability disrupts these programs? The Global Fund scandal in Uganda saw $1.6 million in malaria aid mismanaged (Global Fund Audit Report, 2016). If money isn’t reaching the people it’s meant to help, is it really the best use of resources?
And what about leadership changes? Policies shift unpredictably here. A national animal welfare initiative I supported lost momentum when political priorities changed. How does EA factor in these uncertainties when prioritizing causes? It feels like EA assumes a stable world where money always achieves the intended impact. But what if that’s not the world we live in?
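To be fair, one standard EA-style answer is to fold these uncertainties into the expected value: discount an intervention’s headline cost-effectiveness by the probability that funds actually reach beneficiaries. Here’s a toy back-of-the-envelope sketch in Python. The $5,000 figure is the GiveWell estimate cited above; the delivery probabilities are made-up assumptions, purely for illustration.

```python
# Toy back-of-the-envelope: risk-adjusted cost per life saved.
# The $5,000 base figure is the GiveWell (2023) estimate cited above;
# the delivery probabilities below are hypothetical, for illustration only.

BASE_COST_PER_LIFE = 5_000  # USD, via insecticide-treated bed nets

def risk_adjusted_cost(base_cost: float, p_delivered: float) -> float:
    """Expected cost per life saved, discounting for the chance that
    funds are lost to mismanagement or political disruption."""
    return base_cost / p_delivered

# Hypothetical delivery scenarios (assumed numbers):
for label, p in [("stable context", 0.95),
                 ("some leakage", 0.75),
                 ("severe disruption", 0.40)]:
    print(f"{label}: ${risk_adjusted_cost(BASE_COST_PER_LIFE, p):,.0f} per life saved")
```

Even under the “severe disruption” assumption the adjusted figure can still compare well against other options, which is roughly how GiveWell-style analyses handle this. But the sketch also exposes my worry: the whole calculation hinges on probability estimates that may be hardest to pin down exactly where the risk is highest.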
Longtermism: A Luxury When the Present Is in Crisis?
I get why longtermists argue that future people matter. But should we really prioritize them over people suffering today?
Longtermism tells us that existential risks like AI could wipe out trillions of future lives. But in Uganda, we’re losing lives now: 1,500+ people die from rabies annually (WHO, 2021), and 41% of children suffer from stunting due to malnutrition (UNICEF, 2022). These are preventable deaths.
Derek Parfit (Reasons and Persons) says future people matter just as much as we do. But Bernard Williams (Ethics and the Limits of Philosophy) counters that moral urgency matters more when suffering is immediate and solvable.
So my question is: are we sacrificing real, solvable suffering today for speculative risks tomorrow? It’s not that I don’t care about the long-term future. But can we really justify diverting resources from urgent crises in order to reduce the probability of a future catastrophe?
AI Safety: A Real Threat or Just Overhyped?
When I first saw AI as an EA cause area, I honestly thought, “This sounds more like science fiction than philanthropy.” But after reading more, I see why people are concerned: misaligned AI could be dangerous.
But does it really make sense to prioritize AI over problems like poverty, malnutrition, or lack of healthcare? Uganda has 1.8 million out-of-school children (UNESCO, 2023) and 50% of rural communities without clean water (WaterAid, 2022).
Nick Bostrom (Superintelligence) warns that AI could pose an existential risk. But AI timelines are uncertain, and most risks are still theoretical. Meanwhile, a $3 deworming pill (J-PAL, 2017) can improve a child’s lifetime earnings by 20%.
So my question is: how do we compare existential risks with tangible, preventable suffering? Is AI safety over-prioritized in EA?
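For context, the comparison longtermists usually have in mind is an expected-value one: multiply the size of the stakes by the probability of affecting them. Here’s a deliberately crude sketch. The deworming figures come from the J-PAL citation above; every AI-related number is a placeholder assumption, chosen only to show how the arithmetic behaves.

```python
# Crude expected-value comparison: a tangible intervention vs. an
# existential-risk intervention. Deworming cost is from the J-PAL
# citation above; all AI-risk numbers are placeholder assumptions.

def ev_per_dollar(p_success: float, value_if_success: float, cost: float) -> float:
    """Expected benefit per dollar spent."""
    return (p_success * value_if_success) / cost

# Deworming: $3 per pill; count one child helped per pill.
deworming = ev_per_dollar(p_success=1.0, value_if_success=1.0, cost=3.0)

# AI safety (entirely hypothetical): a $1B programme that cuts
# extinction risk by 0.01%, with 10^15 potential future lives at stake.
ai_safety = ev_per_dollar(p_success=1e-4, value_if_success=1e15, cost=1e9)

print(f"Deworming: {deworming:.2f} children helped per dollar")
print(f"AI safety: {ai_safety:,.0f} expected future lives per dollar")
```

Under these made-up numbers the AI intervention wins by orders of magnitude, which is exactly the logic longtermists lean on. But the units aren’t really comparable (a child helped today versus a statistical future life), and the placeholder probabilities are precisely the “speculative” part I’m questioning, so the arithmetic clarifies the disagreement without resolving it.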
Earning to Give: A Powerful Strategy or a Moral Loophole?
I was really intrigued by EA’s idea of earning to give: working in high-paying jobs in order to donate more. It sounds noble. But then I started thinking…
What if someone works at a factory farm, an industry known for extreme animal suffering, and then donates their salary to end factory farming? Isn’t that a contradiction?
Peter Singer (Famine, Affluence, and Morality) argues that we’re obligated to prevent harm if we can do so at little cost to ourselves. But what if we are the ones causing the harm in the first place?
This isn’t just theoretical. A high earning executive in fossil fuels could donate millions to climate change charities. But wouldn’t it be better if they didn’t contribute to the problem in the first place?
EA encourages “net impact” thinking, but is this approach ethically consistent?
Global vs. Local Causes: Does Proximity Matter?
EA encourages donating where impact is highest, which often means low-income countries. But what happens when you live in one of those countries? Should I still prioritize problems elsewhere?
Take this example:
- Deworming is a high-impact intervention—just $0.50 per treatment (GiveWell, 2023).
- But in Uganda, communities lack access to clean water, making reinfection common. So should we prioritize deworming, or fix the root problem first? (I’ve sketched some rough numbers on this below.)
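To make that trade-off concrete, here’s a rough per-person cost comparison. The $0.50 treatment cost is from the GiveWell citation above; the re-treatment frequency, water-point cost, and number of people served are all made-up assumptions, just to show the shape of the comparison.

```python
# Toy comparison: recurring deworming vs. a one-time clean-water fix.
# $0.50/treatment is from the GiveWell (2023) citation above; every
# other number is a hypothetical assumption for illustration only.

TREATMENT_COST = 0.50      # USD per deworming treatment
TREATMENTS_PER_YEAR = 2    # assumed: reinfection forces re-treatment
YEARS = 15                 # assumed planning horizon

WATER_POINT_COST = 10_000  # assumed: one borehole or protected spring
PEOPLE_SERVED = 300        # assumed: people served per water point

deworming_per_person = TREATMENT_COST * TREATMENTS_PER_YEAR * YEARS
water_per_person = WATER_POINT_COST / PEOPLE_SERVED

print(f"Deworming, per person over {YEARS} years: ${deworming_per_person:.2f}")
print(f"Clean water point, per person (one-off):  ${water_per_person:.2f}")
```

Under these assumed numbers deworming still looks cheaper per person, but the water point also prevents diarrhoeal disease and other harms that a deworming-only comparison ignores, which is part of why I don’t think unit costs alone settle the root-cause question.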
Amartya Sen (Development as Freedom) argues that well-being isn’t just about cost-effectiveness; it’s about giving people the capability to sustain improvements in their lives. That makes me wonder: Are EA cause priorities too detached from local realities? Shouldn’t people closest to a problem have more say in solving it?
Final Thoughts: What Am I Missing?
I’m not here to attack EA; I see its value. But after going through this program, I can’t help but feel that some things just don’t sit right.
🔹 Does cause prioritization account for real-world challenges like political instability?
🔹 How do we balance longtermism with urgent crises today?
🔹 Is AI safety getting too much attention compared to tangible problems?
🔹 Should earning to give have stronger ethical guidelines?
🔹 How do we ensure EA incorporates local knowledge instead of focusing only on global metrics?
Any recommendations of resources that might bring clarity to these issues are welcome.
A warm welcome to the forum!
I don't claim to speak authoritatively, or to answer all of your questions, but perhaps this will help continue your exploration.
There's an "old" (by EA standards) saying that EA is a Question, Not an Ideology. Most of what connects the people on this forum is not that we all work in the same cause area, share the same underlying philosophy, or have the same priorities. Rather, what connects us is rigorous inquiry into the question of how we can do the most good for others with our spare resources. Because many of these questions are philosophical, people who start from that same question can and do disagree.
Accordingly, people in EA fall on both sides of many of the questions you ask. There are definitely people in EA who don't think we should prioritize future lives over present lives. There are definitely people who are skeptical about AI safety. There are definitely people who are concerned about the "moral licensing" effects of earning to give.
So I guess my general answer to your closing question is: you are not missing anything; on the contrary, you have identified a number of questions that people in EA have been debating for the past ~20 years and will likely continue to debate. If you share the general goal of effectively doing good for the world (as, from your bio, it looks like you do), I hope you will continue to think about these questions in an open-minded and curious way. Hopefully discussions and interactions with the EA community will provide you some value as you do so. But ultimately, what matters more than your agreement or disagreement with the EA community on any particular issue is your own commitment to thinking carefully about how you can do good.
Thanks, Cullen
I really appreciate this perspective. The idea that EA is a question rather than an ideology really resonates, especially when thinking about the diversity of approaches within the movement. It’s reassuring to know that many of these debates about longtermism, AI safety, and earning to give aren’t settled, but are ongoing discussions that reflect different ways of reasoning about impact. Coming from a background in fish welfare and food systems in Uganda, I see similar tensions: how do we balance immediate suffering with long-term ch...