
Hey everyone, I’ve been going through the EA Introductory Program, and I have to admit some of these ideas make sense, but others leave me with more questions than answers. I’m trying to wrap my head around certain core EA principles, and the more I think about them, the more I wonder: Am I misunderstanding, or are there blind spots in EA’s approach?

I’d really love to hear what others think. Maybe you can help me clarify some of my doubts. Or maybe you share the same reservations? Let’s talk.

Cause Prioritization: Does It Ignore Political and Social Reality?

EA focuses on doing the most good per dollar, which makes sense in theory. But does it hold up when you apply it to real-world contexts, especially in countries like Uganda?

Take malaria prevention. It’s a top EA cause because it’s highly cost-effective: roughly $5,000 can save a life through bed nets (GiveWell, 2023). But what happens when government corruption or instability disrupts these programs? The Global Fund scandal in Uganda saw $1.6 million in malaria aid mismanaged (Global Fund Audit Report, 2016). If money isn’t reaching the people it’s meant to help, is it really the best use of resources?

And what about leadership changes? Policies shift unpredictably here. A national animal welfare initiative I supported lost momentum when political priorities changed. How does EA factor in these uncertainties when prioritizing causes? It feels like EA assumes a stable world where money always achieves the intended impact. But what if that’s not the world we live in?

Longtermism: A Luxury When the Present Is in Crisis?

I get why longtermists argue that future people matter. But should we really prioritize them over people suffering today?

Longtermism tells us that existential risks like AI could wipe out trillions of future lives. But in Uganda, we’re losing lives now—1,500+ die from rabies annually (WHO, 2021), and 41% of children suffer from stunting due to malnutrition (UNICEF, 2022). These are preventable deaths.

Derek Parfit (Reasons and Persons) says future people matter just as much as we do. But Bernard Williams (Ethics and the Limits of Philosophy) counters that moral urgency matters more when suffering is immediate and solvable.

So my question is: are we sacrificing real, solvable suffering today for speculative risks tomorrow? It’s not that I don’t care about the long-term future. But can we really justify diverting resources from urgent crises in order to reduce the probability of a future catastrophe?

AI Safety: A Real Threat or Just Overhyped?

When I first saw AI as an EA cause area, I honestly thought, “This sounds more like science fiction than philanthropy.” But after reading more, I see why people are concerned: misaligned AI could be dangerous.

But does it really make sense to prioritize AI over problems like poverty, malnutrition, or lack of healthcare? Uganda has 1.8 million out-of-school children (UNESCO, 2023), and 50% of rural communities lack clean water (WaterAid, 2022).

Nick Bostrom (Superintelligence) warns that AI could pose an existential risk. But AI timelines are uncertain, and most risks are still theoretical. Meanwhile, a $3 deworming pill (J-PAL, 2017) can improve a child’s lifetime earnings by 20%.

So my question is: how do we compare existential risks with tangible, preventable suffering? Is AI safety over-prioritized in EA?

Earning to Give: A Powerful Strategy or a Moral Loophole?

I was really intrigued by EA’s idea of earning to give: working in high-paying jobs in order to donate more. It sounds noble. But then I started thinking…

What if someone works at a factory farm, an industry known for extreme animal suffering, and then donates their salary to end factory farming? Isn’t that a contradiction?

Peter Singer (Famine, Affluence, and Morality) argues that we’re obligated to prevent harm if we can do so at little cost to ourselves. But what if we are the ones causing the harm in the first place?

This isn’t just theoretical. A high-earning executive in fossil fuels could donate millions to climate change charities. But wouldn’t it be better if they didn’t contribute to the problem in the first place?

EA encourages “net impact” thinking, but is this approach ethically consistent?

Global vs. Local Causes: Does Proximity Matter?

EA encourages donating where impact is highest, which often means low-income countries. But what happens when you live in one of those countries? Should I still prioritize problems elsewhere?

Take this example:

  • Deworming is a high-impact intervention—just $0.50 per treatment (GiveWell, 2023).
  • But in Uganda, communities lack access to clean water, making reinfection common. So, should we prioritize deworming, or fix the root problem first?

Amartya Sen (Development as Freedom) says that well-being isn’t just about cost-effectiveness; it’s about giving people the capability to sustain improvements in their lives. That makes me wonder: Are EA cause priorities too detached from local realities? Shouldn’t people closest to a problem have more say in solving it?

Final Thoughts: What Am I Missing?

I’m not here to attack EA; I see its value. But after going through this program, I can’t help but feel that some things just don’t sit right.

🔹 Does cause prioritization account for real-world challenges like political instability?
🔹 How do we balance longtermism with urgent crises today?
🔹 Is AI safety getting too much attention compared to tangible problems?
🔹 Should earning to give have stronger ethical guidelines?
🔹 How do we ensure EA incorporates local knowledge instead of focusing only on global metrics?

Recommendations of resources that could clarify these issues are welcome.


Comments (12)

A warm welcome to the forum!

I don't claim to speak authoritatively, or to answer all of your questions, but perhaps this will help continue your exploration.

There's an "old" (by EA standards) saying in EA, that EA is a Question, Not an Ideology. Most of what connects the people on this forum is not necessarily that they all work in the same cause area, or share the same underlying philosophy, or have the same priorities. Rather, what connects us is rigorous inquiry into the question of how we can do the most good for others with our spare resources. Because many of these questions are philosophical, people who start from that same question can and do disagree.

Accordingly, people in EA fall on both sides of many of the questions you ask. There are definitely people in EA who don't think that we should prioritize future lives over present lives. There are definitely people who are skeptical about AI safety. There are definitely people who are concerned about the "moral licensing" effects of earning-to-give.

So I guess my general answer to your closing question is: you are not missing anything; on the contrary, you have identified a number of questions that people in EA have been debating for the past ~20 years and will likely continue doing so. If you share the general goal of effectively doing good for the world (as, from your bio, it looks like you do), I hope you will continue to think about these questions in an open-minded and curious way. Hopefully discussions and interactions with the EA community will provide you some value as you do so. But ultimately, what is more important than your agreement or disagreement with the EA community about any particular issue is your own commitment to thinking carefully about how you can do good.

I just want to register support for this.

I think that "being willing to question orthodoxies on cost-effectiveness, using a lot of real-world data and careful thinking" is a lot of the best part of EA, and it's clear that much of this post is in that style. It sounds like you're wrestling with a bunch of important (and, as pointed out elsewhere, highly-debated) parts of the discussion on this topic. 

Overall I found this post quite refreshing. I think it's really neat that you are from Uganda and seem to understand many of the relevant facts there well. As you might have noticed, a lot of the EA community has historically been very Western-centric, and this has clearly led to important gaps in relevant discussions.

As for your specific questions, I think that other comments here have done a good job going through them point-by-point. Also, because these are fairly well-discussed questions, I'd flag that LLMs probably have a decent understanding of them now. So I'd expect Claude / ChatGPT / etc. to do a decent job of quickly getting you up to speed on the current state of discussion on these topics.

(While I like to think of myself as a decent writer, at this point, I often believe that LLMs can explain things better than I can.)
 

Are EA cause priorities too detached from local realities? Shouldn’t people closest to a problem have more say in solving it?

I think this is the most interesting question, and I would be interested in your thoughts about how to make that easier.[1]

I think part of the reason EA doesn't do this is simply because it doesn't have those answers, being predominantly young Western people centred around certain universities and tech communities.[2] It's also because EA (and especially the part of EA that is interested in global health) is very numbers-oriented.

This is also somewhat related to a second point you raise regarding political and social realities, including corruption: it is quite easy for GiveWell or Open Philanthropy to identify that infectious diseases are likely to be real, that a small international NGO is providing evidence that they're actually buying and shipping the nets or pills that deal with them, and that, given infectious disease prevalence, they will on average save a certain number of lives. Some other programmes that may deliver results highly attuned to local needs are more difficult to evaluate (and local NGOs are not always good at dealing with the complex requests for evidence from foreign evaluators, even if they are very effective at their work). The same is true of large multinational organizations that have both local capacity-building programs and the ability to deal with complex requests from foreign evaluators, but are also so big that Global Fund-type issues can happen...

  1. ^

    I would note that there is a regular contributor to this forum, @NickLaing, who is based in Uganda and focused on trying to solve local problems, although I don't believe he receives very much funding compared with other EA causes, and also @Anthony Kalulu, a rural farmer in eastern Uganda who has an ambitious plan for a grain facility to solve problems in Busoga, but seems to be getting advice from the wrong people on how to fund it...

  2. ^

    This is also, I suspect, part of the reason many but not all EAs think AI is so important...

I love this response. I would add that the amount of money that GiveWell, at least, is looking to give out is unfortunately often more than local NGOs can absorb efficiently anyway.

I don't quite understand what you mean by GiveWell supporting "small international NGOs". They generally support at least medium-sized ones, with budgets usually in the millions per year.

"Small" is relative. AMF manages significantly more donations than most local NGOs, but it does one thing and has <20 staff. That's very different from Save the Children or the Red Cross, or indeed the Global Fund-type organizations I was comparing it with, which have more campaigns and programmes to address local needs but also more difficulty in evaluating how effective they are overall. I understand that, below the big headline "recommended" charities, GiveWell does actually make smaller grants to some smaller NGOs too, but these will still be difficult to access for many.

Yep 100% agree with all of that. And I absolutely love organisations that do only one thing, especially AMF obviously! 

 

Thanks for the thoughtful and organized feedback; I have to say I shared very similar views after the intro to EA course - it seemed to me back then that EA has a lot of subgroups/views. Appreciate the write-up, which probably spoke for many more people!

Welcome to the forum. You are not missing anything: in fact, you have hit upon some of the most important and controversial questions about the EA movement, and there is wide disagreement on many of them, both within EA and with EA's various critics. I can try to give both internal and external sources asking or rebutting similar questions.

In regards to the issue of unintended consequences from global aid, and the global vs. local issue: this was raised by Leif Wenar in a hostile critique of EA here. You can read some responses and rebuttals to this piece here and here.

With regards to the merits of longtermism, this will be a theme of the debate week this coming week, so you should be able to get a feel for the debate within EA there. Plenty of EAs are not longtermist for exactly the reasons you described. Longtermism is the focus of a lot of external critique of EA as well, with some seeing it as a dangerous ideology, although that author has themselves been exposed for dishonest behaviour.

AI safety is a highly speculative subject, and there is a wide variety of views on how powerful AI can be, how soon "AGI" could arrive, how dangerous it is likely to be, and what the best strategy is for dealing with it. To get a feel for the viewpoints, you could try searching for "P(doom)", which is a rough estimate of the chance of destruction. I might as well plug my own argument for why I don't think it's that likely. For external critics, Pivot to AI is a newsletter that compiles articles with the perspective that AI is overhyped and that AI safety isn't real.

The case for "earning to give" is given in detail here. The argument you raise about working for unethical companies is one of the most common objections to the practice, particularly in the wake of the SBF scandal. However, in general EA discourages ETG in jobs that are directly harmful.

Take malaria prevention. It’s a top EA cause because it’s highly cost-effective: roughly $5,000 can save a life through bed nets (GiveWell, 2023). But what happens when government corruption or instability disrupts these programs? The Global Fund scandal in Uganda saw $1.6 million in malaria aid mismanaged (Global Fund Audit Report, 2016). If money isn’t reaching the people it’s meant to help, is it really the best use of resources?

I think there are basically two ways of looking at this question.

One is the typical EA/'consequentialist' approach. Here you accept that some amount of the money will be wasted (fraud/corruption/incompetence), build this explicitly into your cost-effectiveness model, and then see what the bottom line is. If I recall correctly, GiveWell explicitly assumes something like 50% of insecticide-treated bednets are not used properly; their cost-effectiveness estimate would be double if they didn't make this adjustment. $1.6m of mismanagement seems relatively small compared to the total size of anti-malaria programs, so presumably doesn't move the needle much on the overall QALY/$ figure. This sort of approach is also common in areas like for-profit businesses (e.g. half of all advertising spending is wasted, we just don't know which half...) and welfare states (e.g. tolerated disability benefit fraud in the UK). To literally answer your question, that $1.6m is presumably not the best use of resources, but we're willing to tolerate that loss because the rest of the money is used for very good purposes so overall malaria aid is (plausibly) the best use of resources.

The alternative is a more deontological approach, where basically any fraud or malfeasance is grounds for a radical response. This is especially common in cases where adversarial selection is a big risk, where any tolerated bad actors will rapidly grow to take a large fraction of the total, or where people have particularly strong moral views about the misconduct. Examples include zero-tolerance schemes for harassment in the workplace, DOGE hunting down woke in USAID/NSF, or the Foreign Corrupt Practices Act. In cases like this people are willing to cull the entire flock just to stop a single infected bird—sometimes a drastic measure can be warranted to eliminate a hidden threat.

In the malaria example, if the cost is merely that $1.6m is set on fire, the first approach seems pretty appropriate. The second approach seems more applicable if you thought the $1.6m was having actively negative effects (e.g. supporting organised crime) or was liable to grow dramatically if not checked.
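The first, consequentialist approach can be made concrete with a toy calculation. This is a minimal sketch with hypothetical numbers (the function name and the $10m/4,000-lives figures are mine; only the idea of a ~50% usage adjustment comes from the comment above): discounting the spend lost to waste or misuse raises the estimated cost per life saved, rather than disqualifying the program outright.

```python
def cost_per_life_saved(total_spend, lives_saved_if_no_waste, waste_fraction):
    """Cost per life saved after discounting spend lost to waste/misuse.

    Only the money that reaches effective use saves lives, so the
    effective number of lives saved scales with (1 - waste_fraction).
    """
    effective_lives = lives_saved_if_no_waste * (1 - waste_fraction)
    return total_spend / effective_lives

# Hypothetical program: $10m that would save 4,000 lives with zero waste.
ideal = cost_per_life_saved(10_000_000, 4_000, waste_fraction=0.0)      # $2,500
adjusted = cost_per_life_saved(10_000_000, 4_000, waste_fraction=0.5)   # $5,000

# A 50% waste adjustment exactly doubles the cost-per-life estimate,
# which is the shape of the bednet-usage adjustment described above.
assert adjusted == 2 * ideal
```

On this framing, a one-off loss like the $1.6m only matters insofar as it shifts `waste_fraction` enough to push the adjusted cost above the next-best alternative.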

Hi there! I completely understand your concerns. I have had to grapple with many of them over the years. Don't feel discouraged; many of these issues are still subject to debate within the wider EA community.

I am a Zimbabwean who lives and works in Nigeria and South Africa at present. I would like to provide my insights on whether proximity matters. Like all effective altruists, I fundamentally desire to flip the notion that charity begins at home on its head in line with Singerian approaches to ethics and philanthropy. I seek to go where I can help the most. 

At this stage, however, I believe in focusing primarily on addressing African problems, due to my own local expertise and the sheer scale of the developmental challenges in the African context. Some of my critical priorities include reducing poverty, improving healthcare outcomes, and enhancing access to quality education.

You're not missing anything!

Cause Prioritization: Does It Ignore Political and Social Reality?

People should be factoring in the risk of waste, fraud, or mismanagement, as well as the risk of adverse leadership changes, into their cost-effectiveness estimates. That being said, these kinds of risks exist for most potential altruistic projects one could envision. If the magnitude of the risk (and of the consequences of the fraud, etc.) is similar between the projects one is considering, then it's unlikely that consideration of this risk will affect one's conclusion.

EA encourages donating where impact is highest, which often means low-income countries. But what happens when you live in one of those countries? Should I still prioritize problems elsewhere?

I think this is undertheorized in part because EA developed in, and remains focused on, high-income countries. It also developed in a very individualistic culture.

EA implicitly tells at least some members of the global top 1% that it's OK to stay rich as long as they give a meaningful amount of their income away. If it's OK for me to keep ~90% of my income for myself and my family, then it's hard for me to see how it wouldn't be OK for a lower-income community to keep virtually all of its resources for itself. So given that, I'd be pretty uncomfortable with there being an "EA party line" that moderately low-income communities should send any meaningful amount of their money away to even lower-income communities.

Maybe one could see people in lower-income areas giving money to even lower-income areas as behaving in a supererogatory fashion?

I would generally read EA materials through a lens of the main target audience being relatively well-off people in developed countries. That audience generally isn't going to have local knowledge of (often) smaller-scale, highly effective things to do in a lower-income country. Moreover, it's often not cost-effective to evaluate smaller projects thoroughly enough to recommend them over the tried-and-true projects that can absorb millions in funding. You, however, might have that kind of knowledge!

 

Amartya Sen (Development as Freedom) says that well-being isn’t just about cost-effectiveness; it’s about giving people the capability to sustain improvements in their lives. That makes me wonder: Are EA cause priorities too detached from local realities? Shouldn’t people closest to a problem have more say in solving it?

I think that's a fair question. However, in current EA global health & development work, the primary intended beneficiaries of classic GiveWell-style work are children under age 5 who are at risk of dying from malaria or other illnesses. Someone else has to speak for them as a class, and I don't think toddlers can have well-being in the broader sense you describe. Moreover, the classic EA GH&D program is pretty narrow -- such as a few dollars for a bednet -- so only a very small fraction of all resources spent on a child beneficiary's welfare ends up outside local control.

All that makes me somewhat less concerned about potential paternalism than I would be if EAs were commonly telling adult beneficiaries that they knew better about the beneficiary's own interest than said beneficiaries, or if EAs controlled a significant fraction of all charitable spending and/or all spending in developing countries.

I was just arguing a few days ago on here that your very perspective is needed in EA: https://forum.effectivealtruism.org/posts/6FLvBaEwiiqf9JGEJ/history-of-diversity-efforts-and-trends-in-ea?commentId=Egczucx2c3uX4qEMo

Something I do like about EA is that while the main ideas that many hold do have value, as you say... there also are some people who find their own way in EA on whatever topic most interests them even if their ideas are NOT held closely by many. (For example, while I am really concerned about AI, 95% of my interest in EA is global poverty, and teaching others some EA basics alongside non-EA ideas... and I mostly just link up with people who have those specific interests.)

That said, EA would be way better off as a whole if more diversity was present, from the high-income country world and especially from the low-middle income country world.

Cheers, and thanks for writing this!!!!
