This is the first post in a short series where I share some academic articles on effective altruism I've written over the last couple of years. Hopefully, this is also the first in a longer series of posts over the summer where I try to share some of my thinking over the last year - for these, I'm aiming to lower my quality threshold, in order to ease the transmission of ideas and discussion from the research side of EA to the broader community, and to get some feedback.


In 2017, philosopher Larry Temkin gave the prestigious Uehiro Lectures at Oxford University, where he was critical of some aspects of effective altruism. I was invited to write a short critical commentary, which is now online here. (You might first want to read Larry's synopsis of his argument in the same volume to understand what I'm responding to; while you're there, Matt Clark and Theron Pummer's entry on effective altruism and each-we dilemmas is also very good.)

Here's my abstract: "In the article, ‘Being Good in a World of Need: Some Empirical Worries and an Uncomfortable Philosophical Possibility,’ Larry Temkin presents some concerns about the possible impact of international aid on the poorest people in the world, suggesting that the nature of the duties of beneficence of the global rich to the global poor is much more murky than some people have made out.

In this article, I’ll respond to Temkin from the perspective of effective altruism—one of the targets he attacks. I’ll argue that Temkin’s critique has little empirical justification, given the conclusions he wants to reach, and is therefore impotent."


This 'aid sceptic' objection to Singer's argument has been commonly repeated in philosophers' discussions of it; I think it's quite badly misguided, and hopefully this short article helps put the objection to rest. The general reason why I think the objection is misguided is given at the end of the article:

"Let me end with a comment about the nature of the broader dialectic regarding Singer’s argument for the conclusion that we in rich countries have strong duties of beneficence. Often, critics of Peter Singer focus on whether or not aid is effective. But that is fundamentally failing to engage with core of Singer’s argument. Correctly understood, that argument is about the ethics of buying luxury goods, not the ethics of global development. Even if it turned out that every single development program that we know of does more harm than good, that fact would not mean that we can buy a larger house, safe in the knowledge that we have no pressing moral obligations of beneficence upon us. There are thousands of pressing problems that call out for our attention and that we could make significant inroads on with our resources [...]

In order to show that Singer’s argument is not successful, one would need to show that for none of these problems can we make a significant difference at little moral cost to ourselves. This is a very high bar to meet. In a world of such suffering, of such multitudinous and variegated forms, often caused by the actions and policies of us in rich countries, it would be a shocking and highly suspicious conclusion if there were simply nothing that the richest 3% of the world’s population could do with their resources in order to significantly make the world a better place.

The core of Singer’s argument is the principle that, if it is in our power to prevent something very bad from happening, without thereby sacrificing anything morally significant, we ought, morally, to do so. We can. So we should."


Comments



Will: Thanks for posting this! I look forward to more posts in the series. To expand on a question from another commenter:

  • What has it been like to engage the broader philosophical community with arguments based on effective altruism? Do you feel as though EA is generally taken seriously as a philosophical perspective, even when people don't agree with it?
  • I'd guess that the people you're trying to persuade are mostly bystanders rather than direct opponents; have you had good results in...
    • ...moving either type of philosopher closer to your position?
    • ...convincing philosophers to start donating or to examine EA-relevant topics? (Recently, that is -- since it seems clear that you were influential in getting a lot of philosophers on board with EA in the early days.)
  • It seems to me like EA has changed and adopted new ideas reasonably often over the last ten years, but I'm not sure how much of this change came out of conversations with philosophers and other intellectuals who were generally opposed to the movement or its ideas. Have you gotten any especially useful feedback from people who disagreed with EA's core arguments? (Say, people who were as critical as or more critical than Temkin?)

In order:

1. Yes, it's definitely taken seriously, but it's currently widely misunderstood - associated very closely with Peter Singer's views.

2. I think that Larry himself is more sympathetic to what EA is doing after my and others' conversations with him, or at least has a more nuanced view. As for bystanders - yes, from my impressions at the lectures, I think the audience came out more EA-sympathetic than they went in. And especially at the graduate level there's a lot of recent interest, driven primarily by GPI, and for that purpose it's important to engage with critiques, especially if they are high-profile.

3. Honestly, not really. Outsiders usually have some straw man perception of EA, and so the critiques aren't that helpful. The best critiques I've found have tended to come from insiders, but I'm hoping that will change as more unsympathetic academics better understand what EA is and isn't claiming. I do find engaging with philosophers who have very different views of morality (e.g. that there's just no such thing as 'the good') very helpful though.

[anonymous]:
This is the first post in a short series where I share some academic articles on effective altruism I've written over the last couple of years. Hopefully, this is also the first in a longer series of posts over the summer where I try to share some of my thinking over the last year - for these, I'm aiming to lower my quality threshold, in order to ease the transmission of ideas and discussion from the research side of EA to the broader community, and to get some feedback.

I'm excited to hear this and look forward to reading more of your posts!

Here I sit, comfortably speculating about various possible negative effects that aid groups may produce…. I haven’t offered empirical evidence to support the concerns that I have raised.

Is it worth William's time to engage with such critiques?

The core of Singer’s argument is the principle that, if it is in our power to prevent something very bad from happening, without thereby sacrificing anything morally significant, we ought, morally, to do so. We can. So we should.

This is solid. I fully agree. Individuals in the EA movement can avoid the pitfalls that might come from large-scale initiatives. For EAs, until their individual donations collectively become large, the unintended systemic effects can be ignored.

We're well past the point where unintended systemic effects can be ignored. GiveWell has directly moved or directed half a billion dollars, and the impact on major philanthropic giving is a multiple of that. Malaria and schistosomiasis initiatives are significantly impacted by this, and just as the effects cannot be dismissed, neither can the conclusion that these are large-scale initiatives, with all the attendant pitfalls.

Thanks. GiveWell is big, and moves about $100 million a year, with about $50 million of that from individual donors (those giving less than $1 million a year). This is not much money in the overall scheme of things. Even if malaria and schistosomiasis were fully funded by that $50 million, there are many more things to do.

There are around 5 million children dying every year; let's say 4 million of those deaths are preventable, and take GiveWell's cost-per-life-saved estimate to be, say, $1,000 at the lower end.

The required funding to end preventable child deaths is then about $4 billion a year, just for this alone.
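To spell out that back-of-the-envelope arithmetic (using the round numbers assumed above, which are the commenter's assumptions rather than GiveWell's published figures):

$$4{,}000{,}000 \;\text{preventable deaths per year} \times \$1{,}000 \;\text{per life saved} = \$4\;\text{billion per year}$$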

We have to think about unintended effects, but they are likely to be marginal and small.

I don't see how your argument responds to mine. These initiatives don't need to be big enough to directly solve problems in order to be large enough to have critical systemic side effects.

I agree that small amounts of money could in theory have systemic side effects, but only if the money is spent on affecting something critical (say, influencing the outcome of an election). Most of GiveWell's money is spent on health interventions, which are far less likely to have critical systemic side effects.

The worst I can think of is that they are insensitive or disrespectful to local populations and have no health effect. Neither of these possible outcomes is critically negative in the systemic sense.

Two international health interventions are running into local resistance: 1) polio vaccination in Pakistan, and 2) Ebola treatment in the Democratic Republic of the Congo. Neither of these efforts seems bad in my opinion.

Yes, there are plausible tipping points, but I'm not talking about that. I'm arguing that this isn't "small amounts of money," and it is well into the amounts where international funding displaces building local expertise, makes it harder to focus on building health systems generally instead of focusing narrowly, undermines the need for local governments to take responsibility, etc.

I still think these are outweighed by the good, but the impacts are not trivial.

I'm arguing that this isn't "small amounts of money,"

I am not convinced. In proportion to the needs, the amount seems small; also, the money is spent across several countries, so per capita spending is low (I doubt it goes above $10 per person per year in any of the health interventions; SMC is at $7).

undermines the need for local governments to take responsibility, etc.

Local governments do take responsibility; what they can achieve in their circumstances is limited, though - hence the need for money and outside support.

it is well into the amounts where international funding displaces building local expertise

I am not sure I understand why international funding should displace local expertise. Why are the international funders not funding local organizations, building local leadership, and taking help from local expertise? I think local partners and leaders should take the front seat.

makes it harder to focus on building health systems generally instead of focusing narrowly

With this part I agree, but if overall funding is limited then it makes sense for individuals to look for narrow effects. GiveWell is good at this for the EA movement, since EA is small compared to the needs. By the same token, GiveWell-type analysis makes less sense at a government-to-government level, when entire health departments are supported. The building of those health institutions takes a long time, and the results come slowly, with a time lag of 10+ years. Even then, they interact with the rest of society's institutions, like education and the economy.

In proportion to the needs...

Again, I don't think that's relevant. I can easily ruin systems with a poorly spent $10m regardless of how hard it is to fix them.

I am not sure I understand why international funding should displace local expertise...

You're saying that these failure modes are avoidable, but I'm not sure they are in fact being avoided.

The building of those health institutions takes a long time, and the results come slowly, with a time lag of 10+ years.

Yes, and slow feedback is a great recipe for not noticing how badly you're messing things up. And yes, classic GiveWell-type analysis doesn't work well for complex policy systems, which is exactly why they are currently aggressively hiring people with different types of relevant expertise to consider those issues.

And speaking of this, here's an interesting paper Rob Wiblin just shared on the complexity and difficulty of decision-making in these domains: https://philiptrammell.com/static/simplifying_cluelessness.pdf

I can easily ruin systems with a poorly spent $10m regardless of how hard it is to fix them.

I understand - GiveWell's recommendations are not going down a path of destruction, so I am not worried. I would be really worried if they tried to influence policies.

Also, in the big picture, I think aid helps if directed well, but it is a small part of the budgets of poor countries and can only be expected (in the grand scheme of things) to have small effects. Most of the improvement has come from people and national governments improving their own countries.
