Benjamin Hoffman recently wrote a post arguing that "drowning children are rare":

Stories such as Peter Singer's "drowning child" hypothetical frequently imply that there is a major funding gap for health interventions in poor countries, such that there is a moral imperative for people in rich countries to give a large portion of their income to charity. There are simply not enough excess deaths for these claims to be plausible.
...
As far as I can see, this pretty much destroys the generic utilitarian imperative to live like a monk and give all your excess money to the global poor or something even more urgent. Insofar as there's a way to fix these problems as a low-info donor, there's already enough money. Claims to the contrary are either obvious nonsense, or marketing copy by the same people who brought you the obvious nonsense. Spend money on taking care of yourself and your friends and the people around you and your community and trying specific concrete things that might have specific concrete benefits.

Imagine that the best intervention out there were direct cash transfers to globally poor people. The amount of money that could be productively used here is very large: it would cost at least $1T to give $1k to each of the 1B poorest people in the world. This is very far from foundations already having more than enough money. The fact that there are extremely poor people who can do far more with my money than I can is enough reason for me to give.
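
As a quick back-of-the-envelope check on that $1T figure, here is a minimal sketch that just multiplies the two round numbers above (they are illustrative, not precise estimates):

```python
# Back-of-the-envelope check on the cash-transfer figure above.
# Round numbers from the text, not precise estimates.
recipients = 1_000_000_000       # ~1B poorest people
transfer_per_person = 1_000      # $1k each

total_cost = recipients * transfer_per_person
print(f"Total cost: ${total_cost / 1e12:.1f} trillion")  # -> Total cost: $1.0 trillion
```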

While I also think there are other ways to spend money altruistically that have more benefit per dollar than cash transfers, this only strengthens the argument for helping.

How does Ben reach the opposite conclusion? Reading his post several times, it looks to me like it comes down to two things:

  • He's looking at "saving lives via preventing communicable, maternal, neonatal, and nutritional diseases" as the only goal. While it's a category of intervention that people in the effective altruism movement have talked about a lot, it's definitely not the only way to help people. If you were to completely eliminate deaths in this category it would be amazing and hugely beneficial, but there would still be people dying from other diseases, suffering in many non-fatal ways, and generally having poverty limit their options and potential. And that's without considering more speculative options like trying to keep us from killing ourselves off or generally trying to make the long-term future go as well as possible.
  • He's setting a threshold of $5k for how much we'd be willing to pay to avert a death, which is much too low. I do agree there is some threshold at which you'd be very reasonable to stop trying to help others and just do what makes you happy. Where this threshold is depends on many things, especially how well-off you are, but I would expect it to be more in the $100k range than the $5k range for rich-country effective altruists. By comparison, the US Government uses a value of roughly $9M per statistical life.

I do think the "drowning children" framing isn't great, primarily because it puts you in a frame of mind where you expect that things will be much cheaper than they actually are (familiar), but also because it depends on being in a situation where only you can help and where you must act immediately. There's enough actual harm in the world that we don't need thought experiments to show why we should help. So while there aren't that many "drowning children", there is definitely a lot of work to do.

(Crossposted from jefftk.com)

Comments

mic

I think the post is more fundamentally flawed; there is a substantial funding gap under Benjamin's assumptions, even if we were to ignore GiveDirectly and other cause areas, and even if we were unwilling to save a life for any more than $5,000.

According to the 2017 Global Burden of Disease report, around 10 million people die per year, globally, of "Communicable, maternal, neonatal, and nutritional diseases."* This is roughly the category that the low cost-per-life-saved interventions target. If we assume that all of this is treatable at current cost per life saved numbers - the most generous possible assumption for the claim that there's a funding gap - then at $5,000 per life saved (substantially higher than GiveWell's current estimates), that would cost about $50 Billion to avert.

This is already well within the capacity of funds available to the Gates Foundation alone, and the Open Philanthropy Project / GiveWell is the main advisor of another multi-billion-dollar foundation, Good Ventures. The true number is almost certainly much smaller because many communicable, maternal, neonatal, and nutritional diseases do not admit of the kinds of cheap mass-administered cures that justify current cost-effectiveness numbers.

Of course, that’s an annual number, not a total number. But if we think that there is a present, rather than a future, funding gap of that size, that would have to mean that it’s within the power of the Gates Foundation alone to wipe out all fatal communicable diseases immediately, a couple times over - in which case the progress really would be permanent, or at least quite lasting. And infections are the major target of current mass-market donor recommendations.

The Open Philanthropy Project started out with $8.3 billion in 2011, and presumably has less now. The Gates Foundation has an endowment of $50.7 billion as of 2017. They wouldn't be able to sustain $50 billion of annual donations for very long. As such, I think the first and second paragraphs are essentially invalid.
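
To make the scale comparison concrete, here is a minimal back-of-the-envelope sketch using only figures already cited in this thread (the ~10M deaths/year and $5k/life from the quote above, and the two endowment figures); it's an illustration of relative scale, not a model of actual spend-down:

```python
# Scale comparison using only figures already cited in this thread.
annual_deaths = 10_000_000   # ~10M deaths/year in the quoted category (2017 GBD)
cost_per_life = 5_000        # the generous $5k-per-life assumption from the quote
annual_gap = annual_deaths * cost_per_life
print(f"Implied annual cost: ${annual_gap / 1e9:.0f}B")  # -> $50B

gates_endowment = 50.7e9     # Gates Foundation endowment (2017 figure above)
open_phil_funds = 8.3e9      # Open Phil's 2011 starting figure above
combined = gates_endowment + open_phil_funds
print(f"Combined funds: ${combined / 1e9:.0f}B")                       # -> ~$59B
print(f"Years sustainable at that rate: {combined / annual_gap:.1f}")  # -> ~1.2
```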

It seems dubious that we could wipe out communicable diseases in a few years and have that be permanent without any further investment. The 2017 Global Burden of Disease report lists communicable diseases including: HIV/AIDS, syphilis, chlamydia, gonococcal infection, tuberculosis, other respiratory infections, diarrheal disease, typhoid, salmonella, malaria, schistosomiasis, dengue, rabies, other neglected tropical diseases, Ebola, Zika, meningitis, measles, hepatitis, tetanus, and so on.

My understanding is that rather few of these have been permanently eliminated, even in high income countries. Distributing condoms and PrEP for a few years isn't going to permanently eliminate HIV. Bed nets and seasonal chemoprevention aren't going to eliminate malaria. Measles needs ongoing vaccinations. Etc.

There are of course more permanent solutions, but these are probably much more expensive, and it's unclear whether the two foundations would be able to fully fund them. In the late 1940s, the US substantially reduced malaria by draining swamps and spraying insecticide.¹ There are gene drives, of course, but we probably need more research before we can safely try to eliminate mosquitoes that way. Ending worms, diarrheal disease, or typhoid would probably require enormous improvements to the water supply. And HIV and respiratory infections would probably not be possible to eliminate without substantial advances in medicine.

Also, the Gates Foundation is not particularly EA, and we should not expect it to put all its money into global health. (Nor should we expect Open Phil to do so, since it also cares about other cause areas.) In any case, even if they could fill the gap, that's not a relevant counterfactual unless they actually would fill it.

All of the above uses Benjamin's charitable, optimistic assumption that we can save a life for $5,000, at a scale of up to $50 billion per year. If we consider just the room for more funding of the top GiveWell charities better than GiveDirectly, is that low enough that Open Phil and the Gates Foundation could completely fill it? Possibly, in which case I will defer to the argument in Jeff Kaufman's post.

While I agree with a lot of the critiques in this comment, I don't think it really engages with the core point of Ben's post, which I think is actually an interesting one.

The question Ben is trying to answer is "how large is the funding gap for interventions that can save lives for around $5,000?". For that, the relevant question is not "how much money would it take to eliminate all communicable diseases?" but rather "how much money would we have to spend before the price of saving a life via preventing communicable diseases becomes significantly higher than $5k?". The answer to the second question is upper-bounded by the answer to the first, which is why Ben tries to answer the first one, but it only serves as an estimate of the $5k/life funding gap.

And I think he has a reasonable point there: the funding gap for interventions at that level of cost-effectiveness seems to me to be much smaller than the funding available in the space, which likely makes the impact of a counterfactual donation a good deal lower than the $5k/life figure suggests (though the game theory here is complicated and counterfactuals are hard to evaluate, so this is a non-obvious point).

My guess, though I have very wide uncertainty bounds around all of this, is that the true number is closer to the $20k-$30k range for a donation to have the counterfactual impact of saving a life. I don't think this invalidates the core EA principles in the way Ben seems to think it does, but it does make me unhappy with some of the marketing around EA health interventions.

The $8.3 billion should have grown since 2011. Open Phil's grants have not even totaled $800 million yet, and that is roughly the amount the fund should have grown *per year* in the interim.
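
As a rough illustration of that growth claim: the ~10% annual return below is an assumption introduced for the sake of the sketch, not a figure from the thread:

```python
# Illustration of the growth claim: an $8.3B fund at an *assumed* ~10% annual return.
starting_funds = 8.3e9    # Open Phil's 2011 figure, as cited above
assumed_return = 0.10     # hypothetical average annual return (an assumption, not from the thread)

one_year_growth = starting_funds * assumed_return
print(f"Growth in a single year: ${one_year_growth / 1e9:.2f}B")  # -> ~$0.83B, i.e. roughly $800M
```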

As I commented on Ben's blog, I just think it bears mentioning that we're allowed to focus on our own lives whether or not there are people who could use our money more than us. So if anyone were motivated to undermine the need for donations in order to feel justified in focusing on themselves and their loved ones, they needn't do it. It's already okay to do that, and no one's perfectly moral. Maybe if you don't feel the need to prove EA wrong before taking care of yourself, you'll want to return to giving or other EA activities after giving yourself some TLC, because instead of feeling forced, you know you want to do these things of your own free will.

... we're allowed to focus on our own lives whether or not there are people who could use our money more than us.

I agree, though it's worth noting that Singer explicitly argues against this in Famine, Affluence, and Morality, which is a foundational paper for the EA position.

Singer says it's wrong to spend frivolously on ourselves while there are others in need but he doesn't say it should be illegal. He also doesn't give any hard and fast rules about giving, and he doesn't think people who don't give should be shamed. He simply points out how much more the money could do for others, each of whom matter as much as any of us.

I just get the feeling that Ben isn't comfortable doing what he wants or what he thinks would make most of us (wealthy people) happier without getting us to agree with him first that it's what everyone should do. I want to remind him that what he does within the law is his prerogative. We don't have to be wrong for him to do what he wants. If he just wants to focus on himself and his loved ones, he doesn't have to convince us that we've filled every funding gap so our ideas are moot and he's still a good person despite not giving. He's already free to act as he sees fit. The last thing he needs to do to feel in charge of his own life and resources is attack EA.

I say this all because that line about focusing on your loved ones and doing "concrete" things made me suspect that that desire might have motivated the whole argument. In that case, we can avoid a pointless argument of dueling back-of-the-envelope estimates by pointing out that EA doesn't have to be wrong for Ben and others like him to do what they want with their lives.

I could be wrong and the post could represent Ben's true rejection. In that case, I'd expect to hear back that he is doing what he wants, and what he wants depends on the frequency of drowning children, which is why he's trying to figure this out.

Quoting from Famine, Affluence, and Morality:


Despite the limited nature of the revision in our moral conceptual scheme which I am proposing, the revision would, given the extent of both affluence and famine in the world today, have radical implications. These implications may lead to further objections, distinct from those I have already considered. I shall discuss two of these.
One objection to the position I have taken might be simply that it is too drastic a revision of our moral scheme. People do not ordinarily judge in the way I have suggested they should.
Most people reserve their moral condemnation for those who violate some moral norm, such as the norm against taking another person's property. They do not condemn those who indulge in luxury instead of giving to famine relief. But given that I did not set out to present a morally neutral description of the way people make moral judgments, the way people do in fact judge has nothing to do with the validity of my conclusion.

I understand this to mean that while Singer isn't (explicitly) saying we should shame or outlaw people who don't meet the standard he presents, we should morally condemn them (which could be operationalized via shaming, or via the legal system).

Now that I've made all these comments, I realize I should have just asked Ben if his post was his true rejection of EA-style giving. My comments have all been motivated by the suspicion that Ben just isn't convinced enough by the arguments for giving to give himself, but feels like he has to prove them wrong on their own terms instead of just acting as he sees fit. (That's a lot of assumptions on my part.) If that particular scenario happens to be true for him or anyone reading, my message is that you are in charge of these decisions and you don't have to justify yourself to EAs.

The broader issue that concerns me here is people thinking that the only way to do the things that make them happy is to convince everyone else that those things are objectively right. There are a lot of us here with a perilously high need for consistency. When we don't respect personal freedom and freedom of conscience, people will start to hijack EA ideas to make them more palatable without having to admit to being inconsistent or to failing to live up to their ideals. This happens all the time in religious movements.

I can't promise Ben that no one will judge him morally inferior for not giving. But I can promote respect for people in the community feeling empowered to follow their own judgment within their own domains. EA benefits from debate, but much more so if that debate is restricted to true rejections rather than driven by a need for self-justification. Reminding people that all EA lifestyle decisions are choices is thus a means of community epistemic hygiene.

I'm fairly confident, based on reading other stuff Ben Hoffman has written, that this post has much less to do with Ben wanting to justify a rejection of EA-style giving, and much more to do with Ben being frustrated by what he sees as bad arguments/reasoning/deception in the EA sphere.

So you think he's worried about other people being misled?

Other people being misled is how I read "Claims to the contrary are either obvious nonsense, or marketing copy by the same people who brought you the obvious nonsense. Spend money on taking care of yourself and your friends and the people around you and your community and trying specific concrete things that might have specific concrete benefits. And try to fix the underlying systems problems that got you so confused in the first place."

Also worried about the overall epistemic health of EA – if it's reliably misleading people, it's much less useful as a source of information.

[anonymous]

I don't think the two reasons for Ben's actions you suggested are mutually inconsistent. He may want to emotionally reject EA style giving arguments, think of arguments that could justify this, and then get frustrated by what he sees as poor arguments for EA or against his arguments. This outcome (frustration and worry with the EA community's epistemic health) seems likely to me for someone who starts off emotionally wanting to reject certain arguments. He could also have identified genuine flaws in EA that both make him reject EA and make him frustrated by the epistemic health of EA.

I think if you've read Ben's writings, it's obvious that the prime driver is about epistemic health.

[anonymous]

I don't feel inclined to get into this, but FWIW I have read a reasonable amount of Ben's writings on both EA and non-EA topics, and I do not find it obvious that his main, subconscious motivation is epistemic health rather than a need to reject EA.

When you say "you don't need to justify your actions to EAs", I have sympathy with that, because EAs aren't special: we're no particular authority, and we don't have internal consensus anyway. But you seem to be also arguing "you don't need to justify your actions to yourself / at all". I'm not confident that's what you're saying, but if it is, I think you're setting too low a standard. If people aren't required to live in accordance with even their own values, what's the point in having values?

I actually think even justifying yourself only to yourself, being accountable only to yourself, is probably still too low a standard. No-one is an island, so we all have a responsibility to the communities we interact with, and it is to some extent up to those communities, not the individuals in isolation, what that means. If Ben Hoffman wants to have a relationship with EAs (individually or collectively), it's necessary to meet the standards of those individuals or the community as a whole about what's acceptable.

But you seem to be also arguing "you don't need to justify your actions to yourself / at all"

Kinda. More like "nobody can make you act in accordance with your own true values-- you just have to want to."

If people aren't required to live in accordance with even their own values, what's the point in having values?

To fully explain my position would require a lot of unpacking. But, in brief, no: how could people be required to live in accordance with their own values? Other people might try to enforce value-aligned living, but they can't read your mind or fully control you, which hardly makes it a "requirement." If what you're getting at is that people **should** live according to their values, then, sure, maybe (though I'm not sure I would make this a rule on utilitarian grounds, because a lot of people's values, or attempts to live up to their values, would be harmful).

Suffice it to say that, if Ben does not want to give money, he does not have to explain himself to us. The natural consequence of that may be losing respect from EAs he knows, like his former colleagues at GiveWell. He may be motivated to come up with spurious justifications for his actions so that it isn't apparent to others that either his values have changed or he's failing to live up to them. I would like to create conditions where Ben can be honest with himself. That way, he either realizes that he still believes it's best to give even though the effects of giving are more abstract, or he faces up to the fact that his values have changed in an unpopular way but is able to stay in alignment with them. (This is all assuming that his post did not represent his true rejection, which it very well might have.)

I think Singer would argue we should shame or lock up people if and only if that did the most good. It's not at all clear, as a fact of the matter, that this would be the best option.

That accords with my model of Singer's view.

I just wanted to point out that he wasn't arguing against shaming or deploying the legal system. Those routes probably wouldn't do the most good, in practice, but they're definitely on the menu of things to be considered.

My point is that Ben is in fact able to do whatever legal thing he wants. He doesn't need to make us wrong in order to do so. It's interesting that he feels the need to. Whether or not EA or Peter Singer has suggested that it's morally wrong not to give, Ben is free to follow his own conscience and desires and does not need our approval. If his real argument is that he should be respected by EAs for his decision not to give, I think that should be distinguished from a pseudo-factual argument that we're deceived about the need to give money.


He's setting a threshold of $5k for how much we'd be willing to pay to avert a death, which is much too low. I do agree there is some threshold at which you'd be very reasonable to stop trying to help others and just do what makes you happy. Where this threshold is depends on many things, especially how well-off you are, but I would expect it to be more in the $100k range than the $5k range for rich-country effective altruists.

It would be interesting to see some data on this. Maybe the EA survey could ask something about it? Something like:

What is the most you would be willing to spend in order to save the life of a randomly chosen human? Assume that AMF and other charities do not exist - the alternative is spending the money on yourself.

I can imagine the distribution being quite wide.

I agree the distribution would be interesting! But it depends how many such opportunities there might be, no? What about:

"Imagine that over time the low hanging fruit is picked and further opportunities for charitable giving get progressively more expensive in terms of cost per life saved equivalents (CPLSE). At what CPLSE, in dollars, would you no longer donate?"

Do you mean the number of opportunities in the future, or the ability to donate larger amounts of money right now? We could do:

What is the most dollars you would be willing to donate in order to save the life of a randomly chosen human? Assume this is the only opportunity you'll ever get to save a life by donating - all other money you have must be spent on yourself and your family.

and also the endowment effect reversal:

If offered a choice between saving a random stranger's life and an amount of money, what is the smallest number of dollars you would have to be offered to choose that option? Assume you will only be offered this once, and do not have any opportunities to spend or donate money to save other people.

Your version seems good too, though I would worry that introducing the temporal element and background of progress might bias things in some way.


It seems like that question would interact weirdly with expectations of future income: as a college student I donate ~1% of expenses, but if I could only save one life, right now, I would probably try to take out a large, high interest loan to donate a large sum. That depends on availability of loans, risk aversion, expectations of future income, etc. much more than it does on my moral values.

Well, firstly, how much credence should we assign to the actual analysis in that post?

Before we begin talking about how we should behave "even if" the cost per life saved is much higher than $5k: is there some consensus as to whether the actual facts and analysis of that post are true or even somewhat credible? (Separate from the conclusions, which, I agree, seem clearly wrong for all the reasons you said.)

As in, if they had instead titled the post "Givewell's Cost-Per-Life-Saved Estimates are Impossibly Low" and concluded "if the cost per life saved estimate was truly that low, we could have already gone ahead and saved all the cheap lives, and the cost would be higher - so there's something deeply wrong here"... would people be agreeing with it?

(Because if so, shouldn't the relevant lower bound for cost per life saved in the impact evaluations be updated, and shouldn't that probably be the central point of discussion?

And if not... we should probably add a note clarifying, for any reader joining the discussion late, that we're not actually sure whether the post is correct, before going into the implications of its conclusions. We certainly wouldn't want to start thinking that there aren't lives that can be saved at low cost if there actually are.)

Dropping in late to note that I really like the meta-point here: It's easy to get caught up in arguing with the "implications" section of a post or article before you've even checked the "results" section. Many counterintuitive arguments fall apart when you carefully check the author's data or basic logic.

(None of the points I make here are meant to apply to Ben's points -- these are just my general thoughts on evaluating ideas.)

Put another way, arguments often take the form:

  • If A, then B
  • A
  • Therefore, B

It's tempting to attack "Therefore, B" with anti-B arguments C, D, and E, but I find that it's usually more productive to start by checking the first two points. Sometimes, you'll find issues that render "Therefore, B" moot; other times, you'll see that the author's facts check out and find yourself moving closer to agreement with "Therefore, B". Both results are valuable.

I don't think the post is correct in concluding that the current marginal cost-per-life-saved estimates are wrong. Annual malaria deaths are around 450k, and if you gave the Against Malaria Foundation $5k * 450k ($2.3B) they would not be able to make sure no one died from malaria in 2020, but that still wouldn't be much evidence that $5k is too low an estimate for the marginal cost. It just means that AMF would have a lot of difficulty scaling up that much, that some deaths can't be prevented by distributing nets, that some places are harder to work in, etc.
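
A quick check of the malaria arithmetic in the paragraph above, using its own round numbers:

```python
# Arithmetic behind the $2.3B malaria figure above.
annual_malaria_deaths = 450_000   # ~450k deaths/year
cost_per_life = 5_000             # the $5k-per-life estimate under discussion

total = annual_malaria_deaths * cost_per_life
print(f"${total / 1e9:.2f} billion")  # -> $2.25 billion, i.e. roughly the $2.3B cited
```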

It does mean that big funders have seen the current cost-per-life-saved numbers and decided not to give those organizations all the money they'd be able to use at that cost-effectiveness. But there are lots of reasons other than the ones Ben gives for why you might decide to do that, including:

  • You have multiple things you care about and are following a strategy of funding each of them some. For example, OpenPhil has also funded animal charities and existential risk reduction.
  • You don't want a dynamic where you're responsible for the vast majority of a supposedly independent organization's funding.
  • You think better giving opportunities may become available in the future and want to have funds if that happens.

The reasons and politics behind why Good Ventures or the Gates Foundation doesn't try to spend its money on these effective causes aren't important. If we go down that path, we can also ask why rich countries don't plug the gaps for basic health, basic education, or even UBI programs like GiveDirectly's.

Whatever their reasons, we as individuals can have relatively big impacts, and as long as the cheaper interventions are not funded, we can help.

Health interventions work, and child mortality has dropped dramatically: https://ourworldindata.org/child-mortality-globally
