I think the Forum should have a collection of posts ("sequence") on global health and development. What posts should we include?

Here's a very rough preliminary list:

Moral foundations

Ord, Toby (2019) The moral imperative toward cost-effectiveness in global health, in Hilary Greaves & Theron Pummer (eds.) Effective Altruism: Philosophical Issues, Oxford: Oxford University Press, pp. 29–36.

Ord, Toby (2012) Global poverty and the demands of morality, in John Perry (ed.) God, the Good, and Utilitarianism, Cambridge: Cambridge University Press, pp. 177–191.

Singer, Peter (1972) Famine, affluence, and morality, Philosophy & Public Affairs, vol. 1, pp. 229–243.

Giving

GiveWell (2010) Your donation can change someone’s life, GiveWell.

GiveWell (2016) The wrong donation can accomplish nothing, GiveWell.

GiveWell (2010) Your dollar goes further overseas, GiveWell.

Randomista debate

Halstead, John & Hauke Hillebrandt (2020) Growth and the case against randomista development, Effective Altruism Forum, January 16.

Ogden, Timothy (2020) RCTs in development economics, their critics and their evolution, in Florent Bédécarrats, Isabelle Guérin & François Roubaud (eds.) Randomized Control Trials in the Field of Development, Oxford: Oxford University Press, pp. 126–151.

Aid skepticism

Karnofsky, Holden (2015) The lack of controversy over well-targeted aid, The GiveWell Blog, November 6.

MacAskill, William (2019) Aid scepticism and effective altruism, Journal of Practical Ethics, vol. 7, pp. 49–60.

Misc

Kuhn, Ben (2019) Why Nations Fail and the long-termist view of global poverty, Ben Kuhn’s Blog, July 16.

Kaufman, Jeff (2015) Why global poverty?, Jeff Kaufman’s Blog, August 11.

Ord, Toby (2017) The value of money going to different groups, Centre for Effective Altruism, May 2 (updated 19 February 2020).

Answers

The Banerjee & Duflo 'Foreign Affairs' article is pretty bad, and contains an interesting error, so maybe it should be removed:

"Between 2014 and 2016, a total of 582 million insecticide-treated mosquito nets were delivered globally. Of these, 75 percent were given out through mass distribution campaigns of free bed nets, saving tens of millions of lives."

They actually repeat this mistake in their recent book 'Good Economics for Hard Times':

"The magazine Nature concluded that insecticide-treated net distributions averted 450 million malaria deaths between 2000 and 2015."

which is probably based on an old GWWC article, but they mix up deaths and cases.

(Says something about their priors that they believe bed nets have saved almost half a billion lives while being off by two orders of magnitude. It's the Nobel Prize in Economics equivalent of believing that Michael Bloomberg could give every American $1m.)

Maybe include 'GiveWell's Top Charities are increasingly hard to beat' instead?

"The Banerjee & Duflo 'Foreign Affairs' article is pretty bad, and contains an interesting error, so maybe it should be removed"

Interestingly, I considered removing it after reading it and being unimpressed by it, but distrusted my judgment since (I think) I saw it recommended by a reputable social scientist. The error escaped my attention, though. I will remove it. Thanks.

Per word - and for a particular kind of person - Piper (2015) is one of the most powerful things ever written on the topic. I think about it most months of my life. But I understand why you might not include it in a curriculum.

"just so you know, there are people who are angry about global inequality, people who want to end all of the bad things in the world, people who feel the same pain and anger that you feel. But we don’t treat mass murder as inevitable. We don’t call people weak for disagreements. We don’t admire people for their willingness to kill for the cause, or even for their willingness to suffer for the cause - just for their ability to change stuff so there’s no more cause and we can all retire happily to a world without poverty. And we’d love to have you. If you ever get tired, come join us, we milquetoast autistic rationalist liberals, because you don’t have to rant on the internet about killing people to earn our esteem, you just have to fix stuff."

Roodman (2007):

"On balance, the quantitative approach to exploring grand questions about aid effectiveness, which began 40 years ago, was worth trying and may be worth pursuing somewhat further. But the literature will probably continue to disappoint as often as it offers hope. The biggest challenge is to go beyond documenting correlations to demonstrating causation -- to show not just that aid went hand-in-hand with economic growth, but caused it. Aid has eradicated diseases, prevented famines, and done many other good things. But given the limited and noisy data available, its effects on growth in particular probably cannot be detected."

As an alternative to "Famine, Affluence, and Morality," there is Peter Unger's Living High and Letting Die, of which Chapter 2 is particularly relevant. It's more philosophical (this could be a bad thing) and much more comprehensive than Singer's article.

This is the first of our cases:

The Vintage Sedan. Not truly rich, your one luxury in life is a vintage Mercedes sedan that, with much time, attention and money, you've restored to mint condition. In particular, you're pleased by the auto's fine leather seating. One day, you stop at the intersection of two small country roads, both lightly travelled. Hearing a voice screaming for help, you get out and see a man who's wounded and covered with a lot of his blood. Assuring you that his wound's confined to one of his legs, the man also informs you that he was a medical student for two full years. And, despite his expulsion for cheating on his second year final exams, which explains his indigent status since, he's knowledgeably tied his shirt near the wound so as to stop the flow. So, there's no urgent danger of losing his life, you're informed, but there's great danger of losing his limb. This can be prevented, however, if you drive him to a rural hospital fifty miles away. “How did the wound occur?” you ask. An avid bird‐watcher, he admits that he trespassed on a nearby field and, in carelessly leaving, cut himself on rusty barbed wire. Now, if you'd aid this trespasser, you must lay him across your fine back seat. But, then, your fine upholstery will be soaked through with blood, and restoring the car will cost over five thousand dollars. So, you drive away. Picked up the next day by another driver, he survives but loses the wounded leg.

Except for your behavior, the example's as realistic as it's simple.

Even including the specification of your behavior, our other case is pretty realistic and extremely simple; for convenience, I'll again display it:

The Envelope. In your mailbox, there's something from (the U.S. Committee for) UNICEF. After reading it through, you correctly believe that, unless you soon send in a check for $100, then, instead of each living many more years, over thirty more children will die soon. But, you throw the material in your trash basket, including the convenient return envelope provided, you send nothing, and, instead of living many years, over thirty more children soon die than would have had you sent in the requested $100.

Taken together, these contrast cases will promote the chapter's primary puzzle.

Thanks. A related option would be to list The Singer Solution to World Poverty, which describes both Singer's drowning child example and some of Unger's thought experiments. (I thought that article was pretty powerful when I first read it, but that was over a decade ago.)

Comments

I wrote up a brief overview of EA & Global Development; there may be some resources in there that are useful to add.

Great! Have you considered publishing this as a Forum post? Then we can include it in the "sequence" (besides listing a bunch of readings, you have a list of relevant orgs, courses, and other resources which aren't posts and so can't be included in the collection, despite being valuable and relevant).

Sure, I can make it into a post, thanks for suggesting it.
