Global health R&D strikes me as having very high expected value, but it might be difficult for governments in low-income countries to justify to voters when the money could be spent on urgent, object-level health interventions that produce benefits more quickly.

Does that mean donors should focus more on R&D (e.g., give more funding to CEPI than to the Pandemic Fund)? Has this idea been fleshed out in more detail somewhere in the global health world?
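For intuition, here is a minimal toy model of the tradeoff I have in mind (every number, the discount rate, and the 30-year horizon are hypothetical assumptions for illustration, not estimates of any real program): a delayed, uncertain R&D payoff can still beat an immediate intervention in expectation if the payoff conditional on success is large enough.

```python
# Toy comparison (all numbers hypothetical): discounted expected value of a
# speculative R&D bet vs. an immediate intervention such as bed nets.

def discounted_value(annual_benefit, start_year, horizon_years, discount_rate):
    """Sum of discounted annual benefits from start_year up to the horizon."""
    return sum(
        annual_benefit / (1 + discount_rate) ** t
        for t in range(start_year, horizon_years)
    )

# Immediate intervention: modest benefit that starts right away.
immediate = discounted_value(annual_benefit=1.0, start_year=0,
                             horizon_years=30, discount_rate=0.04)

# R&D bet: much larger benefit if it succeeds, but delayed and uncertain.
p_success = 0.10
rnd = p_success * discounted_value(annual_benefit=20.0, start_year=10,
                                   horizon_years=30, discount_rate=0.04)

print(f"Immediate intervention: {immediate:.1f}")
print(f"R&D (expected value):   {rnd:.1f}")
```

The only point of the sketch is that the comparison turns on the success probability, the delay, and the discount rate, which is exactly what makes R&D hard for a cash-strapped health ministry to justify even when its expected value is higher.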

Comments

Normally I'd say yes, but my AGI timelines are now 50% in ~4 years, so there isn't much time for R&D to make a difference. I'd therefore recommend interventions that pay off quickly: bed nets, GiveDirectly, etc.

I think the recent 80,000 Hours Podcast episode on high-impact climate philanthropy discusses this; see this section of the transcript in particular.

And there's also this recent sequence (e.g. one post is about Estimating the cost-effectiveness of previous R&D projects). 

[anonymous]

Great post!

R&D about health in slums may be promising and neglected:

"Importance: Approximately 1 billion people currently live in slums, and it is estimated that a quarter of the world’s population will live in a slum by 2030. Slum conditions currently do not support a healthy or high-quality life. This is a very important issue. Tractability: Slum policy interventions appear relatively intractable. In contrast, In situ slum upgrading interventions largely deliver cost-effective results, but more research is needed.

[...]

As slums grow at an alarming rate, a better understanding of problems uniquely faced by slum populations is required. For this to happen, governments must consider slums as spatial entities and collect more extensive census data, while academia may contribute by focusing research directly on slum health."

Source: https://forum.effectivealtruism.org/posts/sgkvhJLFMDzBPgovp/shallow-investigation-slums
