
It’s also dangerous to the health of thousands of trial participants

The path from the atrocities of Nuremberg and Tuskegee to today’s robust ethics of medical research has not been easy.

In response to these horrific events, a remarkable global consensus emerged, across countries in very different parts of the world and with very different types of governments: Medical researchers, and the institutions and companies who fund their work, are bound by a series of ethical obligations. These obligations are designed to protect and respect the interests of people who participate in their research. They include minimizing foreseeable harms to participants and keeping the promises that researchers make to study volunteers.  

While the fate of USAID remains murky, one thing is absolutely clear. The abrupt termination of critical USAID-funded clinical trials, with insufficient time to safeguard the welfare of people who are participating, is profoundly unethical and utterly inexcusable. Such actions threaten the health and lives of thousands of patients.

The stop-work orders affect research designed to answer important questions about HIV and TB treatments. Immediate withdrawal of drugs not only takes away what may be lifesaving treatment but also risks creating or exacerbating drug-resistant strains, leaving participants potentially worse off than if they had never joined the study and creating additional, unacceptable risks for others in the community.

The “pause” has also ensnared studies of experimental devices that, in accordance with scientific and ethics principles, require regular monitoring of participant well-being and opportunities to remove the device at an appropriate time. Telling a medical researcher they must abruptly abandon study participants is akin to telling a surgeon they cannot treat a patient who has a post-operative infection from a surgery they performed the week before. All medical ethics codes forbid this. And for obvious reasons.

Comments

I'm confused. The wording of the headline suggests that USAID is not conducting trials in-house; it gives grants to other organizations that conduct trials. If that's the case, the only way this makes sense is if the researchers started treating patients before the government's check hit their bank accounts - that is, they started the experiment without having enough money in their own accounts to safely wrap up their work with the already enrolled subjects. That can't be ethical, can it? For precisely this reason? Shouldn't the headline be "grossly unethical conduct by government-funded researchers"?

I'm assuming the grants are paid in installments. If a research organization has a contract with the government that says they will be paid a total of $X in Y installments over Z years, it seems completely normal that they would start the trial after receiving the first installment (indeed, it seems likely that the timeline in their grant proposal to USAID would have promised that they would do this).

That seems reasonable up to a certain point. It seems reasonable for long-term grants to be paid out on a schedule, and for a researcher to arrange a study such that a loss of funding would force them to wrap up early, without the data they were hoping for. But I think a researcher should still have an obligation to keep enough money in their own bank account that, if funding gets cut, they can wrap up the study in a way that is safe for the subjects - enough cash to wean the subjects off drugs, remove devices early, or do whatever else is involved in wrapping up. Funding getting cut is a risk that should be fairly obvious when planning a study - especially if your source of funding is a government agency in a country that you know will have an election before the study concludes.

This is a great question. For some of the trials the issue wasn't the funding freeze but the abrupt and unprecedented "stop-work order" issued by Secretary of State Marco Rubio (who is also acting Administrator of USAID). It was so immediate and sweeping that research staff would have been violating it if they helped remove experimental devices (though some did anyway). Many of the trials were partnerships with U.S. drug companies that were testing products they hoped to sell to commercial markets overseas. The order also affected a malaria vaccine trial at Oxford.

The funding situation is similar to the one described above: multi-year contracts/agreements with USAID that investigators and partners expected the government to honor. Many studies probably had contingency plans for early termination, but those would depend on adequate warning (weeks, if not months or years) to wind down activities.


Nothing like this has happened before, and it will fundamentally change how the US government does business with companies - in sectors beyond health and aid.

The NYT has a great article that goes into more detail: https://www.nytimes.com/2025/02/06/health/usaid-clinical-trials-funding-trump.html?smid=nytcore-ios-share&referringSource=articleShare

There is a word that educated liberals need to learn or relearn: evil.

Flagrant disregard for human life is evil.

The cruelty is heartbreaking.

See here (https://forum.effectivealtruism.org/posts/FTTPCtkizkAQ9fkvM/unicode-wvyp) for the Rapid Response Fund: https://www.founderspledge.com/funds/rapid-response-fund. It's an opportunity to donate to help mitigate the worst immediate consequences of the aid freeze.
