Do you think this charity is legitimate? It seems like they are, but I'd like your opinion before I donate a few thousand more dollars to them. https://taimaka.org/

They say they can save a human life for about $1,600 using the same formula as GiveWell. What do you think? Other effective altruist charities can't save a life for less than $3,500, and it's usually more like $5,000.

 

I've read some research on them (https://www.happierlivesinstitute.org/taimaka-summary/), but it was more focused on how they improve lives. I'm curious about whether they will save the life of someone who would otherwise have died for roughly $1,600.


Hi! I'm Justin - I run Taimaka. We're an EA org, but pretty quiet on the forum - I keep meaning to get around to writing up something about our work, but it hasn't happened yet, so this is a good excuse to say hello!

This is a good question, and our cost-per-life-saved figure is obviously a bold claim, so I'll share a bit about our thinking here. One disclaimer I'll make for clarity: while our work is supported by GiveWell, our cost-effectiveness model is our own, and the thoughts I'm sharing here are my own - I don't speak for GiveWell's team or their views. Our CEA is built on their past work on acute malnutrition, but the end results + claims are ours.

Generally, the way I think about our model and our $1.6k-per-life-saved estimate is that it is the most accurate + true estimate we have for our program, but that you should probably read it as having wider error bars around it than the estimates for current GiveWell top charities. I think there are two primary reasons for this:

  1. Taimaka is a younger charity with a shorter track record. We're extrapolating from a smaller data set than an organization like, say, New Incentives. Our costs could change (up or down) as we grow over the next few years, or baseline conditions in Gombe State, where we work (like the prevalence of acute malnutrition or the coverage of treatment), could change in ways we aren't expecting. We try to build some of this into the model, like how we expect costs to change, but we are extrapolating. Our model is accurate to our current program, but things may change over time and the model may need to be revised.
  2. The evidence base for acute malnutrition treatment is more limited than we'd like it to be, and more limited than the evidence base for current top charities. There is no direct causal evidence (in the form of RCTs) for the mortality reduction benefits of acute malnutrition treatment, because treatment was developed before RCTs/clinical trials became widespread in the 2000s, and it's now considered unethical to withhold a treatment that works from kids in order to study the treatment effect on mortality. Instead, GiveWell (and Taimaka by extension) model mortality reduction by combining a meta-analysis of historical data on mortality rates by anthropometric status with data on the change in anthropometric status from before/after treatment (a rough sketch of this kind of calculation follows after this list). This is just a weaker evidence base than the RCTs we have for bed nets, for instance, and so brings in wider error bars.
    1. We're working on some studies now to supplement this evidence base, but it's going to be a while before those bear fruit.
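To make the structure of that calculation concrete, here's a minimal illustrative sketch in Python of how mortality-by-status data and before/after anthropometric data can be combined into a cost-per-life-saved figure. Every number in it is invented for illustration, and it is not Taimaka's or GiveWell's actual model, which includes many more inputs and adjustments.

```python
# Minimal illustrative sketch of the modeling approach described above: combine
# mortality risk by anthropometric status with the observed shift in status after
# treatment to estimate deaths averted, then divide program cost by lives saved.
# All numbers below are made up for illustration; this is NOT Taimaka's or
# GiveWell's actual model.

# Hypothetical annual mortality risk by anthropometric category
# (e.g. weight-for-height z-score bands), as a meta-analysis might summarize it.
mortality_risk = {
    "severe_acute_malnutrition": 0.10,
    "moderate_acute_malnutrition": 0.03,
    "recovered": 0.005,
}

# Hypothetical distribution of children's status without treatment vs. after
# treatment (the latter from before/after anthropometric data).
status_without_treatment = {"severe_acute_malnutrition": 1.0}
status_with_treatment = {
    "severe_acute_malnutrition": 0.10,   # did not recover
    "moderate_acute_malnutrition": 0.20,
    "recovered": 0.70,
}

def expected_deaths_per_child(status_distribution):
    """Expected probability of death given a distribution over status categories."""
    return sum(share * mortality_risk[status]
               for status, share in status_distribution.items())

children_treated = 10_000          # hypothetical program size
total_program_cost = 1_200_000     # hypothetical cost in USD

deaths_averted_per_child = (expected_deaths_per_child(status_without_treatment)
                            - expected_deaths_per_child(status_with_treatment))
lives_saved = deaths_averted_per_child * children_treated
cost_per_life_saved = total_program_cost / lives_saved

print(f"Lives saved: {lives_saved:.0f}")
print(f"Cost per life saved: ${cost_per_life_saved:,.0f}")
```

The point of the sketch is just that the mortality benefit is inferred from the status shift rather than measured directly in an RCT, which is why the error bars on the headline number are wider than for interventions with direct mortality trials.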

Hopefully this is helpful! In summary: we take this figure seriously and stand by our modeling. We haven't put our thumb on the scales anywhere to make this number lower; it's the true result of a good-faith effort to adapt GiveWell's model of other acute malnutrition treatment programs to our own. That said, expect wider error bars here than you would in models for current top charities, both because Taimaka is younger and because of limitations in what we currently know about acute malnutrition treatment. If you're willing to accept that higher level of risk, I think Taimaka is a great donation option to do a lot of good, potentially even more cost-effectively than other places. Happy to have a call to chat this through in more detail if you'd like - feel free to shoot me an email! (My first name at taimaka.org.)

Thank you very much, Justin. I really appreciate everything you're doing, and your response does make me feel more comfortable about donating. I think I'll make another donation soon so I can save a second life.

Always happy to answer questions! Thanks for your support + belief in what we do! Means a lot (which sounds very "charity language" but really is true for us and for the other people running charities hanging out around here). 


My 2 cents, from decent-quality second-hand information: yes! Taimaka is a legitimate charity that is doing fantastic work treating malnutrition cost-effectively.

I'll also piggyback off this great question and @JustinGraham's fantastic response below to point out that there are many smaller orgs that have performed their own cost-effectiveness analyses (Introducing Lafiya Nigeria, Taimaka, etc.) and judge themselves to be cost-effective compared to top GiveWell orgs - without having direct RCTs on the exact work we do that would qualify us for GiveWell's top charity list, and without necessarily having external bodies assess us (Rethink did an analysis for Lafiya). I would think almost all CE charities will have an analysis along these lines performed in their first few years of operation, and some that weren't judged to be cost-effective might be shut down.

Unfortunately, as @JustinGraham says, doing direct RCTs on the life-saving effect of our work might be close to impossible now, either for ethical reasons or because the size of study needed these days to detect mortality differences is very large, so studies powered for mortality have become rare. This is largely because far fewer kids die than in the past - which is great. This doesn't mean, though, that we can't do high-quality research on proxy measures (for us at OneDay Health, quality of care and healthcare access), which we are currently doing in collaboration with top universities.

I'm a co-founder of OneDay Health, and we've done a cost-effectiveness analysis which might put us between $800 and $1,800 per life saved. Early-stage (and self-performed) analysis, though, often grossly overestimates cost-effectiveness, so this figure would likely be hugely reduced if others or GiveWell did their own analysis.
 
