I currently work with ARMoR, a CE/AIM-incubated charity, doing research distillation, quantitative modelling, consulting, and general org-boosting to support policy advocacy for market-shaping tools that incentivise innovation and ensure access to antibiotics, helping combat AMR.
I previously did AIM's Research Training Program, was supported by an FTX Future Fund regrant and later Open Philanthropy's affected-grantees program, and before that spent 6 years doing data analytics, business intelligence, and knowledge + project management across various industries (airlines, e-commerce) and departments (commercial, marketing), after majoring in physics at UCLA and changing my mind about becoming a physicist. I've also initiated some local priorities research efforts, e.g. a charity evaluation initiative with the moonshot aim of reorienting my home country Malaysia's giving landscape towards effectiveness, albeit with mixed results.
I first learned about effective altruism circa 2014 via A Modest Proposal, Scott Alexander's polemic on using dead children as units of currency to force readers to grapple with the opportunity costs of subpar resource allocation under triage. I have never stopped thinking about it since, although my relationship to it has changed quite a bit; I related to Tyler's personal story (which unsurprisingly also references A Modest Proposal as a life-changing polemic):
I thought my own story might be more relatable for friends with a history of devotion – unusual people who’ve found themselves dedicating their lives to a particular moral vision, whether it was (or is) Buddhism, Christianity, social justice, or climate activism. When these visions gobble up all other meaning in the life of their devotees, well, that sucks. I go through my own history of devotion to effective altruism. It’s the story of [wanting to help] turning into [needing to help] turning into [living to help] turning into [wanting to die] turning into [wanting to help again, because helping is part of a rich life].
Ivan Gayton was formerly a mission head at Doctors Without Borders. His interview (60 mins, transcript here) with Elizabeth van Nostrand is full of eye-opening anecdotes; no single one is representative of the whole, so it's worth listening to or reading in full. Here's one, on the sheer depth of local poverty and how paying workers higher wages (even just $1/day vs the local market rate of $0.25/day "for nine hours on the business end of a shovel") distorted the local economy to the point of completely messing up society:
[00:06:07] Ivan: I had a real moment when I had this construction crew that was rebuilding a wing of the hospital and there were 30 people on this construction crew. And at some point, my boss, the project coordinator says to me, "Ivan, why are you just so obsessed with the construction crew always working? Constantly working and, and you know, never lacking for something to do." And I'm like, "well, because you know, you have a whole crew of 30 people, it's terribly expensive if they're doing nothing, I mean, if they sit there and do nothing all day, that costs, oh wait, $30, huh? Maybe I'll just relax about that."
'Cause you know, my last gig I'd been a forestry project manager, crew of 75 people who cost $450 a day each. So if they, you know, lose an hour of productivity, that's like huge money. A day of productivity is unthinkable.
(Aside: I'm having trouble believing the $450/person-day cost for forestry crew back in the late 90s and early 00s, isn't that $90k/year or $155k today?)
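The arithmetic behind that aside can be sanity-checked with a quick sketch (my own ballpark assumptions, not figures from the interview: ~200 working days per year for a seasonal forestry crew, and a cumulative US inflation factor of ~1.72 from around 2000 to the mid-2020s):

```python
# Rough sanity check of the $450/person-day forestry figure.
# WORKDAYS_PER_YEAR and CPI_FACTOR are my assumed ballpark values.
DAILY_COST = 450          # USD per person-day, late 90s / early 00s (from the quote)
WORKDAYS_PER_YEAR = 200   # assumed: seasonal work, not a full 260-day year
CPI_FACTOR = 1.72         # assumed cumulative US inflation, ~2000 -> mid-2020s

annual_cost = DAILY_COST * WORKDAYS_PER_YEAR   # 90,000 USD/year then
annual_cost_today = annual_cost * CPI_FACTOR   # roughly 155,000 USD/year in today's money

print(f"${annual_cost:,.0f}/year then, ~${annual_cost_today:,.0f}/year today")
```

So the $90k-then / ~$155k-today framing is internally consistent, given those assumed workdays and inflation; the surprise is about whether the underlying $450/day cost (wages plus equipment, insurance, overhead, etc.) is plausible, which the quote doesn't break down.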
[00:06:56] Ivan: So I bring that to this, you know, African construction crew and the construction crew themselves are kind of exhausted. Like, good lord, this guy's nuts. but that realization that... 30 people on the business end of sledgehammers and shovels and trowels cost way less than one hour of my time for an entire day. Wow. That was shocking.
And we were paying more than the local market rate for unskilled labor. I mean, at that time, this is 2003, the, the local market rate for nine hours on the business end of a shovel was a shiny new quarter, 25 cents. We were paying a dollar. So we had this huge lineup of people to work. I kind of rotated through all the villagers, to give as many people as possible a chance for the real unskilled labor. I think the head construction crew guy was getting two bucks a day.
[00:07:52] Elizabeth: yeah, so maybe let's get into the economics of this. On one hand, it seems very generous to pay people four times their normal wage, and it's, you know, a trivial cost to MSF. On the other hand, that does distort the local economy.
[00:08:07] Ivan: distort is putting it mildly. It just completely messes up the local society. I mentioned that I had done this back in the envelope calculation that we were 75% of the local economy. I mean, what that actually means is we destroyed and distorted the local economy completely; as development practice that would've been utterly and completely unethical.
The only justification for doing something like that is an acute emergency, which it was, it was nigh on a hundred thousand people with literally no access to healthcare whatsoever. The amount of avoidable suffering and death that was going on that we could actually alleviate was something that, you know, in sort of humanitarian practice, I guess we arrogate to ourselves the idea that we can, in a sufficiently emergency situation, justify doing things that would be unethical development practice.
[00:09:06] Elizabeth: Do you think the village was worse off for having the hospital located in their village?
[00:09:11] Ivan: oh yeah. Because we obviously brought this flood of money in, but where does the money go? The doctors and nurses, they're not even local. They're from the capital city. So you're bringing in people from the capital who then lord it over the local people, price of food jumps up, price of accommodation goes insane. The trickle down opportunities are to be sex workers and cleaners and, you know, servants for these, for these newly created royalty.
[00:09:46] Elizabeth: you might hope that if the price of food goes up, but their wages are also going up because they're working for the hospital or tangentially, then that would compensate?
[00:09:54] Ivan: Well yeah. For the people who are already, you know, have access to the labor market and are already able to sort of get in on that. Sure. I mentioned that I actually, I deliberately kind of rotated through the villagers to give lots of people a chance, but still, if you're not one of the people who gets a chance or even ever had a chance, or was somebody who's, you know, on the outs with the local powerful people, then we, as these foreigners providing these jobs, we never even see those people.
They don't even get to apply for a job with us. We never even know of their existence. So those people, now, the price of everything is jumped. There's a bunch of newly, much more wealthy people around them, and they're excluded from that. They don't see any of the benefit and all of the harm. So it's, it's terrible.
I remember Rob Wiblin speaking about episode length at some point, arguing that longer episodes are really valuable because they allow a much more in-depth conversation than would otherwise be possible. I agree, but there is a tradeoff with conciseness and use of time:
I’ve looked into our own data and, contrary to the expectations of many, people just keep listening to long episodes, at least so long as they’re good.
People do indeed drop off as episodes get longer, but two-thirds as many people are still with me between 3h30m and 4h as were with me between 30m and 1h. So the benefit of incrementally longer recordings remains high. ...
Another possible objection: maybe fewer people are willing to start listening to longer episodes? Not as far as we could see (see figure 2). There’s no relationship between episode length and the number of people who start playing it.
That said, I do agree with Nick that I wish they'd tighten up the editing. This seems doable in a way that still preserves the benefits Rob mentioned in his essay, like getting to new questions the guest hasn't been asked before, and guests easing into the conversation over time as Rob et al build chemistry with them ("I find the best moments on the show are often past the 2h30m mark, when we’re both more likely to be at ease, let our guard down, be authentic and go off script").
I am however wary of using marginal listener acquisition (i.e. listener growth) as the main "steer" for 80K podcast fine-tuning, because of the tyranny of the marginal user, which leads to the enshittification of all once-great products.
Thanks Nick, and great take as usual (for others' convenience, here it is).
I myself work at a CE-incubated charity, so I'm of course inclined to agree with you on the reasons you listed as to how CE's approach mitigates the disadvantages smaller orgs and individuals have vs larger ones.
(As a tangent, this is also why I have incredible respect for what you've managed to build at OneDay Health; AFAICT you don't have any of those advantages we benefit from! Seriously: since 2017, 53(!) nurse-led health centers launched, 340k patients treated, >$600k saved by patients, 165k malaria cases treated, and 125k under-5s treated is phenomenal. I wish you'd give a talk at EAG on how you and the team did this; lots of lessons for aspiring "moral entrepreneurs", I'm sure. Sorry btw if this makes you feel awkward; I've always wanted to express this.)
That said, I do think Scott is pointing to a slightly different thing than big vs small orgs: traditionally impressive credentials and ways of working vs non-traditional credentials or the lack thereof. I took Scott's hope (which I shared) to be that there are a lot more people than we think who are "diamonds in the rough" — they may not have gone to Oxbridge / Ivy etc or have training and experience in medicine / law / consulting / tech / whatever prestigious career, and their ideas for making the world better may not be the usual ones everyone agrees are "best" but oddball ones that make you go "... huh?". Most talent-spotters filter for these kinds of markers and would exclude such people, but Scott (who doesn't have a traditionally impressive background himself) would see their potential and give them a shot, and the follow-up would hopefully prove him right. He's disappointed that this doesn't seem to be true, which suggests that traditionally impressive credentials really do carry a lot of hard-to-fake signal about whether your projects will pan out. I don't find this all that surprising, but it is grist for the mill of the discussion around EA feeling elitist and exclusive to people with more "relatable" or less privileged backgrounds who nevertheless really want to contribute meaningfully to the whole "doing good better" project.
This isn't really a substantive comment, I just wanted to express my appreciation for your model critiques / replications / analyses, both this one and the RP one. More generally I find your critiques of EA-in-practice routinely first-rate and great fodder for further reflection, so thanks.
My favorite midsized grantmaker is Scott Alexander's ACX Grants, mainly because I've enjoyed his blog for over a decade and it's been really nice to see the community that sprang up around his writing grow and flourish, especially the EA stuff. His recent ACX Grants 1-3 Year Updates is a great read in this vein. Some quotes:
The first cohort of ACX Grants was announced in late 2021, the second in early 2024. In 2022, I posted one-year updates for the first cohort. Now, as I start thinking about a third round, I’ve collected one-year updates on the second and three-year updates on the first. ...
The total cost of ACX Grants, both rounds, was about $3 million. Do these outcomes represent a successful use of that amount of money? ...
It’s harder to produce Inside View estimates, because so many of the projects either produce vague deliverables (eg a white paper that might guide future action) or intermediate results only (eg getting a government to pass AI safety regulations is good, but can’t be considered an end result unless those regulations prevent the AI apocalypse). Because we tend towards incubating charities and funding research (rather than last-mile causes like buying bednets), achieved measurable deliverables are thin on the ground. But here are things that ACX grantees have already accomplished:
Improved the living/slaughter conditions of 30 million fish.
Helped create Manifold Markets, a prediction market site with thousands of satisfied users, whose various spinoffs play a central role in the rationalist/EA community.
Helped create thousands of jobs in Rwanda and other developing countries
Passed an instant runoff vote proposition in Seattle.
Saved between a few dozen and a few hundred lives in Nigeria through better obstetric care.
And here are some intermediate deliverables from grantees:
Made Australian government take AI x-risk more seriously (estimated from 50th percentile to 60th percentile outcome)
Gotten the End Kidney Deaths Act (could save >1000 lives and billions of dollars per year) in front of Congress, with decent odds of passing by 2026.
Plausibly saved 2 billion chickens from painful death over next decade2.
Antiparasitic medication oxfendazole continues to advance through the clinical trial process.
And here are some things that have not been delivered yet but that I remain especially optimistic about:
Creation of anti-mosquito drones that provide a second level of defense along with bednets.
Revolutionize diagnosis of traumatic brain injury
Improve dietary guidelines in developing countries
Continue to support research and adoption of far UV light for pandemic prevention
Reduce lead poisoning in Nigeria
I think these underestimate success since many projects have yet to pay off (or to convince me to be especially optimistic), and others have paid off in vague hard-to-measure ways.
This is a beautiful crosswise slice of the entire collective endeavor of effective altruism, and quite a lot of good done (or poised to be done) with a not-that-large sum of $3M over 2 cohorts, given that GW and OP each move 2 OOMs more $ per year.
It's also been quite intellectually enriching to see the sheer diversity of proposals to make the world better in these cohorts. E.g. I was a bit let down to learn that the Far Out Initiative didn't pan out: $50k to fund a team working on pharmacologic and genetic interventions to imitate the condition of Jo Cameron, a 77-year-old Scottish woman who is incapable of experiencing any physical or psychological suffering yet has lived an astonishingly well-adjusted life, by creating painkillers to splice into farm animals to promote cruelty-free meat and "end all suffering in the world forever".
Of Scott's lessons learned, this one stood out to me in light of the recent elitism-in-EA survey I just took, I think because I'd been leaning towards the same hope he had:
One disappointing result was that grants to legibly-credentialled people operating in high-status ways usually did better than betting on small scrappy startups (whether companies or nonprofits). For example, Innovate Animal Ag was in many ways overdetermined as a grantee - former Yale grad and Google engineer founder, profiled in NYT, already funded by Open Philanthropy - and they in fact did amazing work. On the other hand, there were a lot of promising ACX community members with interesting ideas who were going to turn them into startups any day now, but who ended up kind of floundering (although this also describes Manifold, one of our standout successes). One thing I still don't understand is that Innovate Animal Ag seemed to genuinely need more funding despite being legibly great and high status - does this screen off a theoretical objection that they don't provide ACX Grants with as much counterfactual impact? Am I really just mad that it would be boring to give too many grants to obviously-good things that even a moron could spot as promising?
The other takeaway of his that gave me mixed feelings was this one, I think because I'd been secretly hoping for some form of work-life balance compatibility with really effective (emphasis) direct-work altruism:
Someone (I think it might be Paul Graham) once said that they were always surprised how quickly destined-to-be-successful startup founders responded to emails - sometimes within a single-digit number of minutes regardless of time of day. I used to think of this as mysterious - some sort of psychological trait? Working with these grants has made me think of it as just a straightforward fact of life: some people operate an order of magnitude faster than others. The Manifold team created something like five different novel institutions in the amount of time it's taken some other grantees to figure out a business plan; I particularly remember one time when I needed something, sent out a request to talk about it with two or three different teams, and the Manifold team had fully created the thing and were pestering me to launch a trial version before some of the other people had even gotten back to me. I take no pleasure in reporting this - I sometimes take a week or two to answer emails, and all of the predictions about my personality that this implies would be correct - but it's increasingly something that I look for and respect. A lot of the most successful grants succeeded quickly, or at least were quick to get on a promising track. Since everything takes ten times longer than people expect, only someone who moves ten times faster than people expect can get things done in a reasonable amount of time.
Edited to add: I appreciated this comment by Alex Toussaint, an ACX grantee:
Tornyol (anti-mosquito drones) is based in France and we couldn't have got the support [we got] from ACX Grants from a local VC. ...
VCs, like potential employees or clients, have reading grids (i.e. rubrics; a literal translation of « une grille de lecture ») to evaluate pitches. The great thing I found about ACX Grants is that the grid is different, and encourages different kinds of projects. Founder obsession for a problem seems to be encouraged in ACX Grants, although it's clearly discouraged for very early VC funding. VCs like very well made slides, communication abilities, and beautiful people in general, while I've found no such bias for ACX Grants. Being based outside the US is a big minus for American VCs, but ACX Grants almost seems to be favoring it. VCs tend to think a lot by analogy (the Uber for X, the Cursor for Y ...) while I found ACX Grants to be much more thinking from first principles than the median VC I met.
I'm not criticizing the VC reading grid. It obviously comes from experience and it tends to work financially for them. But you have to remember that a large part of the decision comes down to the potential for a quite early (3-4 years) and billion-dollar exit option. Not all projects fit that and it's a good thing to support the other. The other advantage of it is that it selects founders that can go through the hoops of making their project fit the grid. That proves VCs the founders are capable of adapting their message to their interlocutors, which is highly necessary when raising further money, recruiting or discussing with any partner. That's something ACX Grants does not seem to value much.
All in all, ACX Grants is great in that it provides funding with a very unique reading grid, so it helps projects that could get no help anywhere else.
Aidan's response to Vasco's comment you quoted is a starting point:
I would actually expect our marginal multiplier to be much closer to our average multiplier than the CEARCH method implies. Most importantly, I expect most of our marginal resources are dedicated to identifying and executing on scalable pledge growth strategies. I think this work, in expectation, provides a pretty strong multiplier. By comparison our average multiplier includes some major fixed costs (e.g., related to running our donation platform).
The best one-stop summary I know of is still Scott Alexander's In Continued Defense Of Effective Altruism from late 2023. I'm curious to see if anyone has an updated take, if not I'll keep steering folks there:
Here’s a short, very incomplete list of things effective altruism has accomplished in its ~10 years of existence. I’m counting it as an EA accomplishment if EA either provided the funding or did the work, further explanations in the footnotes. I’m also slightly conflating EA, rationalism, and AI doomerism rather than doing the hard work of teasing them apart:
Global Health And Development
Saved about 200,000 lives total, mostly from malaria1
Treated 25 million cases of chronic parasite infection.2
Given 5 million people access to clean drinking water.3
Supported clinical trials for both the RTS.S malaria vaccine (currently approved!) and the R21/Matrix malaria vaccine (on track for approval)4
Supported additional research into vaccines for syphilis, malaria, helminths, and hepatitis C and E.5
Supported teams giving development economics advice in Ethiopia, India, Rwanda, and around the world.6
Animal Welfare:
Convinced farms to switch 400 million chickens from caged to cage-free.7
Freed 500,000 pigs from tiny crates where they weren’t able to move around8
Gotten 3,000 companies including Pepsi, Kelloggs, CVS, and Whole Foods to commit to selling low-cruelty meat.
AI:
Developed RLHF, a technique for controlling AI output widely considered the key breakthrough behind ChatGPT.9
…and other major AI safety advances, including RLAIF and the foundations of AI interpretability10.
Founded the field of AI safety, and incubated it from nothing up to the point where Geoffrey Hinton, Yoshua Bengio, Demis Hassabis, Sam Altman, Bill Gates, and hundreds of others have endorsed it and urged policymakers to take it seriously.11
Helped convince OpenAI to dedicate 20% of company resources to a team working on aligning future superintelligences.
Gotten major AI companies including OpenAI to work with ARC Evals and evaluate their models for dangerous behavior before releasing them.
Got two seats on the board of OpenAI, held majority control of OpenAI for one wild weekend, and still apparently might have some seats on the board of OpenAI, somehow?12
Helped found, and continue to have majority control of, competing AI startup Anthropic, a $30 billion company widely considered the only group with technology comparable to OpenAI’s.13
Helped (probably, I have no secret knowledge) the Biden administration pass what they called "the strongest set of actions any government in the world has ever taken on AI safety, security, and trust.”
Won the PR war: a recent poll shows that 70% of US voters believe that mitigating extinction risk from AI should be a “global priority”.
Other:
Helped organize the SecureDNA consortium, which helps DNA synthesis companies figure out what their customers are requesting and avoid accidentally selling bioweapons to terrorists14.
Provided a significant fraction of all funding for DC groups trying to lower the risk of nuclear war.15
Played a big part in creating the YIMBY movement - I’m as surprised by this one as you are, but see footnote for evidence17.
I think other people are probably thinking of this as par for the course - all of these seem like the sort of thing a big movement should be able to do. But I remember when EA was three philosophers and a few weird Bay Area nerds with a blog. It clawed its way up into the kind of movement that could do these sorts of things by having all the virtues it claims to have: dedication, rationality, and (I think) genuine desire to make the world a better place.
One detail that caught my eye from your post was this chart:
I was a bit surprised to see a global average WTP of $67k per QALY, or ~5x world GDP per capita, while for these individual countries the multipliers seem closer to 0.5-2x.
Eyeballing the chart below from OP's 2021 technical update makes me wonder if that discrepancy is driven by the higher WTP multipliers in LMICs:
But contra my own guess, the authors say:
As we’ll discuss in Appendix A, there are empirical and theoretical reasons to think that the exchange rate at which people trade off mortality risks against income gains differs systematically across income levels, with richer people valuing mortality more relative to income.
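For concreteness, here's the back-of-envelope behind my "~5x" reading, assuming a world GDP per capita of roughly $13,000 (my own ballpark figure; the $67k/QALY number is from the chart):

```python
# Implied global WTP multiplier from the chart's headline figure.
# WORLD_GDP_PER_CAPITA is my assumed ballpark, not from the post.
WTP_PER_QALY = 67_000          # USD per QALY, global average (from the chart)
WORLD_GDP_PER_CAPITA = 13_000  # USD, assumed

multiplier = WTP_PER_QALY / WORLD_GDP_PER_CAPITA  # ~5.2x
print(f"Implied global WTP multiplier: ~{multiplier:.1f}x GDP per capita")
```

Which is well above the 0.5-2x range the individual-country points suggest, hence my surprise; the authors' point about richer people valuing mortality more relative to income would push the aggregate figure up in exactly this way if it's income-weighted.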
Might Nuno's previous work Shallow evaluations of longtermist organizations be useful or relevant? I'm guessing you've probably seen it, just flagging in case you haven't (e.g. it wasn't in fn2).