Bio

I currently work with the CE/AIM-incubated charity ARMoR on research distillation, quantitative modelling, consulting, and general org-boosting, in support of policy advocacy for market-shaping tools that incentivise innovation in, and ensure access to, antibiotics to help combat antimicrobial resistance (AMR).

I previously did AIM's Research Training Program, was supported by an FTX Future Fund regrant and later by Open Philanthropy's affected-grantees program, and before that spent 6 years in data analytics, business intelligence, and knowledge and project management across various industries (airlines, e-commerce) and departments (commercial, marketing), after majoring in physics at UCLA and changing my mind about becoming a physicist. I've also initiated some local priorities research efforts, e.g. a charity evaluation initiative with the moonshot aim of reorienting my home country Malaysia's giving landscape towards effectiveness, albeit with mixed results.

I first learned about effective altruism circa 2014 via A Modest Proposal, Scott Alexander's polemic on using dead children as units of currency to force readers to grapple with the opportunity costs of subpar resource allocation under triage. I have never stopped thinking about it since, although my relationship to it has changed quite a bit; I related to Tyler's personal story (which unsurprisingly also references A Modest Proposal as a life-changing polemic):

I thought my own story might be more relatable for friends with a history of devotion – unusual people who’ve found themselves dedicating their lives to a particular moral vision, whether it was (or is) Buddhism, Christianity, social justice, or climate activism. When these visions gobble up all other meaning in the life of their devotees, well, that sucks. I go through my own history of devotion to effective altruism. It’s the story of [wanting to help] turning into [needing to help] turning into [living to help] turning into [wanting to die] turning into [wanting to help again, because helping is part of a rich life].

How others can help me

I'm looking for "decision guidance"-type roles, e.g. applied prioritization research.

How I can help others

Do reach out if any of the above piques your interest :)

Comments

Thanks Nick, and great take as usual (for others' convenience, here it is).

I myself work at a CE-incubated charity, so I'm of course inclined to agree with you on the reasons you listed as to how CE's approach mitigates the disadvantages smaller orgs and individuals have vs larger ones. 

(As a tangent, this is also why I have incredible respect for what you've managed to build at OneDay Health; AFAICT you don't have any of those advantages we benefit from! Seriously: since 2017, 53(!) nurse-led health centers launched, 340k patients treated, >$600k saved by patients, 165k malaria cases treated, and 125k under-5s treated. That's phenomenal. I wish you gave a talk at EAG on how you and the team did this; lots of lessons for aspiring "moral entrepreneurs", I'm sure. Sorry btw if this makes you feel awkward, I've always wanted to express this.)

That said, I do think Scott is pointing at something slightly different from big vs small orgs: traditionally impressive credentials and ways of working vs non-traditional credentials or the lack thereof. I took Scott's hope (which I shared) to be that there are a lot more "diamonds in the rough" than we think: people who didn't go to Oxbridge / Ivy etc. or train in medicine / law / consulting / tech / some other prestigious career, whose ideas for making the world better aren't the usual ones everyone agrees are "best" but oddball ones that make you go "... huh?", and whom most talent-spotters would filter out on those markers. Scott (who doesn't have a traditionally impressive background himself) would see their potential and give them a shot, and the follow-up would hopefully prove him right. He's disappointed that this doesn't seem to have happened, which suggests that traditionally impressive credentials really do carry a lot of hard-to-fake signal about whether your projects will pan out. I don't think this is all that surprising, but it is grist for the mill of the discussion around EA feeling elitist and exclusive to people from more "relatable" or less privileged backgrounds who nevertheless really want to contribute meaningfully to the whole "doing good better" project.

This isn't really a substantive comment, I just wanted to express my appreciation for your model critiques / replications / analyses, both this one and the RP one. More generally I find your critiques of EA-in-practice routinely first-rate and great fodder for further reflection, so thanks.

My favorite midsized grantmaker is Scott Alexander's ACX Grants, mainly because I've enjoyed his blog for over a decade and it's been really nice to see the community that sprang up around his writing grow and flourish, especially the EA stuff. His recent ACX Grants 1-3 Year Updates is a great read in this vein. Some quotes: 

The first cohort of ACX Grants was announced in late 2021, the second in early 2024. In 2022, I posted one-year updates for the first cohort. Now, as I start thinking about a third round, I’ve collected one-year updates on the second and three-year updates on the first. ...

The total cost of ACX Grants, both rounds, was about $3 million. Do these outcomes represent a successful use of that amount of money? ...

It’s harder to produce Inside View estimates, because so many of the projects either produce vague deliverables (eg a white paper that might guide future action) or intermediate results only (eg getting a government to pass AI safety regulations is good, but can’t be considered an end result unless those regulations prevent the AI apocalypse). Because we tend towards incubating charities and funding research (rather than last-mile causes like buying bednets), achieved measurable deliverables are thin on the ground. But here are things that ACX grantees have already accomplished:

  • Improved the living/slaughter conditions of 30 million fish.
  • Helped create Manifold Markets, a prediction market site with thousands of satisfied users, whose various spinoffs play a central role in the rationalist/EA community.
  • Helped create thousands of jobs in Rwanda and other developing countries.
  • Passed an instant runoff vote proposition in Seattle.
  • Saved between a few dozen and a few hundred lives in Nigeria through better obstetric care.

And here are some intermediate deliverables from grantees:

  • Made Australian government take AI x-risk more seriously (estimated from 50th percentile to 60th percentile outcome)
  • Gotten the End Kidney Deaths Act (could save >1000 lives and billions of dollars per year) in front of Congress, with decent odds of passing by 2026.
  • Plausibly saved 2 billion chickens from painful death over next decade2.
  • Antiparasitic medication oxfendazole continues to advance through the clinical trial process.

And here are some things that have not been delivered yet but that I remain especially optimistic about:

  • Creation of anti-mosquito drones that provide a second level of defense along with bednets.
  • Revolutionize diagnosis of traumatic brain injury
  • Improve dietary guidelines in developing countries
  • Continue to support research and adoption of far UV light for pandemic prevention
  • Reduce lead poisoning in Nigeria

I think these underestimate success since many projects have yet to pay off (or to convince me to be especially optimistic), and others have paid off in vague hard-to-measure ways.

This is a beautiful cross-section of the entire collective endeavor of effective altruism, and quite a lot of good done (or poised to be done) from a not-that-large sum of $3M over 2 cohorts, given that GiveWell and Open Philanthropy move 2 orders of magnitude more money per year.

It's also been quite intellectually enriching just to see the sheer diversity of proposals to make the world better in these cohorts. For instance, I was a bit let down to learn that the Far Out Initiative didn't pan out: $50k to fund a team working on pharmacologic and genetic interventions to replicate the condition of Jo Cameron, a 77-year-old Scottish woman who is incapable of experiencing any physical or psychological suffering yet has lived an astonishingly well-adjusted life, with the aim of creating novel painkillers, splicing the relevant mutation into farm animals to promote cruelty-free meat, and ultimately "ending all suffering in the world forever".

Of Scott's lessons learned, this one stood out to me in light of the recent elitism-in-EA survey I just took, I think because I was leaning towards the same hope he had:

One disappointing result was that grants to legibly-credentialled people operating in high-status ways usually did better than betting on small scrappy startups (whether companies or nonprofits). For example, Innovate Animal Ag was in many ways overdetermined as a grantee - former Yale grad and Google engineer founder, profiled in NYT, already funded by Open Philanthropy - and they in fact did amazing work. On the other hand, there were a lot of promising ACX community members with interesting ideas who were going to turn them into startups any day now, but who ended up kind of floundering (although this also describes Manifold, one of our standout successes). One thing I still don't understand is that Innovate Animal Ag seemed to genuinely need more funding despite being legibly great and high status - does this screen off a theoretical objection that they don't provide ACX Grants with as much counterfactual impact? Am I really just mad that it would be boring to give too many grants to obviously-good things that even a moron could spot as promising?

The other takeaway of his that gave me mixed feelings was this one, I think because I'd been secretly hoping that some form of work-life balance is compatible with really effective (emphasis on "effective") direct-work altruism:

Someone (I think it might be Paul Graham) once said that they were always surprised how quickly destined-to-be-successful startup founders responded to emails - sometimes within a single-digit number of minutes regardless of time of day. I used to think of this as mysterious - some sort of psychological trait? Working with these grants has made me think of it as just a straightforward fact of life: some people operate an order of magnitude faster than others. The Manifold team created something like five different novel institutions in the amount of time it's taken some other grantees to figure out a business plan; I particularly remember one time when I needed something, sent out a request to talk about it with two or three different teams, and the Manifold team had fully created the thing and were pestering me to launch a trial version before some of the other people had even gotten back to me. I take no pleasure in reporting this - I sometimes take a week or two to answer emails, and all of the predictions about my personality that this implies would be correct - but it's increasingly something that I look for and respect. A lot of the most successful grants succeeded quickly, or at least were quick to get on a promising track. Since everything takes ten times longer than people expect, only someone who moves ten times faster than people expect can get things done in a reasonable amount of time.


Edited to add: I appreciated this comment by Alex Toussaint, an ACX grantee: 

Tornyol (anti-mosquito drones) is based in France and we couldn't have got the support from ACX Grants from a local VC. ...

VCs, like potential employees or clients, have reading grids (i.e. rubrics; a literal translation of « une grille de lecture ») to evaluate pitches. The great thing I found about ACX Grants is that the grid is different, and encourages different kinds of projects. Founder obsession with a problem seems to be encouraged in ACX Grants, although it's clearly discouraged for very early VC funding. VCs like very well-made slides, communication abilities, and beautiful people in general, while I've found no such bias in ACX Grants. Being based outside the US is a big minus for American VCs, but ACX Grants almost seems to favor it. VCs tend to think a lot by analogy (the Uber for X, the Cursor for Y ...) while I found ACX Grants to think much more from first principles than the median VC I met.

I'm not criticizing the VC reading grid. It obviously comes from experience, and it tends to work financially for them. But you have to remember that a large part of the decision comes down to the potential for a fairly early (3-4 year) billion-dollar exit. Not all projects fit that, and it's a good thing to support the others. The other advantage of the grid is that it selects founders who can go through the hoops of making their project fit it, which proves to VCs that the founders are capable of adapting their message to their interlocutors - highly necessary when raising further money, recruiting, or talking with any partner. That's something ACX Grants does not seem to value much.

All in all, ACX Grants is great in that it provides funding with a very unique reading grid, so it helps projects that could get no help anywhere else.

Aidan's response to Vasco's comment you quoted is a starting point:

I would actually expect our marginal multiplier to be much closer to our average multiplier than the CEARCH method implies. Most importantly, I expect most of our marginal resources are dedicated to identifying and executing on scalable pledge growth strategies. I think this work, in expectation, provides a pretty strong multiplier. By comparison our average multiplier includes some major fixed costs (e.g., related to running our donation platform). 

(The recording coefficient section of their report discusses this extensively, if I've interpreted your question correctly.)
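To make the fixed-vs-marginal point concrete, here's a toy calculation (all numbers invented for illustration; they are not GWWC's actual figures). An average multiplier divides total money moved by total costs, fixed costs included, while the marginal multiplier only asks what one extra dollar of (mostly variable, pledge-growth) spending moves, so the marginal figure can sit at or even above the average:

```python
# Toy illustration (invented numbers, not GWWC's actual figures) of why
# fixed costs drag down an average multiplier but not a marginal one.

money_moved = 30e6      # donations attributable to the org, $
fixed_costs = 2e6       # e.g. running the donation platform, $
variable_costs = 2e6    # e.g. pledge-growth outreach, $

# Average multiplier: money moved over ALL costs, fixed included.
average = money_moved / (fixed_costs + variable_costs)

# Marginal multiplier: what an extra outreach dollar moves. Suppose
# (hypothetically) each additional $1 of pledge growth moves $9.
marginal = 9.0

print(f"average multiplier:  {average:.1f}x")   # 7.5x
print(f"marginal multiplier: {marginal:.1f}x")  # 9.0x
```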

historical and current EA wins

The best one-stop summary I know of is still Scott Alexander's In Continued Defense Of Effective Altruism from late 2023. I'm curious whether anyone has an updated take; if not, I'll keep steering folks there:

Here’s a short, very incomplete list of things effective altruism has accomplished in its ~10 years of existence. I’m counting it as an EA accomplishment if EA either provided the funding or did the work, further explanations in the footnotes. I’m also slightly conflating EA, rationalism, and AI doomerism rather than doing the hard work of teasing them apart:

Global Health And Development:

  • Saved about 200,000 lives total, mostly from malaria1
  • Treated 25 million cases of chronic parasite infection.2
  • Given 5 million people access to clean drinking water.3
  • Supported clinical trials for both the RTS.S malaria vaccine (currently approved!) and the R21/Matrix malaria vaccine (on track for approval)4
  • Supported additional research into vaccines for syphilis, malaria, helminths, and hepatitis C and E.5
  • Supported teams giving development economics advice in Ethiopia, India, Rwanda, and around the world.6

Animal Welfare:

  • Convinced farms to switch 400 million chickens from caged to cage-free.7
  • Freed 500,000 pigs from tiny crates where they weren’t able to move around8
  • Gotten 3,000 companies including Pepsi, Kelloggs, CVS, and Whole Foods to commit to selling low-cruelty meat.

AI:

  • Developed RLHF, a technique for controlling AI output widely considered the key breakthrough behind ChatGPT.9
  • …and other major AI safety advances, including RLAIF and the foundations of AI interpretability10.
  • Founded the field of AI safety, and incubated it from nothing up to the point where Geoffrey Hinton, Yoshua Bengio, Demis Hassabis, Sam Altman, Bill Gates, and hundreds of others have endorsed it and urged policymakers to take it seriously.11
  • Helped convince OpenAI to dedicate 20% of company resources to a team working on aligning future superintelligences.
  • Gotten major AI companies including OpenAI to work with ARC Evals and evaluate their models for dangerous behavior before releasing them.
  • Got two seats on the board of OpenAI, held majority control of OpenAI for one wild weekend, and still apparently might have some seats on the board of OpenAI, somehow?12
  • Helped found, and continue to have majority control of, competing AI startup Anthropic, a $30 billion company widely considered the only group with technology comparable to OpenAI’s.13
  • Become so influential in AI-related legislation that Politico accuses effective altruists of having “[taken] over Washington” and “largely dominating the UK’s efforts to regulate advanced AI”.
  • Helped (probably, I have no secret knowledge) the Biden administration pass what they called "the strongest set of actions any government in the world has ever taken on AI safety, security, and trust.”
  • Helped the British government create its Frontier AI Taskforce.
  • Won the PR war: a recent poll shows that 70% of US voters believe that mitigating extinction risk from AI should be a “global priority”.

Other: ...

I think other people are probably thinking of this as par for the course - all of these seem like the sort of thing a big movement should be able to do. But I remember when EA was three philosophers and a few weird Bay Area nerds with a blog. It clawed its way up into the kind of movement that could do these sorts of things by having all the virtues it claims to have: dedication, rationality, and (I think) genuine desire to make the world a better place.

One detail that caught my eye from your post was the chart of willingness to pay (WTP) per QALY across countries.

I was a bit surprised to see a global average WTP of $67k per QALY, ~5x world GDP per capita, while the individual countries shown seem closer to 0.5-2x their own GDP per capita.

Eyeballing the chart below from OP's 2021 technical update makes me wonder if that discrepancy is driven by the higher WTP multipliers in LMICs:

[Figure 1 from OP's 2021 technical update: WTP multipliers by income level]

But contra my own guess, the authors say

As we’ll discuss in Appendix A, there are empirical and theoretical reasons to think that the exchange rate at which people trade off mortality risks against income gains differs systematically across income levels, with richer people valuing mortality more relative to income.

so I remain confused.
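To spell out the arithmetic behind my confusion: if the global figure is computed as population-average WTP divided by world GDP per capita, it reduces to a GDP-weighted average of country multipliers, which can't exceed the largest country multiplier; income-weighting pulls it toward rich, high-multiplier countries, but a ~5x global figure still seems to need some large economies with multipliers well above the 0.5-2x range shown. A toy sketch (numbers invented purely for illustration):

```python
# Toy sanity check (all numbers invented): the ratio
# (population-average WTP per QALY) / (world GDP per capita)
# equals a GDP-weighted average of country WTP multipliers,
# so it is dominated by high-income blocs, not by headcount.

blocs = [
    # (population in billions, GDP per capita in $, WTP multiplier)
    (1.0, 50_000, 6.0),  # hypothetical high-income bloc
    (7.0, 5_000, 1.0),   # hypothetical low/middle-income bloc
]

pop = sum(p for p, _, _ in blocs)
world_gdp_pc = sum(p * g for p, g, _ in blocs) / pop  # $10,625
avg_wtp = sum(p * g * m for p, g, m in blocs) / pop   # $41,875

print(f"global multiplier: {avg_wtp / world_gdp_pc:.1f}x")  # ~3.9x
```

Even with an aggressive 6x multiplier on the (made-up) rich bloc, the global number only reaches ~3.9x here, which makes me suspect the 5x figure comes from some other aggregation.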

Might Nuno's previous work Shallow evaluations of longtermist organizations be useful or relevant? I'm guessing you've probably seen it, just flagging in case you haven't (e.g. it wasn't in fn2).

What do you think of OWID's dissolution of the Easterlin paradox? In short:

  • OWID say Easterlin and other researchers relied on data from the US and Japan, but...
  • In Japan, life satisfaction questions in the ‘Life in Nation surveys’ changed over time; within comparable survey periods, the correlation is positive (graphic below visualises this for ~50 years of data from 1958-2007, cf. your multidecade remark)
  • In the US, growth has not benefitted the majority of people; income inequality has been rising in the last four decades, and the income and standard of living of the typical US citizen have not grown much in the last couple of decades

so there's no paradox to explain.

[Chart: GDP per capita vs. life satisfaction across survey questions]

(I vaguely recall having asked you this before and you answering, but I may be confabulating; if that happened and you're annoyed I'm asking again, feel free to ignore this.)

self-reported happiness over a short period (like 1 day)

Not exactly what you meant, but you may be interested in Jeff Kaufman's notes on his year-long happiness-logging self-experiment. My main takeaway was to be mildly more bearish on happiness logging than when I first came across the idea, based on his conclusion:

Overall my experience with logging has made me put less trust in "how happy are you right now" surveys of happiness. Aside from the practical issues like logging unexpected night wake-time, I mostly don't feel like the numbers I'm recording are very meaningful. I would rather spend more time in situations I label higher than lower on average, so there is some signal there, but I don't actually have the introspection to accurately report to myself how I'm feeling.

Scattered quotes that made me go "huh":

When I first started rating my happiness on a 1-10 scale I didn't feel like I was very good at it. At the time I thought I might get better with practice, but I think I'm actually getting worse at it. Instead of really thinking "how do I feel right now?" it's really hard not to just think "in past situations like this I've put down '6' so I should put down '6' now".

Being honest to myself like this can also make me less happy. Normally if I'm negative about something I try not to dwell on it. I don't think about it, and soon I'm thinking about other things and not so negative. Logging that I'm unhappy makes me own up to being unhappy, which I think doesn't help. Though it's hard to know because any other sort of measurement would seem to have the same problem.
