Bio

Feedback welcome: www.admonymous.co/mo-putera 

I work with the CE/AIM-incubated charity ARMoR on research distillation, quantitative modelling, consulting, MEL, and general org-boosting, supporting policies that incentivise innovation in and ensure access to antibiotics to help combat AMR. I was previously an AIM Research Program fellow, was supported by an FTX Future Fund regrant and later by Open Philanthropy's affected-grantees program, and before that I spent 6 years doing data analytics, business intelligence, and knowledge and project management in various industries (airlines, e-commerce) and departments (commercial, marketing), after majoring in physics at UCLA and changing my mind about becoming a physicist. I've also initiated some local priorities research efforts, e.g. a charity evaluation initiative with the moonshot aim of reorienting my home country Malaysia's giving landscape towards effectiveness, albeit with mixed results.

I first learned about effective altruism circa 2014 via A Modest Proposal, Scott Alexander's polemic on using dead children as units of currency to force readers to grapple with the opportunity costs of subpar resource allocation under triage. I have never stopped thinking about it since, although my relationship to it has changed quite a bit; I related to Tyler's personal story (which unsurprisingly also references A Modest Proposal as a life-changing polemic):

I thought my own story might be more relatable for friends with a history of devotion – unusual people who’ve found themselves dedicating their lives to a particular moral vision, whether it was (or is) Buddhism, Christianity, social justice, or climate activism. When these visions gobble up all other meaning in the life of their devotees, well, that sucks. I go through my own history of devotion to effective altruism. It’s the story of [wanting to help] turning into [needing to help] turning into [living to help] turning into [wanting to die] turning into [wanting to help again, because helping is part of a rich life].

Comments

Topic contributions

Any examples of interventions EA might overlook that int/a rates highly in your view? (No need to speak for others)

Can you say more about why you think that's the right benchmark to clear for climate funding, and where I could donate to if I were so persuaded?

Model covariance in cost-effectiveness analyses is a good call-out, and I don't know of anything that's been shared on the EA Forum. Apparently this is basically a solved problem in health economics, though, so there's an angle of attack for anyone reading this who's keen to give it a try. Quoting froolow:

... you'll be pleased to know that this is basically a solved problem in Health Economics which I just skimmed over in the interests of time. The 'textbook' method of solving the problem is to use a 'Cholesky Decomposition' on the covariance matrix and sample from that. In recent years I've also started experimenting with microsimulating the underlying process which generates the correlated results, with some mixed success (but it is cool when it works!).

Practitioner input, e.g. from folks like you who've noticed this and have a sense of how much the assumptions move together, would be needed to quantify the model covariance so it jibes with what's being seen.
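For concreteness, froolow's "textbook" method can be sketched in a few lines. Everything below is illustrative (the means, SDs, and the 0.6 correlation are made-up numbers, not from any real CEA):

```python
import numpy as np

# Minimal sketch of sampling correlated model inputs via a Cholesky
# decomposition of the covariance matrix. All numbers here (means, SDs,
# the 0.6 correlation) are illustrative assumptions.
rng = np.random.default_rng(0)

means = np.array([5.0, 0.5])   # e.g. cost per unit ($), effect size
sds = np.array([1.0, 0.1])
corr = np.array([[1.0, 0.6],
                 [0.6, 1.0]])  # assumed correlation between the two inputs

# Covariance = diag(sds) @ corr @ diag(sds); Cholesky gives cov = L @ L.T
cov = np.outer(sds, sds) * corr
L = np.linalg.cholesky(cov)

# Transform independent standard-normal draws into correlated draws
z = rng.standard_normal((10_000, 2))
samples = means + z @ L.T

print(np.corrcoef(samples.T)[0, 1])  # empirical correlation, close to 0.6
```

The point of the transform is that independent draws are cheap to generate, and multiplying by the Cholesky factor imprints exactly the covariance structure the practitioners specified.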

Alexander Berger's 2026 Coefficient Giving annual letter describes their shift from marginalism to "inframarginal" funding, emphasis mine:

Our mission is to help others as much as we can with the resources available to us, and historically, that’s meant operating in a “marginalist” mode, thinking hard about how to equalize marginal returns within and sometimes across cause areas. (The idea is that each additional dollar we spend should produce roughly the same amount of good if allocated to any of our funds, because if it doesn’t we should be shifting resources toward the higher-return area until it does.)

The trade-offs that drive a focus on equalizing marginal returns remain. Funding still falls well short of the opportunity set across our focus areas, and we still face some zero-sum budget decisions across worldviews and funds.

But with nearly every fund working with a higher budget in 2026, and more growth expected in the future, we need to shift more of our attention away from marginal trade-offs, and toward more ambitious goals. Many of our best grants have been deeply inframarginal: we had to recruit a founder or — effectively — incubate an organization, but once we had and there was finally something fundable, the eventual grant was far above our bar. The kind of dedicated and proactive ownership from program staff required to enable these grants can pay off in a big way, especially over time, but it trades off with a focus on optimizing the marginal dollar in the near term.

I thought the importance of taking full responsibility for a problem was captured well by Nan Ransohoff’s piece “There should be ‘general managers’ for more of the world’s important problems.” Lewis [Bollard], who Nan also points to in her post, exemplifies this in his work on farm animal welfare. Another colleague who really embodies this spirit is Andrew Snyder-Beattie, who runs our biosecurity and pandemic preparedness work. Andrew joined Coefficient in 2019 and has taken it upon himself to reduce risks from worst-case pandemics with unusual and impressive dedication and purpose. Recently, this has included work on reducing risks from mirror bacteria and a four-pillared plan to avoid engineered biological catastrophes.⁹

I’m inspired by these examples and want Coefficient to continue being a place where outstanding people’s responsibilities and ambitions can grow to match the scale of the world’s most important problems. However, I also want us to avoid the common “strategic funder” trap of thinking we have all the answers and just need to slot grantees into our vision, almost like subcontractors. An important virtue of the marginalist approach to funding is that it’s relatively strategy-agnostic: if an opportunity clears the cost-effectiveness bar, you should fund it. That tends to facilitate a healthy openness to grantees’ stronger local context and knowledge.

I want us to navigate this tension as thoughtfully as possible, bringing the ambition and ownership of general managers without losing the curiosity and humility of marginal funders. The right balance will vary across focus areas and individuals, but we should be intentional about trying to strike it.

(I do wish Berger gave a bit more detail than just "we should be intentional about trying to strike the right balance between GM and marginalist approaches", but I suppose the annual letter isn't the right place for this.)

Nan Ransohoff's piece on how there should be more GMs owning delivery of specific outcomes is a great read too (emphasis mine):

There’s a surprisingly big category of problems that are ‘orphaned.’ By ‘orphaned’ I mean: you can’t point to a specific person or organization who thinks it’s their responsibility to deliver the outcome in its entirety. Lots of people talk about the problem, and often many work on slices of it. But if you asked: ‘is there a hyper-competent person waking up every day feeling accountable for making sure this gets solved?’—the answer is very often, ‘no.’ ...

In my opinion, there should be ‘general managers’—GMs—for problems like these. These are founder-types who feel personally responsible for delivering a specific outcome (vs field-building generally); hyper-competent leaders who will pull whatever levers necessary to achieve the defined outcome. Most companies wouldn’t let an important initiative go unmanned or without a ‘directly responsible individual’ (DRI) — why are we OK not having GMs for even more wide-reaching problems? ...

[These GMs are] flexible on the details, constantly zooming in and out to readjust strategy and tactics as the problem evolves. They’re fast learners and quick to develop (or hire for) whatever skills are needed at the time. They feel deep personal responsibility to solve the problem, which means they’re likely to stick with it for years if not decades (the smallpox eradication effort took 11 years, marriage equality took 30). Great GMs have to carry the torch even when (especially when) political winds inevitably shift, public interest wanes, and funding environments worsen. They possess the conviction and stamina to lead through conditions both good and bad. 

As I've gotten more work experience (year 10 now, jeez) I've become increasingly a fan of the DRI approach, and by extension the GM ("super-senior-DRI") approach. You could think of incubators like AIM and SMA as "GM factories for orphaned problems". 

A while back I came across this slide from the Money for Good project, which I thought was a sobering quantification of how rarely donors decide based on nonprofit outperformance (cost-effectiveness, etc.). Hope Consulting got this data by surveying 4,000 US individuals with household incomes >$80k (top 30% of incomes back in 2009, comprising 75% of overall individual donations), 2,000 of whom were in the >$300k bracket.

Opportunity size for US retail donors in 2009 was ~$45B, so this works out to a ballpark $1-1.5B, which is still sizeable; e.g. it's more than total annual EA grantmaking has ever been:
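As a back-of-envelope sanity check on that ballpark (applying the ~3% outperformance-driven share from the slides to the ~$45B opportunity size):

```python
# Back-of-envelope: ~3% of retail giving is driven by nonprofit
# outperformance, applied to the ~$45B US retail-giving opportunity in 2009.
opportunity_usd = 45e9
outperformance_share = 0.03
print(f"${opportunity_usd * outperformance_share / 1e9:.2f}B")  # $1.35B
```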

How did Hope Consulting get the 3% figure? Top of funnel:

Middle of funnel shows a steep drop-off:

and:

Bottom of funnel has an even steeper drop-off, because confirmation bias is the default:

How to raise the 3% figure for donors who give based on nonprofit outperformance? Hope Consulting suggest this framing:

(I disagree with Hope Consulting on that last point, but the rest seems useful.)

What are midsized retail donors like? I used to work in marketing analytics, so this piqued my interest. MaxDiff to elicit donor value trade-offs, followed by a few rounds of cluster analysis, yielded these "donor personas":

The lack of demographic variation somewhat surprised me:

As a closing note, the Money for Good project was a major undertaking: 6 months, 4 major funders (including Rockefeller), 4 research orgs (!) partnering with Hope Consulting, etc. This makes me wonder what the 80/20 version of this could look like, with judicious use of Claude Code and such.
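For anyone curious what an 80/20 version of the MaxDiff-then-clustering step might look like, here's a toy sketch. The data, attribute count, and segment structure are entirely made up, and the minimal k-means is just illustrative, not Hope Consulting's actual pipeline:

```python
import numpy as np

# Toy sketch: cluster synthetic MaxDiff-style donor utility scores into
# segments. All data here is fabricated for illustration.
rng = np.random.default_rng(1)

# 300 "donors" x 5 value attributes, drawn from 3 latent segments
segment_centers = np.array([[2, 0, 0, 1, 0],
                            [0, 2, 1, 0, 0],
                            [0, 0, 0, 0, 2]], dtype=float)
X = np.vstack([c + 0.3 * rng.standard_normal((100, 5))
               for c in segment_centers])

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means: init from random data points, then Lloyd iterations."""
    r = np.random.default_rng(seed)
    centers = X[r.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center
        labels = ((X[:, None, :] - centers) ** 2).sum(-1).argmin(1)
        # Recompute centers, keeping the old one if a cluster goes empty
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(0)
    return labels, centers

labels, centers = kmeans(X, k=3)
```

With real survey data you'd run this for several values of k and inspect the resulting segments for interpretability, which is presumably what the "a few rounds" of cluster analysis refers to.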

I really like that phrase, "working at the border of the trivial and the profound". 

The Ross Summer Mathematics Program's motto "think deeply of simple things" also seems apt with respect to your work; I thought of it when reading your scaling series. You have a knack for finding fertile perspectives on ostensibly well-trodden topics and conveying them in luminously clear ways to nonexperts like myself, so thank you.

To add to the 1st bullet: in this 2013 Q&A, Holden talked about how GiveWell focused on "proven" interventions like bednets over "speculative" ones like biomedical research because they were easier to evaluate ("easier" being relative; even bednets were pretty hard). Even back then he was saying the speculative interventions were better, and that the partnership with Cari Tuna and Dustin Moskovitz that created GiveWell Labs (which turned into OP/CG) enabled this pivot.

a few months ago we had in this very forum a discussion, if memory serves, of the terrible philanthropic choices of MacKenzie Scott.

I couldn't find this discussion, would you mind pointing me to it? 

You're right that it isn't a WBE. Also, incentives:

To be fair, we’re not unsympathetic to why Eon used the language they did. Their careful blog post on ‘How the Eon Team Produced a Virtual Embodied Fly’ would likely have only been read by a few hundred neuroscientists, while “We’ve uploaded a fruit fly” reached millions. Startup survival requires investment, funding follows excitement, and excitement follows headlines - not careful caveats. This bold approach may even feel obligatory when an organisation’s stated mission is “solving brain emulation as an engineering sprint, not a decades-long research program.”
