Numerous EA organizations use a “multiplier” model in which they try to leverage each dollar they spend on their own operations by fundraising multiple dollars for other effective charities. My strong impression is that the number of donors who give to effective charities doing direct work is much larger than the number of donors who give to organizations that fundraise for effective charities doing direct work. I would like to understand why this is the case.
Below, I’ve listed some of the most common objections to the multiplier model I’ve heard in the EA community and in my own experience pitching The Life You Can Save (where I work) and other multiplier organizations. I’ve posted each objection as its own comment; please upvote any that apply to you. If you have a substantively different objection to the multiplier model, please add your own comment.
- I don’t believe the multipliers that fundraising organizations report (e.g. because they don’t appropriately adjust for money that would have been donated counterfactually, rely on aggressive assumptions, or ignore the opportunity cost of having people working at the multiplier organization)
- I feel an emotional “warm glow” when I give to charities that do direct work, but not when I give to multiplier organizations
- Multiplier organizations typically raise funds for a lot of different charities, and I only care about money that’s raised for the charity with the highest absolute impact
- There aren’t multiplier organizations available in the cause areas I care about
- I think multiplier organizations are significantly riskier than organizations doing direct work
- I think multiplier organizations have provided leverage in the past, but think that going forward the marginal multiplier will be lower than the average multiplier
- I’m generally skeptical of the multiplier model because it seems too good to be true
Thanks to everyone who voted and commented! It was helpful to learn more about how EAs think about multiplier orgs, and I hope it was helpful to hear my perspective from inside one of those orgs.
Here are my biggest takeaways from the discussion (apologies that it took me so long to post this):
Outcomes I’d like to see going forward:
I’d love to see someone write up an overview of the multiplier space, similar to Larks’ annual AI Alignment Literature Review and Charity Comparison. Consolidating information would make it much easier for donors to engage with the space. Something as simple as a list of organizations with a few sentences about their work, their multiplier data, and links to more info would go a long way. (Ideally this would be done by someone who doesn’t work at a multiplier org; I’ll post this as a volunteer project on EA Work Club.)
I’d hope that overview would encourage more EAs to dip their toes in the water by making a small donation to one or more multiplier orgs and/or subscribing to their mailing lists (I just did this to put some skin in the game). This is less about the actual money and more about making it more likely you’ll stay informed about their work going forward. The more you do that, the better positioned you’ll be to make your own informed decision about whether their model is working.
A final note… While I’d love to see more people donating to multiplier orgs, I’d hate to see donors naively donating to the organization with the highest multiplier, or otherwise incentivizing multiplier orgs to maximize their short-term multiplier. Ideally, both donors and organizations will prioritize strategies that maximize long-run impact, focusing on the magnitude of that impact (money moved − expenses) rather than its efficiency (money moved / expenses). For donors, I’d recommend asking 1) “do I believe in the strategy?” and 2) “do I believe the team can execute the strategy?”
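To make the magnitude-vs-efficiency distinction concrete, here’s a toy calculation. All the numbers below are made up for illustration; they don’t describe any real organization:

```python
# Toy illustration: a high-multiplier org can still produce less total
# impact than a lower-multiplier org operating at larger scale.
# All figures are hypothetical.

def net_impact(money_moved: float, expenses: float) -> float:
    """Magnitude of impact: counterfactual money moved minus expenses."""
    return money_moved - expenses

def multiplier(money_moved: float, expenses: float) -> float:
    """Efficiency of impact: counterfactual money moved per dollar spent."""
    return money_moved / expenses

# Org A: large scale, modest multiplier (hypothetical numbers)
a_moved, a_expenses = 5_000_000, 1_000_000
# Org B: small scale, impressive multiplier (hypothetical numbers)
b_moved, b_expenses = 300_000, 30_000

print(f"Org A: {multiplier(a_moved, a_expenses):.0f}x, "
      f"net impact ${net_impact(a_moved, a_expenses):,.0f}")
print(f"Org B: {multiplier(b_moved, b_expenses):.0f}x, "
      f"net impact ${net_impact(b_moved, b_expenses):,.0f}")
```

Here Org B has double Org A’s multiplier (10x vs. 5x), yet Org A’s net impact is more than ten times larger ($4,000,000 vs. $270,000). That’s the sense in which optimizing for the multiplier alone can point donors at the wrong organization.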