Cross-posted on my blog.

 

If you wanted to make a lot of money, you’d accept the need to make high-risk high-reward business decisions, like founding a company or investing in stocks, right?

Ok, what about for charitable donations? If you wanted to do a lot of good, would you donate to charities that might not have any impact, but could have a large impact?

Many people are willing to take on large risks in business, yet almost no one donates to charity in this manner. Basic scientific research gets very little in donations despite an impressive history of results. And even when people do donate, the possibly crazy yet potentially groundbreaking research – like cold fusion or curing aging – is usually left out.

The most likely people to make risky donations are effective altruists – people who pride themselves on being both rational and philanthropic in an effort to do the most good for the most people. Yet even these “warm and calculated” effective altruists tend to favor safer charities like the Against Malaria Foundation – which can pretty reliably save one life for around $3,000 – over riskier bets like the Machine Intelligence Research Institute (MIRI) – which is working to ensure human-level artificial intelligence doesn’t lead to human extinction.

 

Well, of course people shy away from riskier bets. Isn’t this risk aversion a simple irrationality that pervades all areas of life?

Actually, there are good reasons to be risk-averse in many areas of life, but charitable giving really isn’t one of them. If anything, people should be a lot riskier with their donations than with their investments.

 

Wait, how is risk aversion in business a good thing?

Due to the law of diminishing marginal utility. This law states that every good decreases in value (to you) the more of it you have. While walking home from the lab last Friday after a long week of research, I passed a pastry shop that I frequent occasionally. Feeling like I’d earned a treat, I decided to get three donuts. The first one was amazing! The next one was still pretty good. I only got halfway through the third before deciding to stop eating it. From that experience, I can tell you that I’d prefer one donut with 100% certainty to three donuts with 33% certainty, or even to three with 50% certainty.
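To make that comparison concrete, here’s a minimal sketch of the expected-utility math, using made-up utility numbers whose only job is to exhibit diminishing returns:

```python
# Illustrative (made-up) utilities for eating n donuts; the exact values
# don't matter, only that each extra donut adds less than the one before.
utility = {0: 0.0, 1: 10.0, 2: 14.0, 3: 15.0}

def expected_utility(gamble):
    """Probability-weighted utility of a gamble given as (donuts, probability) pairs."""
    return sum(p * utility[n] for n, p in gamble)

one_for_sure     = [(1, 1.00)]               # one donut with certainty
three_at_a_third = [(3, 1 / 3), (0, 2 / 3)]  # three donuts at 33%, else none
three_at_a_coin  = [(3, 0.50), (0, 0.50)]    # three donuts at 50%, else none

print(expected_utility(one_for_sure))      # 10.0
print(expected_utility(three_at_a_third))  # 5.0
print(expected_utility(three_at_a_coin))   # 7.5 -- the sure donut still wins
```

Even at 50%, the gamble loses, because the second and third donuts add so little utility on top of the first.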

The funny thing is, the law of diminishing marginal utility even applies to money – the first million dollars in your bank account matters more than the next million, and so on.

 

Ok, but if risk aversion is rational, why is it bad if people are risk-averse with their charitable donations?

Because your charitable donations aren’t primarily about you. Even though donating can feel good, the main point is furthering some cause. If you’re donating to a cause that helps many different people, each of those people has their own diminishing marginal utility (captured in what’s known as their “utility function”). If you save ten lives, you quite literally do ten times as much good as if you save one life. Consider saving the life of a student named Jane. Jane will be forever grateful to you, and the fact that you’ve already saved nine people before her won’t decrease the value of saving her life.

Past a certain point, charitable donations do face diminishing marginal utility. This is because whatever problem they address, or method they use in addressing it, starts to actually get solved. With the low-hanging fruit gone, it’s harder to make further gains. But the amounts that would have to be donated before this effect becomes significant are huge – typically much larger than anything someone who isn’t wealthy could give.

 

Ok, so the risk from high-risk high-reward charities shouldn’t be as off-putting as the risk in our personal lives?

Exactly. But there’s also another reason high-risk high-reward charities make a ton of sense. This time we’re looking at the reward.

When personal business risks pay off, they typically don’t increase your personal wealth by several orders of magnitude (with the obvious exceptions of successful high-tech entrepreneurship and winning the mega-lottery).

Charities, on the other hand, can vary vastly in impact. Even the most effective life-saving charities today need a few thousand dollars per life saved, so a dedicated person can likely save dozens of lives through donations over her lifetime (at, say, $3,000 per life, giving $100,000 works out to about 33 lives).

And that’s amazing! If you saved one person from a burning building, you’d be a hero. Donating to effective charities can allow you to be a hero dozens of times over!

But consider the scale of good you could do donating to a riskier cause.

A cure for aging would save roughly 100,000 lives every single day. Since this field gets relatively little research funding, it’s conceivable that donating to institutions working on curing aging could advance the field by more than a day – and every day the cure arrives sooner is another 100,000 lives saved.

A single donation to a charity that focuses on making sure humans don’t go extinct almost definitely will not be the deciding factor in the survival of our species. But it might. And that could be the difference between humanity going extinct and colonizing most of the observable universe.
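To see why that “might” can carry the argument, run the expected-value arithmetic with deliberately invented placeholder numbers: suppose a donation improves humanity’s survival odds by one in a billion, and a future in which we colonize the observable universe holds at least $10^{16}$ lives. Then

$$\mathbb{E}[\text{lives}] = 10^{-9} \times 10^{16} = 10^{7},$$

ten million lives in expectation. The specific numbers are pure assumptions; the point is that astronomical stakes can outweigh minuscule probabilities.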

 

Ok, that all makes sense, but I really want to make sure my charitable donations make some positive difference, and the riskier ones might have zero benefit…

That’s an understandable impulse. And the best way to decrease risk here might be the same way people decrease risk in financial markets – diversification. You might split up your donations, giving some money to safer charities to ensure you do some good, and giving more to riskier charities that are expected to do a lot more good.
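As a minimal sketch of what that split looks like in expectation (every number below is an illustrative assumption, not a real cost-effectiveness estimate):

```python
# Splitting a donation budget between a "safe" charity with a known cost per
# life and a "risky" charity with a tiny chance of an enormous payoff.
BUDGET = 3000.0              # dollars to give away
SAFE_COST_PER_LIFE = 3000.0  # assumed cost per life at the safe charity
RISKY_P = 1e-6               # assumed chance the risky bet pays off (full budget)
RISKY_PAYOFF = 1e10          # assumed lives saved if it does

def expected_lives(safe_fraction):
    safe_dollars = BUDGET * safe_fraction
    risky_dollars = BUDGET - safe_dollars
    # Assume impact scales linearly with dollars at amounts this small.
    return (safe_dollars / SAFE_COST_PER_LIFE
            + (risky_dollars / BUDGET) * RISKY_P * RISKY_PAYOFF)

for f in (1.0, 0.5, 0.0):
    print(f"{f:.0%} safe -> {expected_lives(f):,.1f} expected lives")
# 100% safe -> 1.0 | 50% safe -> 5,000.5 | 0% safe -> 10,000.0
```

On these made-up numbers the expected value is dominated by the risky charity, but any nonzero allocation to the safe one guarantees you do some good – which is exactly what diversification buys you.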

Of the riskier charities, I’ve donated to MIRI and the Future of Humanity Institute. Both are working on making sure smarter-than-human artificial intelligence doesn’t lead to human extinction, both have made impressive advances in the past, and both have relatively low budgets as it is.

Comments (12)



For discussion of risk aversion in altruism, also see Carl's "Salary or startup? How do-gooders can gain more from risky careers".

Yeah, I agree it doesn't just apply to where to donate, but also to how to get money to donate, founding non-profits, etc. Which, taken to its logical conclusion, means maybe I should angle to run for president?

Carl already explored this question too, noting in another 2012 article that it is relatively easy to go for PM of the UK.

Far more people should read Carl's old blog posts.

Thanks for the link - hopefully 80,000 Hours is able to convince some EAs to go into politics.

While it likely is true of some EAs, it's a simplistic straw man to assume that those of us who favor donating to AMF (though in practice I prefer donating to research and meta-charity more) do so due to risk aversion. Saying that would require knowing, with confidence, the expected value of a donation to MIRI.

I certainly would prefer to donate to a 0.01% chance of saving 11K lives rather than a 100% chance of saving one life. But I don't actually know that MIRI represents a superior expected-value bet.
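(Spelling out the arithmetic behind that preference: 0.0001 × 11,000 = 1.1 expected lives, versus 1 life for the certain option.)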

(See some discussion about MIRI's chance of success here and here).

Obviously different people have different motivations for their donations. I disagree that it's a straw man, though, because I wasn't trying to misrepresent any views, and I think risk aversion actually is one of the main reasons people tend to support causes such as AMF that help people "one at a time" over causes that are larger scale but less likely to succeed. MIRI's chance of success wasn't central to my argument – if you think it has basically zero net positive value, then substitute whatever cause you think actually is positive (in-vitro meat research, CRISPR research, politics, etc.). Perhaps you've already done that and think AMF still has higher expected value, in which case I would say you're not risk-averse per se, but then I'd also think you're in the minority.

For a third perspective, I think most EAs who donate to AMF do so neither because of an EV calculation they've done themselves, nor because of risk aversion, but rather because they've largely or entirely outsourced their donation decision to GiveWell. GiveWell has also written about this in some depth, back in 2011 and probably more recently as well.

http://blog.givewell.org/2011/08/18/why-we-cant-take-expected-value-estimates-literally-even-when-theyre-unbiased/

Key quote:

"This view of ours illustrates why – while we seek to ground our recommendations in relevant facts, calculations and quantifications to the extent possible – every recommendation we make incorporates many different forms of evidence and involves a strong dose of intuition. And we generally prefer to give where we have strong evidence that donations can do a lot of good rather than where we have weak evidence that donations can do far more good – a preference that I believe is inconsistent with the approach of giving based on explicit expected-value formulas (at least those that (a) have significant room for error (b) do not incorporate Bayesian adjustments, which are very rare in these analyses and very difficult to do both formally and reasonably)."

An added reason not to take expected value estimates literally (which applies to some/many casual donors, but probably not to AGB or GiveWell) is if you believe that you are not capable of making reasonable expected value estimates under high uncertainty yourself, and you're leery of long causal chains because you've developed a defense mechanism against your values being Eulered or Dutch-booked.

Apologies for the weird terminology, see: http://slatestarcodex.com/2014/08/10/getting-eulered/ and: https://en.wikipedia.org/wiki/Dutch_book

I think it's true that many outsource their thinking to GiveWell, but I think there could still be risk aversion in the thought process. Many of these people have also been exposed to arguments for higher-risk higher-reward charities, such as x-risk reduction or funding in-vitro meat research, and I think a common thought process is "I'd prefer to go with the safer and more established causes that GiveWell recommends." Even if they haven't explicitly done the EV calculation themselves, qualitatively similar thought processes may still occur.

Donating to FHI is still extremely safe on the weirdness spectrum – they're part of Oxford. Actually risky giving would be paying promising researchers directly in non-tax-deductible ways. But that is weird enough to trip people's alarms. You get no accolades for doing it; in fact, quite the opposite – you lose status when the 'obviously crazy' thing fails. We see the same thing in VC funding, where this supposed bastion of frontier-challenging risk-takers mostly engages in bandwagoning.

Is there any tax-deductible way to give promising researchers money directly (or through some third party that doesn't take a cut)? It seems like someone could set up a 501(c)(3) that allowed for that pretty easily.

One thing that makes me confident that object-level risk[1] is important in for-profit investing, but leads me to expect it to be less central in charitable work, is that I'm more confident that for-profit risk is priced correctly, or at least not way out of line with what it should be. It seems more plausible to me that there are low-risk high-return charitable opportunities, because people are generally worse at identifying and saturating those opportunities. (Although per GiveWell's post on Broad market efficiency, I now believe this effect is much less striking than I first guessed.)

[1] I'm not sure this is a correct application of "object-level", but I mean actual risk that a given investment will succeed or fail, rather than the "meta" risk that we'll fail to analyse its value correctly. I'm not super confident the distinction is meaningful.
