In April, when we released my interview with SBF, I attempted to very quickly explain his views on expected value and risk aversion in the episode description, but unfortunately did so in a way that was confusing and that made them sound more like a description of my views than of his.
Those few paragraphs have received substantial attention because Matt Yglesias pointed out where the reasoning could go wrong, and wasn't impressed, thinking that I'd presented an analytic error as "sound EA doctrine".
So it seems worth clarifying what I actually do think. In brief, I entirely agree with Matt Yglesias that:
- Returns to additional money are certainly not linear at large scales, which counsels in favour of risk aversion.
- Returns become sublinear more quickly when you're working on more niche cause areas like longtermism, relative to larger cause areas such as global poverty alleviation.
- This sublinearity becomes especially pronounced when you're considering giving on the scale of billions rather than millions of dollars.
- There are other major practical considerations that point in favour of risk aversion as well.
(SBF appears to think the effects above are smaller than Matt or I do, but it's hard to know exactly what he believes, so I'll set that aside here.)
———
The offending paragraphs in the original post were:
"If you were offered a 100% chance of $1 million to keep yourself, or a 10% chance of $15 million — it makes total sense to play it safe. You’d be devastated if you lost, and barely happier if you won.
But if you were offered a 100% chance of donating $1 billion, or a 10% chance of donating $15 billion, you should just go with whatever has the highest expected value — that is, probability multiplied by the goodness of the outcome [in this case $1.5 billion] — and so swing for the fences.
This is the totally rational but rarely seen high-risk approach to philanthropy championed by today’s guest, Sam Bankman-Fried. Sam founded the cryptocurrency trading platform FTX, which has grown his wealth from around $1 million to $20,000 million."
The point from the conversation that I wanted to highlight — and what is clearly true — is that for an individual who is going to spend the money on themselves, the fact that one quickly runs out of any useful way to spend the money to improve one's well-being makes it far more sensible to receive $1 billion with certainty than to accept a 90% chance of walking away with nothing.
On the other hand, if you plan to spend the money to help others, such as by distributing it to the world's poorest people, then the good done by the first dollar and the billionth dollar is much more similar than it would be if you were spending them on yourself. That greatly strengthens the case for accepting a risk of receiving nothing in exchange for a larger amount on average, relative to the personal case.
But the impacts of the first dollar and the billionth dollar aren't identical, and in fact could be very different, so calling the approach 'totally rational' was somewhere between an oversimplification and an error.
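To make that distinction concrete, here's a minimal sketch in Python. The logarithmic value-of-money curve is purely an illustrative assumption, not a curve anyone in the interview endorsed; the point is only to show how 'highest expected dollar value' and 'best choice' come apart once the value of money isn't linear:

```python
import math

# The two offers discussed above (amounts in dollars).
certain = [(1.00, 1_000_000_000)]               # 100% chance of $1bn
gamble  = [(0.90, 0), (0.10, 15_000_000_000)]   # 10% chance of $15bn, else nothing

def expected(outcomes, value=lambda x: x):
    """Probability-weighted average of value(amount) over the outcomes."""
    return sum(p * value(x) for p, x in outcomes)

# In expected dollars the gamble wins: $1.5bn versus $1bn.
print(expected(certain), expected(gamble))

# But for personal spending, extra money stops buying extra well-being very
# quickly. With a sharply concave value-of-money curve (logarithmic, floored
# at $1, and purely an illustrative assumption), the certain option wins by
# a huge margin.
personal_value = lambda x: math.log(max(x, 1))
print(expected(certain, personal_value), expected(gamble, personal_value))
```

For giving, the curve is far closer to linear over this range, which is why the case for the gamble is stronger there; the rest of this post is about the ways it still falls short of perfectly linear.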
———
Before we get to that, though, we should flag a practical consideration that is as important as getting the shape of the returns curve precisely right, or maybe more so.
As Yglesias points out, once you have started a foundation and people are building organisations and careers in the expectation of a known minimum level of funding for their field, there are particular harms to risking your entire existing endowment in a way that could leave them and their work stranded and half-finished.
While in the hypothetical your downside is meant to be capped at zero, in reality, 'swinging for the fences' with all your existing funds can mean going far below zero in impact.
The fact that many risky actions can result in an outcome far worse than if you had simply done nothing is a reason for much additional caution, one we wrote about in a 2018 piece titled 'Ways people trying to do good accidentally make things worse, and how to avoid them'. I regret that I failed to ask any questions highlighting this critical point in the interview.
(This post won't address the many other serious issues raised by the risk-taking at FTX, which, according to news reports, have gone far beyond accepting the possibility of not earning much profit, and which can't be done justice here.
If those reports are accurate, the risk-taking at FTX was not just a coin flip that came up tails: it was itself immoral and perhaps criminal, involving the misappropriation of other people's money for risky investments. This has caused incalculable harm to customers and investors, damaged trust across broader society, and set back all the causes some of FTX's staff said they wanted to help.)
———
To return to the question of declining returns and risk aversion: just as one slice of pizza is delicious but a tenth slice may not be enjoyable at all, people trying to use philanthropy to do good face 'declining marginal returns' as they incrementally try to give away more and more money.
How fast that happens is a difficult empirical question.
But if one is funding the fairly niche and neglected problems SBF said he cared the most about, it's fair to say that any foundation would find it difficult to disburse $15 billion to projects it was incredibly excited about.
That's because a foundation with $15 billion would end up providing a majority of the funding for those areas, effectively increasing the resources going towards them by a factor of two or more, and perhaps as much as five, depending on how broad a net it cast. That 'glut' of funding would result in some more mediocre projects getting the green light.
Assuming someone funded projects starting with the ones they believed would have the most impact per dollar and then worked down, the last grant made from such a large pot of money would be clearly worse than the first, and would probably have less than half the expected social impact per dollar.
So between $1 billion with certainty versus a 10% chance of $15 billion, one could make a theoretical case for either option — but if it were me I would personally lean towards taking the $1 billion with certainty.[1]
Notice that by contrast, if I were weighing up a guaranteed $1 million against a 10% chance of $15 million, the situation would be very different. For the sectors I'd be most likely to want to fund, $15 million from me, spread out over a period of years, would represent less than a 1% increase, and so wouldn't overwhelm their capacity to sensibly grow, leading the marginal returns to decline more slowly. So in that case, setting aside my personal interests, I would opt for the 10% chance of $15 million.
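Here's one way to put rough numbers on both comparisons. This is only a sketch: it assumes the value of a marginal dollar is inversely proportional to the funding an area already receives (so a donation D on top of existing funding F is worth ln(1 + D/F)), and the existing-funding figures are invented for illustration rather than estimates of any real field:

```python
import math

def impact(donation, existing_funding):
    """Illustrative model: the value of a marginal dollar is inversely
    proportional to the funding an area already receives, so a donation D
    on top of existing funding F is worth ln(1 + D/F). The functional form
    and the funding figures used below are assumptions, not real estimates."""
    return math.log1p(donation / existing_funding)

def compare(certain_amount, gamble_amount, p_win, existing_funding):
    return (impact(certain_amount, existing_funding),
            p_win * impact(gamble_amount, existing_funding))

# Large donor funding niche areas: suppose ~$5bn of existing funding,
# so $15bn would roughly quadruple the resources going to them.
print(compare(1e9, 15e9, 0.10, existing_funding=5e9))
# -> (~0.18, ~0.14): the guaranteed $1bn comes out ahead.

# Small donor: suppose ~$2bn of existing funding, so even $15m is under 1%.
print(compare(1e6, 15e6, 0.10, existing_funding=2e9))
# -> (~0.0005, ~0.0007): the 10% shot at $15m comes out ahead.
```

With these made-up inputs the guaranteed $1 billion edges out the gamble at the large scale, while the 10% shot at $15 million wins at the small scale, matching the intuitions above. Different assumptions about existing funding or the shape of the curve could flip the first result, which is why I'd only say I lean towards the certain option.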
———
Another massive real-world consideration we haven't mentioned yet which pushes in favour of risk aversion is the following: how much you are in a position to donate is likely to be strongly correlated with how much other donors are able to donate.
In practice, risk-taking around philanthropy mostly centres on investing in businesses. But businesses tend to do well and poorly together, in cycles, depending on broad economic conditions. So if your bets don't pay off, say because of a recession, there's a good chance other donors will have less to give as well. As a result, you can't just take the existing giving of other donors for granted.
This is one reason for even small donors to have a reasonable degree of risk aversion. If they all adopt a risk-neutral strategy they may all get hammered at once and have to massively reduce their giving simultaneously, adding up to a big negative impact in aggregate.
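A toy simulation of that dynamic (all numbers invented for illustration): a hundred donors each make the same risky bet, and we compare a world where their bets resolve independently with one where they all ride the same market cycle. Expected total giving is identical in both worlds, but if the impact of aggregate giving has declining returns (modelled crudely here as logarithmic), the correlated world is worse in expectation:

```python
import numpy as np

rng = np.random.default_rng(0)
N_DONORS, N_SIMS = 100, 50_000
WIN, LOSE = 3.0, 0.2   # each donor's $1 becomes $3 or $0.20, with equal odds

def mean_log_impact(correlated):
    """Average of log(total giving) across simulations. The log is a crude
    stand-in for declining returns to aggregate funding of a cause."""
    if correlated:
        # One market-wide coin flip shared by every donor in a simulation.
        per_donor = rng.choice([WIN, LOSE], size=(N_SIMS, 1)).repeat(N_DONORS, axis=1)
    else:
        # Each donor's bet resolves independently.
        per_donor = rng.choice([WIN, LOSE], size=(N_SIMS, N_DONORS))
    totals = per_donor.sum(axis=1)
    return float(np.log(totals).mean())

print("independent bets:", mean_log_impact(False))  # about log(160) ~= 5.07
print("correlated bets: ", mean_log_impact(True))   # about (log 300 + log 20) / 2 ~= 4.35
```

The gap comes entirely from the fact that in the correlated world the shortfalls arrive all at once, which is exactly when a marginal dollar would have been most valuable.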
This is a huge can of worms that has been written about by Christiano and Tomasik as far back as 2013, and more recently by my colleague Benjamin Todd.
———
This post has only scratched the surface of the analysis one could do on this question, and attempted to show how tricky it can be. For instance, we haven't even considered:
- Uncertainty about how many other donors might join or drop out of funding similar work in future.
- Indirect impacts from people funding similar work on adjacent projects.
- Uncertainty about which problems you'll want to fund solutions to in future.
I regret having swept those and other complications under the rug for the sake of simplicity. Doing so may well have confused some listeners to the show, and may have seemed like an endorsement of an approach that is risk-neutral with respect to dollar returns, which would in fact be severely misguided.
(If you'd like to hear my thoughts on FTX more generally as opposed to this technical question you can listen to some comments I put out on The 80,000 Hours Podcast feed.)
Thanks for the question Pseudonym — I had a bunch of stuff in there defending the honour of 80k/my colleagues, but took it out as it sounded too defensive.
So I'm glad you've given me a clear chance to lay out how I was thinking about the episode and the processes we use to make different kinds of content so you can judge how much to trust them.
Basically, yes: I have held the views above about risk aversion for as long as I can recall. I could probably go find supporting references for that, but I think the claim should be believable, because the idea that one should be truly risk-neutral with respect to dollars at very large amounts just obviously makes no sense, and would be in direct conflict with our focus on neglected areas (e.g. IIRC, if you hold the tractability term of our problem framework constant, you get logarithmic returns to additional funding).
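To spell out that parenthetical (this is my rough gloss rather than a quote from the framework write-up): if the value of the marginal dollar going to a problem scales inversely with the resources R already devoted to it (the neglectedness term), with importance I and tractability T held fixed, then

```latex
\frac{dU}{dR} \propto \frac{I \cdot T}{R}
\quad\Longrightarrow\quad
U(R) \propto I \cdot T \cdot \ln R
```

so each doubling of a field's funding buys roughly the same amount of impact, which is a long way from being risk neutral over dollar amounts.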
When I wrote that SBF's approach was 'totally rational', in my mind I was referring to thinking in terms of expected value in general, not to maximising expected dollar amounts, though I appreciate that this was super unclear, which is my fault.
Podcast interviews and their associated blog posts do not lay out 80,000 Hours staff's all-things-considered positions and never have (with the possible exception of Benjamin Todd talking about our 'key ideas').
They're a chance to explore ideas — often with people I partially disagree with — and to expose listeners to the diversity of views out there. For an instance of that from the same interview, I disagree with SBF on broad vs narrow longtermism but I let him express his views to provide a counterpoint to the ones listeners will be familiar with hearing from me.
The blog posts I or Keiran write to go with the episodes are rarely checked by anyone else on the team for substance. They're probably the only thing on the site that gets away with that lack of scrutiny, and we'll see whether that continues or not after this experience. So blame for errors should fall on us (and in this case, me).
Reasons for that looser practice include:
By comparison, our articles, like 'key ideas' or our AI problem profile, are debated over and commented on endlessly. On this issue, there's our short piece on 'How much risk to take'.
Not everyone agrees with every sentence of course, but little goes out without substantial review.
We could try to make the show as polished as our articles, more similar to, say, a highly produced show like Planet Money. But that would involve reducing output by more than half, which I think the audience would overall dislike (and it would also sabotage the role the podcast plays in exposing people to ideas we don't share).
You or other readers might be curious as to what was going through my head when I decided to prioritise the aspect of expected value that I did during the interview itself:
I do think I made a mistake ex ante in not connecting personal and professional downside risk more to this discussion. We had mentioned it in previous episodes and in an article I read, which went out in audio form on the podcast feed itself, but at the time I thought of seeking upside potential, and the risk of doing more harm than good, as more conceptually and practically distinct issues than I do now, after the last month.
Overall I think I screwed up a bunch of things about this episode. If you were inclined to think that the content of these interviews is reliable the great majority of the time, then this is a helpful reminder that ideas sometimes get garbled on the show, and if something sounds wrong to you, it might well just be because it's wrong. I'm sorry about that. We try to keep errors at a reasonable level, though with the format we have we'll never get them to zero.
But if it were me, I wouldn't update much on the quality of the written articles, as they're produced pretty differently and by different people.