In April when we released my interview with SBF, I attempted to very quickly explain his views on expected value and risk aversion for the episode description, but unfortunately did so in a way that was confusing and made them sound more like a description of my views than of his.

Those few paragraphs have received substantial attention because Matt Yglesias pointed out where they could go wrong and wasn't impressed, thinking that I'd presented an analytic error as "sound EA doctrine".

So it seems worth clarifying what I actually do think. In brief, I entirely agree with Matt Yglesias that:

  • Returns to additional money are certainly not linear at large scales, which counsels in favour of risk aversion.
  • Returns become sublinear more quickly when you're working on more niche cause areas like longtermism, relative to larger cause areas such as global poverty alleviation.
  • This sublinearity becomes especially pronounced when you're considering giving on the scale of billions rather than millions of dollars.
  • There are other major practical considerations that point in favour of risk-aversion as well.

(SBF appears to think the effects above are smaller than Matt or I do, but it's hard to know exactly what he believes, so I'll set that aside here.)

———

The offending paragraphs in the original post were:

"If you were offered a 100% chance of $1 million to keep yourself, or a 10% chance of $15 million — it makes total sense to play it safe. You’d be devastated if you lost, and barely happier if you won.

But if you were offered a 100% chance of donating $1 billion, or a 10% chance of donating $15 billion, you should just go with whatever has the highest expected value — that is, probability multiplied by the goodness of the outcome [in this case $1.5 billion] — and so swing for the fences.

This is the totally rational but rarely seen high-risk approach to philanthropy championed by today’s guest, Sam Bankman-Fried. Sam founded the cryptocurrency trading platform FTX, which has grown his wealth from around $1 million to $20,000 million."

The point from the conversation that I wanted to highlight — and which is clearly true — is that for an individual who is going to spend the money on themselves, the fact that one quickly runs out of useful ways to spend the money to improve one's well-being makes it far more sensible to receive $1 billion with certainty than to accept a 90% chance of walking away with nothing.

On the other hand, if you plan to spend the money to help others, such as by distributing it to the world's poorest people, then the good done by disbursing the first dollar and the billionth dollar is much more similar than if you were spending them on yourself. That greatly strengthens the case for taking the risk of receiving nothing in return for a larger amount on average, relative to the personal case.

But: the impact of the first dollar and the billionth dollar aren't identical, and in fact could be very different, so calling the approach 'totally rational' was somewhere between an oversimplification and an error.

———

Before we get to that though, we should flag a practical consideration that is as important as, or maybe more important than, getting the shape of the returns curve precisely right.

As Yglesias points out, once you have begun a foundation and people are building organisations and careers in the expectation of a known minimum level of funding for their field, there are particular harms to risking your entire existing endowment in a way that could leave them and their work stranded and half-finished.

While in the hypothetical your downside is meant to be capped at zero, in reality, 'swinging for the fences' with all your existing funds can mean going far below zero in impact.

The fact that many risky actions can result in an outcome far worse than what would have happened if you simply did nothing is a reason for much additional caution, one that we wrote about in a 2018 piece titled 'Ways people trying to do good accidentally make things worse, and how to avoid them'. I regret that I failed to ask any questions that highlighted this critical point in the interview.

(This post won't address the many other serious issues raised by the risk-taking at FTX, which, according to news reports, have gone far beyond accepting the possibility of not earning much profit, and which can't be done justice here.

If those reports are accurate, the risk-taking at FTX was not just a coin flip that came up tails — it was immoral and perhaps criminal in itself due to the misappropriation of other people's money for risky investments. This has resulted in incalculable harm to customers, investors, and trust in broader society, and has set back all the causes some of FTX's staff said they wanted to help.)

———

To return to the question of declining returns and risk aversion — just as one slice of pizza is delicious but a tenth slice may not be enjoyable to eat at all, people trying to use philanthropy to do good do face 'declining marginal returns' as they incrementally try to give away more and more money.

How fast that happens is a difficult empirical question.

But if one is funding the fairly niche and neglected problems SBF said he cared the most about, it's fair to say that any foundation would find it difficult to disburse $15 billion to projects they were incredibly excited about.

That's because a foundation with $15 billion would end up being a majority of funding for those areas, and so effectively increase the resources going towards them by more than 2-fold, and perhaps as much as 5-fold, depending on how broad a net they tried to cast. That 'glut' of funding would result in some more mediocre projects getting the green light.

Assuming someone funds projects starting with the ones they believe will have the most impact per dollar, and then works down, the last grant made from such a large pot of money will be clearly worse, and probably have less than half the expected social impact per dollar of the first.

So between $1 billion with certainty and a 10% chance of $15 billion, one could make a theoretical case for either option — but if it were me I would personally lean towards taking the $1 billion with certainty.[1]

Notice that, by contrast, if I were weighing up a guaranteed $1 million against a 10% chance of $15 million, the situation would be very different. For the sectors I'd be most likely to want to fund, $15 million from me, spread out over a period of years, would represent less than a 1% increase, and so wouldn't overwhelm their capacity to grow sensibly, meaning marginal returns would decline far more slowly. So in that case, setting aside my personal interests, I would opt for the 10% chance of $15 million.
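To make that concrete, here is a minimal numerical sketch, assuming (purely for illustration) that philanthropic impact grows logarithmically with total funding in a cause area that already receives about $5 billion. Both the functional form and the baseline figure are assumptions I've picked for the example, not estimates from this post:

```python
import math

def impact(donation, baseline=5e9):
    """Illustrative impact of adding `donation` to a cause area already receiving
    `baseline` dollars, assuming logarithmically diminishing returns. The log form
    and the $5B baseline are assumptions made only for this sketch."""
    return math.log((baseline + donation) / baseline)

# Billion-scale choice: $1B for certain vs a 10% chance of $15B
certain_1b = impact(1e9)
gamble_15b = 0.10 * impact(15e9)

# Million-scale choice: $1M for certain vs a 10% chance of $15M
certain_1m = impact(1e6)
gamble_15m = 0.10 * impact(15e6)

print(f"$1B certain: {certain_1b:.3f}   10% of $15B: {gamble_15b:.3f}")
print(f"$1M certain: {certain_1m:.6f}   10% of $15M: {gamble_15m:.6f}")
```

Under these made-up numbers the certain $1 billion beats the 10% shot at $15 billion (0.182 vs 0.139 in the model's units), while at the million-dollar scale the gamble wins, because log returns are nearly linear over such small increments, mirroring the asymmetry described above. Different baselines or curve shapes can flip the billion-scale answer, which is why it remains a judgment call.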

———

Another massive real-world consideration we haven't mentioned yet which pushes in favour of risk aversion is the following: how much you are in a position to donate is likely to be strongly correlated with how much other donors are able to donate.

In practice, risk-taking around philanthropy mostly centres on investing in businesses. But businesses tend to do well and poorly together, in cycles, depending on broad economic conditions. So if your bets don't pay off, say, because of a recession, there's a good chance other donors will have less to give as well. As a result, you can't just take the existing giving of other donors for granted.

This is one reason for even small donors to have a reasonable degree of risk aversion. If they all adopt a risk-neutral strategy they may all get hammered at once and have to massively reduce their giving simultaneously, adding up to a big negative impact in aggregate.
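As a rough illustration of that aggregation effect, here is a minimal simulation with made-up numbers: 100 donors each make a risky bet that doubles or halves their giving, and the bets either resolve independently or all move with a single shared 'market' outcome. The donor count and payoffs are assumptions chosen only to show the shape of the problem:

```python
import random

random.seed(0)
N_DONORS, N_YEARS = 100, 10_000

def total_giving(correlated: bool) -> list[float]:
    """Simulate the field's total funding when donors' risky bets either
    succeed independently or all succeed/fail together with the market."""
    totals = []
    for _ in range(N_YEARS):
        market_up = random.random() < 0.5
        total = 0.0
        for _ in range(N_DONORS):
            up = market_up if correlated else (random.random() < 0.5)
            total += 2.0 if up else 0.5  # each donor gives 2 units in good years, 0.5 in bad
        totals.append(total)
    return totals

for label, corr in [("independent", False), ("correlated", True)]:
    t = total_giving(corr)
    tenth_percentile = sorted(t)[len(t) // 10]
    print(f"{label:11s}: mean={sum(t) / len(t):.0f}, 10th-percentile year={tenth_percentile:.0f}")
```

Both versions have the same expected total, but with independent bets the field's funding barely fluctuates, while with correlated bets roughly half of all years see total giving collapse to well under half its average, which is the 'all get hammered at once' scenario described above.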

This is a huge can of worms that has been written about by Christiano and Tomasik as far back as 2013, and more recently by my colleague Benjamin Todd.

———

This post has only scratched the surface of the analysis one could do on this question, and attempted to show how tricky it can be. For instance, we haven't even considered:

  • Uncertainty about how many other donors might join or drop out of funding similar work in future.
  • Indirect impacts from people funding similar work on adjacent projects.
  • Uncertainty about which problems you'll want to fund solutions to in future.

I regret having swept those and other complications under the rug for the sake of simplicity in a way that may well have confused some listeners to the show and seemed like an endorsement of an approach that is risk-neutral with respect to dollar returns, which would in fact be severely misguided.

(If you'd like to hear my thoughts on FTX more generally as opposed to this technical question you can listen to some comments I put out on The 80,000 Hours Podcast feed.)

Comments

Apologies for maybe sounding harsh, but I think this is plausibly quite wrong and nonsubstantive. I am also somewhat upset that such an important topic is explored in a context where substantial personal incentives are involved.

One reason is that a post that does justice to the topic should explore possible return curves, and this post doesn't even contextualize the bet against how much money EA had at the time (~$60B) or has now (~$20B) until the middle of the post, where it mentions it in passing: "so effectively increase the resources going towards them by more than 2-fold, and perhaps as much as 5-fold." Arguing that some degree of risk aversion is, indeed, implied by diminishing returns is trivial and has few practical implications.

I wish I had time to write about why I think altruistic actors probably should take a 10% chance of $15B over a 100% chance of $1B. The reverse being true would imply a very roughly ≥3x drop in marginal cost-effectiveness upon adding $15B of funding. But I basically think there would be ways to spend money scalably and at current "last dollar" margins.

In GH, this sorta follows from how OP's bar didn't change that drastically in response to a substantial change to OP funds (short of $15B, but still), and I think OP's GH last dollar cost-effectiveness changed even less.

In longtermism, it's more difficult to argue. But a bunch of grants that pass the current bar are "meh," and I think we can probably have some large investments that are better than the current ones in the future. If we had much more money in longtermism, buying a big stake in ~TSMC might be a good thing to do (and it preserves option value, among other things). And it's not unimaginable that labs like Anthropic might want to spend $10Bs in the next decade(s) to match the potential AI R&D expenses of other corporate actors (I wouldn't say it's clearly good, but having the option to do so seems beneficial).

I don't think the analysis above is conclusive or anything. I just want to illustrate what I see as a big methodological flaw of the post (not looking at actual returns curves when talking about diminishing returns) and make a somewhat grounded-in-reality case for taking substantial bets with positive EV.

Hi Misha — with this post I was simply trying to clarify that I understood and agreed with critics on the basic considerations here, in the face of some understandable confusion about my views (and those of 80,000 Hours).

So saying novel things to avoid being 'nonsubstantive' was not the goal.

As for the conclusion being "plausibly quite wrong" — I agree that a plausible case can be made for both the certain $1 billion or the uncertain $15 billion, depending on your empirical beliefs. I don't consider the issue settled, the points you're making are interesting, and I'd be keen to read more if you felt like writing them up in more detail.[1]

The question is sufficiently complicated that it would require concentrated analysis by multiple people over an extended period to do it full justice, which I'm not in a position to do.

That work is most naturally done by philanthropic program managers for major donors rather than 80,000 Hours.

I considered adding in some extra math regarding log returns and what that would imply in different scenarios, but opted not to because i) it would take too long to polish, ii) it would probably confuse some readers, iii) it could lead to too much weight being given to a highly simplified model that deviates from reality in important ways. So I just kept it simple.


  1. I'd just note that maintaining a controlling stake in TSMC would tie up >$200 billion. IIRC that's on the order of 100x as much as has been spent on targeted AI alignment work so far. For that to be roughly as cost-effective as present marginal spending on AI or other existential risks, it would have to be very valuable indeed (or you'd have to think current marginal spending was of very poor value). ↩︎

Rob,
Thanks for this clarification and acknowledgement of what happened with the podcast. Hope you're doing better since your last post.

One question on how I should be interpreting the statements describing your views:

So it seems worth clarifying what I actually do think. In brief, I entirely agree with Matt Yglesias that:

  • Returns to additional money are certainly not linear at large scales, which counsels in favour of risk aversion.
  • Returns become sublinear more quickly when you're working on more niche cause areas like longtermism, relative to larger cause areas such as global poverty alleviation.
  • This sublinearity becomes especially pronounced when you're considering giving on the scale of billions rather than millions of dollars.
  • There are other major practical considerations that point in favour of risk-aversion as well.

———

While in the hypothetical your downside is meant to be capped at zero, in reality, 'swinging for the fences' with all your existing funds can mean going far below zero in impact.

The fact that many risky actions can result in an outcome far worse than what would have happened if you simply did nothing is a reason for much additional caution, one that we wrote about in a 2018 piece titled 'Ways people trying to do good accidentally make things worse, and how to avoid them'.

———

So between $1 billion with certainty and a 10% chance of $15 billion, one could make a theoretical case for either option — but if it were me I would personally lean towards taking the $1 billion with certainty.

———

I regret having swept those and other complications under the rug for the sake of simplicity in a way that may well have confused some listeners to the show and seemed like an endorsement of an approach that is risk-neutral with respect to dollar returns, which would in fact be severely misguided.


Just wanted to clarify whether I'm meant to be interpreting these as "these are my views and they were my views at the time of the SBF podcast", or "In hindsight, I agree with these views now, but didn't hold this view at the time", or "I think I always believed this, but just didn't really think about this when we published the podcast", or something else?

The reason I ask is that the post makes it sound like the first interpretation, but if these were your views and always have been, to the point where you are saying an approach that is risk-neutral with respect to dollar returns would be "severely misguided", it seems difficult to reconcile that with the justification that the relevant quote[1] was published "for the sake of simplicity".

If you are happy to publish things like "you should just go with whatever has the highest expected value" and "this is the totally rational approach" for the sake of simplicity when you actually don't endorse the claim (or even consider it severely misguided), what does that mean about other content on 80,000 Hours? What else has been published for the sake of "simplicity" that you actually don't endorse, or consider severely misguided? I find this option hard to believe because it's not consistent with the publication/editorial standards I expect from 80,000 Hours or its Director of Research, and it's an update I'm rather hesitant about making.

Sorry if this wasn't worded as politely or kindly as it could have been, and I hope you interpret me seeking clarification here as charitable. I'm aware there may be other possibilities I'm not thinking of, and wanted to ask because I didn't want to jump to any conclusions. I'm hoping this gives you an opportunity to clarify things for me and others who might be similarly confused.

Thanks!

 

Edit: Added this quote from the podcast, taken from davidc's comment below:
"But when it comes to doing good, you don’t hit declining returns like that at all. Or not really on the scale of the amount of money that any one person can make. So you kind of want to just be risk neutral."

  1. ^

    "If you were offered a 100% chance of $1 million to keep yourself, or a 10% chance of $15 million — it makes total sense to play it safe. You’d be devastated if you lost, and barely happier if you won.

    But if you were offered a 100% chance of donating $1 billion, or a 10% chance of donating $15 billion, you should just go with whatever has the highest expected value — that is, probability multiplied by the goodness of the outcome [in this case $1.5 billion] — and so swing for the fences.

    This is the totally rational but rarely seen high-risk approach to philanthropy championed by today’s guest, Sam Bankman-Fried. Sam founded the cryptocurrency trading platform FTX, which has grown his wealth from around $1 million to $20,000 million."

     

     

Thanks for the question Pseudonym — I had a bunch of stuff in there defending the honour of 80k/my colleagues, but took it out as it sounded too defensive.

So I'm glad you've given me a clear chance to lay out how I was thinking about the episode and the processes we use to make different kinds of content so you can judge how much to trust them.

Basically, yes — I did hold the views above about risk aversion for as long as I can recall. I could probably go find supporting references for that, but I think the claim should be believable because the idea that one should be truly risk neutral with respect to dollars at very large amounts just obviously makes no sense and would be in direct conflict with our focus on neglected areas (e.g. IIRC if you hold the tractability term of our problem framework constant then you get logarithmic returns to additional funding).
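For readers who haven't seen that derivation: here is a rough sketch of the reasoning, under the common reading of the scale/tractability/neglectedness framework in which the neglectedness term makes the marginal impact of a dollar inversely proportional to the resources already going into the problem (that factoring is my gloss for illustration, not a quote from 80,000 Hours' materials):

```latex
% With scale and tractability held fixed, marginal impact scales with
% neglectedness, i.e. inversely with resources R already invested:
\frac{dU}{dR} = \frac{k}{R}
% Integrating from current resources R_0 up to R_0 + D gives
U(D) = \int_{R_0}^{R_0 + D} \frac{k}{R}\, dR = k \ln\!\left(1 + \frac{D}{R_0}\right)
% i.e. logarithmic, sharply diminishing returns to additional funding D.
```

On that picture each successive doubling of a field's funding buys roughly the same amount of impact, which is one way of formalising why risk neutrality with respect to dollars stops making sense at very large amounts.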

When I wrote that SBF's approach was 'totally rational', in my mind I was referring to thinking in terms of expected value in general, not to maximizing expected $ amounts, though I appreciate that was super unclear, which is my fault.

Podcast interviews and their associated blog posts do not lay out 80,000 Hours staff's all-things-considered positions and never have (with the possible exception of Benjamin Todd talking about our 'key ideas').

They're a chance to explore ideas — often with people I partially disagree with — and to expose listeners to the diversity of views out there. For an instance of that from the same interview, I disagree with SBF on broad vs narrow longtermism but I let him express his views to provide a counterpoint to the ones listeners will be familiar with hearing from me.

The blog posts I or Keiran write to go with the episodes are rarely checked by anyone else on the team for substance. They're probably the only thing on the site that gets away with that lack of scrutiny, and we'll see whether that continues or not after this experience. So blame for errors should fall on us (and in this case, me).

Reasons for that looser practice include:

  • They're usually more clearly summarising a guest's opinions rather than ours.
  • They have to be imprecise, as podcast RSS feeds set a 4000 character limit for episode descriptions (admittedly we overrun these from time to time).
  • They're written primarily to highlight the content of the episode so interested people can subscribe and/or listen to the episode.
  • Even if the blog post is oversimplified, the interview itself should hopefully provide more subtlety.

By comparison our articles like key ideas or our AI problem profile are debated over and commented on endlessly. On this issue there's our short piece on 'How much risk to take'.

Not everyone agrees with every sentence of course, but little goes out without substantial review.

We could try to make the show as polished as articles, more similar to say, a highly produced show like Planet Money. But that would involve reducing output by more than half, which I think the audience would overall dislike (and would also sabotage the role the podcast plays in exposing people to ideas we don't share).

You or other readers might be curious as to what was going through my head when I decided to prioritise the aspect of expected value that I did during the interview itself:

  • We hadn't explained the concepts of expected value and ambition in earning to give and other careers very much before. Many listeners won't have heard of expected value, or if they have heard of it, won't know exactly what it is. So the main goal I had in mind was to get us off the ground floor and explain the basic case there. As such these explanations were aimed at a different audience than Effective Altruism Forum regulars, who would probably benefit more from advanced material like the interview with Alan Hájek.
  • The great majority of listeners (99.9%+) are not dealing with resources on the scale of billions of dollars, and so in my mind the priority was to get them to see the case for not being very risk averse, inasmuch as they're a small fraction of all the effort going into their problem and aren't at risk of causing massive harm to others.
  • I do wish I had pointed out that this only applies if they're not taking the same correlated risks as everyone else in the field — that was a predictable mistake in my view and something that wasn't as prominent in my mind as it ought to have been or is today.
  • The tiny minority of people who are dealing with resources or careers at scales over $100 million are by that point mostly thinking about these issues full-time or have advisors who do, and are likely to think up or be told the case for risk aversion (it should become obvious through personal experience to someone sensible in such a situation).

I do think I made a mistake ex ante not to connect personal and professional downside risk more into this discussion. We had mentioned it in previous episodes and an article I read which went out in audio form on the podcast feed itself, but at the time I thought of seeking upside potential, and the risk of doing more harm than good, as more conceptually and practically distinct issues than I do now after the last month.

Overall I think I screwed up a bunch about this episode. If you were inclined to think that the content of these interviews is reliable the great majority of the time, then this is a helpful reminder that ideas are sometimes garbled on the show — and if something sounds wrong to you, it might well just be because it's wrong. I'm sorry about that, and we try to keep it at reasonable levels, though with the format we have we'll never get it to zero.

But if it were me I wouldn't update much on the quality of the written articles as they're produced pretty differently and by different people.

Overall I think I screwed up a bunch about this episode. If you were inclined to think that the content of these interviews is reliable the great majority of the time, then this is a helpful reminder that ideas are sometimes garbled on the show — and if something sounds wrong to you, it might well just be because it's wrong. I'm sorry about that, and we try to keep it at reasonable levels, though with the format we have we'll never get it to zero.

FWIW I've generally assumed that the content in those interviews is wrong pretty often; certainly I'd expect the average interview to have at least one egregious mistake.

I don't think this should be too surprising; being fully accurate for 2h+ on interesting topics is very hard.

Rob,
Thanks, I appreciated this response. I have a few thoughts, but I don't want the focus on pushbacks to give the impression I think negatively of what you said; I think overall it was a positive update. It's also easier for me to sit and push back and say things that just sound like hindsight bias, but I'm erring on the side of sharing them because I'm taking you at face value RE: these being views you have held for as long as you can recall.

As you allude to below, I think it's really hard in a podcast setting to cover all the nuances and be super precise with language, and I think that's understandable. OTOH, from the 2020 EA survey: "more than half (50.7%) of respondents cited 80,000 Hours as important for them getting involved in EA." 80,000 Hours is one of the most public-facing EA organizations, and what it publishes will often be seen as "what EA thinks". I think one initial reaction when this happened was something like "maybe 80,000 Hours doesn't really take that seriously enough" (the pushback Ben received online when tweeting the climate change problem profile was another example of how these kinds of public-facing concerns seemed to be underrated, especially because the tweet was later deleted), and I hope this will be considered more seriously when deciding what (if any) changes are appropriate going forward.

Another point: it seems a little weird to say the blog post gets away with less scrutiny because the interview provides more subtlety, and then not actually provide more subtlety in the interview, which is I think what happened here? Like, if you can't explore the nuance during the podcast because of the format, that's understandable, but it doesn't seem reasonable to then also say that you don't cover it in the accompanying blog post because you intend for the subtlety to be covered in the podcast. It's also not like you're deciding whether to include layers 5 and 6 of the nuance, but whether to include a disclaimer about a view that you personally find severely misguided.

I guess one possible suggestion might be to review the transcript/blog post and add relevant caveats and disclaimers after the podcast (especially since there's a relevant article you've already published on it). I think a general disclaimer would be an even lower-cost version, but less helpful in this specific case, where you appear to be putting aside your disagreement with SBF's views and actively not pushing back on them for the express purpose of better communication with listeners?

The great majority of listeners (99.9%+) are not dealing with resources on the scale of billions of dollars, and so in my mind the priority was to get them to see the case for not being very risk averse, inasmuch as they're a small fraction of all the effort going into their problem and aren't at risk of causing massive harm to others.

I do think harm to themselves and possibly their dependents is an important consideration here, even if they aren't operating at the scale of billions. Also, while I agree with the point about the tiny minority etc., you probably don't want to stake the reputation of 80,000 Hours or the EA movement more broadly on whether or not your listeners or guests are 'sensible'.

I agree it seems valuable to let guests talk about points of disagreement, but where you do this it seems important to be clear at some stage whether you are letting them talk about their views because you want to showcase a different viewpoint, or at least that you aren't endorsing their message, especially if the message is a potentially harmful one. Being clear about this also minimizes scenarios where you justify yourself quite reasonably, but outsiders or less charitable readers find it hard to tell the difference between the justification you've given in this comment and a world where you were endorsing SBF's views, followed by some combination of post-hoc rationalization and hindsight bias when things turned out poorly (in this case, I wouldn't consider it uncharitable if people thought you were in fact endorsing SBF's stated views, based just on the podcast and blog). I think this could be harmful not only for you, but also for 80,000 Hours and the EA movement more broadly.

Again, thanks for all your work, and I'm aware it's easier for me to sit behind a pseudonym and throw critical comments over than to actually do the work you have to do, but I'm doing this with the intention of hopefully contributing to something constructive.

Seems worthwhile to quote the relevant bit of the interview:

====

Sam Bankman-Fried: If your goal is to have impact on the world — and in particular if your goal is to maximize the amount of impact that you have on the world — that has pretty strong implications for what you end up doing. Among other things, if you really are trying to maximize your impact, then at what point do you start hitting decreasing marginal returns? Well, in terms of doing good, there’s no such thing: more good is more good. It’s not like you did some good, so good doesn’t matter anymore. But how about money? Are you able to donate so much that money doesn’t matter anymore? And the answer is, I don’t exactly know. But you’re thinking about the scale of the world there, right? At what point are you out of ways for the world to spend money to change?

Sam Bankman-Fried: There’s eight billion people. Government budgets run in the tens of trillions per year. It’s a really massive scale. You take one disease, and that’s a billion a year to help mitigate the effects of one tropical disease. So it’s unclear exactly what the answer is, but it’s at least billions per year probably, so at least 100 billion overall before you risk running out of good things to do with money. I think that’s actually a really powerful fact. That means that you should be pretty aggressive with what you’re doing, and really trying to hit home runs rather than just have some impact — because the upside is just absolutely enormous.

Rob Wiblin: Yeah. Our instincts about how much risk to take on are trained on the fact that in day-to-day life, the upside for us as individuals is super limited. Even if you become a millionaire, there’s just only so much incrementally better that your life is going to be — and getting wiped out is very bad by contrast.

Rob Wiblin: But when it comes to doing good, you don’t hit declining returns like that at all. Or not really on the scale of the amount of money that any one person can make. So you kind of want to just be risk neutral. As an individual, to make a bet where it’s like, “I’m going to gamble my $10 billion and either get $20 billion or $0, with equal probability” would be madness. But from an altruistic point of view, it’s not so crazy. Maybe that’s an even bet, but you should be much more open to making radical gambles like that.

Sam Bankman-Fried: Completely agree. ...

Hey David, yep not our finest moment, that's for sure.

The critique writes itself so let me offer some partial explanation:

  1. Extemporaneous speech is full of imprecision like this where someone is focused on highlighting one point (in this case the contrast between appropriate individual vs altruistic risk aversion) and misses others. With close scrutiny I'm sure you could find many other cases of me presenting ideas as badly as that, and I'd imagine the same is true for all interview shows edited at the same level as ours.

Fortunately, one upside of the conversation format is that I think people don't give it undue weight, because they accurately perceive it as being scrappy in this way. (That said, I certainly do wish I had been more careful here, and hopefully alarm bells will be more likely to go off in my head in a future similar case!)

I don't recall people criticising this passage earlier, and I suspect that's because prior to the FTX crash it was natural to interpret it less literally and as more pointing towards a general issue.

  2. You can hear that with the $10b vs $0/$20b comparison: as soon as I said it I realised it wasn't right and wanted to pare it back ("Maybe that’s an even bet"), because there's no expected financial gain there. I should have compared it against $5b or something, but couldn't come up with the right number on the spot.

  3. I was primarily trying to think in terms of the sorts of sums the great majority of listeners could end up dealing with, which is only very rarely above $1b, which led me to add "or not really on the scale of the amount of money that any one person can make".

If you'd criticised me for saying this in May I would have said that I was highlighting the aspect of the issue that was novel and relevant for most listeners, and that by the time someone is a billionaire donor they will have / should have already gotten individualised advice and not be relying on an introductory interview like this to guide them. They're also likely to have become aware of the risk aversion issue just through personal experience and common sense (all the super-donors I know of certainly are aware of these issues, though I'm sure they each give it different weight).

All that said, the above passage is pretty cringe, and hopefully this experience will help us learn to steer clear of similar mistakes in future.

Thanks!

While it isn't remotely what you were talking about, a point I often get confused about in my head is how you "win"/"lose" this money. Losing financially is one thing; losing through crime is another.

If FTX had lost $10bn on a standard trade (like Meta's recent foray into VR, which may or may not turn out well), we'd be having a completely different discussion. In the FTX case, their behaviour looks to have lost far more than just the capital: they caused lots of harm and lost people's respect and goodwill as well. In that sense, the trade took their capital to 0 and then caused a load of damage besides. Ex ante it was much worse than it looked, even without a discussion of utility curves.

A confusion is introduced in the quoted passage by the shift from the personal to the general.  You personally cannot lose more than all your assets, because of bankruptcy.  But bankruptcy just shifts any further losses to your creditors, so once we shift to thinking about global benefits and harms, the loss is no longer capped in that way.  

More generally, the assumption that returns can't be negative is the worrying assumption. Either SBF didn't realize that or he thought that depositors didn't matter.

One under-discussed aspect of this: What shape does your utility curve look like for negative dollar returns?

I've been trying to figure out why SBF seems not just to have been risk-neutral in his business approach, but quite probably actively risk-seeking, making correlated bets (mainly amounting to longs on crypto in general) that all crashed this year.

It seems quite possible to me that SBF saw the downside of Alameda/FTX losing $10B as not nearly as bad as the upside of them making $10B would be good. Consider:

  • Depositors losing their money means that you're taking from people mostly in developed countries who likely have some cash to spare.
  • SBF's parents are law professors who could probably help him legally if he ran into trouble.
  • Even if SBF and the rest of his leadership end up in jail, that's only harm to a small number of people, compared to the many he could help in a positive situation.
  • The ensuing media firestorm has at least made a larger number of people aware of the ideas of EA, which are compelling on their own independent of the goodness of their practitioners.

To be clear, I'm not endorsing this perspective at all... I'm just trying to see if SBF could have been reasoning along these lines, even if he wasn't doing so publicly.

For the rest of us, particularly those trying to act based on the funding being provided, I think it would have been far more helpful to actually examine the potential downside risk that SBF himself was already highlighting with his approach to risk.

This would have meant Rob asking questions like: "If you endorse these sort of high-risk double-or-nothing bets, and you've made it clear that you're not letting up on that even now that you've made billions, should we anticipate a decent likelihood of hearing that FTX has gone bankrupt sometime soon?" Visualizing, and more broadly discussing that very real possibility would have hopefully muted the impact on the EA community when it actually came to pass. And then, after dwelling on the seemingly-zero downside possibility, the natural follow-up question would dive into SBF's valuation of negative returns.

I feel like the story that Rob told fell into the classic winner's fallacy mindset of highlighting a risk someone took seemingly after it was successful. The issue was that those risks weren't just in the past.

Do you think this presentation influenced/would have influenced SBF at all?

At the time, I assumed (and got the sense through his discussion) that he was extremely sophisticated, and given his quantitative and finance skills and background, would have already been taking these points on board (but just simplifying for presentation). But it's also possible for very smart and sophisticated people to overlook some obvious things, particularly if their brains are occupied in many areas at once.

I struggle to follow the logic that would permit this risk taking in the first place, even without all these caveats. As you said:

a foundation with $15 billion would end up being a majority of funding for those areas, and so effectively increase the resources going towards them by more than 2-fold, and perhaps as much as 5-fold... by contrast... $15 million from me, spread out over a period of years, would represent less than a 1% increase.

This is indeed a big difference. If you're looking at a small-ish donation, it makes sense to ask if it's uncorrelated with other similar donations, and if yes, to take the option with the higher expected value, because over a large number of such choices it's probable that the average donation would indeed have that value. In contrast, if you're looking at a donation in the billions of dollars, this EV logic is almost entirely irrelevant: even if it were uncorrelated with other donations, you don't have a hundred or a thousand donations of this size! The idea that we can actually expect to get the EV is just wrong. We in fact never get it.

So you can decide to be more or less risk averse, but you can't really pretend you're not risking a billion dollars here and hide behind EV maximisation.
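A minimal numerical illustration of that point (the bet sizes, counts, and payoffs below are made up purely for the example): the same "10% chance of 15x" gamble behaves very differently as one of a thousand small independent bets versus a single large one.

```python
import random

random.seed(1)

def run_bets(n_bets: int, stake: float, trials: int = 10_000) -> list[float]:
    """Total payout from n_bets independent gambles, each paying 15 * stake
    with probability 0.1 and nothing otherwise (so EV = 1.5 * stake per bet)."""
    results = []
    for _ in range(trials):
        total = sum(15 * stake for _ in range(n_bets) if random.random() < 0.1)
        results.append(total)
    return results

many_small = run_bets(n_bets=1000, stake=1e6)  # a thousand $1M-scale bets
one_big    = run_bets(n_bets=1,    stake=1e9)  # a single $1B-scale bet

for name, r in [("1000 small bets", many_small), ("one big bet", one_big)]:
    mean = sum(r) / len(r)
    below_half_ev = sum(x < 0.75e9 for x in r) / len(r)  # EV is $1.5B in both cases
    print(f"{name:15s}: mean ${mean / 1e9:.2f}B, P(outcome < half of EV) = {below_half_ev:.0%}")
```

Both portfolios have the same expected value of about $1.5 billion, but only the diversified one reliably delivers something close to it; the single large bet comes in far below its EV 90% of the time, which is the sense in which the expected value is something 'we in fact never get' from a one-off gamble.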

When I listened to the interview, I briefly thought to myself that that level of risk-neutrality didn't make sense. But I didn't say anything about that to anyone, and I'm pretty sure I also didn't play through in my head anything about the actual implications if Sam were serious about it.

I wonder if we could have taken that as a red flag. If you take seriously what he said, it's pretty concerning (implies a high chance of losing everything, though not necessarily anything like what actually happened)!

I just went down a medium-sized rabbit hole of Matthew Yglesias' Substack posts related to EA/longtermism and have to say I'm extremely disappointed by the quality of his posts.

I can't comment on them directly to give him feedback because I'm not a subscriber, so I'm sharing my reaction here instead.

E.g. this one has a clickbait title and doesn't answer the question in the post, nor argue that the titular question assumes a false premise, which makes the post super annoying: https://www.slowboring.com/p/whats-long-term-about-longtermism

But after reading Will MacAskill’s book “What We Owe The Future” and the surge of media coverage it generated, I think I’ve talked myself into my own corner of semi-confusion over the use of the name “longtermist” to describe concerns related to advances in artificial intelligence. Because at the end of the day, the people who work in this field and who call themselves “longtermists” don’t seem to be motivated by any particularly unusual ideas about the long term. And it’s actually quite confusing to portray (as I have previously) their main message in terms of philosophical claims about time horizons. The claim they’re making is that there is a significant chance that current AI research programs will lead to human extinction within the next 20 to 40 years. That’s a very controversial claim to make. But appending “and we should try really hard to stop that” doesn’t make the claim more controversial.

This paragraph in the introduction to that post seems to answer the question in the post, and concisely argues that "longtermism", as it manifests in x-risk work, is not actually related to the long term? I don't follow what bothers you about the post.

I am admittedly biased, because this is far and away the thing that most annoys me about longtermist EA marketing: caring about x-risk is completely common sense if you buy the weird empirical beliefs about AI x-risk, which have nothing to do with moral philosophy. But I thought he made the point coherently and well, so long as you're happy with the (IMO correct) statement that "longtermism" in practice mostly manifests as working on x-risk.

EDIT: This paragraph is a concise, one sentence summary of the argument in the post.

In other words, there’s nothing philosophically controversial about the idea that averting likely near-term human extinction ought to be a high priority — the issue is a contentious empirical claim.

Thanks for the reply, Neel.

First I should note that I wrote my previous comment on my phone in the middle of the night when I should have been asleep long before, so I wasn't thinking fully about how others would interpret my words. Seeing the reaction to it, I see that the comment didn't add value as written and I probably should have just waited to write it later, when I could unambiguously communicate what bothered me about it at length (as I do in this comment).

To clarify, I agree with you and Yglesias that most longtermists are working on things like preventing AI from causing human extinction only a few decades from now, meaning the work is also very important from a short-term perspective that doesn't give weight to what happens after, say, 2100. So I agree with you that "longtermism" in practice mostly manifests as working on [reducing near-term] x-risk.

I also agree that there's an annoying thing about "longtermist EA marketing" related to the above. (I liked your Simplify EA Pitches to "Holy Shit, X-Risk".)

To explain what bothered me about Yglesias' post more clearly, let me first say that my answer to "What's long-term about "longtermism"?" is, in my words, the "giving significant moral weight to the many potential beings that might come to exist over the course of the long-term future (trillions upon trillions of years)" part of longtermism. Since that "part" of longtermism actually is wholly what longtermism is, one could also just answer "longtermism is long-term".

In other words, the question sounds similar to (though not exactly like) "What's liberal about liberalism?" or "What's colonial about colonialism?"

I therefore would expect a post with the title "What's long-term about "longtermism"?" to explain that longtermism is a moral view that  gives enough moral weight to the experiences of future beings that might come to exist such that the long-term future of life matters a lot in expectation given how long that future might be (trillions upon trillions of years) and how much space in the universe it might make use of (a huge number of resources beyond this pale blue dot).

But instead, Yglesias' post points out that the interventions that people who care about beings in the long-term future think are most worthwhile often look like things that people who didn't care about future generations would also think are important (if they held the same empirical beliefs about near-term AI x-risk, as some of them do).

And my reaction to that is, okay, yes Yglesias, I get it and agree, but you didn't actually argue that longtermism isn't "long term" like your title suggested you might. Longtermism absolutely is "long-term" (as I described above). The fact that some interventions favored by longtermists also look good from non-longtermist moral perspectives doesn't change that.

Yglesias:

Because at the end of the day, the people who work in this field and who call themselves “longtermists” don’t seem to be motivated by any particularly unusual ideas about the long term.

This statement is a motte in that he says "any particularly unusual ideas about the long term" rather than "longtermism".

(I think the vast majority of people care about future generations in some capacity, e.g. they care about their children and their friends' children before the children are born. Where we draw the line between this and some form of "strong longtermism" that actually is "particularly unusual" is unclear to me. E.g. I think most people also actually care about their friends' unborn children's unborn children too, though people often don't make this explicit so it's unclear to me how unusual the longtermism moral view actually is.)

If we replace the "any particularly unusual ideas about the long term" with "longtermism" then Yglesias' statement seems to become an easily-attackable bailey.

In particular, I would say that the statement seems false and uncharitable and unsubstantiated. Yglesias is making a generalization, and obviously it's a generalization that's true of some people working on reducing x-risks posed by AI, but I know it's definitely not true of many others working on x-risks. E.g. There are definitely many self-described longtermists working on reducing AI x-risk who are in fact motivated by wanting to make sure that humanity doesn't go extinct so that future people can come to exist.

While I'm not an AI alignment researcher, I've personally donated a substantial fraction of my earnings to people doing this work and do many things that fall in the movement building / field building category to try to get other people to work on reducing AI risk, and I can personally attest to the fact that I care a lot more about preventing extinction to ensure that future beings are able to come to exist and live great lives than I care about saving my own life and everyone I know and love today. It's not that I don't care about my own life and everyone else alive today (I do, tremendously), but rather that, as Derek Parfit says, the worst part about everyone dying today would by far be the loss of all future value, not 8 billion human lives being cut short.

I hope this clarifies my complaint about Yglesias' What's long-term about "longtermism"? post.

The last thing that I'll say in this comment is that I found the post via Yglesias' Some thoughts on the FTX collapse post that Rob responded to in the OP. Here's how Yglesias cited his "What's long-term about "longtermism"?" post in the FTX collapse piece:

If you are tediously familiar with the details of EA institutions, I think you’ll see my list is closer to the priorities of Open Philanthropy (the Dustin Moskovitz / Cari Tuna EA funding vehicle) than to those of the FTX Future Fund. In part, that’s because as you can see in the name, SBF was very publicly affiliated with promoting the “longtermism” idea, which I find to be a little bit confused.

As I've explained at length in this comment, I think longtermism is not confused. Contra Yglesias (though again Yglesias doesn't actually argue against the claim, which is what I found annoying), longtermism is in fact "long-term."

Yglesias is actually the one who is confused, both in his failure to recognize that longtermism is in fact "long-term" and in his conflation of the motivations of some people working on reducing near-term extinction risk from AI with "longtermism."

Again: Longtermism is a moral view that emphasizes the importance of future generations throughout the long term future. People who favor this view (self-identified "longtermist" EAs) often end up favoring working on reducing the risk of near-term human extinction from AI. People who are only motivated by what happens in the near term may also view working on this problem to be important. But that does not mean that longtermism is not "long term", because "the motivation of some people working on reducing near-term extinction risk from AI" is not "longtermism."

I want to say "obviously!" to this (because that's what I was thinking when I read Yglesias' post late last night, and it's why I was annoyed by it), but I also recognize that EAs' communications related to "longtermism" have been far from perfect and it's not surprising that some smart people like Yglesias are confused.

In my view it probably would have been better to have and propagate a term for the general idea that "creating new happy beings is a morally good, as opposed to morally neutral, matter" rather than "longtermism." Then we could just talk about the obvious fact that, under this moral view, it seems very important not to miss out on the opportunity to put the extremely large stock of resources available in our galaxy and beyond to use producing happy beings for trillions upon trillions of years to come, by e.g. allowing human extinction in the near term or otherwise failing to become grabby and endure for a long time. But this would be the subject of another discussion.

Edited to add: Sorry this post is so long. Whenever I feel like I wasn't understood in writing I have a tendency to want to write a lot more to overexplain my thoughts. In other words, I've written absurdly long comments like this before in similar circumstances. Hopefully it wasn't annoying to read it all. Obviously the time cost to me of writing it is much more than the time cost to you or others of reading it, but I'm also wary of putting out lengthy text for others to read where shorter text could have sufficed. I just know I have trouble keeping my comments concise under conditions like this, and psychologically it was easier for me to just write everything out as I wrote it. (To share, I also think doing this generally isn't a very good use of my time and I'd like to get better at not doing it, or at least not as often.)

First I should note that I wrote my previous comment on my phone in the middle of the night when I should have been asleep long before, so I wasn't thinking fully about how others would interpret my words. Seeing the reaction to it, I see that the comment didn't add value as written and I probably should have just waited to write it later, when I could unambiguously communicate what bothered me about it at length (as I do in this comment).

No worries! I appreciate the context and totally relate :) (and relate with the desire to write a lot of things to clear up a confusion!)

For your general point, I would guess this is mostly a semantic/namespace collision thing? There's "longtermism" as the group of people who talk a lot about x-risk, AI safety and pandemics because they hold some weird beliefs here, and there's longtermism as the moral philosophy that future people matter a lot.

I saw Matt's point as saying that the "longtermism" group doesn't actually need to have much to do with the longtermism philosophy, and that it's thus weird that they call themselves longtermists. They are basically the only people working on AI x-risk and so are the group associated with that worldview, and they try hard to promote it, even though this is really an empirical belief that doesn't have much to do with their longtermism.

I mostly didn't see his post as an attack or comment on the philosophical movement of longtermism.

But yeah, overall I would guess that we mostly just agree here?

There's "longtermism" as the group of people who talk a lot about x-risk, AI safety and pandemics because they hold some weird beliefs here

Interesting. When I think of the group of people called "longtermists", I think of the set of people who subscribe to (and self-identify with) some moral view that's basically "longtermism," not people who work on reducing existential risks. While there's a big overlap between these two sets of people, I think referring to e.g. people who reject caring about future people as "longtermists" is pretty absurd, even if such people also hold the weird empirical beliefs about AI (or bioengineered pandemics, etc.) posing a huge near-term extinction risk. Caring about AI x-risk or thinking the x-risk from AI is large is simply not the thing that makes a person a "longtermist."

But maybe people have started using the word "longtermist" in this way and that's the reason Yglesias worded his post as he did? (I haven't observed this, but it sounds like you might have.)

But maybe people have started using the word "longtermist" in this way and that's the reason Yglesias worded his post as he did? (I haven't observed this, but it sounds like you might have.)

Yeah, this feels like the crux. My read is that "longtermist EA" is a term used to encompass "holy shit, x-risk" EA too.

Also, in the Yglesias post that Rob wrote the OP in response to, Yglesias misrepresents SBF's view and then cites the 80k podcast as supporting this mistaken view when in fact it does not. That's just bad journalism.

Until very recently, for example, I thought I had an unpublishable, off-the-record scoop about his weird idea that someone with his level of wealth should be indifferent between the status quo and a double-or-nothing bet with 50:50 odds.

There's no way that is or ever has been SBF's view. I don't buy it and think Yglesias is just misrepresenting SBF's view. Of course SBF wouldn't be completely indifferent between keeping whatever his net worth was and taking a 50% chance of doubling it and a 50% chance of losing it all.

That I had this information made me nervous on behalf of people making plans based on his grants and his promises of money — I didn’t realize this is actually something he’s repeatedly said publicly and on the record.

Yglesias then links to the allegedly offending passage, but I have to say that the passage does not support Yglesias' assertion that SBF is/was completely risk neutral about money. Choosing a 10% chance of $15 billion over a 100% chance of $1 billion is not risk neutral. It still allows for quite a bit of risk aversion.

I didn't relisten to the full 80k interview to see if something SBF said does justify Yglesias' assertion, but from memory I feel quite sure nothing like that exists.

It still doesn't fully entail Matt's claim, but the content of the interview gets a lot closer than that description. You don't need to give it a full listen; I've quoted the relevant part:

https://forum.effectivealtruism.org/posts/THgezaPxhvoizkRFy/clarifications-on-diminishing-returns-and-risk-aversion-in?commentId=ppyzWLuhkuRJCifsx

Thanks for finding and sharing that quote. I agree that it doesn't fully entail Matt's claim, and would go further to say that it provides evidence against Matt's claim.

In particular, SBF's statement...

At what point are you out of ways for the world to spend money to change? [...] [I]t’s unclear exactly what the answer is, but it’s at least billions per year probably, so at least 100 billion overall before you risk running out of good things to do with money.

... makes clear that SBF was not completely risk neutral.

At the end of the excerpt Rob says "So you kind of want to just be risk neutral." To me the "kind of" is important to understanding his meaning. Relative to an individual making the "gamble my $10 billion and either get $20 billion or $0, with equal probability" bet for their own benefit, for the altruistic actor it's "not so crazy". Obviously it's still crazy, but Rob's point that it's not as crazy as the madness of an individual doing this for their own self-interested gain is clearly valid, given the difference in how steeply returns to spending diminish for a single individual versus all moral patients in the world (present and future) combined.

Yglesias' statement that SBF thought "someone with his level of wealth should be indifferent between the status quo and a double-or-nothing bet with 50:50 odds" is clearly false, though only a few words different from SBF's agreement with Rob that an altruist doing this is "not so crazy" compared to a person doing it for self-interested reasons. So I agree "the content of the interview gets a lot closer than that description," but I also think Yglesias just did a bad job interpreting the interview. But who knows, maybe SBF misspoke to Yglesias in person and most of the reason Yglesias had for believing SBF took that view was actually the words SBF spoke to him in person.
