Hi Pagw — in case you haven't seen it, here's my November 2022 reply to Oli H re Sam Bankman-Fried's lifestyle:
"I was very saddened to hear that you thought the most likely explanation for the discussion of frugality in my interview with Sam was that I was deliberately seeking to mislead the audience.
I had no intention to mislead people into thinking Sam was more frugal than he was. I simply believed the reporting I had read about him and he didn’t contradict me.
It’s only in recent weeks that I learned that some folks such as you thought the impression about his lifestyle was misleading, notwithstanding Sam's reference to 'nice apartments' in the interview:
"I don’t know, I kind of like nice apartments. ... I’m not really that much of a consumer exactly. It’s never been what’s important to me. And so I think overall a nice place is just about as far as it gets."
Unfortunately, as far as I can remember, nobody else reached out to me after the podcast to correct the record either.
In recent years, in pursuit of better work-life balance, I’ve been spending less time socialising with people involved in the EA community, and when I do, I discuss work with them much less than in the past. I also last visited the SF Bay Area way back in 2019 and am certainly not part of the 'crypto' social scene. That may help to explain why this issue never came up in casual conversation.
Inasmuch as the interview gave listeners a false impression about Sam, I am sorry about that, because we of course aim for the podcast to be as informative and accurate as possible."
Hi Misha — with this post I was simply trying to clarify that I understood and agreed with critics on the basic considerations here, in the face of some understandable confusion about my views (and those of 80,000 Hours).
So saying novel things to avoid being 'nonsubstantial' was not the goal.
As for the conclusion being "plausibly quite wrong" — I agree that a plausible case can be made for either the certain $1 billion or the uncertain $15 billion, depending on your empirical beliefs. I don't consider the issue settled, the points you're making are interesting, and I'd be keen to read more if you felt like writing them up in more detail.[1]
The question is sufficiently complicated that it would require concentrated analysis by multiple people over an extended period to do it full justice, which I'm not in a position to do.
That work is most naturally done by philanthropic program managers for major donors rather than 80,000 Hours.
I considered adding in some extra math regarding log returns and what that would imply in different scenarios, but opted not to because i) it would take too long to polish, ii) it would probably confuse some readers, iii) it could lead to too much weight being given to a highly simplified model that deviates from reality in important ways. So I just kept it simple.
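For anyone curious, here is roughly the kind of simplified model I mean (a minimal sketch only, where the existing funding pool, the probability of the upside, and log utility itself are all illustrative assumptions rather than careful estimates):

```python
import math

# Toy model: the value of philanthropic funds is logarithmic in the total
# pool available for the cause, i.e. log returns to additional funding.
# All figures below are illustrative assumptions, not estimates.

EXISTING_POOL = 10e9  # assumed funds already committed to the cause ($10b)

def log_value(extra_dollars, existing=EXISTING_POOL):
    """Value of adding extra_dollars to an existing pool under log returns."""
    return math.log(existing + extra_dollars) - math.log(existing)

certain_1b = log_value(1e9)        # Option A: a certain $1b
p = 0.5                            # assumed chance the risky path pays off
gamble_15b = p * log_value(15e9)   # Option B: $15b with probability p, else $0

print(f"certain $1b:      {certain_1b:.3f}")   # ~0.095
print(f"50% shot at $15b: {gamble_15b:.3f}")   # ~0.458
```

With these particular numbers the gamble comes out ahead, but lower the assumed probability of success (below roughly 10% here) or shrink the existing pool enough, and the ranking flips, which is exactly the sense in which the conclusion turns on your empirical beliefs.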
I'd just note that maintaining a controlling stake in TSMC would tie up >$200 billion. IIRC that's on the order of 100x as much as has been spent on targeted AI alignment work so far. For that to be roughly as cost-effective as present marginal spending on AI or other existential risks, it would have to be very valuable indeed (or you'd have to think current marginal spending was of very poor value). ↩︎
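To make the footnote's arithmetic concrete (back-of-the-envelope only; the ~$2b figure for targeted alignment spending to date is just what the 100x ratio implies, and log returns are again an assumption):

```python
import math

alignment_spent = 2e9  # assumed ~$2b on targeted alignment to date (implied by the 100x ratio)
tsmc_stake = 200e9     # >$200b tied up in a controlling stake in TSMC

print(f"{tsmc_stake / alignment_spent:.0f}x")  # ~100x all alignment spending so far

# Under log returns, a marginal dollar added to a $2b field is worth ~1/2e9
# in "log units", while the average dollar of a $200b commitment is worth
# ~ln(1 + 200/2) / 200e9. The marginal dollar wins by a wide margin:
marginal_per_dollar = 1 / alignment_spent
average_per_dollar = math.log(1 + tsmc_stake / alignment_spent) / tsmc_stake
print(f"{marginal_per_dollar / average_per_dollar:.0f}x")  # ~22x more cost-effective
```

So even before asking what a controlling stake would actually buy, the sheer scale means it has to clear a much higher bar than present marginal spending.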
Hey David, yep not our finest moment, that's for sure.
The critique writes itself, so let me offer some partial explanation:
Fortunately, one upside of the conversation format is that I think people don't give it undue weight, because they accurately perceive it as being scrappy in this way. (That said, I certainly do wish I had been more careful here, and hopefully alarm bells will be more likely to go off in my head in a future similar case!)
I don't recall people criticising this passage earlier, and I suspect that's because prior to the FTX crash it was natural to interpret it less literally, as pointing more towards a general issue.
You can hear that with the $10b vs $0/$20b comparison: as soon as I said it I realised it wasn't right and wanted to pare it back ("Maybe that’s an even bet"), because there's no expected financial gain there (a 50/50 shot at $0 or $20b has the same expected value as a certain $10b). I should have compared it against $5b or something, but couldn't come up with the right number on the spot.
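Spelling out that arithmetic (a quick illustration; the $1b baseline in the second comparison is an assumption purely to keep log utility finite):

```python
import math

# The comparison I used on the show: a certain $10b vs a 50/50 gamble
# between $0 and $20b. The expected dollar amounts are identical, so
# there is no expected financial gain either way:
print(10e9 == 0.5 * 0 + 0.5 * 20e9)  # True

# The comparison I should have made: a certain $5b vs that same gamble.
# Now the gamble has twice the expected value, so it genuinely tests risk
# aversion. With log utility over (baseline + winnings), assuming an
# illustrative $1b baseline of existing funds:
baseline = 1e9
def u(winnings):
    return math.log(baseline + winnings)

print(f"certain $5b: {u(5e9):.3f}")                      # ~22.52
print(f"gamble:      {0.5 * u(0) + 0.5 * u(20e9):.3f}")  # ~22.25
```

A log-utility donor takes the certain $5b despite the gamble's higher expected value, which is the tension the $10b framing failed to create.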
I was primarily trying to think in terms of the sorts of sums the great majority of listeners could end up dealing with, which are only very rarely above $1b; that led me to add "or not really on the scale of the amount of money that any one person can make".
If you'd criticised me for saying this in May, I would have said that I was highlighting the aspect of the issue that was novel and relevant for most listeners, and that by the time someone is a billionaire donor they will have / should have already gotten individualised advice and not be relying on an introductory interview like this to guide them. They're also likely to have become aware of the risk aversion issue just through personal experience and common sense (all the super-donors I know of certainly are aware of these issues, though I'm sure they each give it different weight).
All that said, the above passage is pretty cringe, and hopefully this experience will help us learn to steer clear of similar mistakes in future.
Thanks for the question, Pseudonym — I had a bunch of stuff in there defending the honour of 80k/my colleagues, but took it out as it sounded too defensive.
So I'm glad you've given me a clear chance to lay out how I was thinking about the episode and the processes we use to make different kinds of content so you can judge how much to trust them.
Basically, yes — I did hold the views above about risk aversion for as long as I can recall. I could probably go find supporting references for that, but I think the claim should be believable because the idea that one should be truly risk neutral with respect to dollars at very large amounts just obviously makes no sense and would be in direct conflict with our focus on neglected areas (e.g. IIRC if you hold the tractability term of our problem framework constant then you get logarithmic returns to additional funding).
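For anyone who hasn't seen why that follows, here's the derivation sketched in my own notation (not an official 80k formulation):

```latex
% Requires amsmath. Marginal cost-effectiveness in the importance/
% tractability/neglectedness decomposition, with neglectedness
% proportional to 1/(resources R already invested):
\[
  \frac{dU}{dR}
  \;=\;
  \underbrace{I}_{\text{importance}}
  \times
  \underbrace{T}_{\text{tractability}}
  \times
  \underbrace{\frac{1}{R}}_{\text{neglectedness}}
\]
% Holding I and T constant and integrating from current resources R_0:
\[
  U(R) - U(R_0) \;=\; I\,T \int_{R_0}^{R} \frac{dr}{r} \;=\; I\,T\,\ln\!\frac{R}{R_0}
\]
% i.e. returns to additional funding are logarithmic: each doubling of
% total resources adds the same amount of value.
```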
When I wrote that SBF's approach was 'totally rational', in my mind I was referring to thinking in terms of expected value in general, not to maximising expected $ amounts, though I appreciate that was super unclear, which is my fault.
Podcast interviews and their associated blog posts do not lay out 80,000 Hours staff's all-things-considered positions and never have (with the possible exception of Benjamin Todd talking about our 'key ideas').
They're a chance to explore ideas — often with people I partially disagree with — and to expose listeners to the diversity of views out there. For an instance of that from the same interview, I disagree with SBF on broad vs narrow longtermism but I let him express his views to provide a counterpoint to the ones listeners will be familiar with hearing from me.
The blog posts I or Keiran write to go with the episodes are rarely checked by anyone else on the team for substance. They're probably the only thing on the site that gets away with that lack of scrutiny, and we'll see whether that continues or not after this experience. So blame for errors should fall on us (and in this case, me).
Reasons for that looser practice include:
By comparison, our articles like 'key ideas' or our AI problem profile are debated over and commented on endlessly. On this issue there's our short piece on 'How much risk to take'.
Not everyone agrees with every sentence of course, but little goes out without substantial review.
We could try to make the show as polished as our articles, more similar to, say, a highly produced show like Planet Money. But that would involve reducing output by more than half, which I think the audience would overall dislike (and would also sabotage the role the podcast plays in exposing people to ideas we don't share).
You or other readers might be curious as to what was going through my head when I decided to prioritise the aspect of expected value that I did during the interview itself:
I do think I made a mistake ex ante in not connecting personal and professional downside risk more into this discussion. We had mentioned it in previous episodes and in an article I read, which went out in audio form on the podcast feed itself, but at the time I thought of seeking upside potential, and the risk of doing more harm than good, as more conceptually and practically distinct issues than I do now, after the last month.
Overall I think I screwed up a bunch about this episode. If you were inclined to think that the content of these interviews is reliable the great majority of the time, then this is a helpful reminder that ideas are sometimes garbled on the show — and if something sounds wrong to you, it might well just be because it's wrong. I'm sorry about that, and we try to keep it at reasonable levels, though with the format we have we'll never get it to zero.
But if it were me, I wouldn't update much on the quality of the written articles, as they're produced pretty differently and by different people.
Hi Oli — I was very saddened to hear that you thought the most likely explanation for the discussion of frugality in my interview with Sam was that I was deliberately seeking to mislead the audience.
I had no intention to mislead people into thinking Sam was more frugal than he was. I simply believed the reporting I had read about him and he didn’t contradict me.
It’s only in recent weeks that I learned that some folks such as you thought the impression about his lifestyle was misleading, notwithstanding Sam's reference to 'nice apartments' in the interview:
"I don’t know, I kind of like nice apartments. ... I’m not really that much of a consumer exactly. It’s never been what’s important to me. And so I think overall a nice place is just about as far as it gets."
Unfortunately, as far as I can remember, nobody else reached out to me after the podcast to correct the record either.
In recent years, in pursuit of better work-life balance, I’ve been spending less time socialising with people involved in the EA community, and when I do, I discuss work with them much less than in the past. I also last visited the SF Bay Area way back in 2019 and am certainly not part of the 'crypto' social scene. That may help to explain why this issue never came up in casual conversation.
Inasmuch as the interview gave listeners a false impression about Sam, I am sorry about that, because we of course aim for the podcast to be as informative and accurate as possible.
"The image, both internally and externally, of SBF was that he lived a frugal lifestyle, which it turns out was completely untrue (and not majorly secret). Was this known when Rob Wiblin interviewed SBF on the 80000 Hours podcast and held up SBF for his frugality?"
Thanks for the question, Gideon. I'll just respond to the part of this directed at me personally.
When preparing for the interview I read about his frugal lifestyle in multiple media profiles of Sam and, sadly, simply accepted it at face value. One that has stuck in my mind up until now was this video featuring Sam and the Toyota Corolla that he (supposedly) drove.
I can't recall anyone telling me that that was not the case, even after the interview went out, so I still would have assumed it was true two weeks ago.
Yes, I'd love to read about this too.
If I had to guess, I'd say this is right, and the case is even stronger when you consider the foregone impact during the extended training process, when someone isn't directly doing any good.
But I'd expect people who start a charity earlier rather than seeking additional training first to be systematically different — to start with, they're evidently more confident about their prospects, and that may be an indicator of higher underlying competence or enthusiasm. That makes direct comparison between the groups difficult.
It's sad to think how much this will set back the research agenda he was a part of. Sometimes one researcher really can move forward a field.
Bear will be missed by many, including me.