Hi Misha — with this post I was simply trying to clarify that I understood and agreed with critics on the basic considerations here, in the face of some understandable confusion about my views (and those of 80,000 Hours).

So saying novel things to avoid being 'nonsubstantial' was not the goal.

As for the conclusion being "plausibly quite wrong" — I agree that a plausible case can be made for either the certain $1 billion or the uncertain $15 billion, depending on your empirical beliefs. I don't consider the issue settled, the points you're making are interesting, and I'd be keen to read more if you felt like writing them up in more detail.[1]

The question is sufficiently complicated that it would require concentrated analysis by multiple people over an extended period to do it full justice, which I'm not in a position to do.

That work is most naturally done by philanthropic program managers for major donors rather than 80,000 Hours.

I considered adding in some extra math regarding log returns and what that would imply in different scenarios, but opted not to because i) it would take too long to polish, ii) it would probably confuse some readers, and iii) it could lead to too much weight being given to a highly simplified model that deviates from reality in important ways. So I just kept it simple.

  1. I'd just note that maintaining a controlling stake in TSMC would tie up >$200 billion. IIRC that's on the order of 100x as much as has been spent on targeted AI alignment work so far. For that to be roughly as cost-effective as present marginal spending on AI or other existential risks, it would have to be very valuable indeed (or you'd have to think current marginal spending was of very poor value). ↩︎

Hey David, yep not our finest moment, that's for sure.

The critique writes itself, so let me offer a partial explanation:

  1. Extemporaneous speech is full of imprecision like this where someone is focused on highlighting one point (in this case the contrast between appropriate individual vs altruistic risk aversion) and misses others. With close scrutiny I'm sure you could find many other cases of me presenting ideas as badly as that, and I'd imagine the same is true for all interview shows edited at the same level as ours.

Fortunately one upside of the conversation format is I think people don't give it undue weight, because they accurately perceive it as being scrappy in this way. (That said, I certainly do wish I had been more careful here and hopefully alarm bells will be more likely to go off in my head in a future similar case!)

I don't recall people criticising this passage earlier, and I suspect that's because prior to the FTX crash it was natural to interpret it less literally and as more pointing towards a general issue.

  2. You can hear that with the $10b vs $0/$20b comparison: as soon as I said it I realised it wasn't right and wanted to pare it back ("Maybe that’s an even bet"), because there's no expected financial gain there. I should have compared it against $5b or something, but couldn't come up with the right number on the spot.

  3. I was primarily trying to think in terms of the sorts of sums the great majority of listeners could end up dealing with, which only very rarely exceeds $1b, and that led me to add "or not really on the scale of the amount of money that any one person can make".
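To make that point concrete, here's a minimal numerical sketch (the baseline funding level is an illustrative assumption of mine, not a figure from the episode): under logarithmic returns, a 50/50 gamble between $0 and $20b matches the certain $10b in expected dollars, but loses in expected utility.

```python
import math

# Illustrative baseline: funds already committed to the cause, in $b (an assumption,
# which also keeps log() away from zero for the $0 outcome).
BASELINE_B = 1.0

def expected_log_utility(outcomes_b):
    """Expected log utility over equally likely outcomes, each in $b."""
    return sum(math.log(BASELINE_B + x) for x in outcomes_b) / len(outcomes_b)

certain_10 = expected_log_utility([10])        # take the certain $10b
gamble = expected_log_utility([0, 20])         # 50/50 between $0 and $20b

# Same expected dollars ($10b either way), but log returns prefer the certain option:
print(certain_10, gamble)   # ~2.40 vs ~1.52
print(certain_10 > gamble)  # True
```

Any concave utility function gives the same qualitative answer; log returns are just the simplest case to write down.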

If you'd criticised me for saying this in May I would have said that I was highlighting the aspect of the issue that was novel and relevant for most listeners, and that by the time someone is a billionaire donor they will have / should have already gotten individualised advice and not be relying on an introductory interview like this to guide them. They're also likely to have become aware of the risk aversion issue just through personal experience and common sense (all the super-donors I know of certainly are aware of these issues, though I'm sure they each give it different weight).

All that said, the above passage is pretty cringe, and hopefully this experience will help us learn to steer clear of similar mistakes in future.

Thanks for the question Pseudonym — I had a bunch of stuff in there defending the honour of 80k/my colleagues, but took it out as it sounded too defensive.

So I'm glad you've given me a clear chance to lay out how I was thinking about the episode and the processes we use to make different kinds of content so you can judge how much to trust them.

Basically, yes — I did hold the views above about risk aversion for as long as I can recall. I could probably go find supporting references for that, but I think the claim should be believable because the idea that one should be truly risk neutral with respect to dollars at very large amounts just obviously makes no sense and would be in direct conflict with our focus on neglected areas (e.g. IIRC if you hold the tractability term of our problem framework constant then you get logarithmic returns to additional funding).
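As a rough sketch of that parenthetical (my notation, not 80,000 Hours'): if importance and tractability are held fixed, so that marginal cost-effectiveness scales only with neglectedness — i.e. inversely with the resources $R$ already devoted to the problem — then the total good done $G$ grows logarithmically in funding:

```latex
\frac{dG}{dR} \propto \frac{1}{R}
\qquad\Longrightarrow\qquad
G(R) - G(R_0) \propto \int_{R_0}^{R} \frac{\mathrm{d}r}{r} = \ln\frac{R}{R_0}
```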

When I wrote that SBF's approach was 'totally rational', in my mind I was referring to thinking in terms of expected value in general, not to maximizing expected $ amounts, though I appreciate that was super unclear, which is my fault.

Podcast interviews and their associated blog posts do not lay out 80,000 Hours staff's all-things-considered positions and never have (with the possible exception of Benjamin Todd talking about our 'key ideas').

They're a chance to explore ideas — often with people I partially disagree with — and to expose listeners to the diversity of views out there. For an instance of that from the same interview, I disagree with SBF on broad vs narrow longtermism but I let him express his views to provide a counterpoint to the ones listeners will be familiar with hearing from me.

The blog posts I or Keiran write to go with the episodes are rarely checked by anyone else on the team for substance. They're probably the only thing on the site that gets away with that lack of scrutiny, and we'll see whether that continues or not after this experience. So blame for errors should fall on us (and in this case, me).

Reasons for that looser practice include:

  • They're usually more clearly summarising a guest's opinions rather than ours.
  • They have to be imprecise, as podcast RSS feeds set a 4,000-character limit for episode descriptions (admittedly we overrun that from time to time).
  • They're written primarily to highlight the content of the episode so interested people can subscribe and/or listen to the episode.
  • Even if the blog post is oversimplified, the interview itself should hopefully provide more subtlety.

By comparison our articles like key ideas or our AI problem profile are debated over and commented on endlessly. On this issue there's our short piece on 'How much risk to take'.

Not everyone agrees with every sentence of course, but little goes out without substantial review.

We could try to make the show as polished as articles, more similar to, say, a highly produced show like Planet Money. But that would involve reducing output by more than half, which I think the audience would overall dislike (and would also sabotage the role the podcast plays in exposing people to ideas we don't share).

You or other readers might be curious as to what was going through my head when I decided to prioritise the aspect of expected value that I did during the interview itself:

  • We hadn't explained the concept of expected value and ambition in earning to give and other careers very much before. Many listeners won't have heard of expected value, or, if they have, won't know exactly what it means. So the main goal I had in mind was to get us off the ground floor and explain the basic case there. As such these explanations were aimed at a different audience than Effective Altruism Forum regulars, who would probably benefit more from advanced material like the interview with Alan Hájek.
  • The great majority of listeners (99.9%+) are not dealing with resources on the scale of billions of dollars, and so in my mind the priority was to get them to see the case for not being very risk averse, inasmuch as they're a small fraction of all the effort going into their problem and aren't at risk of causing massive harm to others.
  • I do wish I had pointed out that this only applies if they're not taking the same correlated risks as everyone else in the field — that was a predictable mistake in my view and something that wasn't as prominent in my mind as it ought to have been or is today.
  • Among the tiny minority of people who are dealing with resources or careers at scales over $100 million, by that point they're now mostly thinking about these issues full-time or have advisors who do, and are likely to think up or be told the case for risk aversion (it should become obvious through personal experience to someone sensible in such a situation).

I do think I made a mistake ex ante not to connect personal and professional downside risk more into this discussion. We had mentioned it in previous episodes and an article I read which went out in audio form on the podcast feed itself, but at the time I thought of seeking upside potential, and the risk of doing more harm than good, as more conceptually and practically distinct issues than I do now after the last month.

Overall I think I screwed up a bunch about this episode. If you were inclined to think that the content of these interviews is reliable the great majority of the time, then this is a helpful reminder that ideas are sometimes garbled on the show — and if something sounds wrong to you, it might well just be because it's wrong. I'm sorry about that, and we try to keep it at reasonable levels, though with the format we have we'll never get it to zero.

But if it were me I wouldn't update much on the quality of the written articles as they're produced pretty differently and by different people.

Hi Oli — I was very saddened to hear that you thought the most likely explanation for the discussion of frugality in my interview with Sam was that I was deliberately seeking to mislead the audience.

I had no intention to mislead people into thinking Sam was more frugal than he was. I simply believed the reporting I had read about him and he didn’t contradict me.

It’s only in recent weeks that I learned that some folks such as you thought the impression about his lifestyle was misleading, notwithstanding Sam's reference to 'nice apartments' in the interview:

"I don’t know, I kind of like nice apartments. ... I’m not really that much of a consumer exactly. It’s never been what’s important to me. And so I think overall a nice place is just about as far as it gets."

Unfortunately as far as I can remember nobody else reached out to me after the podcast to correct the record either.

In recent years, in pursuit of better work-life balance, I’ve been spending less time socialising with people involved in the EA community, and when I do, I discuss work with them much less than in the past. I also last visited the SF Bay Area way back in 2019 and am certainly not part of the 'crypto' social scene. That may help to explain why this issue never came up in casual conversation.

Inasmuch as the interview gave listeners a false impression about Sam I am sorry about that, because we of course aim for the podcast to be as informative and accurate as possible.

"The image, both internally and externally, of SBF was that he lived a frugal lifestyle, which it turns out was completely untrue (and not majorly secret). Was this known when Rob Wiblin interviewed SBF on the 80000 Hours podcast and held up SBF for his frugality?"

Thanks for the question Gideon, I'll just respond to this question directed at me personally.

When preparing for the interview I read about his frugal lifestyle in multiple media profiles of Sam and sadly simply accepted it at face value. One that has stuck in my mind up until now was this video that features Sam and the Toyota Corolla that he (supposedly) drove.

I can't recall anyone telling me that that was not the case, even after the interview went out, so I still would have assumed it was true two weeks ago.

I did not call blockchain a Ponzi scheme (though some Ponzi schemes have been operated there I'm sure).

I said some others are saying it's a Ponzi scheme at heart, while I "am pretty skeptical of crypto having many productive applications."

Yes I'd love to read about this too.

If I had to guess I'd say this is right and the case is even stronger when you consider the foregone impact during the extended training process when someone isn't directly doing any good.

But I'd expect people who start a charity earlier rather than seeking additional training first to be systematically different — to start with they're evidently more confident about their prospects, and that may be an indicator of higher underlying competence or enthusiasm. That makes direct comparison between the groups difficult.

Thanks for doing this Trish!

Glad you're finding the show useful enough to warrant summarising. 😊

@Linch Thanks for these questions, I will definitely use them. Two quick thoughts:

"where there is strong evidence of their internal feelings (e.g. autobiographies or other detailed biographies/interview) are pretty hard on themselves (John Stuart Mill, Maurice Hilleman, Elon Musk..."

Given the way self-compassion is used in this research I would actually expect Elon Musk to show up as self-compassionate (or at least in the middle of the scale), because I doubt that he spends much time ruminating and feeling ashamed of his past mistakes, or feeling "disapproving and judgmental about his flaws and inadequacies".

The issue is that this construct of self-compassion is related to, but somewhat different from, the way the term is used in ordinary speech.

For the same reason I wouldn't assume that adult Mill or Hilleman or the other names being mentioned would show up as lacking self-compassion the way it's defined here (though some surely would).

"self-compassion is pretty opposed to growth-mindset and "I want to be stronger" attitudes"

I don't think self-compassion in the way that she uses it is in any way opposed to growth-mindset or aiming for self-improvement. You can listen to the interview and see if you're convinced! :)

That's interesting. My personal observation is that the most productive/successful people I know are more self-compassionate on average.

One of course needs to also look at people who achieve the least (or otherwise have bad lives) to avoid selecting on the dependent variable.

Among that group lack of self-compassion seems to have very high prevalence.

(If I recall, John Stuart Mill is famous for having a severe mental breakdown at 20 and radically adjusting his world-view in order to make life more liveable.)
