
An effective mental health intervention, for me, is listening to a podcast which ideally (1) discusses the thing I'm struggling with and (2) has EA, Rationality or both in the background. I gain both in-the-moment relief, and new hypotheses to test or tools to try.

Especially since it would be scalable, this makes me think that creating an EA mental health podcast would be an intervention worth testing - I wonder if anyone is considering this?

In the meantime, I'm on the lookout for good mental health podcasts in general.

This does sound like an interesting idea. And my impression is that many people found the recent mental health related 80k episode very useful (or at least found that it "spoke to them"). 

Maybe many episodes of Clearer Thinking could also help fill this role? 

Maybe one could promote specific podcast episodes of this type, see if people found them useful in that way, and if so then encourage those podcasts to have more such eps or a new such podcast to start?

Though starting a podcast is pretty low-cost, so it'd be quite reasonable to just try it without doing that sort of research first.

Incidentally, that 80k episode and some from Clearer Thinking are the exact examples I had in mind!

Maybe one could promote specific podcast episodes of this type, see if people found them useful in that way, and if so then encourage those podcasts to have more such eps or a new such podcast to start?

As a step towards this, and in case anyone else finds it independently useful, here are the episodes of Clearer Thinking that I recall finding helpful for my mental health (along with the issues they helped with).

  • #11 Comfort Languages and Nuanced Thinking (for thinking through what I need, and what loved ones need, in difficult times)
  • #21 Antagonistic Learning and Civilization (had some useful thoughts about how education has taught me that breaking rules makes me bad, whereas in reality, breaking rules is just a cost to include in my calculation of what the best action is)
  • #22 Self-Improvement and Research Ethics (getting more traction on why my attempts at self-improvement often don't work)
  • #25 Happiness and Hedonic Adaptation (hedonic adaptation seems like a very important concept for living a happier life, and this is the best discussion of it that I've heard)
  • #26 Past / Future Selves and Intrinsic Values (I recall something being useful about how I relate to past and future me)
  • #43 Online and IRL Relationships (relationships are a big part of my happiness, and this had a very dense collection of insights about how to do relationships well - other dense insights have come from reading Nonviolent Communication and doing Circling with partners)
  • #54 Self-Improvement and Behavior Change (lots of stuff; most important was realising that many "negative" behaviour patterns are actually bringing you some benefit in a convoluted way, and until you identify that benefit and find a substitute for it, they'll be very hard to change)
  • #60 Heaven and hell on earth (thinking about the value of "bad" mental states like anxiety and depression)
  • #65 Utopia on earth and morality without guilt (thinking through how I relate to my desire to do good, guilt vs bright desire; the handle of "clingy-ness" for a certain flavour of mental experiences)
  • #68 How to communicate better with the people in your life (getting more traction on why some social interactions leave me feeling disconnected/isolated)

I've been thinking about starting such an EA mental health podcast for a while now (each episode would feature a guest describing their history with EA and mental health struggles, similar to the 80k episode with Howie).

However, every EA whom I've asked to interview—only ~5 people so far, to be fair—was concerned that such an episode would be net negative for their career (by, e.g., making them less attractive to future employers or collaborators). I think such concerns are not unreasonable, though it seems easy to overestimate them.

Generally, there seems to be a tradeoff between how personal the episode is and how likely the episode is to backfire on the interviewee.

One could mitigate such concerns by making episodes anonymous (and perhaps anonymizing the voice as well). Unfortunately, my sense is that this would make such episodes considerably less valuable.

I'm not sure how to navigate this; perhaps there are solutions I don't see. I also wonder how Howie feels about having done the 80k episode. My guess is that he's happy that he did it; but if he regrets it that would make me even more hesitant to start such a podcast.

I thought about this a bunch before releasing the episode (including considering various levels of anonymity). Not sure that I have much to say that's novel but I'd be happy to chat with you about it if it would help you decide whether to do this.[1]

The short answer is:

  1. Overall, I'm very glad we released my episode. It ended up getting more positive feedback than I expected and my current guess is that in expectation it'll be sufficiently beneficial to the careers of other people similar to me that any damage to my own career prospects will be clearly worth it.
  2. It was obviously a bit stressful to put basically everything I've ever been ashamed of onto the internet :P, but overall releasing the episode has not been (to my knowledge) personally costly to me so far. 
    1. My guess is that the episode didn't do much harm to my career prospects within EA orgs (though this is in part because a lot of the stuff I talked about in the episode was already semi-public knowledge within EA, and any future EA employer would have learned about it before deciding to hire me anyway). 
    2. My guess is that if I want to work outside of EA in the future, the episode will probably make some paths less accessible. For example, I'm less sure the episode would have been a good idea if it was very important to me to keep U.S. public policy careers on the table.

[1] Email me if you want to make that happen since the Forum isn't really integrated into my workflow. 

Thanks, Howie! Sent you an email.

Ways of framing EA that (extremely anecdotally*) make it seem less ick to newcomers. These are all obvious/boring; I'm mostly recording them here for my own consolidation.

  • EA as a bet on a general way of approaching how to do good, that is almost certainly wrong in at least some ways—rather than a claim that we've "figured out" how to do the most good (like, probably no one claims the latter, but sometimes newcomers tend to get this vibe). Different people in the community have different degrees of belief in the bet, and (like all bets) it can make sense to take it even if you still have a lot of uncertainty.
  • EA as about doing good on the current margin. That is, we're not trying to work out the optimal allocation of altruistic resources in general, but rather: given how the rest of the world is spending its money and time to do good, which approaches could do with more attention? Corollary: you should expect to see EA behaviour changing over time (for this and other reasons). This is a feature not a bug.
  • EA as diverse in its ways of approaching how to do good. Some people work on global health and wellbeing. Others on animal welfare. Others on risks from climate change and advanced technology.

These frames can also apply to any specific cause area.

*like, I remember talking to a few people who became more sympathetic when I used these frames.

I like the thinking in some ways, but I think there are also some risks. For instance, emphasising that EA is diverse in its ways of doing good could make people expect it to be more so than it actually is, which could lead to disappointment. In some ways, it could be good to be upfront about some of the less intuitive aspects of EA.

Agreed, thanks for the pushback!