
What I like about EA (other than the aim of finding ways to help others more effectively, and putting them into practice)

Since I got into EA, I've struggled to accept some of its premises, their implications, and some of the social norms in the community. I've come back from EAG conferences completely exhausted and really pessimistic about EA, and have considered "quitting". However, over the last months (maybe the last year), I've discovered that there are a few things I like about EA that are not part of its baseline philosophy.

Acknowledging these has helped me interact with EA and longtermism more, and made me feel less stressed when doing so. Funnily enough, I'm also less defensive when I see bad-faith criticisms of EA, because I have a clearer sense of why I like the community.

What I like about EA:

  • Openness about uncertainty, to the extent that it is a norm to communicate epistemic status and uncertainty clearly
  • Related: beliefs are stated and questioned publicly, and although it sometimes feels a bit like a "sect", in reality the threshold for offering constructive criticism/feedback that can lead to change (even in pretty fundamental beliefs) is fairly low
  • Very solution-oriented thinking (which I sometimes lack in e.g. Social Sciences academia-adjacent circles)
  • Lots of EAs are very approachable and nice
  • Although there are some creeps, EA events made me (identifying and being read as a woman) feel very safe (shoutout to the EAGxBerlin Rave afterparty)

This list is not exhaustive, and I wrote it in about 15 minutes. These points are not my main motivation for being interested in EA, but they have definitely helped me feel part of the community.

I might make an opposite list at some point.

About the Sleeping Beauty Problem.

Epistemic status: this is a quick reaction to the latest 80,000 Hours podcast episode with Joe Carlsmith. This was my first encounter with the anthropic principle, and I haven't read up on it since, so my argument might be easily debunked, or the statement in question might be a misrepresentation of the thought experiment.

In episode #152 of the 80,000 Hours podcast, featuring Joe Carlsmith, Rob Wiblin states that if one thinks Sleeping Beauty should put 2/3 credence on Heads (or whichever outcome leads to her being woken up twice, with the memory of the first awakening erased), this creates a problematic conclusion: an event which creates more observers, such as Sleeping Beauty observing the awakening twice in the Heads scenario, would thus be more likely.

However, this seems to me like a misguided interpretation of the view. Putting 2/3 credence on Heads doesn't make the event itself any more likely; it is simply the better guessing strategy for an observer who has to work out which group of observers they belong to, as the rough simulation below illustrates.
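To make that concrete, here is a minimal Monte Carlo sketch in Python (my own illustration, not from the podcast), assuming the framing above in which Heads leads to two awakenings with the memory erased and Tails to one. The function name and counters are purely illustrative; the point is only that, from the perspective of an awakening, about 2/3 of awakenings occur in the Heads branch, even though the coin itself remains fair.

```python
import random

def simulate(num_experiments: int = 100_000) -> float:
    """Sketch of the Sleeping Beauty setup as framed above (assumption:
    Heads -> woken twice with memory erased, Tails -> woken once).
    Returns the fraction of awakenings in which the coin was Heads."""
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(num_experiments):
        coin = random.choice(["Heads", "Tails"])
        awakenings = 2 if coin == "Heads" else 1
        total_awakenings += awakenings
        if coin == "Heads":
            heads_awakenings += awakenings
    return heads_awakenings / total_awakenings

if __name__ == "__main__":
    # Tends towards ~0.667, even though Heads occurs in only half of the experiments.
    print(f"Fraction of awakenings that are Heads-awakenings: {simulate():.3f}")
```

If Beauty guesses "Heads" at every awakening, she is right in roughly two out of three awakenings, while across whole experiments Heads still comes up half the time. That is exactly the distinction I have in mind: "more likely per observer-moment" is not the same as "more likely per event".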
