
Summary

My perspective is that EA as a community, movement, and philosophy perpetuates ideas and environments that are harmful for mental health, putting EAs at a disproportionate risk of having poor individual mental health. I argue that EA is bad for mental health for systemic reasons[1].

In this post, I present:

  1. Risk factors for psychological harm that I believe to be predictable, neglected, and tractable to address on a shorter-term scale without drastic systemic change.
  2. Suggestions for how to mitigate those risk factors.
  3. What I see as fundamental incompatibilities between EA as an ideology and basic principles of mental health. (Problems that can't be addressed while EA remains the same philosophy.)

Preamble

I attempt to describe risk factors as they are, and to propose solutions that are concrete and realistic. However, I am approaching this purely from a mental health background, without much consideration for impact or cost-effectiveness. I do this for two reasons:

  1. To emphasize what I believe is the nature of the problem (especially for readers who are not familiar with certain aspects of mental health); and
  2. To acknowledge the highly subjective nature of assessing the scale and impact of these problems and whether they warrant action. Many of these issues seem difficult or impossible to assess in scale/impact, but I think that having messy and vibe-based conversations about these topics is a better starting point than nothing.

While I also highlight psychological harm that I believe to be facilitated by or even directly caused by EA, it's worth remembering that not every instance of harm can be prevented, nor is every instance that can be prevented worth eliminating. I am not necessarily able to accurately distinguish these, and again my starting point is to raise awareness of these ideas and support my naive sentiment that we could do something to meaningfully address these if we deemed it important and decided to take action.

Risk factors and suggestions for mitigation

Introductory EA content promotes the harmful idea of maximization

I believe that the EA Handbook, introductory reading groups, and certain canonical EA books can easily give newcomers the impression that EA endorses maximization as an ideal. I believe that maximization is objectively unsustainable and unhealthy for most people[2], and that the glorification of maximization also contributes to imposter syndrome in the community, based on the idea that non-maximizers are somehow morally lacking and "not good enough" as human beings.

It seems plausible to me that only a small fraction of EAs identify as maximizers. (Well, either that or most of us here really are imposters.) Why does EA as a community promote maximization as canon if only a small number of us actually resonate with it? I believe our current inclusion of it is not purely academic. Imposter syndrome, shame, not feeling good enough, unhealthy comparison to unrealistic standards, and people needing to speak up against maximization: these are common and recurring themes in the community. There must be some reason for it, and even if it turns out maximization is a minor offender, I think these symptoms of poor mental health are worth thinking about and potentially addressing.

I also think it is easy to underestimate the impact of careless messaging about maximization. For example, many people are introduced to EA at an age where they're impressionable: during their early career, university years, or even during high school. I was in an introductory EA virtual reading group not that long ago, and students in my group felt the need to justify why they weren't doing more and aiming to have extraordinary impact, despite being exposed to completely new concepts and not yet having careers. Sure, the groups are designed to make us think critically, but I suspect that most graduates of these groups do not actually join the EA community[3], so they may be left with a mix of inspiration and inadequacy that there is no further opportunity to positively shape. Perhaps we could improve psychological safety from the outset, across the whole spectrum: from people who only stumble across an EA article once, to highly engaged EAs who may still feel inadequate about their contributions in the absence of effective messaging.

Suggestions: We could discuss maximization as a community and decide whether we really want to promote it in our messaging. If we decide otherwise, we could specifically normalize non-maximization by producing a few high-quality examples or recommendations designed with lifestyle balance and mental wellbeing in mind. The EA Handbook could be tweaked to reflect this, and CEA could establish a recommendation for introductory online and in-person groups to promote, or at least reference, these concepts.

EA fosters conditions that trigger imposter syndrome

I suspect that maximization is just one of multiple contributors to the prevalence of imposter syndrome in the community. Imposter syndrome remains a more or less understudied and unsolved problem in psychology, but that doesn't mean we can't do anything to target it, even if experimentally. One simple idea is to talk very specifically about it: imposter syndrome has at least one root in shame, and shame is dispelled by sharing about it. Heck, why not make the most of it and run a study on imposter syndrome within EA? We might even discover some new insights about how to prevent or address it.

Suggestions: We could commission someone to make a high quality blog/article about imposter syndrome and promote that on the EA Forum or in groups. We could also design an in-person workshop on imposter syndrome that local groups could aim to run themselves once a year. These workshops could also be run at EAG/EAGx conferences. Someone could do an informal qualitative analysis of imposter syndrome among EAs as a project.

The EA Forum and other EA-adjacent online communities foster extreme perspectives

A lot of EA discourse happens asynchronously online, especially on the EA Forum, on EA-adjacent forums (such as LessWrong), and on the various platforms of "EA+ influencers" (e.g. Astral Codex Ten). A common phenomenon with online forums is that they tend to skew and amplify biases, normalize confrontational discussion norms, perpetuate misunderstandings, overrepresent controversies and extreme views, escalate interpersonal dramas, and encourage mob mentality[4]. It seems evident to me that the EA Forum exhibits all of these signs, and I'm not aware that anything meaningful has been done to counteract this natural phenomenon.

The important thing to note here is that in-person communication often directly avoids these issues, so having some balance between synchronous and asynchronous interaction may help avoid the worst of them. For example, a newcomer to EA might pick up harmful notions of maximization from online content, then later shed those notions as a result of attending in-person EA events.

An extreme example of this is the online drama that unfolded regarding Nonlinear roughly a year ago. I think this is pretty much a textbook example of forums facilitating unnecessary drama. I think it's reasonable to estimate that EAs witnessing and getting sucked into the online discussion and investigations collectively spent thousands of hours for virtually no gain while having no relevant connection to any of the parties or outcomes involved in the first place.

While it's unhelpful to suggest that the primary people involved could have simply behaved differently, I think there are a couple of possible measures for reducing the amount of "collateral damage" in case of a similar event unfolding in the future.

Suggestions: For the sake of community norms, we could learn from past mistakes by acknowledging that the public-facing actions taken by certain individuals in the Nonlinear drama were probably unconditionally toxic and inappropriate. That is, all of their statements could have been 100% true, their intentions entirely good, and their chosen course of action would still have been objectively harmful to the community because of the natural consequences of that method of communication. We could promote awareness of this phenomenon, of the pitfalls of seeking trial and justice in the court of public opinion, and of recommendations for how to resolve seemingly impossible conflicts. The EA Forum could forbid posts that meet certain functional criteria of being a "hit piece". The admin team could also follow a guideline for recognizing and deleting such posts, intervening after some initial delay in which the drama spiral becomes evident.[5]

EA materials use violent language

From the perspective of Nonviolent Communication (NVC)[6], I believe that EA canon is often expressed using violent language, such as "shoulds", absolutes, and logic that implicitly denies alternative moral perspectives. Although I'm not sure if this is deliberate, it also doesn't seem coincidental. Violent communication can go hand in hand with unsafe debate, overconfidence, epistemic misrepresentation, shame, and polarizing effects on readers (including a tendency to increase cognitive bias). I suspect that it also contributes to the other risk factors highlighted so far.

Suggestion: Continue encouraging one another to express epistemic uncertainties and truthiness, and acknowledge the unconditional validity of alternative perspectives (even irrational and "wrong" opinions) while avoiding "shoulds" and absolute claims about reality/truth.

EA as a movement appeals to people trying to fill a psychological void

Social movements often resonate with individuals who are seeking a sense of purpose or looking to meet certain social needs such as identity or belongingness, to the extent that these needs sometimes play a more critical role than whether the beliefs of the group are consistent with the individual's internal values and beliefs. EA is not exempt from this phenomenon, especially as EA targets and attracts young people. Young people are more impressionable and more likely than older people to engage in movements in alignment with belongingness needs[7].

I believe it is worth considering the psychological impact that EA may have on:

  1. People who hear about it casually as a one-off, e.g. stumbling across a random article
  2. People who commit to learning more about EA but do not end up joining the community
  3. People who become "moderately/highly engaged EAs" and then leave the community within a few years
  4. People who resonate with ideas from EA and continue to do so without any first-hand interaction with the community e.g. working alongside colleagues who happen to be EAs and being curious about it without specific action or commitment
  5. Long-term moderately/highly engaged EAs.

It seems to me that unhealthy messaging in EA can and does have long-term impacts on some people across most or all of these categories. But my point isn't just about unhealthy messaging in general; it's about the selection effects that a social movement may have and the impacts of those. EA has some level of exclusivity in how it selects people[8], which is normal and fine, but the type of exclusivity that seems to be present and might be harmful is elitism. Belongingness is a natural "question" arising in any movement. Elitism modifies that question into some form of "Am I good enough? Will I be accepted despite my imperfections?"

An example is that whenever I talk to someone who wants to work in AI safety, we end up talking about their expected impact, how they feel about it, whether they think they have the raw talent to have a chance of doing impactful work, and how they feel about taking the spot for a specific opportunity, knowing that it might have been more morally correct to leave that spot to a hypothetical person who is smarter and more committed than them. While it is natural to wonder about these things, it is also concerning that we have essentially normalized getting people to question whether they're good enough. EA messaging and ideology directly contribute to this psychologically unhealthy comparison of our moral worth.[9] Many other examples of elitism in EA can be found here.

Another form of psychological harm occurs when someone joins EA for unsustainable reasons (e.g. subconsciously seeking to meet social needs while thinking that it's due to genuine intellectual alignment), over-identifies with EA as a coping mechanism, and then eventually burns out and has a "tragic", disillusionment-style falling-out with the community. A more detailed description of this idea can be found here: “My Model Of EA Burnout” (Logan Strohl). Although I'm saying this type of journey is harmful, not all harm is preventable; all movements "facilitate" this type of journey being possible. That said, exclusivity and elitism tend to enable greater harm in the disillusionment process. This harm can be present across any of the 5 categories of exposure to EA. For example, even someone who remains a dedicated EA for the rest of their life might still go through a burnout process, and during the rougher periods before resolution, they might have expressed that disillusionment in ways that harmed other people, not just themselves. You probably know multiple people in your life who went through such a process and had a "toxic phase".

Suggestion: EA could reduce the harm that it facilitates by providing better tools and support to help people understand which parts of EA are beneficial and practical for them to integrate, and which parts are not, as well as reassuring people that it is okay to disagree with EA meta. If this ends up helping some people realize that EA is not for them, that is a positive and healthy outcome for multiple parties. On a more systemic level, the community could make a conscious decision about whether to make EA ideology less elitist and exclusive.

Some EA sub-communities are said to be extremely toxic

This is more of a placeholder to acknowledge that I've come across several people's accounts of extremely concerning sub-communities. On a surface level, these descriptions seem reminiscent of in-group/out-group dynamics, status games, and issues such as discrimination, favoritism, sexual harassment, coercion, power-seeking, etc. But I have too little exposure to be able to say anything more.

Suggestion: Hire professionals to evaluate cultural safety in sub-communities where there are a lot of complaints.

The ideological and cultish aspects of EA discourage openness, diversity, and critical thinking

Plenty before me have highlighted these two aspects. My previously mentioned suggestions could alleviate these downsides.

EA+ online communities normalize bad self-care

I'm concerned by my impression that many EAs and rationalists are insular when it comes to actually good mental health advice. My impression is based (unreliably) on seeing many posts[10] that look to me like a pattern of rationalists trying to re-invent the wheel when it comes to mental/emotional wellbeing rather than referring to existing bodies of knowledge. Although these posts aren't necessarily super popular (mental health being a relatively less visible topic in general), my concern is that they could be leading people in an unhelpful direction. I feel that these posts often contain sensible-sounding-but-actually-harmful ideas, and there are basically no competing posts with good ideas.

I'm not at all saying that these posts shouldn't exist, or that we shouldn't share our personal perspectives even when we're at parts of our mental health journey where we can't tell which ideas genuinely help and which don't. What I am saying is that unsound ideas are naturally going to be the most interesting and visible ideas about mental health in the current online spaces, that we could try to counteract this if we wanted to, and that doing so could potentially improve the community's currently poor level of basic knowledge about mental health.

Suggestions: Commission a few mental health practitioners to write a few articles for EAs. They could tailor it towards the community's needs and challenges. I can also imagine a few podcast episodes that could succinctly demonstrate the way therapy might explore common blind spots held by EAs, providing a rapid update with less resistance than other methods.

The EA community lacks good mental health support

It's pretty hard to articulate what I think is bad about the status quo and how bad I think it is, so I'm mostly going to approach this from the opposite direction and say what I think could make a positive difference to community mental health. We can proactively anticipate that certain parts of involvement in EA come with a higher risk of mental health challenges. Possible examples:

  • People who recently joined EA and need help processing emotional burdens that come with moral contemplation
  • EAs who are full-time job hunting or experiencing burnout
  • EAs having interpersonal conflicts with other EAs where the circumstances are complicated due to consequences potentially affecting the wider community
  • EAs working in AI safety (this is a bit more niche).

Overall, I don't think we really have anything effective in place to meet these needs, not CEA[11], not friendly people who put on their profiles that you can contact them to talk about literally anything[12].

Wild idea: Fund 3-4 counsellors/therapists/psychologists available for subsidized or free short-term treatments for individuals in the community. Although this would be a significant cost, it also has the chance of uplifting the community in radical and unpredictable ways. By my rough estimate, this capacity is actually enough to provide accessible mental health support for the entire highly engaged EA community.[13] Ideally, this mix of practitioners would cover multiple intersectional perspectives such as neurodivergence, knowledge and non-knowledge of EA/rationality (I would argue that it's specifically beneficial to include therapists who are not EAs), LGBT+, etc. If this is too big a project, a smaller version would be to fund a single therapist and limit their support/availability to one niche, e.g. burnout or newcomers to EA.[14]

Normal suggestion: Have a friendly community contacts list somewhere that's up-to-date. One model for this could be a volunteer service, e.g. community contacts can volunteer certain hours of availability on their calendar. Just having a friendly chat could be surprisingly effective for addressing some of the risk factors that newcomers face. This could even be proactive outreach targeting recent graduates of the EA introductory reading groups. If community contacts are interested in providing more "intense" support without being a therapist, there are possible modalities for this such as some versions of peer support that can involve just a small amount of training.

EAs are intersectionally at greater risk of mental health challenges

The "average EA" is more likely to be in multiple minority groups that each have an above-average bar to reach average well-being (e.g., ADHD, autism, LGBT+/GSM, giftedness) across multiple health scales. Many people believe that EAs have a higher proportion of neurodivergent people than the general population. This implies at least the reference figure of 20% occurrence, though this reference figure itself is likely to be moderately underestimated too.

I have a few vague points about why this may be worth thinking about more:

  • If EAs are more miserable and unhealthy on average than normal people, this could be the case for undesirable reasons.
  • Even if there are no concerning reasons for this, not suffering from mental health issues can facilitate things like creative problem solving, making robust decisions, and having a scout mindset.
  • Minority groups are often systematically underrepresented in studies. We may be really keen to dive into science papers while unwittingly relying on studies that don't reflect our demographics. This can matter a lot for things like mental health, nutrition, productivity, communication styles, career advice.

Vague suggestion: The idea of "understanding yourself" seems hugely underrated to me and could be promoted alongside the already popular scout mindset concept in EA.

More will be said on this theme in the section focusing on undiagnosed neurodivergence.

EA probably massively incentivizes burnout

Here's finally a concrete example of the type of blind spot EA seems prone to having around mental health. When EA asks us to focus on impacts that can be measured in numbers, without sufficiently mentioning the failsafes needed to detect when deciding based on numbers might be really short-sighted, some proportion of people end up making choices that seem logical but are actually predictably bad (to an educated advisor), and they suffer the consequences for months, years, even decades before realizing it and being able to change paths. Examples of this category:

  • 80,000 Hours recommends making logical decisions about career steps and choosing a career for impact. For a non-negligible proportion of people, this is literally the worst advice they could receive, because it seems sensible but simply does not work for some bodies/brains and can result in both misery AND low impact.[15]
  • Ranking career options based on a spreadsheet, with the implication that everyone can consider trading off some of their needs for the sake of impact. When a decision is framed this way, it relies on the comparison formula being accurate for your actual needs, and it's surprisingly easy to get your actual needs completely wrong, deprioritizing them in favor of what seems to make sense.
  • "It makes sense for everyone to consider AI safety or veganism since orienting your choices towards these could potentially have greater impact than almost any other choices in your life." Despite the compelling logic here, this kind of generalization and framing is somewhat problematic in terms of mental health and theory of change.
  • "Don't do that even though you would love it, because you would have no impact."
  • "Unless you're really good at that career, you might not have much impact."
  • Maximization, because it's basically impossible for anyone to live up to that standard.
  • Too much emphasis on rationality / perfectionism in general.
  • Not having good boundaries around where EA fits into your life. I've seen examples of interviewees saying "I've never thought about that or had that concern" (automatic healthy boundary) and the opposite side "Yeah that's really troubling, but I try not to think about it / solved it through rationality". I don't think I've seen examples of conscious healthy boundaries being represented in EA.

I believe that EA's hyperfocus on numbers and rationality tends to result in over-valuing positive short-term outcomes while failing to adequately evaluate long-term outcomes. All of the above examples involve ableism[16] and the theme of performing to a standard that is inherently unachievable or unhealthy for some people, down to their neurology. EA is somewhat rife with ableism. We're intelligent, privileged, and care about living beings; so why shouldn't we be able to do X, Y, Z? Unfortunately, internalized ableism has many negative effects, one of them being burnout.

Example: Even if AI timelines are very short (e.g. AGI within 5 years), would that make it worthwhile to have all our AI safety workers burn out on a scale of a few years?

Suggestion: Hire a specialist in burnout to evaluate EA culture, identify relevant risk factors, and come up with concrete strategies for reducing them.

Undiagnosed neurodivergence in EA

I believe undiagnosed neurodivergence to be a major risk factor for mental health in EA. There are too many angles to cover, so I'll include just a few:

  • Ableism is a common lens through which people hold inaccurate beliefs about reality, particularly beliefs about their own body (internalized ableism), and sometimes also inaccurate beliefs projected onto other people's bodies. Within EA, ableist advice often suggests that people do exactly the opposite of what their body needs from a neuropsychological self-care perspective, and unfortunately this advice is often the most unfortunate mix: sensible-sounding, popular, difficult to refute without nuanced guidance, and rewarding in the short term.
  • A lot of advice in EA is somewhat good for neurotypicals and somewhat bad for neurodivergents, but I see very little awareness or acknowledgement of the latter.
  • Organizations and projects can benefit from both neurotypical and neurodivergent thinking, but disability accommodations may be required to support neurodivergent people to perform to their strengths in a sustainable manner. This applies to EA circles in a lot of ways, from making workplaces, projects, resources more neurodivergent-friendly, as well as individuals empowering themselves through self-knowledge.

Awareness of neurodivergence worldwide is starting to gain traction, though our scientific understanding remains extremely limited. I'm hoping there will be a major revolution in terms of public acceptance of neurodiversity and related disability rights. I believe that EA, and especially the rationality community, could benefit from not being insular to this.

EAs and rationalists are at greater risk of unhealthy rationalizations, and EA+ material makes this worse

Humans are not rational beings, and there are limits to how much emphasis people can place on reason, logic, "accounting", maximization, getting things right, and so on in their lives before it may interfere with wellbeing. This is a common theme in psychotherapy, but let me clarify my concern using an aggressive generalization. When rationality plays a strong part in someone's life, rationality is functioning as one or more of these four things:

  1. As a hobby, because it's fun or interesting
  2. As a tool that has practical value under certain circumstances
  3. As a habitual coping/defense mechanism, likely linked to trauma (e.g. to avoid criticism, or provide a sense of control and certainty)
  4. As an arbitrary subject being used in relation to one of the above three categories (e.g. tool for status signaling, coping mechanism for identity formation).

The third category is the one I want to highlight as unhealthy. When rationality is wired into a person's behavior as a defense mechanism, they are more likely to: engage in motivated reasoning and rationalization, suppress their emotions and neglect good self-care, have a soldier mindset, make poor long-term decisions held with high conviction, hold themselves to unreasonably high standards, promote ableist ideas, and burn themselves out.

While we can't read people's minds and tell for sure how much rationality is being used as a coping mechanism, there are certainly some signs, themes, and rhetorical patterns that are frequently associated with "coping-rationality" while rarely being associated with the other types.

Assuming that "coping-rationality" within EA is no more common than in the general population, it still seems likely that EA is amplifying more extreme versions of harmful ideas. This is because we engage deeply with these ideas, push their logical implications to extremes, and actively promote these conclusions within our community and beyond. My concern is that the EA community might be amplifying ideas influenced by mental health factors rather than clear, sound reasoning—and that we’re not only acting on these ideas ourselves but also spreading them more broadly.

Wild idea: Form a team of people who are good at generalist technical critique; they could act as a peer-review consultancy service within the EA ecosystem.

Ways in which EA fundamentally conflicts with mental health

Maximization is objectively bad, yet this concept persists in EA

Even though it seems plausible that there are very few EAs who identify as maximizers, maximization keeps appearing as a point of contention in EA discourse. Almost any form of maximization as a lifestyle is likely to be neutral at best and unhealthy at worst, with maximization of any rational endeavor skewing towards predictably unhealthy and harmful. Maximization is fundamentally incompatible with good mental health. You can't "just have a little bit of maximization"; it's all or nothing. One could try to do better by establishing boundaries around maximization, but that's simply not maximization anymore. We also can't just "rationally model our irrationality so that we can adopt more informed rational thinking while meeting our irrational needs". I haven't come across any modern framework of human wellbeing that suggests this can work, or that rationality sits on the same level as fundamental needs such as love, safety, and purpose.

So long as EA ideology glorifies maximization as an ideal, EA is bad for mental health. I would go further and say that failure to educate EAs about the intrinsically harmful nature of maximization is a failure in terms of mental health.

Side note: One might argue that it might be worth degrading the mental health of EAs in exchange for saving millions/billions/trillions of lives. I would argue that this trade-off is not realistically available, because hyper-rationality leads to worse decision-making. It's a lose-lose situation. This post focuses on the mental health angle so I leave my explanation in the footnote.[17]

EA ideology fosters unsafe judgment and intolerance

My argument takes the following structure:

  1. EA values rationality over irrationality.
  2. In doing so, EA makes value judgments on irrational decisions and actions.
  3. Applying ethical frameworks that EAs commonly hold, irrationality is labelled as invalid, bad, and wrong due to its lower value.
  4. EA as a community normalizes making such ethical judgments, applied both internally towards oneself and towards other people and the world.
  5. Mental wellbeing frameworks generally hold that all perspectives are valid, including things that EA's ethical frameworks assert as bad/wrong/invalid.
  6. Ethical claims are often made using the same language (bad, wrong, worse, false, etc) without acknowledging the underlying framework, which leads to misunderstandings.
  7. Therefore, ethical frameworks and mental wellbeing frameworks cannot coexist naturally without conflict, unless a higher framework is used to integrate these clashing perspectives in a healthy way[18].
  8. Mental wellbeing frameworks also tend to hold that overemphasizing rationality, certainty, correctness, and control is damaging to self-care and self-esteem.
  9. An environment that normalizes forming and assessing ethical judgments is psychologically harmful to both individuals and their audiences, due to both the impact of judgmental language as well as the psychological burdens associated with deep contemplation of certain topics.
  10. Some proportion of people do not already have a solid mental wellbeing framework, let alone a higher framework, when they first show interest in EA. After integration into EA, it seems plausible to me that such people may initially become even more disadvantaged than before, especially since EA content skews towards the ethical frameworks.
  11. EA ideology is harmful for mental wellbeing because it values and emphasizes the ethical side over the wellbeing side. Hypothetically, this can be addressed by acknowledging and promoting a suitable higher framework.

Even if my argument holds, it can be tricky to gauge the significance and impact of not having a suitable higher framework. I would probably summarize my main concerns as:

  • When we use ambiguous language (cf. points 6 and 7), some proportion of both EAs and people reading about EA are genuinely not aware of, or able to easily switch between, these two distinct frames, so we do amplify harmful misunderstandings at times.
  • In my opinion, EA has historically been so heavily slanted away from wellbeing that taking the "extreme" action of adopting a higher framework carries zero risk of over-representing mental wellbeing. Conversely, if we don't do something "extreme" like adopt a higher framework that strongly acknowledges both frameworks, then mental wellbeing will continue to be disincentivized, under-represented, and neglected by EAs.
  • I genuinely believe that the judgment aspect of EA canon has psychological implications on individuals, cultural safety, openness, decision making, and community health, but it may be too nuanced a topic to flesh out here.

I feel skeptical about the idea that EA as a movement can adopt a suitable higher framework, because it requires significant undoing of existing EA canon, a significant injection of outside expertise, and a significant drive coming from a community of members who were drawn to the ethics/rationality side to begin with.

There is a simple solution to this on an individual scale: regard EA as a tool or hobby with major flaws and limitations, not as a complete ethical philosophy and way of being/thinking with potentially unlimited applications[18]. I think it's worth clarifying that individuals with healthy self-esteem and self-care are more likely to be doing this automatically, even without thinking about it. For the rest of us, ideas such as having a "morality budget" may be helpful as guidelines for thinking about healthy boundaries.

EA is incompatible with alternative values that can be healthy

Previous critiques of EA have made the point that EA is not truly and meaningfully open to all questions, and that subsequently it is unable to act on some possibilities, even when there are good reasons to suspect that those possibilities may be better than EA's current strategies.

For example:

  • Could we be undervaluing irrationality? Could instinctive decision-making lead to far better outcomes than rational thinking under certain contexts? (The obvious answer to this question is yes, but what are the chances of getting a grant for a project on this premise?)
  • Could we be overvaluing life and undervaluing death? Could we be incorrectly valuing productivity over non-productiveness? Happiness over suffering?
  • Could 80,000 Hours have it all wrong? We may know from economic research that our gut instincts are sometimes drastically wrong, but that doesn't mean that not following our gut instincts necessarily leads to better long-term outcomes. Isn't it almost certainly the case that some people rely on their gut instincts too much and some people don't rely on them enough? I can come up with realistic examples where 80,000 Hours does have it all wrong (relative to "best practices" according to psychologists).

EA fundamentally discriminates against certain perspectives, not only making certain solutions unavailable in practice, but also undermining scout mindset, cultural safety, and open collaboration. EA is only tolerant towards a small minority of viewpoints out of all the clusters of viewpoints that exist in the world. But many of those viewpoints are "healthier" than accepted EA viewpoints.

EA devalues human life based on the arbitrary implications of capitalism and privilege

EA seems to imply that all human life is equally valuable, but for practical and ethical reasons, this means we should save people who can most easily/cheaply be saved, as well as favor individuals who can do the most good. This is unjust from a human rights perspective, and this exact reasoning can be used to justify elitism, discrimination, genocide, and all kinds of other injustices.

Ethics is very much unsolved in that every ethical framework you can do math with has at least one really stark edge case that doesn't seem acceptable, so my gripe isn't that EA doesn't have a magic solution, but that EA very much does apply ideas that propagate injustice, and it applies these ideas despite the fact that, in my opinion, there are tenable non-ethical perspectives that do not share the same problems.

I think EA fails the "veil of ignorance" test in its attitude towards non-EA altruists and even some subset of EAs. For example: Suppose an EA suddenly inherits 10 million dollars and initially intends to donate 9 million dollars to AMF over a 20-year period while retaining a certain amount to seed a FIRE-based lifestyle. However, they suddenly fall critically ill and get diagnosed with an ultra-rare disease. Their maximum remaining lifespan is estimated to be 10 years, but only if they receive a rare experimental treatment that costs 1 million dollars per year to administer. They decide they want to try to enjoy another 9 years of their life, contributing only 1 million dollars to AMF.

If you tweak the numbers and rarities in this anecdote enough, you can make it resemble what it's like to be an EA struggling with intersectional disprivilege, including neurodivergence, chronic illness, and other disabilities. "All else being equal", our lives are devalued by the current version of EA that lacks a higher framework as mentioned earlier.

Closing remarks

In conclusion, I currently hold the following views:

  • EA has many significant blind spots when it comes to mental health. Even in the absence of quantifiable evidence for this, there are many reasons to suspect that EA may have these blind spots for systemic reasons. These blind spots may have some causal relationship with real and significant mental health factors affecting community health.
  • Some of EA's potential blind spots towards mental health can be concretely addressed. Most of my suggestions involve getting outside professional perspectives on the community, because I believe that existing bodies of knowledge such as psychotherapy have been surprisingly under-represented in EA discussions, with only a few notable exceptions (such as Rethink Wellbeing's programs).
  • The EA ecosystem may currently be acting as a breeding ground for harmful ideas and ignorance towards mental health, while being dangerously unaware about it.
  • EA is particularly susceptible to supporting ideas that externalize long-term sacrifices in the mental health of individuals in favor of short-term measurable outcomes, due to lack of knowledge about long-term risk factors to mental health.
  • EA's emphasis on measurability and rationality makes the community susceptible to making objectively suboptimal decisions in a way that "rationality done better" cannot necessarily overcome.
  • EA ideology fundamentally clashes with good mental health and social equity, and this can only be resolved by either 1) diligently acknowledging its shortcomings or 2) making a drastic adjustment such as adopting a higher-level framework that integrates mental health concepts with ethical concepts.
  • I'm somewhat skeptical that EA will naturally drift towards healthier perspectives on mental health over time in the absence of specific and significant actions.
  • In light of the above reasons, I believe that there are some contexts in which EA-aligned approaches to mental wellbeing as a cause area will have decidedly less impact than non-EA approaches.
  1. ^

    By systemic, I mean that EA as a community and movement has incentives which extend from EA's core ideas, ultimately having a tendency to favor harmful mental health norms. This means that 1) EA is likely to be resistant to improving its mental health norms, and 2) if these harmful norms were magically removed, new or similar harmful norms would re-emerge over time.

  2. ^

    My stance is that this is "obviously correct" from any informed view of mental health, to the degree that casual skepticism about this is not worth considering. But I am happy to hear any informed views that present an opposing conclusion (not just skepticism), though I would be surprised to hear that any exists.

  3. ^

    It would be interesting to estimate the proportion of introductory program graduates that become EAs. This falls outside of CEA's past focus on retention, which targeted EAs who rated themselves as already highly engaged.

  4. ^

    Here's a helpful explanation and anecdote about problems with asynchronous communication.

  5. ^

    I know that I'm being a little bit vague here, mostly because I don't want to introduce nuance within a complicated and controversial topic that may detract attention from more central ideas in this post.

  6. ^

    NVC is not very popular as a communication framework, but seems surprisingly overrepresented among EAs and rationalists. I couldn't find a short article that explains how it relates to logical debate, so I picked a humorous TEDx Talk about it instead.

  7. ^
  8. ^

    Extremely low exclusivity might be something like "if you've ever had a positive thought about wanting to help another human being, then you're an EA; you fall somewhere on the EA spectrum". Extremely high exclusivity might be something like "to be a real EA you have to be a maximizer".

  9. ^

    I believe it is naive to say "all we did was apply some rational thinking and ask sensible questions based on a few assumptions; how can that be psychologically unsafe?" In my understanding, it is unsafe; there are frameworks in which similar themes can be explored safely, or at least with a healthier trade-off, and EA as an ideology thus far does not seem to value these safer alternatives.

  10. ^
  11. ^

    I have a highly negative opinion about CEA's role/impact on online community health, but it seems unproductive to say more.

  12. ^

    Anecdotally, at a response rate of, say, 10% within a one-month timeframe, this doesn't seem very accessible for someone in a time of specific need.

  13. ^

    Example: 4x counsellors/therapists, with an individual mean salary of US$100k, doing up to 20 sessions per week for 46 weeks each. This is a total cost of $400k for up to 3680 sessions, but let's account for 25% wastage in unbooked sessions, leaving 2760 sessions. (There are also options for partial subsidization, e.g. 50%.) We can allocate availability for therapy using a scheme similar to the one used by universities with more than ten thousand students when offering free counselling. The treatments offered are generally short-term interventions, e.g. 4-8 sessions targeted at a specific problem area. This is done at the discretion of the therapists, who can also take into account the overall availability of sessions. If the sessions are under-utilized, they can see clients much longer term; if the sessions are over-booked, clients who need longer treatments may be referred to external options after a certain number of sessions. If we crudely say that each individual receives an 8-session treatment once a year within this system, that means we can treat about 345 individuals. The last estimate of the number of EAs (in 2020) was 10,000 total, with 2,600 being "highly engaged". If we assume that the subsidized service is promoted towards highly engaged EAs, and we accept that most EAs won't hear about or consider using this service no matter how it's promoted, 345 is 13% of the highly engaged community. Although I don't have any empirical data, I would tend to expect actual demand to be lower than 13%.
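    For readers who want to check the arithmetic, here is a minimal sketch of the back-of-envelope estimate above, using only the figures assumed in this footnote (not empirical data):

```python
# A minimal sketch of the back-of-envelope estimate in footnote 13.
# All figures are the post's stated assumptions, not empirical data.

therapists = 4
mean_salary_usd = 100_000      # assumed mean salary per practitioner
sessions_per_week = 20
weeks_per_year = 46
utilization = 0.75             # assume 25% of sessions go unbooked
sessions_per_treatment = 8     # one short-term treatment per person per year
highly_engaged_eas = 2_600     # 2020 estimate cited in this footnote

total_cost = therapists * mean_salary_usd                                         # $400,000
booked_sessions = therapists * sessions_per_week * weeks_per_year * utilization   # 2,760
people_treated = booked_sessions / sessions_per_treatment                         # ~345
coverage = people_treated / highly_engaged_eas                                     # ~13%

print(f"Annual cost: ${total_cost:,}")
print(f"Booked sessions: {booked_sessions:.0f}")
print(f"People treated per year: {people_treated:.0f}")
print(f"Share of highly engaged EAs: {coverage:.0%}")
```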

  14. ^

    I'm under the impression that there was a therapist funded purely to support AI safety researchers at some point, though arguably this does not necessarily impact the general EA community as is the point of my suggestion.

  15. ^

    I'll raise a general point here about 80,000 Hours: their career guide is completely subjective, as in, there is no evidence for their career guide being effective and it's just one guess among many possible clusters of valid guesses about good career advice. This is not a criticism, just a note that their guide could be extremely flawed while being a perfectly "sensible" guess based on reading the relevant literature.

  16. ^

    "Ableism is the discrimination of and social prejudice against people with disabilities based on the belief that typical abilities are superior. At its heart, ableism is rooted in the assumption that disabled people require ‘fixing’ and defines people by their disability. Like racism and sexism, ableism classifies entire groups of people as ‘less than,’ and includes harmful stereotypes, misconceptions, and generalizations of people with disabilities."

    Ableism doesn't have to be overt or come from bad intentions in order to be harmful. Simple generalizations about people's abilities can be harmful or discriminatory. For example, a candidate at a job interview might appear anxious, timid, and lacking in confidence. Even if the job interview is for a customer service role, it is ableist to assume that their anxious manner during the interview means they would have a similar manner in their actual role. They could be anxious specifically during interviews, or have nearly been hit by a bus right before the interview, or otherwise be good at building an unexpected kind of rapport with customers.

  17. ^

    I believe that EA dangerously lacks skepticism about the limits of rationality, in a way that leads to wrong conclusions with high confidence. There are limits to how rational human beings can be. I believe these limits are measurable, and they're much lower than EA/rationalists would like to believe. For example, we encourage EAs to be aware of cognitive biases and rationalization (a defense mechanism where we deceive ourselves with flawed logic because we want something to be true), yet there is no clear evidence that training ourselves to be more rational actually works. There are also many real-world contexts with incomplete information where rational thinking is actually more likely to lead us to grossly incorrect conclusions.

    No one is immune to cognitive biases. We either accept that we are biased, try a healthy amount to reduce bias without expecting that we necessarily succeeded, or try really hard not to be biased and deceive ourselves into thinking we succeeded. I suspect that a fair number of EAs fall into the last camp, especially those who carry a sense of pride and identity based on their faith in rationality and science.

    As a more tangible example, I think there are many reasons to be skeptical that donating to AMF is really one of the best strategies for doing good. There are a ton of possible scenarios where we might look back on this and realize we were baited into overconfidence just because our current thinking has a certain "rational aesthetic".

  18. ^

    It seems to me that any healthy model would place mental wellbeing at the absolute foundation, with ethics as an optional choice, though I could potentially see a case for other models being promoted for strategic reasons.

  19. ^

    Technically, neurodivergent burnout, which is typically much more severe and longer lasting than normal burnout.


Comments

Thank you for writing about an important subject! I’m sorry about the ways I gather EA has been difficult for you. I’ve found EA pretty emotionally difficult myself at times.

People who fill out the EA Survey are likely to report that EA has a neutral or positive effect on their mental health. This might be because participating in a community and having a sense of purpose can be helpful for people's wellbeing. Of course, you’d expect bias here because people who find EA damaging may be especially likely to leave the community and not take the survey. An excerpt from a colleague’s summary:

“An interesting bit of information is that the 2022 EA survey asked how EA had affected the mental health of individuals in the community. While some people reported that their mental health had reduced as a result of being part of EA, on average, most people reported improved mental health. Obviously, there is some sampling bias here in who filled out the survey. Still, this was more positive than I expected. That’s not to say that we can’t do better - it would be really great if no one was in a situation where they found that this was personally harmful for them.

. . . I asked Rethink Priorities to do a more thorough analysis of this question. They’ve now done this! TL;DR: There are only small differences in responses across cause area/engagement level/location/career level/time in EA (students + newcomers were slightly more likely to say EA improved their mental health than other groups).”


source: EA Survey 2022

About existing efforts on mental health in EA (some of which are mentioned in other comments):

  • MentNav (formerly the EA Mental Health Navigator) aims to list mental health resources that will be useful to people in EA or elsewhere
  • You mention Rethink Wellbeing, which is running projects similar to some of what you suggest
  • The EA Peer Support Facebook group is for informal peer support, and allows anonymous posts
  • The Effective Peer Support Slack is one location where people have worked on related projects, although it doesn’t seem to be very active currently
  • Some other resources, like community contact people in local groups and at EA conferences, or the community health team where I work, aren’t focused on mental health specifically but do end up assisting with some situations related to mental health.
  • Some efforts to provide volunteer support with accessing mental health services proved difficult, because of liability to the volunteers.
  • On imposter syndrome, there’s enough content that there’s a Forum tag specifically on this topic.
  • You suggested mental health materials like articles or podcasts by mental health practitioners. Readers interested in this might explore the 80,000 Hours interview with psychotherapist Hannah Boettcher, writing by psychologist Ewelina Tur, another mental health provider who writes as Daystar Eld on the Forum, and other writing under the Forum tag self-care and wellbeing in the effective altruism community.

I’ll note that I think it’s good to have mental health resources tailored to specific communities / populations, but this doesn’t necessarily mean much about the prevalence of problems in those populations. E.g. there are lots of therapy resources aimed at people with climate anxiety, therapists who specialize in treating medical professionalsclergy, etc.

While I agree with many points in this post, I think it would be stronger if it engaged more with the existing discussion within EA on mental health, on the Forum and elsewhere. 

For example:

  • The "Self-care and wellbeing in the EA community" tag contains over 270 posts, including some of the most highly-upvoted posts on the site.
  • 80,000 Hours has dedicated significant attention here.
  • Several programs have popped up over the years offering to provide or connect EAs with mental health services. Currently Rethink Wellbeing is active and provides CBT and IFS-based peer-facilitated programs. (There may be others I’m forgetting.)

A few of these seem to me like the sort of thing the suggestions were asking for, e.g. "a few podcast episodes that could succinctly demonstrate the way therapy might explore common blind spots held by EAs, providing a rapid update with less resistance than other methods."

I've personally experienced mental health challenges due to EA, so I'm certainly not saying the problems are all solved, or that the resources above cover everything. Publishing one podcast doesn't solve a community-wide problem. But parts of this post read to me as suggesting these resources and discussions don't exist, so I want to provide an alternate perspective.

This post seems to strongly hint at endorsing some very strong, implausible claims that aren't particularly necessary to its central point that various EA ways of thinking can make people sad and anxious*. First, footnote 18 seems to suggest that people should really give absolute priority to their own mental health over any ethical considerations whatsoever:

"It seems to me that any healthy model would place mental wellbeing at the absolute foundation, with ethics as an optional choice, though I could potentially see a case for other models being promoted for strategic reasons."  

I think this is basically self-undermining. Presumably the reason we want people to generally think in ways that are mentally healthy is because we want to prevent harm to them (and others). But that very goal of preventing harm will sometimes point to prioritizing other things over getting people to think in the most mental health-conducive way**. (I personally, as a non-hedonist about well-being, also think that sometimes having less healthy but more accurate beliefs can in itself make you overall better off than if you had less accurate beliefs and were less depressed/anxious, even if the more accurate beliefs bring no practical benefits. But I think it's totally reasonable to disagree with me about this.)

Secondly, there's a suggestion that a healthy, balanced person thinks all perspectives are valid***. I don't really get what sense of "validity" this could possibly be true on. If "valid" means "personal mental health promoting", then you've just spent a lot of time arguing, quite plausibly in my view, that a lot of EA perspectives damage mental health, which would make them less valid. This whole post is about arguing that some perspectives are less good and should be abandoned. Equally, not all perspectives are equally true: vaccines don't cause autism. Nor are they all equally moral: think of cliché examples like Hitler or a serial killer. Obviously, in therapy itself it might make sense for the therapist to ignore all this and not provide judgments on your thoughts and feelings, but that doesn't mean everyone should always take that attitude in all contexts.


I'll also say that I think the interaction of this topic with neurodiversity stuff is quite complicated. Many of the ways of thinking you are criticizing here feel to me, as an autistic person, distinctly autistic. (But don't just take my word for it! I am only one autistic person.) I think that makes it plausible that encouraging them might harm autistic people in one way. But it also means that criticizing them can be stigmatizing. I have already spent a lot of emotional energy on the idea that there is something very wrong and bad and evil with how I process things, related to some of the themes of this post. In some ways, this might have been "healthy" in the sense of making me a better person, but it definitely didn't make me feel good about myself.




*I personally think that most EAs already think in roughly those ways before they encounter EA, but that is only a guess.

**If you've seen The Sopranos, Tony arguably becomes mentally healthier by the end, but only because he has become more sociopathic and therefore feels less guilty. Fiction, yes, but definitely something that could plausibly happen to a real person. 


***"Mental wellbeing frameworks generally hold that all perspectives are valid, including things that EA's ethical frameworks assert as bad/wrong/invalid"

What do you mean by "maximization"? I think it's important to distinguish between:

(1) Hegemonic maximization: the (humanly infeasible) idea that every decision in your life should aim to do the most impartial good possible.

(2) Maximizing within specific decision contexts: insofar as you're trying to allocate your charity budget (or altruistic efforts more generally), you should try to get the most bang for your buck.

As I understand it, EA aims to be maximizing in the second sense only. (Hence the norm around donating 10%, not some incredibly demanding standard.)

On the broader themes, a lot of what you're pointing to is potential conflicts between ethics and self-interest, and I think it's pretty messed up to use the language of psychological "health" to justify a wanton disregard for ethics. Maybe it's partly a cultural clash, and when you say things like "All perspectives are valid," you really mean them in a non-literal sense?

The norm around donating 10% is one of the places where EA has constructed a sort of "safe harbour," sending a message at least somewhat like: as long as you give 10% (and under certain circumstances less), you should feel good about yourself as an EA, feel supported, etc. In other words, the community ethos implicitly discourages feeling guilty about "only" donating 10 percent.

I'm not as convinced that we have established and effectively communicated that kind of safe harbour around certain other personal decisions, like career decisions. Thus, I don't know if the soft 10 percent norm is representative of norms and pressures relating to demandingness.

To be fair, it's easier to construct a safe harbour around money than around something like career decisions because we don't have ten careers to allocate.

On the types of maximization: I think different pockets of EA are in different places on this. I think it's not unusual, at least historically, for subcultures to have some degree of lionization of (1). And there's a natural internal logic to this: if doing some good well is good, surely doing more is better?

I mean, it's undeniable that the best thing is best. It's not like there's some (coherent) alternative view that denies this. So I take it the real question is how much pressure one should feel towards doing the impartial best (at the cost of significant self-sacrifice); whether the maximum should be viewed as the baseline for minimal acceptability, such that anything short of it constitutes failure, or whether we should rather aim to normalize something more modest and simply celebrate further good beyond that point as an extra bonus.

I can see pathologies in both directions here. I don't think it makes sense to treat perfection as the baseline, such that any realistic outcome automatically qualifies as failure. For anyone to think that way would seem quite confused. (Which is not to deny that it can happen.) But also, it would seem a bit pathological to refuse to celebrate moral saints? Like, obviously there is something very impressive about moral heroism and extreme altruism that goes beyond what I personally would be willing to sacrifice for others? I think the crucial thing is just to frame it positively rather than negatively, and not to get confused about where the baseline or zero-point properly lies.

I largely agree with this, but I feel like your tone is too dismissive of the issue here? Like: the problem is that the maximizing mindset (encouraged by EA), applied to the question of how much to apply the maximizing mindset, says to go all in. This isn't getting communicated explicitly in EA materials, but I think it's an implicit message which many people receive. And although I think that it's unhealthy to think that way, I don't think people are dumb for receiving this message; I think it's a pretty natural principled answer to reach, and the alternative answers feel unprincipled.

Given this, my worry is that expressing things like "EA aims to be maximizing in the second sense only" may be kind of gaslight-y to some people's experience (although I agree that other people will think it's a fair summary of the message they personally understood).

On the potential conflicts between ethics and self-interest: I agree that it's important to be nuanced in how this is discussed.

But:

  1. I think there's a bunch of stuff here which isn't just about those conflicts, and that there is likely potential for improvements which are good on both prudential and impartial grounds.

  2. Navigating real tensions is tricky, because we want to be cooperative in how we sell the ideas. cf. https://forum.effectivealtruism.org/posts/C665bLMZcMJy922fk/what-is-valuable-about-effective-altruism-implications-for

"Risk factors for psychological harm that I believe to be predictable, neglected, and tractable to address on a shorter-term scale without drastic systemic change."

My general reaction is that some of the issues you identify may implicate moderately deep structural issues and/or involve some fairly significant tradeoffs. If true, that wouldn't establish that nothing should be done about them, but it would make proposed solutions that don't sufficiently grapple with the structural issues and tradeoffs unlikely to gain traction.

For example, on the issue of Forum drama (which I've chosen because the discussion and proposals feel a bit more concrete to me):

The case of Lightcone et al. v. Nonlinear et al. related to an attempt to protect community members from a perceived bad actor. Without litigating the merits of that dispute -- especially since I tried to stay away from it as much as possible! -- there still has to be a means of protecting community members from perceived bad actors. It's not clear to me that there existed a place to try this matter other than the Court of Public Opinion (EA Forum Division). A fair amount of ink has been spilled on the bad-actor problem more generally, but EA is decentralized enough that the non-messy solutions generally wouldn't work well either. Likewise, the recent case of In re Manifest was -- to at least some of the disputants -- about the bounds of what was and wasn't acceptable in the community. If there's not anything like the President or Congress of EA (and there likely shouldn't be in my view), only the community can make those decisions.

Dealing with alleged bad actors and defending core norms are important, so if that isn't going to be done on the Forum then it will need to be done somewhere else. I think you're right that people tend to behave better in person than online, but it's not clear how these kinds of issues would be adequately hashed out in person. For starters, that would give a lot of power to whoever is handing out the invites to the in-person events (and the limited opportunities for one-to-many communication). Spending more time on community drama could also derail the stated purposes of those events.

More broadly, taking the community out of adjudicating these kinds of disputes would mean setting up some centralized authority to do so -- maybe an elected representative assembly or something. That's possible, and maybe desirable -- but it would be a major structural change.

Maybe you could make the online discourse better -- but in this case it would have to be by the slow and time-consuming task of building consensus, not by moderator fiat. Finding people with the independence, skill set, community buy-in, and time/flexibility to control big on-Forum disputes much more tightly than the mods have would be tough. It's an open Internet, and if enough of the community thinks topics that need to be discussed are being suppressed, there's always Reddit.

I really appreciated this post. I don't agree with all of it, but I think that it's an earnest exploration of some important and subtle boundaries.

The section of the post that I found most helpful was "EA ideology fosters unsafe judgment and intolerance". Within that, the point that I found most striking was: that there's a tension in how language gets used in ethical frameworks and in mental wellbeing frameworks, and people often aren't well equipped with the tools to handle those tensions. This ... basically just seems correct? And seems like a really good dynamic for people to be tracking.

Something which I kind of wish you'd explored a bit more is the ways in which EA may be helpful for people's mental health. You get at that a bit when talking about how/why it appeals to people, and seem to acknowledge that there are ways in which it can be healthy for people to engage. But I think we'll get to a better/deeper understanding of the dynamics faster if we try to look honestly at the ways in which EA can be good for people as well as bad, and at what level of tradeoff in terms of potentially being bad for people is worth accepting. (I think the correct answer will be "a little bit", in that there's no way to avoid all harms without just not being in the space at all, which I think would be a clear mistake for EA; though I am also inclined to think that the correct answer is "somewhat less than at present".)

Agree with this take ⬆️

My mental health has greatly improved since joining EA and I think that's because:

  • the culture encourages having an internal locus of control (or being agentic) which is associated with better mental health outcomes
  • it confronted me with the reality that I'm incredibly privileged in global terms, so I should be using that to help others rather than feeling sorry for myself
  • helping others is intrinsically satisfying

I do think there's more that could be done to develop psychological safety and remind people that their intrinsic value is separate from their instrumental value to the EA movement.

This is why I do community building work.

Idk, I'm not a maximiser, but I do think it's useful to have barriers to entry that require strong signals of shared values. I'm not interested in running a social club for privileged people who aren't actually contributing money and/or labour to EA causes.

I think most EAs living in rich countries should by default be working normal jobs while donating 10% and contributing to EA projects on the side. That does help calibrate with the wider world.

Exploring what's helpful is definitely an interesting angle that generates ideas. One idea that comes to mind is how EA communicates around the Top Charities Fund, basically "let us do the heavy lifting and we'll do our best to figure out where your donations will have impact". This has two particular attributes that I like. Firstly, it provides maximum ease for a reader to just accept a TLDR and feel good about their choice (and this is generally positive for a non-EA donor independent of how good or bad TCF's picks are). Secondly, I think the messaging is more neutral and a bit closer to invitational consent culture. Hardcore EA is more likely to imply that you "should" think and care about whether TCF is actually a good fund and decide for yourself, but the consent-culture version might be psychologically beneficial to both EAs and non-EAs while achieving the same or better numeric outcomes.

I think that this post is helpful and has a lot of aspects which are ripe for discussion. I think that it might get more people to read it and think about it if it were split into several smaller forum posts, with each subsection (such as "EA devalues human life based on the arbitrary implications of capitalism and privilege") being its own post. Each of these subsections is its own argument, and each also has quite a bit of nuance.

If that feels like too much work, I'd be happy to help you copy-and-paste, format, and share this as a series of smaller posts.

Thanks for writing all of this up in one place! 

One of my gripes with the community has long been that maximization is core to EA, yet we're still really clueless about what it implies, and most of the community (outside RP, QURI, etc. and some researchers) seems to have given up on figuring it out.

I feel like we're like this one computer science professor I had who seemed a bit senile and only taught the sort of things that haven't lost relevance in the last 30 years because he hadn't kept up with anything that happened since the 80s. He probably had good personal and neurological reasons for that, but we don't, right?

I haven't read any EA introductory materials in a few years, but I hope they contain articles about expected value maximization along with articles on how EV is usually largely unknowable due to cluelessness and often undefined due to Pasadena games. That stochastic dominance is arguably a much better approach to prioritization but that Christian Tarsney is so far more or less the only one who has bothered to look into it. That there is perhaps a way forward to figure out what's the best thing to do if we funded some big world-modeling efforts based on software like Squiggle but that hardly anyone outside RP and QURI (and Convergence?) currently bothers to do anything about it. (I've dabbled a bit in these fields but my personal fit doesn't seem to be great.)
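To make the contrast concrete, here is a minimal, purely illustrative sketch (my own toy example; the intervention names, outcome values, and probabilities are all made up and are not drawn from any of the materials or papers mentioned above): two hypothetical interventions compared by first-order stochastic dominance rather than by expected value.

```python
# Purely illustrative sketch: comparing two hypothetical interventions by
# first-order stochastic dominance instead of expected value.
# All outcome values and probabilities below are made up for demonstration.

def prob_at_most(dist, x):
    """P(outcome <= x) for a discrete distribution given as (value, probability) pairs."""
    return sum(p for v, p in dist if v <= x)

def stochastically_dominates(a, b):
    """True if `a` first-order stochastically dominates `b`: at every threshold,
    `a` is at least as likely as `b` to exceed it, and strictly more likely somewhere."""
    thresholds = sorted({v for v, _ in a} | {v for v, _ in b})
    weakly = all(prob_at_most(a, x) <= prob_at_most(b, x) for x in thresholds)
    strictly = any(prob_at_most(a, x) < prob_at_most(b, x) for x in thresholds)
    return weakly and strictly

# Hypothetical interventions: lists of (units of good done, probability).
intervention_a = [(0, 0.2), (10, 0.5), (100, 0.3)]
intervention_b = [(0, 0.4), (10, 0.4), (100, 0.2)]

print(stochastically_dominates(intervention_a, intervention_b))  # True in this toy case
```

In real prioritization problems, neither option will usually dominate the other outright, which is part of why this still feels like an open research question rather than a solved one.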

Maybe there's even a way to scalably adjust for personal fit whatever recommendations this big model effort might yield. Maybe there are some common archetypes/personas plus quizzes that tell people which ones they are closest to.

Arguably this can wait until this whole AI thing is under control (if that's even possible), but few people will want to work on AI safety, so maybe it doesn't have to wait?

My takeaway has been mostly that I don't have a clue, and so I will go with some sort of momentary best guess that gives me enough fulfillment, enjoyment, and safety. I've written more about it here.

That said, EA has had a great effect on my mental health.

I used to be a crying wreck because of all the suffering in the world. I spread myself thin trying to help everyone. I felt guilty about the majority of terrible things in the world that I was powerless to prevent. (Suicidal too, except that would've been self-defeating.)

Then EA came along and gave me an excuse to “pick my battles,” i.e. focus on a few things where I could make a big difference, taking into account my skills and temperament. Now, if someone went, “Hey, you should become a politician to prevent X and Y,” I could go, “No, I wouldn't be good at that and hate every second of it and it would come at a great cost to A, which I'm already doing.” EA, for the first time, allowed me to set boundaries.

EA also gave me an appreciation for the (perhaps, plausibly, who knows really) great absolute impact that I can have despite the minimal impact that I have (perhaps, plausibly, who knows really) relative to the totality of the suffering in the world. That made it much easier to find fulfillment.

[strong upvoted for being well-formed criticism]

Almost any form of maximization as a lifestyle is likely to be neutral at best and unhealthy at worst, with maximization of any rational endeavor skewing towards being predictably unhealthy and harmful. Maximization is fundamentally incompatible with good mental health. You can't "just have a little bit of maximization"; it's all or nothing.

How would you respond to the idea that good mental health is instrumental to maximization? That's a standard position, in my impression.

This is an important question, which I left out because my full answer is extremely nuanced and it isn't central to my intention for this post (to stimulate discussion about the mental health of the community).

Here's a brief version of my response:

A good maximizer would know to take mental health into account and be good at it. However, it's very difficult to guess and figure out what the needs and requirements are for good mental health. Good mental health needs more than "the minimum amount of self-care", and maximizers will always be considering whether they could be doing less self-care. I argue that maximization as a strategy will always be suboptimal when either of these two conditions is present (and I believe they often are): when self-care is less visible and measurable than the other parts of the maximization equation, or when one of the requirements for good mental health involves things that necessarily involve not maximizing.

For example: embracing failure and imperfection, trusting your body, and giving yourself permission to adjust your social/moral/financial obligations at any time are not compatible with any rationality-based maximization. (Wild thought: maybe they could be compatible with "irrational maximization"?) I believe I can refute pretty much any angle resembling "but the maximizer could just bootstrap based on your criticism and be better/smarter about maximization", but there are too many forms of this to pre-emptively address here.

These two strategies are worlds apart, despite seeming like they have a common interest: treating self-care as a task necessary for impact vs treating impact as an important expression within self-care. I advocate for the second approach, and I believe that for some people, this second approach can lead to greater impact AND greater happiness.

If we're listing factors in EA leading to mental health problems, I feel like it's worth pointing out that a portion of EA thinks there's a high chance of an imminent AI apocalypse that will kill everybody.

I myself don't believe this at all, but to the people that do believe this, there's no way it doesn't affect your mental health. 

You seem to indicate that “maximizing” for some value, such as the well-being of moral patients across spacetime, would lead to, or tend to lead to, poor mental health. I can understand how one might think this of a “naïve maximization”, where one depletes oneself by giving of one's effort, time, and resources at a rate that either leads to burnout or leaves one barely able to function. But this is like suggesting that, if you want to get the most out of a car, you should drive it as frequently and relentlessly as possible, without giving the vehicle needed upkeep and repairs.

But one who does not incorporate one's own needs, including mental health needs, into one's determination of how to maximize for a value is not operating optimally as a maximizer. I will note that others have indicated that when they view the satisfaction of their own needs or desires as primarily instrumental, rather than terminal, goals, this somewhat diminishes them. In my personal experience, I strive to “maximize”: I want to live my life in a way that is best calculated to reduce suffering and increase the flourishing of conscious beings. But I recognize that taking care of my health is part of how to do so.

I would be curious if other “maximizers” would say that they are capable of integrating their own health into their decisions such that they can maintain adequate health.

I hold the same view towards "non-naive" maximization being suboptimal for some people. Further clarification in my other comment.

I have concerns about the idea that a healthy-seeming maximizer can prove the point that maximization is safe. In mental health, we often come across "ticking time bomb" scenarios that I'm using as a sort of Pascal's mugging (except that there's plenty of knowledge and evidence that this mugging does in fact take place, and not uncommonly). What if someone just appears to be healthy and this appearance of being healthy is simply concealing and contributing to a serious emotional breakdown later in their life, potentially decades on? This process isn't a mysterious thing that comes without obvious signs, but what may be obvious to mental health professionals may not be obvious to EAs.

I don't reject the possibility that healthy maximizers can exist. (Potentially there is a common ground where a rationalist may describe a plausible strategy as maximization, and I, as a mental health advocate, would say it's not, and our disagreement in terminology is actually consistent with both our frameworks.) If EA continues to endorse maximizing, how about we at least do it in a way that doesn't directly align with known risks of ticking time bombs?

Hi Victor. I have a clarification question. For the quotes in this post (such as "Don't do that even though you would love it, because you would have no impact."), are those exact quotes, or are those your own phrasing/description of the types of things that you see and hear?

Longer quotes like these are narrative descriptions of the types of things I see and hear. Do you have any ideas on how to distinguish this from word-for-word quotations?

Unfortunately, I don't know of any good methods for distinguishing other than simply explaining in writing that these are general ideas rather than exact quotes.
