(Probably somebody else has said most of this. But I personally haven't read it, and felt like writing it down myself, so here we go.)

I think that EA [editor note: "Effective Altruism"] burnout usually results from prolonged dedication to satisfying the values you think you should have, while neglecting the values you actually have.

Setting aside for the moment what “values” are and what it means to “actually” have one, suppose that I actually value these things (among others):

True Values

  • Abundance
  • Power
  • Novelty
  • Social Harmony
  • Beauty
  • Growth
  • Comfort
  • The Wellbeing Of Others
  • Excitement
  • Personal Longevity
  • Accuracy


One day I learn about “global catastrophic risk”: Perhaps we’ll all die in a nuclear war, or an AI apocalypse, or a bioengineered global pandemic, and perhaps one of these things will happen quite soon. 

I recognize that GCR is a direct threat to The Wellbeing Of Others and to Personal Longevity, and as I do, I get scared. I get scared in a way I have never been scared before, because I’ve never before taken seriously the possibility that everyone might die, leaving nobody to continue the species or even to remember that we ever existed—and because this new perspective on the future of humanity has caused my own personal mortality to hit me harder than the lingering perspective of my Christian upbringing ever allowed. For the first time in my life, I’m really aware that I, and everyone I will ever care about, may die.

My fear has me very focused on just two of my values: The Wellbeing Of Others and Personal Longevity. But as I read, think, and process, I realize that pretty much regardless of what my other values might be, they cannot possibly be satisfied if the entire species—or the planet, or the lightcone—is destroyed. 

[This is, of course, a version of EA that’s especially focused on the far future; but I think it’s common for a very similar thing to happen when someone transitions from “soup kitchens” to “global poverty and animal welfare”. There’s an exponential increase in stakes, accompanied by a corresponding increase in the fear of lost value.]

So I reason that a new life strategy is called for.

Over time, under the influence of my “Accuracy” value as well as my “Social Harmony” value (since I’m now surrounded by people who are thinking about this stuff), I come to believe that I should value the following:


Should Values

  • Impact*
  • Calibration
  • Openness*
  • Collaboration*
  • Empiricism*
  • The Wellbeing Of Others
  • Personal Longevity

(The values on this new list with an asterisk beside them have a correlate on the original list (impact→power, collaboration→social harmony, empiricism→accuracy), but these new values are routed through The New Strategy, and are not necessarily plugged into their correlates from the first list.)

Over a couple of years, I change my career, my friend group, and my hobbies to reflect my new values. I spend as little time as possible on Things That Don’t Matter, because now I care about Impact, and designing computer games has very little Impact since it takes a lot of time and definitely doesn’t save the world (even though it’s pretty good on novelty, beauty, growth, and excitement).


Ok, so let’s talk now about what “values” are.

I think that in humans at least, values are drives to action. They are things that motivate a person to choose one possible action over another. If I value loyalty over honesty, I’ll readily lie to help my friend save face; if I value both about equally, I may be a little paralyzed in some situations while I consult the overall balance of my whole value system and try to figure out what to do. When I go for a hike with my field kit of watercolor paints, I tend to feel really good about that decision as I make it, as I hike and paint, and also as I look back on the experience, because it satisfies several of my values (such as novelty, growth, and beauty). When I choose to stay in and watch a movie rather than run errands out in the cold rain, that’s my comfort value expressing itself. Values are the engines of motivation.

It is one thing to recognize that a version of you who strategically prioritizes “collaboration” will be more effective at accomplishing goals that you really do care about. But it’s another to incorrectly believe that “collaboration” directly motivates your actions.


Perhaps “collaboration” really is one of your true values. Indeed, perhaps your true values just happen to exactly match the central set of EA values, and that is why you are an EA. 

However, I think it’s much more common for people to be EAs because their true values have some overlap with the EA values; and I think it’s also common for EAs to dramatically overestimate the magnitude of that overlap. According to my model, this is why “EA burnout” is a thing.

[ETA: My working model is incomplete. I think there are probably other reasons also that EA burnout is a thing. But I'm nowhere near as satisfied with my understanding of the other reasons.]

If I am wrong about what I value, then I will mismanage my motivational resources. Chronic mismanagement of motivational resources results in some really bad stuff.

Recall that in my hypothetical, I’ve oriented my whole life around The Should Values for my longtermist EA strategy—and I’ve done so by fiat, in a way that does not converse much with the values that drove me before. My career, my social connections, and my daily habits and routines all aim to satisfy my Should Values, while neglecting my True Values. As a result, my engines of motivation are hardly ever receiving any fuel. 

Gradually, I find myself less and less able to take any actions whatsoever. Not at work, not with friends, not even when I’m by myself and could theoretically do anything I want. I can’t even think about my work without panicking. I am just so exhausted all of the time. Even when apparently excellent opportunities are right in front of me, I just cannot bring myself to care.


I think there are ways to prioritize world-saving or EA-type strategies without deceiving ourselves about what motivates us. I think it is possible to put skill points into calibration, for example, even when you’re not directly motivated by a drive to be well calibrated. It is often possible to choose a job that satisfices for your true values while also accomplishing instrumental goals. In fact, I think it’s crucial that many of us do this kind of thing a bunch of the time.

I also think it is devastatingly dangerous for most of us to be incorrect about what really drives us to act.

It is probably possible to recover even from severe cases of EA burnout. I think I’ve done a decent job of it myself, though there’s certainly room for improvement. But it takes years. Perhaps several of them, perhaps a whole decade. And that is time I am not at all confident our species has.

I am a bit wary of telling EAs what I think they Should do. It seems to me that as a movement, EAs are awfully tangled up about Shoulds, especially when it comes to the thoughts of other EAs.

Still, it seems awfully important to me that EAs put fuel into their gas tanks (or electricity into their batteries, if you prefer), rather than dumping that fuel onto the pavement where fictional cars sit in their imaginations. 

And not just a little bit of fuel! Not just when you’re too exhausted to go on without a little hit. I think that no matter what you hope to accomplish, it is wise to act from your true values ALL of the time—to recognize instrumental principles as instrumental, and to coordinate with allies without allowing them to overwrite your self concept. 

My advice to my past self would be: First, know who you are. If you’re in this for the long haul, build a life in which the real you can thrive. And then, from the abundance of that thriving, put the excess toward Impact (or wherever else you would like for it to go).

Maybe you think that you lack the time to read fiction, or to go rock climbing, or to spend the whole weekend playing board games with friends.

I think that you may lack the time not to.

Comments

This all seems broadly correct, to me.

But I think it's worth noting that there's an additional piece of the puzzle, one that I believe interacts strongly with this one: namely, that burnout often comes from a mismatch between responsibility and power.

This can be seen not just in high-stress jobs like medicine or crisis work, but also in regular "office jobs" and interpersonal relationships. The more someone feels responsible for an outcome, whether internally or due to external pressure/expectations, the more power to actually effect change they will need in order to not feel that their efforts are pointless.

EAs tend to be the sort of people who, in addition to taking large scale problems seriously, internalize the idea of Heroic Responsibility. This can work out well if they manage to find some form of work that helps them feel like they are making meaningful change, but if they do not, it can make the large, difficult, and often heartbreaking challenges the world faces all the more difficult to engage with. And for many, narratives of personal inadequacy start to creep in, unless they have proper CBT training, robust self-care norms, or a clear sense of boundaries and distinctions between what is in their power and what isn't.

Most people in society tend to do work that progresses causes and institutions whose values are not perfectly aligned with their own. The two main ways I've seen this not cause burnout are 1) when they don't really pay attention to the issues at all, or 2) when they feel like they're still making a meaningful difference in progressing their values in some way, shape, or form. Lacking that, the mismatch of values will indeed tend to erode many aspects of their mental and emotional wellbeing until they grow numb to the value dissonance or burn out.

As a result of this comment, I have added to the OP: "[ETA: My working model is incomplete. I think there are probably other reasons also that EA burnout is a thing. But I'm nowhere near as satisfied with my understanding of the other reasons.]"

I agree that "something related to heroic responsibility" is almost certainly part of the puzzle. But I feel lingering confusion and/or dissonance when I read your account here, and also when I've heard anyone else talk along similar lines. I am not sure yet where my confusion and/or dissonance comes from. It may be that there's really not a gears-level model here, and I'm holding out for seeing the gears. It may also be that a description along these lines is basically accurate and complete, but that I'm having some kind of defensiveness response that's preventing me from really getting it. 

In fact I'm confident that I am having some kind of defensiveness response, and the thing I can see now that I'm worried I'd lose track of if I fully adopted a perspective like this is: There's something good and right about heroic responsibility. I am curious what would happen to me if someone precisely named what is good and right about heroic responsibility in the course of discussing how it interacts with EA burnout.

Ah, thanks for saying that. It does feel worth noting that I am a huge proponent of Heroic Responsibility, so let me see if I can try in bullet point form at least, for now...

1) People have much more capacity for agency than society tends to instill in them.

2) The largest problems in the world are such that some people pretty much have to take it upon themselves to dedicate large chunks of their life to solving them, or else no one will.

3) This in fact describes most of the widely admired people in history: those who saw a major problem in the world, decided to make it their life mission to solve it, and often sacrificed much to do so.

4) For these reasons and more, I would never tell someone not to take Heroic Responsibility for things they care about. It would be hypocritical of me to do so. But...

4a) I do caution people against taking Heroic Responsibility for things they feel pressured to value, as you note in this post, and

4b) I do caution people to remember that most heroes historically do not in fact have happy endings.

5) Furthermore and separately, for every hero who visibly took a major problem in the world upon their shoulders and was recognized for doing so, many more are invisible to us because they never managed to accomplish anything.

6) Heroic Responsibility is not just a lens, it also provides power. It is a frame for motivating action, heightening agency, and expanding solution-space.

7) Like most powers, it comes with a cost to those who try to wield it unprepared. Someone who has not internalized and accepted "failure" as a part of life, as an intrinsic part of the process for learning and growth, is more likely to let the power of Heroic Responsibility break them in pursuit of their cherished values.

...I think that's it for now, though I can say more and expand on each of these. Thoughts so far?

I'm still mulling this over and may continue doing so for a while. I really appreciate this comment though, and I do expect to respond to it. :)

A quote I find relevant: 

“A happy life is impossible, the highest thing that man can aspire to is a heroic life; such as a man lives, who is always fighting against unequal odds for the good of others; and wins in the end without any thanks. After the battle is over, he stands like the Prince in the Re Corvo of Gozzi, with dignity and nobility in his eyes, but turned to stone. His memory remains, and will be reverenced as a hero's; his will, that has been mortified all his life by toiling and struggling, by evil payment and ingratitude, is absorbed into Nirvana.” - Arthur Schopenhauer

Perhaps the key question is: what does research on burnout in general say, and are there things about the EA case that don't match it?


Also, to what extent is burnout specifically a problem, vs. people from different places bouncing and moving on to different social groups (either within a year or two, or after a long relationship)?

For the former, my guess is that right around now (after having done some original seeing) is the time in Logan's MO when they typically go see what preexisting research says.

For the latter, anecdata: I've had something on the order of twenty conversations with EAs on this topic in the past five years. Those conversations were generally with officer-class EAs rather than enlisted-class EAs (e.g. people who've been around for more than five years, or people who have careers in EA and are highish in EA orgs). I've never had someone say that burnout seemed not a problem; I've had lots of people say that they themselves struggled with burnout on the level of "wrecked at least a year of my life"; and the rest are only one degree away from someone who did. That seems higher than the base rate out in genpop.

Social narratives can run away with us, and people do catastrophize, but my personal sense is that it's a real and prevalent problem.

I know very little about other sorts of charity work, but I've heard social workers complaining about burnout a lot.

I tend to assume that encountering harsh reality is hard, and that doing unappreciated work that lacks resources is hard.

It may be interesting to see what the baseline burnout level in various fields is, to look both at the variation and at how similar or dissimilar EA is to other charities. It may help us understand how big a part different elements play in burnout: true-values alignment, Heroic Responsibility, encountering discouraging reality, and other things (like simply too many working hours).

This makes me think it is more likely that there is some problem specifically with EA that is driving this. Or maybe something wrong with the sorts of people drawn to EA? I've burned out several times while following a career that is definitely not embedded in an EA organization. But it seems more likely that there is something going on there.

The way I see it, "something wrong with the people EA attracts" and "some problem with EA" are complementary hypotheses. Dysfunctional workplaces tend to filter for people who accept those dysfunctions.

[cata]

I want to say something, but I'm not really sure how to phrase it very precisely, but I will just say the gist of it in some rambly way. Note: I am very much on the periphery of the phenomenon I am trying to describe, so I might not be right about it.

Most EAs come from a kind of western elite culture that right now assigns a lot of prestige to, like, being seen to be doing Important Work with lots of Power and Responsibility and Great Meaning, both professionally and socially.

"I am devoting my life to solving the most important problems in the world and alleviating as much suffering as possible" fits right into the script. That's exactly the kind of thing you are supposed to be thinking. If you frame your life like that, you will fit in and everyone will understand and respect what is your basic deal.

"I am going to have a pleasant balance of all my desires, not working all that hard, spending some time on EA stuff, and the rest of the time enjoy life, hang out, read some books, and go climbing" does not fit into the script. That's not something that anyone ever told you to do, and if you tell people you are going to do that, they will be surprised at what you said. You will stand out in a weird way.

Example anecdote: A few years ago my wife and I had a kid while I was employed full-time at a big software company that pays well. I had multiple discussions roughly like this with my coworkers:

  • Me: My kid's going to be born this fall, so I'll be taking paternity leave, and it's quite likely I will quit after, so I want to figure out what to do with this thing that I am responsible for.
  • Them: What do you mean, you will quit after?
  • Me: I mean I am going to have a baby, and like you, they paid me lots of money, so my guess is that I will just hang out being a parent with my wife and we can live off savings for a while.
  • Them: Well, you don't have to do that! You can just keep working.
  • Me: But doesn't it sound like if you were ever going to not work, the precise best time would be right when you have your first kid? Like, that would be literally the most common sense time in your life to choose not to work, and pay attention to learning about being a parent instead? I can just work again later.
  • Them: [puzzled] Well, you'll see what I mean. I don't think you will quit.

And then they were legitimately surprised when I quit after paternity leave, because it's unusual for someone to do that (at least for men), regardless of whether they have saved a bunch of money due to being a programmer. The normal thing to do is to let your work define your role in life and give you all your social capital, so it's basically your number 1 priority, and everything else is a sideshow.

So it makes total sense to me that EAs who came from this culture decide that EA should define their role in life and give them all their social capital and be their number 1 priority, and it's not about a failure of introspection, or about a conscious assessment of their terminal values that turned out wrong. It's just the thing people do.

My prediction would be that among EAs who don't come from a culture with this kind of social pressure, burnout isn't really an issue.

I am very skeptical that "people doing 'just the thing people do'" does not tend to amount to a failure of introspection.

(This is going to ramble.)

I am not quite sure what I would have thought about this two weeks ago, but I've just finished reading the book "Unmasking Autism" so my thoughts are kinda wrapped up in processing it right now.

Those of us who found out relatively late in life that we're autistic tend to be very, very good at doing whatever it is that "people just do", often unreflectively and very often to our severe detriment. (People diagnosed early can also be very good at this.) According to my own reading, which certainly emphasizes some parts of the book while downplaying others, the book is about 1) what it is like for a not-perfectly-normal person to pretend to be as-normal-as-possible, 2) how and why pretending to be normal kind of ruins our lives, and 3) how to live more authentically instead.

All of the "how to live more authentically instead" stuff is organized around exercises that help you figure out what you, personally, actually care about, as opposed to what everyone around you has told you for your entire life that you're supposed to care about, or which things it's really useful for you to care about in order to get by in a society where you're disabled. 

For example, my own story about myself has always said that I deeply value "independence". I like to know how to do everything on my own, with no social support, to the point that people who want to support me often feel frustrated and helpless because I come off as so capable that it seems like there's nothing for them to do. It's not just immediately relevant stuff I like to be independent about, like financial security, cooking, or running errands; this goes all the way down to "wanting to be independent of society itself". I got kind of upset with myself a while back when I realized that I didn't know how to butcher a deer. I knew how to make bows and arrows, and I knew how to hunt rabbits, but I did not know how to turn one of the most plentiful protein sources in my area into food. So of course I immediately learned.

But despite all of this, I'm no longer sure whether I actually value independence. 

The book has several "how to figure out what you actually care about" exercises, that are together called a "values-based integration process". It starts with [paraphrased] "Think of five moments from throughout your life when you felt fully alive. Tell the story of each moment in as much detail as possible, and think about why the moment stuck with you so dramatically." It is not obvious that any of my alive moments centrally feature independence.

Independence is one of the properties that came up over and over again as a common feature of autistic people's masks. Not only does American society tell everyone to value independence, but one of the best ways for a disabled person to hide their disabilities is to seem exceptionally independent to everyone around them. "Disabled" means "I need extra support and accommodations, at least in the existing societal context", while "independent" means "I don't need shit from anybody".

But of course, as I am in fact disabled, I really do need extra support and accommodations, no matter how successful I might be at convincing those around me and also myself that I don't. My life would be much, much better if I received the support I need, and indeed it's improved dramatically as I've begun to seek out that support over the past several years.

And this sort of mistake is happening all over the place for heavily masked autistics. Our lives are a lot worse, and we're constantly exhausted and burnt out, because we're putting everything we have into living as the people we've somehow come to believe we're supposed to be, instead of putting at least something into living as the people we actually are.

I think this situation is especially extreme for disabled people of all sorts: autistics, ADHDers, people with schizophrenia, Deaf people, etc. And it's especially extreme for other kinds of people who fall far outside of society's expectations on some axis: gay people in conservative areas, for example, or people with exceptionally strong emotional responses. Because the phenomenon I'm describing is approximately "the consequences of being in the closet"; the consequences of pretending to be the person other people want or expect you to be, living an ordinary life, "just doing whatever it is that people do" without accounting for how you, personally, differ from whatever it is you think that "people" are.

But there's a continuum, I think, from "the most neurotypical person in the world" to "super duper weirdos who do not naturally fit in at all and absolutely wreck themselves attempting to do so". Is it really the case that none of those people who were baffled by your plan to stop working would themselves be better off if they did the same, in your situation? 

Maybe, but I doubt it! Some of them would probably be "just doing what people do", and they would be making a mistake. They would be making the mistake of unreflectively pretending to be the person they've come to believe they are supposed to be, rather than knowing who they are.

I have known someone who did not need to work for money because his wife was quite wealthy, and was self-aware that all else equal, he would prefer not to work; but he also thought (perhaps correctly) that he would not be as well respected if he didn't have a "job", and for that reason he went to work. He is not making a mistake (or if he is, it's not the mistake of unreflectively masking). Sometimes "pretending to be normal" when you know you are not is strategically the best thing to do, with respect to the balance of your true values. But for every one like him, I expect there are many, many more who do not really know why they are doing what they are doing, and who leave tons of value on the table as a result.

[cata]

What you say makes sense. I think most of the people "doing whatever it is that people do" are making a mistake.

The connection to "masking" is very interesting to me. I don't know much about autism so I don't have much background about this. I think that almost everyone experiences this pressure towards acting normal, but it makes sense that it especially stands out as a unique phenomenon ("masking") when the person doing it is very not-normal. Similarly, it's interesting that you identify "independence" as a very culturally-pushed value. I can totally see what you mean, but I never thought about it very much, which on reflection is obviously just because I don't have a hard time being "the culturally normal amount of independent", so it never became a problem for me. I can see that the effect of the shared culture in these cases is totally qualitatively different depending on where a person is relative to it.

One of the few large psychological interventions I ever consciously did on myself was in about 2014 when I went to one of the early CFAR weekend workshops in some little rented house around Santa Cruz. At the end of the workshop there was a kind of party, and one of the activities at the party was to write down some thing you were going to do differently going forward.

I thought about it and I figured that I should basically stop trying to be normal (which is something that before I thought was actively virtuous, for reasons that are now fuzzy to me, and would consciously try to do -- not that I successfully was super normal, but I was aiming in that direction.) It seemed like the ROI on being normal was just crappy in general and I had had enough of it. So that's what I did.

It's interesting to me that some people would have trouble with the "how to live more authentically instead" part. My moment to moment life feels like, there is a "stuff that seems like it would be a good idea to do right now" queue that is automatically in my head, and I am just grabbing some things out of it and doing them. So to me, the main thing seems to be eliminating any really dumb biases making me do things I don't value at all, like being normal, and then "living more authentically" is what's left.

But that's just my way -- it would make sense to me if other people behaved more strategically more often, in which case I guess they might need to do a lot more introspection about their positive values to make that work.

But cata, where does your "stuff that seems like it would be a good idea to do right now" queue come from? If you cannot see its origin, why do you trust that it arises primarily from your true values? 

Perhaps in your case it does, or at least enough so that your life is really the life that you would prefer to be living, at the limit of full knowledge and reflective equilibrium. But it's just not the case that "deliberately subjugating organic desires" is the main way that people end up acting without integrity. We get mixed up below the level of consciousness, so that the automatic thoughts we have arise from all kinds of messed up places. That's why this kind of thing is so very tricky to fix! We don't just have to "choose the other option" when we consciously encounter a dilemma; we have to learn to see things that are currently invisible to us.

But cata, where does your "stuff that seems like it would be a good idea to do right now" queue come from? If you cannot see its origin, why do you trust that it arises primarily from your true values?

Well, I trust that because at the end of the day I feel happy and fulfilled, so they can't be too far off.

I believe you that many people need to see the things that are invisible to them, that just isn't my personal life story.

This is a very interesting comment, about a book that I just added to my reading list. Would you consider posting this as a separate post? I have some thoughts about masking and authenticity, the price of it and the price of too much of it, and I believe it's a discussion worth having, but not here.

(I believe some people will indeed benefit a lot from not working as new parents, but for others it will be a very big hit to their self-worth, as they define themselves by work, and it is better done only after some introspection and after creating a foundation of self-worth disconnected from work.)

"I am devoting my life to solving the most important problems in the world and alleviating as much suffering as possible" fits right into the script. That's exactly the kind of thing you are supposed to be thinking. If you frame your life like that, you will fit in and everyone will understand and respect what is your basic deal.

Hm, this is a pretty surprising claim to me. It's possible I haven't actually grown up in a "western elite culture" (in the U.S., it might be a distinctly coastal thing, so the cliché goes? IDK). Though, I presume having gone to some fancypants universities in the U.S. makes me close enough to that. The Script very much did not encourage me to devote my life to solving the most important problems and alleviating as much suffering as possible, and it seems not to have encouraged basically any of my non-EA friends from university to do this. I/they were encouraged to have careers that were socially valuable, to be sure, but not the main source of purpose in their lives or a big moral responsibility.

Curated.

It does feel a bit weird for this to be curated on LessWrong rather than Effective Altruism Forum. Partly for "idk, feels like the wrong genre?" reasons (which I don't think are very strong), and partly because I think it's healthy for LessWrong to be an intellectual scene that doesn't really route through the "EA" abstraction.

But, I do still expect the post to be relevant to a lot of LessWrong contributors, who are engaging with various flavors of "the world seems full of big problems", and feel various flavors of pressure to adapt themselves to help with those big problems, in ways that are unhealthy. I think there are particular flavors of this that route through "conform to the Effective Altruism cultural center-of-mass", and versions that route through a somewhat more general pressure, and I think the models in this post will be relevant to a fair number of people.

[dxu]

Relevant Eliezer post:

The Watcher spoke on, then, about how most people have selfish and unselfish parts - not selfish and unselfish components in their utility function, but parts of themselves in some less Law-aspiring way than that.  Something with a utility function, if it values an apple 1% more than an orange, if offered a million apple-or-orange choices, will choose a million apples and zero oranges.  The division within most people into selfish and unselfish components is not like that, you cannot feed it all with unselfish choices whatever the ratio.  Not unless you are a Keeper, maybe, who has made yourself sharper and more coherent; or maybe not even then, who knows?  For (it was said in another place) it is hazardous to non-Keepers to know too much about exactly how Keepers think.

It is dangerous to believe, said the Watcher, that you get extra virtue points the more that you let your altruistic part hammer down the selfish part.  If you were older, said the Watcher, if you were more able to dissect thoughts into their parts and catalogue their effects, you would have noticed at once how this whole parable of the drowning child, was set to crush down the selfish part of you, to make it look like you would be invalid and shameful and harmful-to-others if the selfish part of you won, because, you're meant to think, people don't need expensive clothing - although somebody who's spent a lot on expensive clothing clearly has some use for it or some part of themselves that desires it quite strongly.

It is a parable calculated to set at odds two pieces of yourself (said the Watcher), and your flaw is not that you made the wrong choice between the two pieces, it was that you hammered one of those pieces down.  Even though with a bit more thought, you could have at least seen the options for being that piece of yourself too, and not too expensively.

And much more importantly (said the Watcher), you failed to understand and notice a kind of outside assault on your internal integrity, you did not notice how this parable was setting up two pieces of yourself at odds, so that you could not be both at once, and arranging for one of them to hammer down the other in a way that would leave it feeling small and injured and unable to speak in its own defense.
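
A toy sketch of the apples-and-oranges point, for anyone who wants it concretely. This is my own illustration in Python, not anything from the quoted passage, and the "parts" model at the end is deliberately crude:

```python
# Hypothetical agent valuing apples 1% more than oranges.
UTILITY = {"apple": 1.01, "orange": 1.00}

# A coherent utility maximizer converts that 1% edge into a 100% choice
# share: offered a million apple-or-orange choices, it never picks orange.
best = max(UTILITY, key=UTILITY.get)
choices = [best for _ in range(1_000_000)]
print(choices.count("orange"))  # -> 0

# A crude "parts" model: each part accumulates unmet need whenever its
# option is not chosen. Under the maximizer's policy, the orange-part's
# need grows without bound; no utility ratio near 1 changes that.
need = {"apple": 0, "orange": 0}
for choice in choices:
    for option in need:
        need[option] = 0 if option == choice else need[option] + 1
print(need)  # -> {'apple': 0, 'orange': 1000000}
```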

(That link seems to be broken. Here's one that hopefully works.)

[dxu]

Thank you for the correction! I've edited the original to match.

(so now there's like two of the same link in this thread but whatever haha)

I appreciate you writing this. I found myself agreeing with much of it. The post also helped me notice some feeling of "huh, something seems missing... there's something I think this isn't capturing... but what is it?" I haven't exactly figured out where that feeling is coming from, so apologies if this comment ends up being incoherent or tangential. But you've inspired me to try to double-click on it, so here it goes :) 

Suppose I meet someone for the first time, and I begin to judge them for reasons that I don't endorse. For example, maybe something about their appearance or their choice of clothing automatically triggers some sort of (unjustified) assumptions about their character. I then notice these thoughts, and upon reflection, I realize I don't endorse them, so I let them go. I don't feel like I'm crushing a part of myself that wants to be heard. If anything, I feel the version of myself that noticed the thoughts & reframed them is a more accurate reflection of the person I am/want to be.

I think EAs sometimes struggle to tell the difference between "true values" and "biases that, if removed, would actually make you feel like you're living a life more consistent with your true values." 

There is of course the tragic case of Alice: upon learning about EA, she crushes her "true values" of creativity and beauty because she believes she's "supposed to" care about impact (and only impact or impact-adjacent things). After reframing her life around impact, she suppresses many forms of motivation and joy she once possessed, and she burns out.

But there is also the fantastic case of Bob: upon learning about EA, he finds a way of clarifying and expressing his "true values" of impact and the well-being of others. He notices many ways in which his behaviors were inconsistent with these values, the ones he values above all else. After reframing his life around impact, he feels much more in-tune with his core sense of purpose, and he feels more motivated than ever before.

My (rough draft) hypothesis is that many EAs struggle to tell the difference between their "core values" and their "biases". 

My guess is that the social and moral pressures to be like Bob are strong, meaning many EAs err in the direction of "thinking too many of their real values are biases" and trying too hard to self-modify. In some cases, this is so extreme that it produces burnout. 

...But there is real value to self-modifying when it's appropriate. Sometimes, you don't actually want to be a photographer, and you would be acting in a way that's truly more consistent with your values (and feel the associated motivational benefits) if you quit photography and spent more time fighting for a cause you believe in.

To my knowledge, no one has been able to write the Ultimate Guide To Figuring Out What You Truly Value. If such a guide existed, I think it would help EAs navigate this tension.

A final thought is that this post seems to describe one component of burnout. I'm guessing there are a lot of other (relatively standard) explanations for why some EAs burnout (e.g., not having strong friendships, not spending enough time with loved ones, attaching their self-worth too much to the opinions of a tiny subset of people, not exercising enough or spending enough time outdoors, working really hard, not having enough interpersonal intimacy, getting trapped in concerns around status, feeling alienated from their families/old friends, navigating other classic challenges of young adulthood). 

This post crystallized some thoughts that have been floating in my head, inchoate, since I read Zvi's stuff on slack and Valentine's "Here's the Exit."

Part of the reason that it's so hard to update on these 'creative slack' ideas is that we make deals among our momentary mindsets to work hard when it's work-time. (And when the end of the world is literally at stake, it's always work-time.) "Being lazy" is our label for someone who hasn't established that internal deal between their varying mindsets, and so is flighty and hasn't precommitted to getting stuff done even if they currently aren't excited about work.

Once you've installed that internal flinch away from not working/precommitment to work anyways, though, it's hard to accept that hard work is ever a mistake, because that seems like your current mindset trying to rationalize its way out of cooperating today!

I think I finally got past this flinch/got out of running that one particular internal status race, thanks to this and the aforementioned posts.

This post crystallized some thoughts that have been floating in my head

+1. I've explained a less clear/expansive version of this post to a few people this last summer. I think there is often some internal value-violence going on when many people fixate on Impact.

What is "EA burnout"? Personally I haven't noticed any differences between burnout in EAs and burnout in other white-collar office workers. If there are such differences, then I'd like to know about them. If there aren't, then I'm skeptical of any model of the phenomenon which is particular to EA.

Have you considered cross-posting this to the EA forum? (There's an option under Edit -> Options -> Crosspost.)

I have, yes, but I do not want to do that.

Nothing stopping somebody else from making a link post though, I guess.

I'm also open to being convinced. 

IShouldNotTouchTheEAForumWithATenFootPoleChangeMyMind.gif

current best guesses at my cruxes (I can probably expand on these with less metaphorical language if somebody's serious about the discussion):

1) I will be eaten by piranhas.
2) The comments will cause me to want to disappear into the forest and never talk to another human ever again.
3) It would be better to light the EA forum on fire than to expose myself to it in the hopes of causing an incremental improvement.

I find this comment super interesting because

a) before, I would have expected many more people to be scared of being eaten by piranhas on LessWrong and not the EA Forum than vice versa. In fact, I didn't even consider that people could find the EA Forum more scary than LessWrong. (well, before FTX anyway)

b) my current read of the EA Forum (and this has been the case for a while) is that forum people like when you say something like "People should value things other than impact (more)" and that you're more likely to be eaten by piranhas for saying "People should value impact more" than vice versa.

Take this as a slight nudge towards posting on the EA Forum, perhaps, although I don't really have an opinion on whether 2) and 3) might still be true.

The specific things you said about the EA forum seem true but it also seems to me to be a hellscape of vicious social punishment and conformity and suspicion. The existence of a number of people pushing back against that doesn't quite suffice for feelings of safety, at least according to my own intuitions.

I'm kinda new here, so where does all this EAF fear come from?

There's a lot of criticism of EA on the forum, arguably too much (or at least it's misdirected), so I don't think you'll be eaten by piranhas or whatever in the comments, although if you have your reasons-for-~wanting-the-EA-forum-to-burn written up somewhere, I'd like to read them.

I model 1) as meaning people have high expectations and are mean in comments criticizing things?
I am unsure about what your reasons for 2) are - is it close to "the comments will be so low quality I'll be ashamed of having seen them and that other humans are like that"?
I expect 3) to be about your model of how difficult it is to improve the EA forum, meaning that you think it's not worth investing time in trying to make it better?

As an open question, I'm curious about what you've previously seen on the EA Forum which makes you expect bad things from it. Hostile behaviour? Bad epistemics? 

For me the core of it feels less like trying to "satisfying the values you think you should have, while neglecting the values you actually have" and more like having a hostile orientation to certain values I have.

I might be sitting at my desk working on my EA project and the parts of me that are asking to play video games, watch arthouse movies, take the day off and go hiking, find a girlfriend are like yapping dogs that won't shut up. I'll respond to their complaints once I've finished saving the world.

Through CFAR workshops, lots of goal factoring, journaling, and Focusing I'm getting some traction on changing that pattern. 

I've realised that values (or perhaps 'needs' fits better) are immutable facts about myself. Like my height or hair colour.  And getting annoyed at them for not being different makes about as much sense as shouting at the sky for raining.

The part of me that wants to maximize impact has accepted that moving to the Bay Area and working 80-hours a week at an EA org is a fabricated option.  A realistic plan takes into account my values that constrain me to want to live near my family, have lots of autonomy over my schedule and work independently on projects I control. Since realising that, my motivation, productivity, sense of agency (and ironically expected impact) have improved.  The future feels a lot brighter – probably because a whole load of internal conflict I wasn't acknowledging has been resolved.

I've realised that values (or perhaps 'needs' fits better) are immutable facts about myself. Like my height or hair colour. And getting annoyed at them for not being different makes about as much sense as shouting at the sky for raining.

Just noting that I'm reasonably confident that neither Logan nor most CFAR staff would claim that values are immutable; just that they are not easily changeable.

I think values do, indeed, shift; we can see this when e.g. people go through puberty or pregnancy or lose a limb or pass through a traumatic experience like a war zone. This puts a floor on how immutable values/needs can really be, and presumably they can be shifted via less drastic interventions.

It seems like a major issue here is that people often have limited introspective access to what their "true values" are. And it's not enough to know some of your true values; in the example you give the fact that you missed one or two causes problems even if most of what you're doing is pretty closely related to other things you truly value. (And "just introspect harder" increases the risk of getting answers that are the results of confabulation and confirmation bias rather than true values, which can cause other problems.)

Reading this post was a bit of a lightbulb moment for me, because I read it and went "ohhh, that's the thing other people are talking about happening to them when they talk about what an easy trap it is to fall into scrupulosity and stuff." This might also explain why I don't feel that much at home with the EA community even though I'm on board with basically all the main propositions and have donated a bunch.

My brain just doesn't do the "get hijacked by other people's values" thing anymore. I think it got burned too much by me doing that in my late teens / early twenties and getting super depressed as a result, so now anytime I see a project that part of me wants to get excited about and subsumed by, my brain goes "Nope. Nah. Not messing with that." To the point where it's kind of hard for me to contemplate ambitious projects at all, because the part of me that refuses to be ruled will not submit to it.

Here are two directions that may be fruitful to explore when taking this essay as the starting point:

If you’re in this for the long haul, build a life in which the real you can thrive. And then, from the abundance of that thriving, put the excess toward Impact (or wherever else you would like for it to go).

This rhymes with posts about Slack, including Zvi's original post on Slack, and his post Out to Get You. Furthermore, Zvi has written some criticisms of EA which IIRC also partly come from the perspective that effective altruism saps slack.

[This is, of course, a version of EA that’s especially focused on the far future; but I think it’s common for a very similar thing to happen when someone transitions from “soup kitchens” to “global poverty and animal welfare”. There’s an exponential increase in stakes, accompanied by a corresponding increase in the fear of lost value.]

On this point, this essay (on the EA forum; recommended reading) about a paper by Tyler Cowen argues that this may be a fundamental limitation of utilitarianism plus scope sensitivity, i.e. that this moral framework necessarily collapses everything into a single value (utility) to optimize at the expense of everything else:

So, the problem is this. Effective Altruism wants to be able to say that things other than utility matter—not just in the sense that they have some moral weight, but in the sense that they can actually be relevant to deciding what to do, not just swamped by utility calculations. Cowen makes the condition more precise, identifying it as the denial of the following claim: given two options, no matter how other morally-relevant factors are distributed between the options, you can always find a distribution of utility such that the option with the larger amount of utility is better. The hope that you can have ‘utilitarianism minus the controversial bits’ relies on denying precisely this claim.

...

Now, at the same time, Effective Altruists also want to emphasise the relevance of scale to moral decision-making. The central insight of early Effective Altruists was to resist scope insensitivity and to begin systematically examining the numbers involved in various issues. ‘Longtermist’ Effective Altruists are deeply motivated by the idea that ‘the future is vast’: the huge numbers of future people that could potentially exist gives us a lot of reason to try to make the future better. The fact that some interventions produce so much more utility—do so much more good—than others is one of the main grounds for prioritising them. So while it would technically be a solution to our problem to declare (e.g.) that considerations of utility become effectively irrelevant once the numbers get too big, that would be unacceptable to Effective Altruists. Scale matters in Effective Altruism (rightly so, I would say!), and it doesn’t just stop mattering after some point.

So, what other options are there? Well, this is where Cowen’s paper comes in: it turns out, there are none. For any moral theory with universal domain where utility matters at all, either the marginal value of utility diminishes rapidly (asymptotically) towards zero, or considerations of utility come to swamp all other values.

...

I hope the reasoning is clear enough from this sketch. If you are committed to the scope of utility mattering, such that you cannot just declare additional utility de facto irrelevant past a certain point, then there is no way for you to formulate a moral theory that can avoid being swamped by utility comparisons. Once the utility stakes get large enough—and, when considering the scale of human or animal suffering or the size of the future, the utility stakes really are quite large—all other factors become essentially irrelevant, supplying no relevant information for our evaluation of actions or outcomes.

...

Once you let utilitarian calculations into your moral theory at all, there is no principled way to prevent them from swallowing everything else. And, in turn, there’s no way to have these calculations swallow everything without them leading to pretty absurd results. While some of you might bite the bullet on the repugnant conclusion or the experience machine, it is very likely that you will eventually find a bullet that you don’t want to bite, and you will want to get off the train to crazy town; but you cannot consistently do this without giving up the idea that scale matters, and that it doesn’t just stop mattering after some point.

I started to enter a state that could be described as "meta analysis paralysis" ("meta-[analysis paralysis]" and not "[meta-analysis] paralysis") when I wanted to formulate my comment about your very interesting take on EA burnout!

Your post read to me as a great example of analysis paralysis and bounded rationality.

Then I started to get paralyzed trying to analyse analysis paralysis and bounded rationality in the context of EA burnout, and I quickly burnt out, solutionless, writing this comment.

Oh the irony!

Even burnt out, I was still stuck in analysis paralysis, so in the end I told myself: 

"Tomorrow I will ask Google and ChatGPT: 'how to solve analysis paralysis?'".

And then I submitted the above comment, which does not really help you... or maybe it does?!

Damned still paralyzed!

Anyway, pushing the submit button now. Not sure if it is the right thing to do, but my bounded rationality tells me that at least it is one thing done, even if I could have spent much more time on a more thorough and thoughtful answer that would have allowed me to formulate a better (less wrong / more helpful) comment, while maybe also hitting diminishing returns!


I would tend to agree with the thoughts you've shared. I think burnout is bound to happen in any situation where the expectations we have of self (the shoulds) are misaligned to a significant degree from the wants we have. Any time there is that level of tension it will eventually lead to some sort of system failure if not somehow corrected by bringing the two into closer alignment. 

I asked the AI what Abundance was as a value, and it said it "refers to a mindset or belief that there is enough of everything - resources, opportunities, love, happiness, etc. - to go around, and that one can have as much as they desire without taking away from others. It's about feeling and being grateful for what one has, and having a positive outlook on life and the future."
While I subscribe to @DaystarEld's reminder about misguided Heroic Responsibility, I wonder if this Epicurean dilemma isn't additionally at the core of most "EA burnout" and of the recurring idea in this and similar write-ups.

I must admit, acronyms are the worst enemy of my brain. It took a good 5 minutes of scanning comments for me to realize we're talking about effective altruism. Not a strike on your part entirely, but I suspect providing a quick upfront glossary could help you effect positive change among a wider range of individuals.

Yeah, pretty fair. I just added an editor note about it in the opening paragraph.
