Dorothea Brooke as an example to follow
I once read a post by an effective altruist about how Dorothea Brooke, one of the characters in George Eliot’s Middlemarch, was an EA. There’s definitely something interesting about looking at the story like this, but for me this reading really missed the point when it concluded that Dorothea’s life had been “a tragic failure”.[1] I think that Dorothea’s life was in many ways a triumph of light over darkness, and that her success and not her failure is the thing we should take as a pattern.
Dorothea dreamed big: she wanted to alleviate rural poverty and right the injustice she saw around her. In the end, those schemes came to nothing. She married a Radical MP, and in the process forfeited the wealth she could have given to the poor. She spent her life in small ways, “feeling that there was always something better which she might have done, if she had only been better and known better.” But she made the lives of those around her better, and she did good in the ways which were open to her. I think that the way in which Dorothea’s life is an example to us is best captured in the final lines of Middlemarch:
“Her finely touched spirit had still its fine issues, though they were not widely visible. Her full nature, like that river of which Cyrus broke the strength, spent itself in channels which had no great name on the earth. But the effect of her being on those around her was incalculably diffusive: for the growing good of the world is partly dependent on unhistoric acts; and that things are not so ill with you and me as they might have been, is half owing to the number who lived faithfully a hidden life, and rest in unvisited tombs.”
If this were said of me after I died, I'd think I'd done a pretty great job of things.
A related critique of EA
I think many EAs would not be particularly pleased if that was ‘all’ that could be said for them after they died, and I think that there is something worrying about this.
One of the very admirable things about EAs is their commitment to how things actually go. There’s a recognition that big talk isn’t enough, that good intentions aren’t enough, that what really counts is what ultimately ends up happening. I think this is important and that it helps make EA a worthwhile project. But I think that when people apply this to themselves, things often get weird. I don’t spend that much time with my ear to the grapevine, but from my anecdotal experience it seems not uncommon for EAs to:
- obsess about their own personal impact and how big it is
- neglect comparative advantage and chase after the most impactful whatever
- conclude that they are a failure because their project is a failure or lower status than some other project
- generally feel miserable about themselves because they’re not helping the world more, regardless of whether they’re already doing as much as they can
An example of a kind of thing I’ve heard several people say is ‘aw man, it sucks to realise that I’ll only ever have a tiny fraction of the impact Carl Shulman has’. There are many things I dislike about this, but in this context the thing that seems most off is that being Carl Shulman isn’t the game. Being you is the game, doing the good you can do is the game, and for this it really doesn’t matter at all how much impact Carl has.
Sure, there’s a question of whether you’d prefer to be Carl or Dorothea, if you could choose to be either one.[2] But you are way more likely to end up being Dorothea.[3] You should expect to live and die in obscurity, you should expect to undertake no historic acts, you should expect most of your work to come to nothing in particular. The heroism of your life isn’t that you single-handedly press the world-saving button - it’s that even though you’ll probably fail to achieve any of your dreams, you dream them, you pursue them with a constant heart, and you make the lives of those around you better with your hope and your altruism.
My friend Eli once said to me that if the most impactful thing for him to do was to sweep the CEA offices, he’d be totally happy with that. I think for most people this isn’t the case, that it’s really important that people speak truth to themselves here, and that forcing yourself into thinking you’re happy with things you’re not leads to bad things happening. But I think in Eli’s case it’s actually completely true, and I want to hold out that Eli sweeping the offices is more truly heroic than Eli chasing after the biggest project or the most prestigious role or the highest status research area. (For me the important thing here isn’t what Eli is actually doing, it’s his orientation. He now works somewhere pretty prestigious, but I think he still does it in the spirit of sweeping the offices.)
I still think there's something important about an analytical mode in which only the actual outcomes count. There's definitely a time for reflecting on whether your projects actually made a difference. But when it comes to me as a human being, I think it's the greatness of my heart that matters, and not the greatness of my deeds. Probably I won't achieve much in my life - but if I actually try, and do it with love, in some sense I think I'll have done all that anyone could hope to do.[4]
The quote is from the prologue of Middlemarch, in reference to women like Saint Theresa, who is then compared to Dorothea; it is not in direct reference to Dorothea as a particular individual. ↩︎
I think different answers are legitimate here. ↩︎
I’m using Dorothea and Carl as labels here. For all I know, actual Carl might well be more like Dorothea than he’s like imaginary Carl. I also think it’s pretty unusual to end up as big-hearted and wise as Dorothea. ↩︎
I think actually trying and doing it with love are both very hard to do, and would feel very happy if I achieved that. ↩︎
Thinking about this further, one concern I have with both this post and Ollie's comment is that, after reading them, people could unduly underrate the amount of good the average Westerner can actually do.
If you have a reasonably high salary, or donate more than 10% to AMF or similarly effective charities (and assuming donations don't become much less cost-effective), you can save hundreds of lives over your lifetime. Saving one life via AMF is currently estimated to cost only around £2,500. Even if you only ever earn the average graduate salary and only donate 10%, you can still save dozens of lives.
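To make the arithmetic concrete, here is a rough back-of-the-envelope sketch. The ~£30,000 average graduate salary and 40-year career length are illustrative assumptions on my part, not figures from the comment above; the £2,500 cost-per-life figure is the one quoted above.

```latex
% Back-of-the-envelope sketch. Assumptions (illustrative): average graduate
% salary ~GBP 30,000/year, 40-year career, 10% donated each year.
% The GBP 2,500 cost-per-life figure is the one quoted above.
\[
\underbrace{30{,}000 \times 0.10}_{\text{donated per year (GBP)}}
\times \underbrace{40}_{\text{years}} = 120{,}000,
\qquad
\frac{120{,}000}{2{,}500} = 48\ \text{lives}
\]
```

which lands squarely in the "dozens of lives" range.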
For reference, Oskar Schindler saved 1200 lives and is now famous for it worldwide.
My words at the funeral of someone who saved dozens or even hundreds of lives would be a lot more laudatory than what was said about Dorothea.
Bravo.
Forgive me playing to type and offering a minor-key variation on the OP's theme. Any EA predisposition for vainglorious grasping after heroism is not only an unedifying shape to draw one's life, but also implies attitudes that are themselves morally ugly.
There are some (mercifully few) healthcare professionals who are in prison: so addicted to the thrill of 'saving lives' they deliberately inflicted medical emergencies on their patients so they had the opportunity to 'rescue' them.
The error in 'EA-land' is of a similar kind (but a much lower degree): it is much better, from the point of view of the universe, that no one needs your help. To wish instead that people be arranged in jeopardy, as some Potemkin vale of soul-making to demonstrate one's virtue (rightly, ego) upon, is perverse.
(I dislike 'opportunity' accounts of EA for similar reasons: that (for example) millions of children are likely to die before their fifth birthday is a grotesque outrage to the human condition. Excitement that this also means one has the opportunity to make this number smaller is inapt.)
Likewise, 'total lifetime impact (in expectation)' is the wrong unit of account by which to judge oneself. Not only because moral luck intervenes in who you happen to be (more intelligent counterparts of mine could 'do more good' than I - but this can't be helped), but also in what world one happens to inhabit.
I think most people I met in medical school (among other comparison classes) are better people than I am: across the set of relevant possible circumstances we could find ourselves in, I'd typically 'do less good' than the cohort average. If it transpires that I end up doing much more good than them, it will be due to the accident whereby particular features of mine - mainly those I cannot take moral credit for, and some of which are blameworthy - happen to match usefully to particular features of the world, which themselves should only be the subject of deep regret. Said accident is scant cause for celebration.
(My thoughts on this are conflicted, and I'm not sure I will endorse this after significant reflection.) I think whether opportunity-style accounts of EA are grotesque or natural is somewhat domain-specific.
I agree that it's somewhat grotesque to think of kids dying of diarrhea or chickens being tortured in factory farms as an exciting opportunity. But when I think of opportunity accounts, I think of something like Parfit (who himself probably had an obligation framing, not sure) talking about the future we can work towards:
Thanks for this piece, I really enjoyed it.
I also admire this orientation, props to Eli.
I note that you think the orientation is more important than the action, but I do think that doing some marginally helpful task for an EA org is now slightly overrated by the community. I'd want to make salient the much larger class of unheroic yet valuable actions one can take outside of the professional EA community, such as:
I have a lot of respect for people who do/are doing the above, especially when they know it probably won't secure them a place in the history books.
Strong agree, thanks for pointing this out Ollie
Epistemic Status: Thinking out loud.
Also: Pardon the long comment, I didn't have the time to write a short one. No one is under any obligation to address everything or even most things I said when writing replies.
During the past 4 years of being involved in-person with EA, my instinctive reaction to this problem has been mostly to argue against it whenever anyone tells me they personally think like this.
I think I can 'argue' convincingly against doing the things on your list, or at least against the angst that comes with them.
In line with the latter, I often try to identify the trains of thought running through someone's mind that are causing them pain, and to help them set aside dedicated times for dealing with those thoughts, rather than letting them run constantly.
I have conversations like this:
And yet somehow, given what has always felt to me like a successful attempt to clearly lay out the considerations, the problem persists, and people are not reliably cured after I talk to them. I mean, I think I have helped, and helped some people substantially, but I've not solved the general problem.
When a problem persists like this, especially for numerous people, I've started to look instead to incentives and social equilibria.
Incentives and Social Equilibria in EA
Here is a different set of observations.
A lot of the most successful parts of EA culture are very mission-oriented. We're primarily here to get sh*t done, and this is more central than finding friends or feeling warm fuzzies.
EA is the primary place in the world where smart young people who reason using altruism and empathy can make friends with lots of others who think in similar ways, and get advice about how to live their lives and steer their careers.
EA is new, it's young, it's substantially built over the internet, and it doesn't have many community elements. It's mostly a global network of people with a strong intellectual and emotional connection, rather than a village community where all the communal roles can be relied on to be filled by different townsfolk (caretakers, leaders, parents, party organisers, police, lawyers, etc.).
Large EA social events are often the primary way many people interact with those who may hire them in the future, or whom they may wish to hire. For many people who identify as "EA", these events are also the primary environment in which they can interact with widely respected EAs who might offer them jobs some day. This is in contrast with parties within major companies or universities, where there is a very explicit career path that leads to promotion. In OpenPhil's RA hiring round, I think there were over 1000 applications, of which I believe they have hired and kept 4 people. Other orgs' hiring is similarly slow. This suggests that in general you shouldn't expect to have a career progression within orgs run by the most widely respected EAs.
Many people are trying to devote their entire lives to EA and EA goals, and give up on being committed members of other cultures and communities in the pursuit of this. (I was once at a talk where Anna Salamon noted, with sadness, that many people seem to stop having hobbies as they move closer into EA/Rationality.)
This puts a very different pressure on social events. Failing to impress someone at a party or other event sometimes feels not merely like a social disappointment, but like a setback for your whole career, your financial security, and your social standing among friends and acquaintances. If the other people you mainly socialise with also attend those parties (as is true for me), these large events in many ways set the norms for social events in the rest of your life, with other things heavily influenced by the dynamics of what is rewarded and punished in those environments.
I think this puts many people in bad negotiating positions. In many other communities (e.g. hobby communities built around sports/arts etc., professional communities that are centuries old like academia/finance/etc.), if the one you're in isn't healthy for you, it's always an option to find another sport, or another company. But, speaking personally, I don't feel there are many other communities that are going to be able to proactively deal with the technological challenges of this century - communities that are smart and versatile and competent, and that care enough about humanity and its future to work on the existential problems. I mean, it's not like there aren't other places I could do good work, but I'd have to sacrifice a lot of who I am and what I care about to feel at home within them. So leaving doesn't tend to feel like much of an option (and I haven't even written about all the evolutionary parts of my brain screaming at me never to do anything socially risky, never mind decide to leave my tribe).
So the standards of the mission are used as the standards of the community, and with the community basically hanging off of the mission, people end up applying those standards to themselves in places one would never normally apply them (e.g. self-worth and respect from friends).
Further Thoughts
Hmm, on reflection, something about the above feels a bit stronger than the truth (read: false). As with other healthy professional communities, I think in many parts of EA and rationality the main way to earn professional respect is to actually build useful things and have interesting ideas, far more than to have good social interactions at parties[1]. What I'm trying to talk about is the strange effect it has when there's also something like a community or social group built around these professional groups - one that people devote their lives to, and that isn't massively selective, insofar as it's not just the set of people who work full-time on EA projects but anyone who identifies with EA or likes it.
I think it is interesting, though, to imagine a fairly competent company with 100s of employees, and to ask what would happen if a group of people tried to build their entire social life around the network inside that company, and genuinely tried to live in accordance with the value judgements the company made, with the CEO and top executives the most respected. Not only would this community sit inside the company, but lots of other people who like what the company is doing would turn up to the events, and would also be judged precisely in accordance with how much utility they provide the company and how the company evaluates them. And they'd keep trying to get hired by the company, even though the community outnumbers the company by like 10x, or maybe 100x.
I think that's a world where I'd expect to see blogposts, by people both in the community and throughout the company, saying things like "I know we all try to judge ourselves by where we stand in the company, but if you die having never become a top executive, or even having been hired, maybe you shouldn't feel like your life has been a tragic waste?" And these get mixed in with weird, straightforwardly false messages that people sometimes say behind closed doors just to keep themselves sane, like "Ah, it only matters how much you tried, not whether you got hired" and "Just caring about the company is enough, it doesn't matter if you never actually helped the company make money."
When the company actually matters, and you actually care about outcomes, these memes are at best unhelpful. But when the majority of community members around the company can't do anything to affect its trajectory, and the community uses this standard in place of other social standards, these sorts of memes become what people use to avoid losing their minds.
--
[1] Also, EA (much more than the LessWrong in-person diaspora) has parts that aren't trying to be a community or a company, but are trying to be a movement, and that has further weird interactions with the other parts.
Related: https://forum.effectivealtruism.org/posts/rrkEWw8gg6jPS7Dw3/the-home-base-of-ea
"But EA orgs can't be inclusive, so we should have a separate social space for EA's that is inclusive. Working at an EA org shouldn't be the only option for one's sanity."
This riff from Eliezer seems relevant to me:
https://www.facebook.com/yudkowsky/posts/10154965691294228
Thinking in terms of virtue ethics on a day-to-day basis seems like a good way for some people to internalize some of the things folks have brought up in this thread, although I've never been able to do it successfully myself.
Great post!
A few points:
I like the metaphor "the Game" a lot as a description of consequentialism. I've been thinking about it a fair bit recently. Its use in The Wire seemed particularly relevant, a bit less so in Game of Thrones. I obviously don't like the book association, but I think it's quite fair to consider that a minor thing in comparison to its greater use.
I think the idea of "compare yourself to the best person" is a bit of a fallacy, though one that basically everyone seems to fall into. What really matters is that people do what is optimal, and that means comparing yourself to whatever you find pragmatically useful. The best person obviously shouldn't compare themselves only to themselves; they should be aiming higher still.
While this may be a suboptimal funeral, I think my idea of an ideal funeral would look something like a few analysts going over my life to try to get a sense of how well I did on The Game relative to what I should have aimed for, given my particular advantages and disadvantages. Then, hopefully, writing up a document of "lessons learned" for future people. Something like, "Ozzie was pretty decent at Y, and had challenges Z and Q. He tried doing these things, which produced these modest results. We'd give him maybe a C- and suggest these lessons that others could apply to make sure they do better."[1]
[1] Ok, maybe this is trolling a little.
I enjoyed this, thanks! I just wanted to try and articulate a thought regarding failure. It feels simple in my head but I'm finding it hard to express it clearly!
It's something like: a failure is only a failure if looked at in isolation. If instead it's seen as part of a wider collection of failures and successes, it is a necessary part of the trial and error that moves towards overall success.
If a project a person is working on turns out to be a dead end compared to another project that turns out to be highly impactful, the person working on the dead end project might feel a sense of disappointment and failure. "My project was a total dud. What a waste of time! I suck compared to the other person!"
But if it is part of a wider collection of projects and actions (the EA community as a whole) every failure that can be shared with the community is a useful exploration that the community as a whole can learn from. So even a failure can benefit the collective progress towards making a large positive difference.
Maybe it's a mindset shift - to avoid seeing it as a race where there are winners and losers, people who did the most good and people who didn't; and instead see it as a collaborative effort where we are all working together for a shared goal, and every effort counts towards the overall progression of the movement. Every win is a win for everyone (humanity!), and every failure is a useful learning point we can learn from and build upon.
With that in mind I would love to read more accounts of things people in the EA community have tried that didn't work out, for whatever reason! I've read some accounts before and I always enjoy it. Not only is it useful to share that knowledge so others can avoid similar dead ends or mistakes, but it also helps me remember that other people try and fail, and are not perfect magical superhumans. It gives me courage that, even if they might not work out, I can try things too.
Some (likely insufficient) instrumental benefits of feeling bad about yourself:
A recent book discusses the evolutionary causes of "bad feelings", and to what extent they have instrumental benefits: Good Reasons for Bad Feelings: Insights from the Frontier of Evolutionary Psychiatry.
Great post. I also think we could work more on the root cause of people feeling like this. Perhaps the message should be: "Doing good and having an impact is not about you. Doing good is for the world, its people and other living beings."
I love this post! It’s beautifully written, and one of the best things I’ve read on the forum in a while. So take my subsequent criticism of it with that in mind! I apologize in advance if I’m totally missing the point.
I feel like EAs (and most ambitious people generally) are pretty confused about how to reconcile status/impact with self-worth (I'm including myself in this group). If confronted, many of us would say that status/impact should really be orthogonal to how we feel about ourselves, but we can't quite make that emotionally true. We helplessly make invidious comparisons between ourselves and successful people like "Carl" (using the name as a label here, not saying we really do this when we look at Carl Shulman), even though we would consciously admit that the feeling doesn't make much sense.
I’ve read a number of relevant discussions, and I still don’t think anyone has satisfactorily dealt with this problem. But I’ll say that, for now, I think we should separate questions about the moral integrity of our actions (how we should define the goodness/badness of our actions) and those about how we should think about ourselves as people (whether we’re good/bad people). They’re related, but there might not be an easy mapping from one to the other. For instance, I think it’s very conceivable that a “Dorothea” may be a better person than a “Carl”, but a “Carl” does more good than a “Dorothea.” And, perhaps, while we should strive to do as much good as possible, our self-worth should track the kind of people we are much more closely than how much good we do.
I like this point
because it emphasizes that the reason to have this mindset is a fact about the world. Sometimes, when I encounter statements like this, it can be easy for them to "bounce off" because I object "oh, of course it's adaptive to think that way... but that doesn't mean it's actually true." It was hard for this post to "bounce off" me because of the force of this point.
Are you... really eli?
(i.e. the eli mentioned in the post)
Ha, no I am an unrelated Eli.
There is a trap that consequentialists can easily fall into, which the author describes beautifully in this post. I think the solution within consequentialism is to see that consequentialism doesn't recommend that we only praise the highest achievers. Praise and blame are only justified within consequentialism when they produce good consequences, and it's beneficial to praise a wide variety of people, most especially people who are trying their hardest to improve the world.
For a fuller spectrum account of what it is to live a moral life, you can add 'virtue consequentialism' to your consequentialism. This position is just the observation that within consequentialism, virtues can be defined as character traits that lead to good consequences, and it's useful to cultivate these.
https://www.goodreads.com/book/show/5489019-uneasy-virtue