This post is motivated by Joey’s post on ‘Empirical data on value drift’ and some of the comments.
“And Harry remembered what Professor Quirrell had said beneath the starlight: Sometimes, when this flawed world seems unusually hateful, I wonder whether there might be some other place, far away, where I should have been…
And Harry couldn’t understand Professor Quirrell’s words, it might have been an alien that had spoken, (...) something built along such different lines from Harry that his brain couldn’t be forced to operate in that mode. You couldn’t leave your home planet while it still contained a place like Azkaban. You had to stay and fight.”
– Harry Potter and the Methods of Rationality
I use the terms value drift and lifestyle drift in a broad sense to mean internal or external changes leading you to lose most of the expected altruistic value of your life.
- Value drift is internal; it describes changes to your value system or motivation.
- Lifestyle drift is external; the term captures changes in your life circumstances leading to difficulties implementing your values.
Internally, value drift could occur by ceasing to see helping others as one of your life’s priorities (losing the ‘A’ in EA), or losing the motivation to work on the highest-priority cause areas or interventions (losing the ‘E’ in EA). Externally, lifestyle drift could occur (as described in Joey's post) by giving up a substantial fraction of your effectively altruistic resources for non-effectively altruistic purposes, thus reducing your capacity to do good. Concretely, this could involve deciding to spend a lot of money on buying a (larger) house, having a (fancier) wedding, traveling around the world (more frequently or expensively), etc.
Of course, changing your cause area or intervention to something that is equally or more effective within the EA framework does not count as value drift. Note that even if your future self were to decide to leave the EA community, as long as you still see ‘helping others effectively’ as one of your top priorities in life it might not constitute value drift. You don’t need to call yourself an EA to have a large impact. But I am convinced that EA as a community helps many members uphold their motivation for doing the most good.
Why this is important for altruists
There is a difference between the potential altruistic value and the expected altruistic value you may achieve over the course of your lifetime. Risks of value or lifestyle drift may make you lose most of the expected altruistic value of your life, thus preventing you from realizing a large fraction of your potential altruistic value.
Most of the potential altruistic value of EAs lies in the medium- to long-term, when more and more people in the community take up highly effective career paths and build their professional expertise to reach their ‘peak productivity’ (likely in their 40s). However, if value and lifestyle drift are common, most of an EA's expected altruistic value lies in the short- to medium-term; the reason being, that many of the people currently active in the community will cease to be interested in doing the most good long before they reach their peak productivity.
This is why, speaking for myself, losing my altruistic motivation or giving up a large fraction of my altruistic resources in the future would equal a small moral tragedy to my present self. I think that as EAs we can reasonably have a preference for our future selves not to abandon our fundamental commitment to altruism or effectiveness.
What you can do to reduce risks of value drift and lifestyle drift:
Caveat: the following suggestions are all very tentative and largely based on my intuition of what I think will help me avoid value drift; please take them with a large grain of salt. I acknowledge that other people function differently in some respects, that some of the suggestions below will not have beneficial effects for many people and could even be harmful for some. Also keep in mind that some of the suggestions might involve trade-offs with other goals. A toy example to illustrate the point: it might turn out that getting an EA tattoo is a great commitment mechanism; however, it could conflict with the goal (among others) to spend your limited weirdness points wisely and might have negative effects on how EA is perceived by people around you. Please reflect carefully on your personal situation before adopting any of the following.
- Beware of falling prey to cognitive biases when thinking about value drift: You probably systematically underestimate a) the likelihood of changing significantly in the future (i.e. End-of-history-illusion) and b) the role that social dynamics play in your motivation. There is a danger in believing both that your fundamental values will not change or that you have control over how they will change, and in believing that your mind works radically differently from other people (e.g. atypical mind fallacy or bias blind spot); for instance, that your motivation is grounded more in rational arguments than it is for others and less in social dynamics. In particular, beware of base rate neglect when thinking that your personal risk of value drift is very low; Joey’s post provides a very rough base rate for orientation.
- Surround yourself with value aligned people: There is a saying that you become the average of the five people closest to you. Therefore, surround yourself with people who motivate and inspire you in your altruistic pursuits. From this perspective, it seems especially beneficial to spend time with other EAs to maintain and regain your motivation; though ‘value aligned’ people don’t have to be EAs, of course. However, it is worth pointing out that you should beware of groupthink and surrounding yourself only with people who are very similar to you. As a community we should retain our ability to take the outside view and engage critically with community trends and ideas. If you decide you want to spend more time with value aligned people / other EAs, here are some concrete ways: making an effort to have regular social interactions with value aligned people (e.g. meeting for lunch/dinner, coffee), engaging in or starting your own local EA chapter, attending EA Global conferences or retreats, becoming friends with EAs, completing internships at EA aligned organisations, getting in touch with value aligned people & other EAs online and chatting/skyping to exchange ideas, sharing a flat etc. Avoiding value drift might increase the importance you should place on living in an EA hub, such as the Bay Area, London, Oxford or Berlin, or other places with a supportive community.
- Discount the expected value of your longer term altruistic plans by the probability that they will never be realised due to value or lifestyle drift (see Joey’s post for a very rough base rate). This consideration might lead you to place relatively more weight on how you can achieve near term impact or reduce risks of value drift. However, a counter-consideration is that your future self will have more skills, knowledge and resources to do good, which could make capacity building in the near term extremely valuable. Attempt to balance these considerations – the risk of value drift tomorrow against the risk of underinvesting in building your capacity today.
- Make reducing risks of value and lifestyle drift a top altruistic priority: Think about whether you agree that most of the potential social impact of your life lies several years or decades in the future. If yes, then thinking about risks of value drift in your own life and implementing concrete steps to reduce them, is likely going to be (among) the highest expected value activities for you in the short-term. I expect that learning more about the causes of value drift on the individual level has a high moral value of information by making it easier for yourself to anticipate and avoid future life circumstances that contribute to it. Joey’s post indicates that value drift occurs for various different reasons and many of those seem to be circumstantial rather than coming from disagreement with fundamental EA principles (e.g. moving to a new city without a supportive EA community, transitioning from university to workforce, finding a non-EA partner and investing heavily in the relationship, marrying, having kids etc.).
- Think about what your priorities are in life: There are many different ways to lead a happy and fulfilling life. A subset of those ways revolve around altruism. And a subset of these count as effectively altruistic. While you should be careful not to sacrifice your long term happiness to short-term altruistic goals – being unhappy with your way of life, even if it is doing a ton of good in the short-term, is a safe way to lose your motivation and pivot over time – there are ways to live a very happy and fulfilled life that is also dedicated to EA principles.
- Confront yourself with your major motivational sources regularly: This is related to the above point. For example, talk to other EAs about what motivates you and them, reread your preferred book by your favourite moral philosopher, watch motivating talks or read articles (quick shout-out for Nate Soares’ ‘On Caring’) or revisit whatever increased your motivation to become an EA in the first place. In addition, consider writing a list of personalised, motivational affirmations for yourself that you read regularly or when feeling low and unmotivated. When considering (re-)watching emotionally salient videos (e.g. slaughterhouse videos), please bear in mind that this can have traumatic effects for some people and might thus be counterproductive.
- Send your future self letters: describing a) your altruistic motivation, b) wishes for how you should live your life in the years to come and including c) concrete resources (e.g. the new EA Handbook) to re-learn and potentially regain motivation. Consider adding d) a list of ways in which your present self would accept value changes to prevent your future self from rationalising value drift after the fact (e.g. value changes resulting from your future self being better informed, say, about moral philosophy and overall more rational – as opposed to purely circumstantial value drift).
- Conduct (semi-)annual reviews and planning: By evaluating how your life is going according to your own priorities, goals and values, you can know whether you are still on track to achieving them or whether you should make changes to the status quo.
- Really make physical and mental health a priority: This is particularly important for the EA community, which is focused on (self-)optimization and where some people might be tempted in the short-run to work really hard and long hours, reduce sleep, neglect nutrition and exercise, and do other things that are neither healthy nor sustainable in the long run. Experiment with and implement practices in your life to reduce the chance of a future (mental) health breakdown, which would a) be very bad in itself, b) radically limit your ability to do good in the short-term and c) potentially cause a reshuffling of your priorities or act as a Schelling point for your future self to disengage from EA. Julia Wise offers great advice on self-care and burnout prevention for EAs.
- Make doing good enjoyable: This is related to the above point on mental health. By finding ways to make engaging in altruistic behaviour enjoyable, you create a positive emotional association with the activity. This should help you keep up the commitment in the long-run. On the flipside, be careful when engaging in altruistic activities that you have (strong) negative associations with. Julia Wise writes “effective altruism is not about driving yourself to a breakdown. We don't need people making sacrifices that leave them drained and miserable. We need people who can walk cheerfully over the world”. A further advantage of finding ways to combine effective altruism with ‘having fun’ or ‘being cheerful’ is that it will likely make EA much more attractive for others. Concretely, you might want to try the following: Many activities are more fun in a group than alone, so engage in altruistic endeavours together with others if possible. Attempt to associate EA in your life not just with work, but also with socialising, friendship and fun. Make sure not to overwork yourself and keep in mind that “the important lesson of working a lot is to be comfortable with taking a break” (from Peter Hurford's ‘How I Am Productive’).
- Do good directly: You might want to consider keeping habits of doing good directly, even in cases where these are not top-priority do-gooding activities by themselves. I believe this can be helpful for keeping up and increasing internal motivation to engage in altruistic activities as well as for cultivating a sense of ‘being an altruistic person’. For example, you could live veg*an, live frugally, donate some amount of money every year (even if the sums are small) and keep up to date with cause area and charity recommendations when making your donation decisions. However, as a counter to this point, I have met someone who argued that spending willpower on low-impact activities might potentially lead to ego depletion (note that this effect is disputed) or compassion fatigue for some people, thereby decreasing their motivation to engage in high-impact behaviour. Regarding career choice, you might see reducing risks of value drift as one reason to place a higher weight on direct work or research within an EA aligned organisation relative to other options such as earning to give or building career capital.
- Consider ‘locking in’ part of your donation or career plans: While the flexibility to change your plans and retain future option value are important considerations, in some cases making hard-to-reverse decisions could be beneficial to avoid value drift. Application for career planning: be wary of building very general career capital for a long time, “particularly if the built capacity is broad and leaves open appealing non-altruist paths”, Joey writes. Instead you might consider specialising and building more narrow, EA-focused career capital (which is endorsed by 80,000 Hours for people focusing on top-priority paths anyway). However, in this article Ben Todd discusses some counterarguments to locking in your career decisions too early. Application for donations: Consider putting your donations in a donor advised fund instead of a savings account and potentially take a donation pledge (see point below). Joey writes, “that way even if you become less altruistic in the future, you can’t back out on the pledged donations and spend it on a fancier wedding or a bigger house”.
- Consider taking the Giving What We Can pledge: For me, the ‘lock in’ aspect of the pledge as a commitment device was among the strongest reasons to take it. It is worth pointing out though that taking the pledge could have downsides for some people (e.g. losing flexibility and falling prey to the overjustification effect; for details, read Michael Dicken’s post).
- Commit yourself publicly: This is another form of ‘lock in’. For example, you could participate in an EA group, write articles describing EA and your motivation to dedicate your life to doing the most good, post on social media about this, talk to other people about EA and be public about your EA career and donation plans, wear EA T-shirts etc. The idea behind this is to engineer peer pressure for your future self and a potential loss of social status that could come with abandoning EA principles; I believe this works (subconsciously) for many as a motivational driving force to stay engaged. For this strategy to work, it seems to matter more what you think your peers think of you than what they actually think of you. Having said that, I encourage fostering a social norm among EAs not to shame or blame others when they experience value drift, in line with the overall recommendation for EAs to be especially nice and considerate.
- Regularly engage with EA content: Have habits in place to regularly engage with content of some form that helps you keep up your motivation or increases your knowledge of how to do the most good. For example, by subscribing to EA newsletters or RSS feeds (e.g. EA Newsletter, 80,000 Hours, Animal Charity Evaluators, Open Philanthropy Project, GiveWell, EA Forum feed), listening to EA & rationalist podcasts (e.g. 80,000 Hours podcast, Rationally Speaking Podcast), reading EA Forum articles, befriending and subscribing to other (effective) altruists on FB, reading EA or rationality blogs (e.g. see list of EA blogs, LessWrong, Slate Star Codex), reading utopian fiction etc.
- Relationships: For those looking for a partner, I endorse the recommendation of generally just choosing whoever makes you happiest. For most people, this will include finding partners who share their values anyway. It is worth pointing out that avoiding value drift might give you an additional reason to place some weight on finding partners who share your values and wouldn't put you under pressure in the long-term to give up your altruistic commitments or make it much harder to implement them. Concretely, you might consider looking for partners via platforms that allow you to share a lot about yourself and don’t match you with people with opposing values (e.g. OkCupid).
- Apply findings of behavioural science research: I suspect that there are relevant insights from the research on nudging or on successful habit creation and retention (e.g. see these articles, one & two), that can be applied to help you avoid long-term value drift. One way to use nudges to make yourself engage in a desired altruistic behaviour is by making the behaviour the default option. For instance, you might set up automated, recurring donations (i.e. donating as default option) or, Joey writes, “ask your employer to automatically donate a pre-set portion of your income to charity before you even see it in your bank account”. As another example, by working for an EA aligned organisation you can make high-impact direct work or research your default option.
What EA organisations can do to deal with value and lifestyle drift:
- Encourage norms of considerateness, friendliness and welcomingness within the EA community, which is beneficial in its own right but also helps keep motivational levels of community members high.
- Conduct further research on causes of value and lifestyle drift and how to avoid them. An obvious starting point is researching the EA ‘reference class’, i.e. looking at the value drift experiences of other social movements. I acknowledge that many EA organisations have already spent significant efforts on similar research projects (e.g. Open Philanthropy Project, Sentience Institute). In particular, there might be ways for Rethink Charity to expand the EA survey to gather more rigorous data on value drift (selection effects are obviously problematic – the people whose values drifted the most will likely not participate in the survey).
- Continue to support and expand opportunities for community members to surround themselves with other great people, e.g. by organising EAG(x) conferences and EA retreats, supporting local chapters and creating friendly and welcoming online communities (such as this forum or EA Facebook groups).
- Incorporate the findings of research on value drift into EA career advice, especially when recommending careers whose value will only be realized decades in the future. Rob Wiblin already indicated that 80,000 Hours considers incorporating this into their discussion of discount rates.
I would highly appreciate your suggestions for concrete ways to reduce risks of value drift in the comments.
I warmly thank the following people for providing me with their input, suggestions and comments to this post: Joey Savoie, Pascal Zimmer, Greg Lewis, Jasper Götting, Aidan Goth, James Aung, Ed Lawrence, Linh Chi Nguyen, Huw Thomas, Tillman Schenk, Alex Norman, Charlie Rogers-Smith.
[Edit, May 2019: Updated my definition of value and lifestyle drift above + added a section on why I believe this topic ought to be a priority for altruists]
This is an interesting coincidence. I'm someone who read and was influenced by EA blogs around 2014-2015, after working for an NGO for a few years. I was influenced enough to factor it in my decision to leave my job and go back to school to become a Nurse Practitioner. (As evidenced by the fact that nursing and advanced practice nursing aren't highly recommended pathways in 80k hrs, it's fair to say I factored what I read among EA sites alongside my own appraisals of priority areas, beliefs/attitudes, and individual circumstances).
Despite being in Boston, arguably an EA hub, I didn't engage with the community there during my time in school. Although the NGO I worked for wasn't EA, having a peer group that was concerned about various issues and recognized the need to deviate from mainstream culture when it came to matters of earnings and consumption, definitely counted for something. Compared to grad school, where classmates surely had diverse motivations for choosing the career path, the greatest common denominator revealed itself to be "achieving a comfortably middle to upper-middle class lifestyle and the ability to 'help people.'"
I can tell you my values drifted. Concepts such as marginal impact, and the fact that clinicians' marginal impact is much smaller than most believe, are threatening to front-line clinicians - so I avoided those topics with peers. Many classmates were surely more conventionally status-oriented than my previous peers in the non-profit. As I tried to learn about the more conservative norms regarding attire expected in some private practices (as opposed to the NGO where I worked, which was quite informal), I was exposed to more of such consumption-as-status messaging. After breaking up with a long-term significant other, who at least understood my beliefs and attitudes about EA and simpler living, and starting to date other professionals in a high-COL city, I definitely found myself thinking about more of my income going towards self-presentation. With significant student loan debt from the program, my first job will definitely prioritize salary more than I'd otherwise like.
All the while, I always had EA in the back of my mind. I listened to MacAskill's book in audio format at some point during my graduate program. As someone who seems to be more interested in managing downside risk, and given that my previous work was in behavior change/nudging, I was always concerned that I would not 'come back' to the community and would instead succumb to the norms of the dominant consumer culture.
On reflection, I think that is part of the reason I wasn't deeply engaged with the community - I seem to be more concerned with making sure I have some kind of significant impact than with trying to maximize impact. I've long been concerned about an imagined future self succumbing to burnout, resentment, or alienation. I wondered how my future self might cope if I invest heavily in a problem area that turns out to be, for one reason or another, no longer a high-impact area. It's safe to say that I'm rather risk-intolerant.
Moving forward, I do plan to re-engage with the community, especially in person, because I now appreciate how alienating it can be not to have peers who understand your deeply held values. I hope this post adds value; rather than codify the challenges and protective factors, I thought it best to just concretely describe my experience and leave it here for interpretation.
Thank you for this detailed description of your experience!
I would guess that many other people in the EA community have a similar story to tell about the challenge of self-presentation/conspicuous consumption, as well as the ease with which you can drift when you find a new partner/friend group. I'm trying to understand value drift better, and this comment added value for me.
For people who worry that the list sounds onerous I am happy to report that having done many of the items my life feels better, not worse. I'd say the biggest negative has been a reduction in how much I feel I can relate to people on more normal life paths, but this feels like an additional benefit in many ways since I wind up spending more time with people doing other non-standard things.
Thanks for writing this - it seems worthwhile to be strategic about potential "value drift", and this list is definitely useful in that regard.
I have the tentative hypothesis that a framing with slightly more self-loyalty would be preferable.
In the vein of Denise_Melchin's comment on Joey's post, I believe most people who appear to have value "drifted" will merely have drifted into situations where fulfilling a core drive (e.g. belonging, status) is less consistent with effective altruism than it was previously; as per The Elephant in the Brain, I believe these non-altruistic motives are more important than most people think. In the vein of The Replacing Guilt series, I don't think that attempting to override these other values is generally sustainable for long-term motivation.
This hypothesis would point away from pledges or 'locking in' (at least for the sake of avoiding value drift) and, I think, towards a slightly different framing of some suggestions: for example, rather than spending time with value-aligned people to "reduce the risk of value drift", we might instead recognize that spending time with value-aligned people is an opportunity to both meet our social needs and cultivate one's impactfulness.
Thanks for your comment! I agree with everything you have said and like the framing you suggest.
This is what I tried to address though you have expressed it more clearly than I could! As some others have pointed out as well, it might make sense to differentiate between 'value drift' (i.e. change of internal motivation) and 'lifestyle drift' (i.e. change of external factors that make implementation of values more difficult). I acknowledge that, as Denise's comment points out, the term 'value drift' is not ideal in the way that Joey and I used it and that:
However, it seems reasonable to me to be concerned about, and attempt to avoid, both value and lifestyle drift, and in many cases it will be hard to draw a line between the two (as changes in lifestyle likely precipitate changes in values and the other way around).
In addition to Darius's suggestions, I recommend using Murphyjitsu to generate your personal list of failure modes. Imagine yourself one/five/ten years from now, no longer being an EA. Ask yourself: what happened? Then try to think of ways to prevent this from happening.
I'm confused about what is happening here. I remember reading this article a year ago, and most of the comments are almost exactly one year old. But for some reason the date of the post is "8th May 2019" and the post is in the first page of the forum where it says that it was posted 8 days ago. I guess there is some kind of a bug in the forum that caused the date of the post to be wrong.
I believe if you save something as a draft and then re-publish it, it changes the publication date. Darius, is that maybe what happened? If you know the original publication date, the moderators can change it to the original.
Yes, this is what happened. There are cases where it's good to be able to have the date adjust (e.g. if you accidentally publish a post before it's finished and want to edit and repost), but in this case, it was unintentional. I'll change the date.
Wonderful post! This is easily the best resource I'm aware of on ways to reduce value drift, and I anticipate sharing it with a lot of people over the years.
In my view, one of the most threatening risks to EA is value drift -- not collectively, but in the sense that many of the community's most devoted members gradually lose interest and leave. There are a lot of people whose names you can see all over the 80K/GWWC websites from material produced a few years ago, but who are no longer involved in EA in any kind of public capacity (and may not be involved at all). We're still growing, on net, but if getting older tends to lead to drift, I can imagine us hitting a point where so many people "age out" that growth drops to roughly zero.
Something that wasn't in your list: Helping the people who are already in your life become aligned with your values, or with the idea that you should keep your values.
The latter seems easier; it's tough to get a random person to become truly interested in EA, but any close friend should care somewhat about your sticking to your plans and meeting your goals.
If my most religious friend told me they'd stopped going to church and felt "meh" about it, I'd be concerned for them even as an atheist, because the change might indicate that they were struggling with their life in general. If I decided to stop giving money to charity, I'd hope that my non-EA friends wouldn't simply let the matter drop, and would at least gently ask questions that would prompt me to engage with my own beliefs and come up with a good reason that I'd abandoned something which was previously very important to me.
My version of this is keeping a journal, where I sometimes address "Future Aaron" but mostly focus on recording my beliefs/feelings as they are on any given day, trusting that Future Aaron will read those entries and feel connected to me. I haven't yet struggled with value drift, but I have seen my journal help me recover past states of mind to become more excited/inspired/etc. I hope that it will also reduce the odds that I drift away from EA over time.
Do you know if anyone's debriefed these folks?
Could be interesting to systematically interview people like this, to learn more about why people distance from EA & to see if any generalizable trends appear.
What you're calling "value drift," Evangelical Christians call "backsliding." The idea is you've taken steps toward a countercultural lifestyle in line with your values, but now you're sliding back toward the mainstream - for an Evangelical Christian, an example would be binge drinking with friends. Backsliding is common and Evangelicals use many of the techniques listed above to counteract it.
Evangelicals heavily emphasize community. Christians are encouraged to attend services, join a small group Bible study, socialize with each other, and marry other Christians.
I also remember being encouraged to establish good habits and stick with them - for example, reading the Bible every morning.
We also, of course, begin with a public commitment to Christianity. And community members will pull you aside and have a chat with you (read: judge you) if they think you're in danger of backsliding.
I've seen all of these strategies work, although some have undesirable side effects.
There's probably something to be gained by investigating this further, but I would guess that most cases of value drift are caused by a loss of willpower and motivation, rather than an update of one's opinion. I think the term value drift is a bit ambiguous here, because I think the stuff you mention is something we don't really want to include in whatever term we use. Now that I think about it, I think what really makes the difference here are deeply held intuitions about the range of our moral duty, for which 'changing your mind' doesn't always seem appropriate.
Great posts, Joey and Darius!
I'd like to introduce a few considerations as an "older" EA (I am 43 now) :
Scope of measurement: Joey’s post was based on 5 year data. As Joey mentioned, “it would take a long time to get good data”. However, it may well be that expanding the time scope would yield very different results. It is possible that a graph plotting a typical EA’s degree of involvement/commitment with the movement would not look like a horizontal line but rather like a zigzag. I base this on purely anecdotal evidence, but I have seen many people (including myself) recover interests, hobbies, passions, etc. once their children are older. I am quite new to the movement, but there is no way that 10 years ago I would have put in the time I am now devoting to EA. If I had started my involvement in college (supposing EA had been around), you could have seen a sharp decline during my thirties (and tagged that as value drift)… without knowing there would be a sharp increase in my forties.
Expectations: This is related to my previous point. Is it optimal to expect a constant involvement/commitment with the movement? As EAs, we should think of maximizing our lifetime contributions. Keeping the initial engagement levels constant sounds good in theory, but it may not be the best strategy in the long run (e.g. potentially leading to burnout, etc). Maybe we should think of “engagement fluctuations” as something natural and to be expected instead of something dangerous that must be fought against.
EA interaction styles: If and as the median age of the community goes up, we may need to adapt the ways in which we interact (or rather add to the existing ones). It can be much harder for people with full-time jobs and children to attend regular meetings or late afternoon “socials”. How can we make it easier for people that have very strong demands on their time to stay involved without feeling that they are missing out or that they just can’t cope with everything? I don’t have an answer right now, but I think this is worth exploring.
The overall idea here is that instead of fighting an uneven involvement/commitment across time it may be better to actually plan for it and find ways of accommodating it within a “lifetime contribution strategy”. It may well be that there is a minimum threshold below which people completely abandon EA. If that is so, I suggest we think of ways of making it easy for people to stay above that threshold at times when other parts of their lives are especially demanding.
Great points, thanks for raising them!
It would be very encouraging if this is a common phenomenon and many people who 'drop out' might come back to EA ideals at some point. It provides a counterexample to something I have commented earlier:
Regarding your related point:
I strongly agree with this, which was my motivation to write the post in the first place! I don't think constant involvement/commitment to (effective) altruism is necessary to maximise your lifetime impact. That said, it seems like for many people there is a considerable chance of never 'finding their way back' to this commitment after they spent years/decades in non-altruistic environments, on starting a family, on settling down etc. This is why I'd generally think people with EA values in their twenties should consider ways to at least stay loosely involved/updated over the mid- to long-term to reduce the chance of this happening. So it's a great example to hear that you actually managed to do just that! In any case, more research is needed on this. I somewhat want to caution against survivorship bias, which could become an issue if we mostly talk to people who did what is possibly exceptional (e.g. took up a strong altruistic commitment in their forties or have been around EA for a long time).
Good points. If I were doing a write up on this subject it would be something like this:
"As the years go by, you will likely go through stages during which you cannot commit as much time or other resources to EA. This is natural and you should not interpret lower-commitment stages as failures: the goal is to maximize your lifetime contributions and that will require balancing EA with other goals and demands. However, there is a risk that you may drift away from EA permanently if your engagement is too low for a long period of time. Here are some tools you can use to prevent that from happening:"
Thank you, Joey, for gathering those data. And thank you, Darius, for providing us with suggestions for reducing this risk. I agree that further research on the causes of value drift and how to avoid it is needed. If the phenomenon is explained correctly, that could be a great asset to EA community building. But regardless of this explanation, your suggestions are valuable.
It seems to be a generally complex problem because retention encapsulates the phenomenon in which a person develops an identity, skill set, and consistent motivation or dedication to significantly change the course of their life. CEA in their recent model of community building framed it as resources, dedication, and realization.
Decreasing retention is also observed in many social movements. Some insights about how it happens can be gleaned from the sociological literature. Although the area is still underexplored and the sociological analyses might be of mediocre quality, it might still be useful to have a look at them. For example, this analysis indicates that a “movement’s ability to sustain itself is a deeply interactive question predicted by its relationship to its participants: their availability, their relationships to others, and the organization’s capacity to make them feel empowered, obligated, and invested."
Additional aspects of value drift to consider on an individual level that might not be relevant to other social movements: mental health and well-being, pathological altruism, purchasing fuzzies and utilons separately.
The reasons for value drift away from EA seem as important for understanding the process as the value drift that led to EA in the first place. E.g. in Joey's post, he gave an illustrative story of Alice. What could explain her value drift is the fact that people during their first year of college are more prone to social pressure and the need for belonging. That could have made her become an EA, and then drift when she left college and her EA peers. So "surround yourself with value-aligned people" for the whole course of your life. That also stresses the untapped potential of local groups outside the main EA hubs. For this reason, it's worth considering even if, in the case of outreach, we shouldn't rush to translate effective altruism.
About the data itself: we might be making wrong inferences in trying to explain it, because it shows only a fraction of the process. Maybe if we observed the curve of engagement, it would fluctuate over a longer period of time, e.g. 50% in the first 2-5 years, 10% in the 6th year, 1% for the next 2-3, and then coming back to 10%, 50%, etc. We might hypothesize that life situation influences baseline engagement for a short period (1 month to 3 years). By analogy with changes in the baseline of happiness and the influence of life events explained by hedonic adaptation, maybe we have something like altruistic adaptation, which shifts after a significant life event (changing the city, marriage etc.) and then comes back to baseline.
Additionally, since the level of engagement in EA and other significant variables do not correlate perfectly, the data could also be explained by regression to the mean: if some EAs were hardcore at the beginning, they will tend to be closer to the average on a second measurement, so from 50% to 10%, and from 10% to 1%. Anyhow, the likelihood that value drift is real is higher than that it's not.
More could be done about value drift on the structural level. E.g. it might also be explained by the main bottlenecks in the community itself, like the Mid-Tier Trap (e.g. too good for running a local group, but not good enough to be hired by the main EA organizations -> multiple unsuccessful job applications -> frustration -> drop out).
Because the mechanism of value drift would determine the strategies for minimizing its risk or harm, and because the EA community might not be representative of other social movements, we should systematically and empirically explore those and other factors in order to find the 80/20 of long-lasting commitment.
Doing effective altruistic things ≠ Doing Effective Altruism™ things
All the main Effective Altruism orgs together employ only a few dozen people. There are two orders of magnitude more people interested in Effective Altruism. They can't all work at the main EA orgs.
There are lots of highly impactful opportunities out there that aren't branded as EA - check out the career profiles on 80,000 Hours for reference. Academia, politics, tech startups, doing EtG in random places, etc.
We should be interested in having as high an impact as possible and not in 'performing EA-ness'.
I do think that EA orgs dominate the conversations within the EA sphere, which can lead to this unfortunate effect where people quite understandably feel that the best thing they can do is work there (or at an 'EA approved' workplace like DeepMind or Jane Street) - or nothing. That's counterproductive and sad.
A potential explanation: it's difficult for people to evaluate the highly impactful positions in other fields. Therefore the few organisations and firms we can all agree on are Effectively Altruistic get a disproportionate amount of attention and 'status'.
As a community, we should try to encourage people to find the highest-impact opportunity for them out of many possible options, of which only a tiny fraction is working at EA orgs.
Thanks for your comment, Karolina!
Yep, I see engaging people & keeping up their motivation in one location as a major contribution of EA groups to the movement!
This is an interesting suggestion, though I think it unlikely. It is worth pointing out that most of this discussion is just speculation. The very limited anecdata we have from Joey and others seems too weak to draw detailed conclusions. Anyway: From talking to people who are in their 40s and 50s now, it seems to me that a significant fraction of them were at some point during their youth or at university very engaged in politics and wanted to contribute to 'changing the world for the better'. However, most of these people have reduced their altruistic engagement over time and have at some point started a family, bought a house etc. and have never come back to their altruistic roots. This common story is what seems to be captured by the saying (that I neither like nor endorse): "If you're not a socialist at the age of 20 you have no heart. If you're not a conservative at the age of 40, you have no head".
This is a valuable and under-discussed point that I endorse!
Say a person could check a box and commit to being vegan for the rest of their life: do you think that would be an ethical/good thing for someone to do, given what we know about average recidivism in vegans?
It could turn out to be bad. For example, say she pledges in 2000 to "never eat meat, dairy, or eggs again." By 2030, clean meat, dairy, and eggs become near universal (something she did not anticipate in 2000). Her view in 2030 is that she should be willing to order non-vegan food at restaurants since asking for vegan food would make her seem weird while being unlikely to prevent animal suffering. If she takes her pledge seriously and literally, she is tied to a suboptimal position (despite only intending to prevent loss of motivation).
This could happen in a number of other ways:
She takes the Giving What We Can Further Pledge* intending to prevent herself from buying unnecessary stuff but the result is that her future self (who is just as altruistic) cannot move to a higher cost of living location.
She places her donation money into a donor-advised fund intending to prevent herself from spending it non-altruistically later but the result is that her future self (who is just as altruistic) cannot donate to promising projects that lack 501(c)(3) status.
She chooses a direct work career path with little flexible career capital intending to prevent herself from switching to a high earning career and keeping all the money but the result is that her future self (who is just as altruistic) cannot easily switch to a new cause area where she would be able to have a much larger impact.
It seems to me that actions that bind you can constrain you in unexpected ways despite your intention being to only constrain yourself in case you lose motivation. Of course, it may still be good to constrain yourself because the expected benefit from preventing reduced altruism due to loss of motivation could outweigh the expected cost from the possibility of preventing yourself from becoming more impactful. However, the possibility of constraining actions ultimately being harmful makes me think that they are distinct from actions like surrounding yourself with like-minded people and regularly consuming EA content.
*Giving What We Can does not push people to take the Further Pledge.
Daniel Gambacorta has discussed value drift in two episodes of his Global Optimum Podcast (one & two) and recommends the following, which I found really helpful:
"Choose effective altruist endeavors that also grant you selfish benefits. There are a number of standard human motivators. Status, friends, mates, money, fame. When these things are on the line work actually gets done. Without these things it’s a lot harder. If your effective altruism gets you none of the things that you selfishly want, that’s going to make things harder on you. If your plan is to go off into a cave, do something brilliant and never get credit for it, your plan’s fatal flaw is you won’t actually do it. If you can’t get things you selfishly want through effective altruism, you are liable to drift towards values that better enable you to get what you selfishly want. We humans are extremely good at fulfilling selfish goals while being self-deceived about it. With this in mind, you might pick some EA endeavor which is impactful but also gets you some standard things that humans want, because you are a human and you probably want the standard things other humans want. Even if the endeavor that grants you selfish benefits is less impactful in the abstract, this could be outweighed by the chance that you actually do it, and also how much more productive you will be when you work on something that is incentivized. If you do something that grants you significant selfish benefits, you just have to watch out for optimizing for those benefits instead of effective altruism, which would of course defeat the purpose."
This strikes me as incredibly good advice.
Thanks, Tom! I agree with you that all else being equal
though I still think that in some cases the benefits of hard-to-reverse decisions can outweigh the costs.
This seems to assume that our future selves will actually make important decisions purely (or mostly) based on their epistemic status. However, as CalebWithers points out in a comment:
If this is valid (as it seems to me), then many of the important decisions of our future selves are a result of some more or less conscious psychological drives rather than an all-things-considered, reflective and value-based judgment. It is very hard for me to imagine that my future self could ever decide to stop being altruistic or caring about effectiveness on the basis of being better informed and more rational. However, I find it much more plausible that other psychological drives could bring my future self to abandon these core values (and find a rationalization for it). To be frank, though I generally appreciate the idea of 'being loyal to and cooperating with my future self', it seems to me that I place a considerably lower trust in the driving motivations of my future self than many others do. From my perspective now, it is my future self that might act disloyally with regard to my current values, and that is what I want to find ways to prevent.
It is worth pointing out that in the whole article and this comment I mostly speak about high-level, abstract values such as a fundamental commitment to altruism and to effectiveness. This is what I don't want to lose and what I'd like to lock in for my future self. As illustrated by RandomEA's comment, I would be much more careful about attempting to tie myself to the mast with respect to very specific values such as discount rates between humans and non-human animals, specific cause area or intervention preferences etc.
It's not enough to place a low level of trust in your future self for commitment devices to be a good idea. You also have to put a high level of trust in your current self :)
That is, if you believe in moral uncertainty, and believe you currently haven't done a good job of figuring out the "correct" way of thinking about ethics, you may think you're likely to make mistakes by committing and acting now, and so be willing to wait, even in the face of a strong chance your future self won't even be interested in those questions anymore.
One thing I find really helpful for remaining consistent in my values is introspection, followed by writing the results down in a note, both a physical one and in a text file on my PC. I have observed that this strategy really works for me, both for figuring out who I am and for making my actions consistent with it over long periods of time. I still have 70% of the notes I wrote 5 years ago, and 100% of the most important ones that form the core of all my values.
Idea: the local group organisers might use something like spaced repetition to invite busy community members [say, people who are pursuing a demanding job to increase their career capital] to the social events.
Anki's "Again", "Hard", "Good", "Easy" might map to "1-on-1 over coffee in a few weeks", "Invite to the upcoming event and pay more attention to the person", "Invite person to the social event in 3mo", "Invite person to the event in 6mo or to the EAG".
Another possible metaphor here is exponential backoff.
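A minimal sketch of the idea above, combining the Anki-style response mapping with exponential backoff as a fallback. All names and interval lengths here are illustrative assumptions, not an existing tool or API:

```python
# Hypothetical spaced-repetition scheduler for community invitations.
# Maps an organiser's assessment (like Anki's buttons) to the number of
# days until the next contact; unknown responses fall back to
# exponential backoff (double the previous interval, capped at a year).

INTERVALS = {
    "again": 14,   # "1-on-1 over coffee in a few weeks"
    "hard": 30,    # invite to the upcoming event, pay more attention
    "good": 90,    # invite to a social event in ~3 months
    "easy": 180,   # invite in ~6 months or to EAG
}

def next_invite_days(response: str, previous_days: int = 30) -> int:
    """Return the number of days until the next invitation."""
    if response in INTERVALS:
        return INTERVALS[response]
    return min(previous_days * 2, 365)
```

The exponential-backoff fallback mirrors the metaphor: the less a member engages, the less often they are pinged, without ever dropping them entirely.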
Oh, underrated comment from 3 years ago. One problem, however, is that you don't want too many connections to go through you specifically, since it'll overload you and possibly replace other connections they might form. People don't have infinite bandwidth for connections, and if they only have room for one EA friend, say, you don't want to take up that slot long-term. You may not want to permanently set yourself up as the linchpin.
An easy way to gather a pool of "value drifted" people to survey could be to look at previous iterations of the EA survey and identify people who filled out the survey at some point in the past, but haven't filled it out in the past N years. Then you could email them a special survey asking why they haven't been filling out the survey, perhaps offering a chance to win an Amazon gift card as an incentive, and include questions about sources of value drift.
Good article in lots of ways. I'm perhaps slightly put off by the sheer amount of info here - I don't feel like I can take all of this in easily, given my own laziness and the number of goals I already prioritise. Not sure there's an easy solution to that (maybe some sort of top two or three suggestions?), but this feels like a bit of an information overload. Thanks for writing it though Darius, I enjoyed it :)
Personally, if I were to simplify this post down to the top two pieces of advice: 1) focus on doing good now, and 2) surround yourself with people who will keep encouraging you to do good long term.
I think you're just denying the possibility of value drift here. If you think it exists, then commitment strategies could make sense. If you don't, they won't.
I disagree - I think you can believe "value drift" exists and also allow your future self autonomy.
My current "values" or priorities are different from my teenage values, because I've learned and because I have a different peer group now. In ten years, they will likely be different again.
Which "values" should I follow: 16-year-old me, 26-year-old me, or 36-year-old me? It's not obvious to me that the right answer is 26-year-old me (my current values).