Concrete Ways to Reduce Risks of Value Drift


This post is motivated by Joey’s recent post on ‘Empirical data on value drift’ and some of the comments. Its purpose is not to argue for why you should avoid value drift, but to offer ideas and tools for avoiding it if you want to do so.

Introduction

“And Harry remembered what Professor Quirrell had said beneath the starlight: Sometimes, when this flawed world seems unusually hateful, I wonder whether there might be some other place, far away, where I should have been…

And Harry couldn’t understand Professor Quirrell’s words, it might have been an alien that had spoken, (...) something built along such different lines from Harry that his brain couldn’t be forced to operate in that mode. You couldn’t leave your home planet while it still contained a place like Azkaban. You had to stay and fight.”

Harry Potter and the Methods of Rationality

I use the term value drift in a broad sense to mean certain life changes that would lead you to lose most of the expected altruistic value of your life. These could be a) changes to your value system or internal motivation, or b) changes in your life circumstances that make it difficult to implement your values. (I acknowledge that the term is non-ideal: value drift captures part a) well, while part b) might be better captured by 'lifestyle drift'; see the terminology discussion here.) On the motivational side, this could mean ceasing to see helping others as one of your life’s priorities (losing the ‘A’ in EA), or pivoting towards an ineffective cause area or intervention (losing the ‘E’ in EA).

Of course, changing your cause area or intervention to something equally or more effective within the EA framework does not count as value drift. Note that even if your future self were to leave the EA community, as long as you still see ‘helping others effectively’ as one of your top priorities in life, it might not constitute value drift. You don’t need to call yourself an EA to have a large impact. But I am convinced that EA as a community helps many members uphold their motivation to do the most good.

Most of the potential value of EAs lies in the mid- to long-term, when more and more people in the community take up highly effective career paths and build their professional expertise to reach their ‘peak productivity’ (likely in their 40s). If value drift is common, then many of the people currently active in the community will cease to be interested in doing the most good long before they reach this point. This is why, speaking for myself, losing my altruistic motivation in the future would be a small moral tragedy to my present self. I think that as EAs we can reasonably have a preference for our future selves not to abandon our fundamental commitment to altruism or effectiveness.

Caveat: the following suggestions are all very tentative and largely based on my intuition about what will help me avoid value drift; please take them with a large grain of salt. I acknowledge that other people function differently in some respects, that some of the suggestions below will not be beneficial for many people, and that some could even be harmful. Also keep in mind that some of the suggestions might involve trade-offs with other goals. A toy example to illustrate the point: it might turn out that getting an EA tattoo is a great commitment mechanism; however, it could conflict with the goal (among others) of spending your limited weirdness points wisely and might negatively affect how EA is perceived by people around you. Please reflect carefully on your personal situation before adopting any of the following.

What you can do to reduce risks of value drift:

  • Beware of falling prey to cognitive biases when thinking about value drift: You probably systematically underestimate a) the likelihood that you will change significantly in the future (the end-of-history illusion) and b) the role that social dynamics play in your motivation. There is a danger both in believing that your fundamental values will not change, or that you control how they will change, and in believing that your mind works radically differently from other people’s (e.g. the atypical mind fallacy or bias blind spot); for instance, that your motivation is grounded more in rational arguments and less in social dynamics than other people’s. In particular, beware of base rate neglect when judging the risk of value drift happening to you to be very low; Joey’s post provides a very rough base rate for orientation.
  • Surround yourself with value aligned people: There is a saying that you become the average of the five people closest to you. Therefore, surround yourself with people who motivate and inspire you in your altruistic pursuits. From this perspective, it seems especially beneficial to spend time with other EAs to sustain and regain your motivation; though ‘value aligned’ people don’t have to be EAs, of course. However, beware of groupthink and of surrounding yourself only with people who are very similar to you. As a community we should retain our ability to take the outside view and engage critically with community trends and ideas. If you decide you want to spend more time with value aligned people / other EAs, here are some concrete ways: make an effort to have regular social interactions with value aligned people (e.g. meeting for lunch, dinner or coffee), engage in or start your own local EA chapter, attend EA Global conferences or retreats, become friends with EAs, complete internships at EA-aligned organisations, get in touch with value aligned people and other EAs online to chat/skype and exchange ideas, share a flat, etc. Avoiding value drift might increase the importance you should place on living in an EA hub, such as the Bay Area, London, Oxford or Berlin, or other places with a supportive community.
  • Discount the expected value of your longer term altruistic plans by the probability that they will never be realised due to value drift (see Joey’s post for a very rough base rate). This consideration might lead you to place relatively more weight on how you can achieve near term impact or reduce risks of value drift. However, a counter-consideration is that your future self will have more skills, knowledge and resources to do good, which could make capacity building in the near term extremely valuable. Attempt to balance these considerations – the risk of value drift tomorrow against the risk of underinvesting in building your capacity today.
  • Make reducing risks of value drift a top altruistic priority: Think about whether you agree that most of the potential social impact of your life lies several years or decades in the future. If yes, then thinking about risks of value drift in your own life and implementing concrete steps to reduce them is likely to be among the highest expected value activities for you in the short term. I expect that learning more about the causes of value drift at the individual level has a high moral value of information, by making it easier to anticipate and avoid future life circumstances that contribute to it. Joey’s post indicates that value drift occurs for various reasons, and many of these seem to be circumstantial rather than stemming from disagreement with fundamental EA principles (e.g. moving to a new city without a supportive EA community, transitioning from university to the workforce, finding a non-EA partner and investing heavily in the relationship, marrying, having kids, etc.).
  • Think about what your priorities are in life: There are many different ways to lead a happy and fulfilling life. A subset of those ways revolve around altruism. And a subset of these count as effectively altruistic. While you should be careful not to sacrifice your long-term happiness to short-term altruistic goals – being unhappy with your way of life, even if it does a ton of good in the short term, is a sure way to lose your motivation and pivot over time – there are ways to live a very happy and fulfilled life that is also dedicated to EA principles.
  • Confront yourself with your major motivational sources regularly: This is related to the above point. For example, talk to other EAs about what motivates you and them, reread your favourite book by your preferred moral philosopher, watch motivating talks or read motivating articles (a quick shout-out for Nate Soares’ ‘On Caring’), or revisit whatever increased your motivation to become an EA in the first place. In addition, consider writing a list of personalised, motivational affirmations for yourself that you read regularly or when feeling low and unmotivated. When considering (re-)watching emotionally salient videos (e.g. slaughterhouse videos), please bear in mind that this can have traumatic effects for some people and might thus be counterproductive.
  • Send your future self letters: describing a) your altruistic motivation, b) wishes for how you should live your life in the years to come and including c) concrete resources (e.g. the new EA Handbook) to re-learn and potentially regain motivation. Consider adding d) a list of ways in which your present self would accept value changes to prevent your future self from rationalising value drift after the fact (e.g. value changes resulting from your future self being better informed, say, about moral philosophy and overall more rational – as opposed to purely circumstantial value drift).
  • Conduct (semi-)annual reviews and planning: By evaluating how your life is going according to your own priorities, goals and values, you can know whether you are still on track to achieving them or whether you should make changes to the status quo.
  • Really make bodily and mental health a priority: This is particularly important for the EA community, which is focused on (self-)optimization and where some people might be tempted in the short run to work very long hours, reduce sleep, neglect nutrition and exercise, and do other things that are neither healthy nor sustainable in the long run. Experiment with and adopt practices in your life to reduce the chance of a future (mental) health breakdown, which would a) be very bad in itself, b) radically limit your ability to do good in the short term, and c) could cause a reshuffling of your priorities or act as a Schelling point for your future self to disengage from EA. Julia Wise offers great advice on self-care and burnout prevention for EAs.
  • Make doing good enjoyable: This is related to the above point on mental health. By finding ways to make altruistic behaviour enjoyable, you create a positive emotional association with the activity. This should help you keep up the commitment in the long run. On the flipside, be careful about engaging in altruistic activities that you have (strong) negative associations with. Julia Wise writes, “effective altruism is not about driving yourself to a breakdown. We don't need people making sacrifices that leave them drained and miserable. We need people who can walk cheerfully over the world”. A further advantage of finding ways to combine effective altruism with ‘having fun’ or ‘being cheerful’ is that it will likely make EA much more attractive to others. Concretely, you might want to try the following: Many activities are more fun in a group than alone, so engage in altruistic endeavours together with others if possible. Attempt to associate EA in your life not just with work, but also with socialising, friendship and fun. Make sure not to overwork yourself, and keep in mind that “the important lesson of working a lot is to be comfortable with taking a break” (from Peter Hurford’s ‘How I Am Productive’).
  • Do good directly: You might want to consider keeping habits of doing good directly, even in cases where these are not top-priority do-gooding activities in themselves. I believe this can help sustain and increase internal motivation to engage in altruistic activities, as well as cultivate a sense of ‘being an altruistic person’. For example, you could live veg*an, live frugally, donate some amount of money every year (even if the sums are small) and keep up to date with cause area and charity recommendations when making your donation decisions. However, as a counter to this point, someone has argued to me that spending willpower on low-impact activities might lead to ego depletion (note that this effect is disputed) or compassion fatigue for some people, thereby decreasing their motivation to engage in high-impact behaviour. Regarding career choice, you might see reducing risks of value drift as one reason to place a higher weight on direct work or research within an EA-aligned organisation relative to other options such as earning to give or building career capital.
  • Consider ‘locking in’ part of your donation or career plans: While the flexibility to change your plans and retaining future option value are important considerations, in some cases making hard-to-reverse decisions can help avoid value drift. Application for career planning: be wary of building very general career capital for a long time, “particularly if the built capacity is broad and leaves open appealing non-altruist paths”, Joey writes. Instead you might consider specialising and building more narrow, EA-focused career capital (which is endorsed by 80,000 Hours for people focusing on top-priority paths anyway). However, in this article Ben Todd discusses some counterarguments to locking in your career decisions too early. Application for donations: Consider putting your donations in a donor-advised fund instead of a savings account, and potentially take a donation pledge (see point below). Joey writes, “that way even if you become less altruistic in the future, you can’t back out on the pledged donations and spend it on a fancier wedding or a bigger house”.
  • Consider taking the Giving What We Can pledge: For me, the ‘lock in’ aspect of the pledge as a commitment device was among the strongest reasons to take it. It is worth pointing out, though, that taking the pledge could have downsides for some people (e.g. losing flexibility and falling prey to the overjustification effect; for details, read Michael Dickens’s post).
  • Commit yourself publicly: This is another form of ‘lock in’. For example, you could participate in an EA group, write articles describing EA and your motivation to dedicate your life to doing the most good, post on social media about this, talk to other people about EA, be public about your EA career and donation plans, wear EA T-shirts, etc. The idea is to engineer peer pressure for your future self and a potential loss of social status that could come with abandoning EA principles; I believe this works (subconsciously) for many as a motivational driving force to stay engaged. For this strategy to work, it seems to matter more what you think your peers think of you than what they actually think of you. Having said that, I encourage fostering a social norm among EAs not to shame or blame others when value drift happens to them, in line with the overall recommendation for EAs to be especially nice and considerate.
  • Relationships: For those looking for a partner, I endorse the recommendation of generally just choosing whoever makes you happiest. For most people this already includes finding partners who share their values. It is worth pointing out that avoiding value drift might give you an additional reason to place some weight on finding partners who share your values and wouldn't put you under long-term pressure to give up your altruistic commitments or make them much harder to implement. Concretely, you might consider looking for partners via platforms that allow you to share a lot about yourself and don’t match you with people with opposing values (e.g. OkCupid).
  • Apply findings of behavioural science research: I suspect that there are relevant insights from the research on nudging or on successful habit creation and retention (e.g. see these articles, one & two), that can be applied to help you avoid long-term value drift. One way to use nudges to make yourself engage in a desired altruistic behaviour is by making the behaviour the default option. For instance, you might set up automated, recurring donations (i.e. donating as default option) or, Joey writes, “ask your employer to automatically donate a pre-set portion of your income to charity before you even see it in your bank account”. As another example, by working for an EA aligned organisation you can make high-impact direct work or research your default option.
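To make the discounting idea from the list above concrete, here is a minimal sketch (all numbers are hypothetical illustrations, not estimates from this post or Joey’s): the expected value of a multi-year plan shrinks noticeably once each year is weighted by the probability that you are still engaged.

```python
# Hypothetical illustration of discounting a long-term altruistic plan
# by an assumed constant annual probability of value drift.

def drift_adjusted_value(annual_value, years, annual_drift_prob):
    """Sum the value of each year, weighted by the probability of
    not having drifted by that year."""
    total = 0.0
    for year in range(1, years + 1):
        p_still_engaged = (1 - annual_drift_prob) ** year
        total += annual_value * p_still_engaged
    return total

# A 20-year plan worth a nominal 1.0 unit per year, with an assumed
# 5% yearly drift risk, loses roughly 40% of its naive expected value:
naive = 20 * 1.0
adjusted = drift_adjusted_value(1.0, 20, 0.05)
print(naive, round(adjusted, 2))  # 20.0 vs. about 12.19
```

This is only a toy model; in reality drift risk is probably not constant across years (circumstantial transitions like graduating or moving cluster it), and as the counter-consideration above notes, later years may also be worth more due to accumulated skills and resources.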

What EA organisations can do to deal with value drift:

  • Encourage norms of considerateness, friendliness and welcomingness within the EA community, which is beneficial in its own right but also helps keep community members’ motivation high.
  • Conduct further research on causes of value drift and how to avoid it. An obvious starting point is researching the EA ‘reference class’, i.e. looking at the value drift experiences of other social movements. I acknowledge that many EA organisations have already spent significant efforts on similar research projects (e.g. Open Philanthropy Project, Sentience Institute). In particular, there might be ways for Rethink Charity to expand the EA survey to gather more rigorous data on value drift (selection effects are obviously problematic – the people whose values drifted the most will likely not participate in the survey).
  • Continue to support and expand opportunities for community members to surround themselves with other great people, e.g. by organising EAG(x) conferences and EA retreats, supporting local chapters and creating friendly and welcoming online communities (such as this forum or EA Facebook groups).
  • Incorporate the findings of research on value drift into EA career advice, especially when recommending careers whose value will only be realized decades in the future. Rob Wiblin has already indicated that 80,000 Hours is considering incorporating this into their discussion of discount rates.

I would highly appreciate your suggestions for concrete ways to reduce risks of value drift in the comments.

I warmly thank the following people for providing me with their input, suggestions and comments to this post: Joey Savoie, Pascal Zimmer, Greg Lewis, Jasper Götting, Aidan Goth, James Aung, Ed Lawrence, Linh Chi Nguyen, Huw Thomas, Tillman Schenk, Alex Norman, Charlie Rogers-Smith.