Why It’s Important to Know the Risk of Value Drift
Value drift is the idea that over time, people become less motivated to do altruistic things. It should not be confused with changing cause areas or methods of doing good. Value drift has strong precedent in related domains, both ethical commitments (such as staying vegetarian) and behaviors that take sustained willpower (such as maintaining a healthy weight).
Value drift seems very likely to be a concern for many EAs, and if it were a major concern, it would substantially affect career and donation plans.
For example, if value drift rarely happens, putting money into a savings account with the intent of donating it might be basically as good as putting it into a donor-advised fund. However, if the risk of value drift is higher, a dollar in a savings account is more likely to later be used for non-altruistic reasons, and thus not nearly as good as a dollar put into a donor-advised fund, where the money can only go to a registered charity.
In a career context, a plan such as building career capital for 8 years and then moving into an altruistic job looks much better if value drift is rare than if it is common. The more common value drift is, the stronger near-term impact plans are relative to longer-term ones. For example, you might get an entry-level position at a charity and build capacity through work experience. This can be slower at building your CV than getting a degree or working in a low-impact but high-prestige field, but it has impact right away, which matters more if the risk of value drift is high.
Despite its relevance to important questions, value drift has rarely been discussed or studied. One reason it is so under-studied is that it takes a long time to get good data.
I have been in the EA movement for ~5 years. I decided to pool some data on contacts I met in my first year of EA. I only included people who would have called themselves EAs for 6 months or longer (I would not include someone who was only into EA for a month and then disappeared), and who took some sort of EA action (working for an EA org, taking the GWWC pledge, running an EA group). I also only included people who I knew and kept in touch with well enough to know what happened to them (even if they left the EA movement). It is ultimately a convenience sample, but it was drawn from working for 4 current EA orgs and living in 4 different countries over that time, so it's not focused on a single location or organization.
I also broke the groups down into ~10% donors and ~50% donors, because I have often heard people express more or less concern about one of these groups vs the other. These broad groups are not limited to people earning to give. Someone who is working heavy hours for an EA organization and making most of their life decisions with EA as their number one priority would be counted in the 50% group. Someone running an EA chapter who makes decisions with EA as a factor, but prioritizes other factors above it, would be put in the 10% group. The percentages are rough proxies for how important EA is in these people's lives, not strictly financial donations. I did not count changing cause areas as value drift (e.g. changing from donating 10% to MIRI to AMF) -- only different levels of overall altruistic involvement.
The results over 5 years are as follows:
16 people were ~50% donors → 9/16 stayed around 50%
22 people were ~10% donors → 8/22 stayed around 10%
No one moved from the 10% category to the 50% category, and I only counted fairly noticeable changes (if someone changed their donations from 50% to 40%, I would not have the resolution to notice).
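Taking the counts above at face value, the headline drift rates fall out of simple arithmetic (a sketch, assuming the reported numbers are exact):

```python
# Drift rates over 5 years, per group and overall, from the tallies above.
groups = {
    "~50% donors": (16, 9),   # (people at start, people who stayed)
    "~10% donors": (22, 8),
}

for name, (started, stayed) in groups.items():
    drifted = started - stayed
    print(f"{name}: {drifted}/{started} drifted ({drifted / started:.0%})")

total_started = sum(started for started, _ in groups.values())
total_stayed = sum(stayed for _, stayed in groups.values())
overall = (total_started - total_stayed) / total_started
print(f"overall: {overall:.0%} drifted")
```

This gives roughly 44% drift in the 50% group, 64% in the 10% group, and about 55% overall, which is where the "roughly 50% over 5 years" figure below comes from.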
Value drift was high across both groups, with roughly 50% of the population drifting over 5 years. I talked to many of those people about value drift and their thoughts on long-term altruism, and most of them, like most people I talk to now, had previously been very confident they would stay altruistic in the long term. The reasons people value drifted were a mix, with no single consistent source, although life changes were a large factor for many (e.g. moving from university to the workforce, changing cities or workplaces, marrying, or having kids).
Interestingly, this data also sheds a little light on concerns about "pushing yourself too hard" vs "taking it too easy on yourself", with the more involved or dedicated group value drifting noticeably less (~44% vs ~64%).
Overall, these results seem pretty scary to me, especially since there's a natural selection effect where I tend to make friends who are more dedicated and drift apart from the ones who leave the movement. It's also worth noting that there have not been particularly major controversies or problems in the EA movement during this period that would cause more value drift than at any other time. Historically, many EAs have been young and not yet starting families, so we've arguably been seeing a period of artificially low value drift that will not be sustained as the movement ages and goes through standard life changes.
Of course, the data could be a lot better quality, and I wish it were measured in a more rigorous way. I would be keen to see any more data that anyone else has along these lines. Despite quality concerns, I still think we can draw some conclusions from it, particularly given that people are already effectively drawing conclusions about the likelihood of value drift from no data at all. I also spoke to a few EAs who have been around the movement a while, and this data broadly fit their intuitions, which gives me more confidence that it's not 100% off the mark.
The implications of this data are that people should be cautious of deferring impact to later, and should set up commitment devices to help them stick to what they care about.
One example: be wary of building capacity for very long periods of time, particularly if the built capacity is broad and leaves open appealing non-altruistic paths. Instead, see if you can build capacity in a way that also does good in the moment, such as getting work experience with non-profits.
For instance, if you want to have direct impact, volunteering for an organization and showing how good your work is will often serve you better than a degree, especially an unrelated one. It's also substantially faster, and it does good directly. Degrees largely signal that you're a hard worker with a decent amount of intelligence, and if all you are is a CV in somebody's inbox, that signal is very important. However, if you've been working alongside people for months, they'll already know these traits of yours.
Another way to build career capacity with value drift in mind is to get experience and credentials that make it harder to work in a non-altruistic area. This could be getting a degree in development economics instead of economics generally, or working for prestigious nonprofits instead of other prestigious organizations. Option value is great if the risk of value drift is low, but if it’s high, it makes it easier for you to slip. It’s like only having healthy food in the house. If the only easy options are also altruistic, you’re much more likely to stick it out in the long haul.
If your primary path to impact is donations and you want to keep value drift in mind, but you don't know where you want to give yet, don't save those donations. Put them into a donor-advised fund. That way, even if you become less altruistic in the future, you can't back out of the pledged donations and spend the money on a fancier wedding or a bigger house. You can also set up monthly donations, or ask your employer to automatically donate a preset portion of your income to charity before you even see it in your bank account.
Overall, if 50% of the EAs I met 5 years ago have value drifted, this should factor into your plans. Nobody thinks they'll value drift, just like no teenager with a fast metabolism thinks they'll be the one who gains weight when they hit middle age. By all means, indulge in junk food once in a while and don't constantly stress about calories, but put some time into setting up your life to make it easier to reach for a banana instead of the ice cream, or in this case, the altruistic path instead of the less altruistic one.
For a deeper dive into concrete ways to reduce value drift, check out this post.
The reference class I've always used when casually thinking about something like "value drift" is the original CEA team from 2011.
Here's my summary of the public information relevant to their "EA dedication" today (please do comment with additional relevant public info):
If I had to sum that up I'd say: ~75% of the CEA founding team (n=17) are still highly dedicated to doing the most good, 6.5 years on.
If early involvement and higher involvement/dedication are correlated (which I suspect they are), this data fits well with the following observation:
The CEA founding team seems like close to the best case for value drift, because to found CEA one must have a much higher baseline inclination towards EA than the average person. They also probably have a lot of power, which helps them control their environment, while many EAs would be forced into non-EA lifestyles by factors beyond their control. So a 25% drift rate among the original CEA team feels scarier to me than 40-70% among average EAs.
I'm not so convinced on this. I think the framing of 'this was the founding team' was a little misleading: in 2011 all of us were volunteers and students. The bar was relatively low: doing ~5 hours a week of volunteering for EA for ~1 year. Obviously students are typically in a uniquely good position for having time to volunteer, but it's not clear all the people on this list had uniquely large amounts of power. Also, I think situational effects were still strong: it made a huge difference to what I did that I made a few friends who were very altruistic and had good ideas of how to put that into practice. I don't think we can assume that all of us on this list would have displayed similarly strong effective altruist inclinations without having met others in the group.
I think that's basically right, though I also have the intuition that drift from the very early days will be higher, since at that point it was undecided what EA even was, and everyone was new and somewhat flung together.
This is really helpful, thanks.
It's interesting to note that it's now two years later, and I don't think the picture above has really changed.
So the measured marginal drift rate is ~0%.
On the previous estimate of 25% leaving after 6.5 years, that's about 5% per year, which would have predicted 1.4 extra people leaving in two years.
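A constant-hazard version of that back-of-the-envelope (taking n = 17 from the tally above; the compounding assumption is mine, which is why the two-year prediction lands a bit below the linear figure):

```python
# Annualize the ~25% drift observed over 6.5 years, assuming a
# constant per-year hazard h with (1 - h) ** years == 1 - drifted.
n, drifted, years = 17, 0.25, 6.5

annual_hazard = 1 - (1 - drifted) ** (1 / years)
print(f"annual drift hazard: {annual_hazard:.1%}")  # roughly 4-5% per year

# Expected further departures over the next 2 years among the survivors.
survivors = n * (1 - drifted)
expected_leavers = survivors * (1 - (1 - annual_hazard) ** 2)
print(f"expected departures in 2 more years: {expected_leavers:.1f}")
```

Either way, the prediction is on the order of one extra departure over two years, so an observed change of zero is not a large surprise on this sample size.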
Of course these are tiny samples, but I think our expectation should be that 'drift' rates decrease over time. My prior is that if someone stays involved from age 20 to age 30, there's a good chance they stay involved for the rest of their career, so my best guess is that they stay involved for at least another 10 years.
If I eyeball the group above, my guess is that this pattern also holds if we look back further, i.e. there was more drift in the early years among people who had been involved for less time.
One small comment on the original analysis is that in addition to how long someone has already been involved, I expect 'degree of social & identity involvement' to be a bigger predictor of staying involved than 'claimed level of dedication' e.g. I'd expect someone who works at an EA org is more likely to stay involved than someone who says they intend to donate 50% but doesn't have any good friends in the community. It would be cool to try to do an analysis more based around that factor, and it might reveal a group with lower drop out rates. The above analysis with CEA is better on these grounds but could still be divided further.
That doesn't seem right - since this comment was made, Holly's gone from being EA London strategy director to not really identifying with EA, which is more like the 5% per year.
Since the comment was made, Rob Gledhill has returned to CEA as the CBG Programme Manager. (Not totally confident that they are the same person though)
Thanks for collecting the data Joey! Really useful.
i) I'm not sure whether 'value drift' is a good term to describe loss of motivation for altruistic actions. I'm also not sure whether the data you collected is a good proxy for loss of motivation for altruistic actions.
To me the term value drift implies that the values of the value-drifting person are less important to them than they used to be, as opposed to being harder to implement. Your data is consistent with both interpretations. I also wouldn't say someone who still cares just as much about their values but finds it harder to stay motivated has 'value drifted'.
If we observe someone moving to a different location and then contributing less EA wise, then this can have multiple causes. Maybe their values actually changed, maybe they lost motivation or EA contributions have just become harder to do because there's less EA information and fewer people to do projects with around.
As the EA community we should treat people sharing goals and values of EA but finding it hard to act towards implementing them very differently to people simply not sharing our goals and values anymore. Those groups require different responses.
ii) This is somewhat tangential to the post, but since having kids came up as a potential reason for value drifting, I'd like to mention how unfortunate it can be for people who have had kids if other EAs assume they have value drifted as a result.
I've had a lot of trouble within the last year in EA spaces after having a baby. EAs around me constantly assume that I suddenly don't care anymore about having a high impact and might just want to be a stay at home parent. This is incredibly insulting and hurtful to me. Especially if it comes from people whom I have known for a long time and who should know this would completely go against my (EA & feminist) values. Particularly bitter is how gendered this assumption is. My kids' dad (also an EA) never gets asked whether he wants to be a stay at home parent now.
I really had expected the EA community to be better at this. It also makes me wonder how many opportunities to contribute I might have missed out on. The EA community often relays information about opportunities only informally; if someone is assumed not to be interested in contributing, information about opportunities is much less likely to reach them. Thus the belief that EAs will contribute much less once they have kids might turn into a self-fulfilling prophecy.
I agree regarding implementation difficulties; long-term ones in particular (e.g. losing a visa for a place you were living in with a big EA community) can muddy the waters a lot. It's hard to get into the details, but I would generally consider someone not drifted if the change was clearly capacity-affecting (e.g. they got carpal tunnel) but outside of that they are working on the same projects they would have wanted to in all cases.
A more nuanced view might break it down into: "Value change away from EA", defined as changing fundamental ethical views, e.g. coming to value people within your country more than those outside it; and "Action change away from EA", defined as changing one of the fundamental applications of your still similarly held values, e.g. you still think being veg is good, but you are no longer veg after moving to a different, less conducive living situation.
With short- and long-term versions of both, and with it being pretty likely that "value change" would lead to "action change" over time, I used value drift as a catch-all for both of the above. It's also how I have commonly heard the term used, but I am open to changing it to something more descriptive.
“As the EA community we should treat people sharing goals and values of EA but finding it hard to act towards implementing them very differently to people simply not sharing our goals and values anymore. Those groups require different responses.”
I strongly agree. These seem to be very different groups. I also think you could even break it down further into “EAs who rationalize doing a bad thing as the most ethical thing” and “EAs who accept as humans that they have multiple drives they need to trade off between”. Most of my suggestions in the post are aimed at actions one could take now that reduce both “action change” and “value change”. Once someone has changed I am less sure about what the way forward is, but I think that could warrant more EA thought (e.g. how to re-engage someone who was disconnected for logistical reasons).
Sorry to hear you have had trouble with the EA community and children. I think it's one of the life changes that is generally updated too strongly on by EAs and assuming that a person (of any gender) will definitely value drift upon having children is clearly incorrect. Personally I have found the EAs who I have spoken to who have kids to be unusually reflective about its effects on them compared to other similar life changes, perhaps because it has been more talked about in EA than say partner choice or moving cities. When a couple who plans to have kids has kids and changes their life around that in standard/expected ways, I do not see that as a value drift from their previous state (of planning to have kids and planning to have life changes around that).
I also think people will run into problems pretty quickly if they assume that every time someone goes through a life change, the person will change radically and become less EA. I see it intuitively as more of a Bayesian prior. If someone has been involved in EA for a week and then is not involved for 2 weeks, it might be sane to consider the possibility of them not coming back. On the flip side, if an EA has been involved for years and was not involved for 2 weeks, people would think nothing of it. The same holds true for large life changes. It's more about the person's long-term pattern of behavior and a combined "overall" perspective.
My list of concerns about a new trend of EA’s “relaying information about opportunities only informally” is so long it will have to be reserved for a whole other blog post.
I still think you're focussing too much on changed values as opposed to implementation difficulties (I consider lack of motivation an example of those).
I think it's actually usually the other way around - action change comes first, and then value change is a result of that. This also seems to be true for your hypothetical Alice in your comment above. AFAIK it's a known psychology result that people don't really base their actions on their values, but instead derive their values from their actions.
All in all, I consider the ability to have a high impact EA-wise much more related to someone's environment than to someone's 'true self with the right values'. I would therefore frame the focus on how to get people to have a high impact somewhat differently: How can we set up supportive environments so people are able to execute the necessary actions for having a high impact?
And not how can we lock in people so they don't change their values - though the actual answers to those questions might not be that different.
I second being sorry about the trouble with EAs and kids. Having kids does make it more difficult to be a 50% EA, but there are definitely examples, such as Julia Wise/Jeff Kaufman, Toby Ord/Bernadette Young, and myself. As for the gendered response, about 3% of US stay-at-home parents are dads. But one time I thought through my friends, and it was 50%! Granted, they were pretty left-leaning, but so is EA. As an aside, now that young women make more money than young men (largely because women go to college at higher rates than men), if we made the decision just based on money, the majority of stay-at-home parents could be dads.
Thanks very much for doing this.
Could you possibly say more (i.e. as much as you can) about why people left? Moving city, leaving university or starting a family don't have to stop someone being an EA. More explanation seems needed. For instance, "X moved city" by itself doesn't really explain what happened, whereas "X moved city, didn't know any EAs and lost motivation without group support" or "Y started a family and realised they wanted a higher quality of life than they could find working for an EA org" do. Putting this in dating terms, one reason people sometimes give when they break up with someone is "I'm moving to city Z and it would never work" but that's not quite a sufficient/honest reason, which would be "I'm moving to Z and this will make things sufficiently hard I want to stop. If I liked you a lot more I'd suggest we do long distance; but I don't like you that much, so we're breaking up". I'd want to know if people stop 'believing' in EA, kept thinking it was important but lost motivation or something else.
Equally, I'd be interested if you did a survey of the people who stayed and asked why they stayed, to see what the differences were. If the explanations for the remainers and the leavers are consistent with each other, then they don't provide any explanatory power.
I'd add the (usual) proviso that people don't really know why they do what they do and self-reports are to be treated with some suspicion. It's generally more useful to see what people do rather than listen to what they say.
Finally, it would be interesting to compare these retention ratios to other things - religion, using a given tech product, dieting, etc. It strikes me that 50% retention after 5 years might be pretty good in some sense, though I agree it's also worrying put another way.
So I want to be pretty careful about going into details, but I can mix some stories together to make a plausible sounding story based on what I have heard. Please keep in mind this story is a fiction based off a composite of case studies I’ve witnessed, not a real example of any particular person.
Say Alice is an EA. She learns about it in her first year of college. She starts by attending an EA event or two and eventually ends up being a member of her university chapter and pretty heavily reading the EA Forum. She takes the GWWC pledge and a year later she takes a summer internship at an EA organization. During this time she identifies strongly with the EA movement and considers it one of her top priorities. Sadly, while Alice is away at her internship her chapter suffers, and when she gets back she hits a particularly rough year of school; due to long-term concerns, she prioritizes school over setting the chapter back up, mainly thinking about her impact. The silver lining is that at the end of this rough year she starts a relationship. The person is smart and well suited, but does not share her charitable interests. Over time she stops reading the EA content she used to, and the chapter never gets restarted. After her degree ends she takes a job in consulting that she says will give her career capital, but she has a sense her heart is not as into EA as it once was. She knows a big factor is that her boyfriend's family would approve of a more normal job than a charity-focused one, plus she is confident she can donate and have some impact that way. Her first few paychecks she rationalizes as needing to move out and get established. The next few go to building up a safe 6-month runway. The donations never happen. There's always some reason or another to put it off, and EA seems so low on the priorities list now, just a thing she did in college, like playing a sport. Alice ends up donating a fairly small amount to effective charities (a little over 1%). Her involvement was at its peak when she was in college, and she knows her college self would be disappointed. Each choice made sense at the time. Many of them even followed traditional EA advice, but the end result is that Alice does not really feel she is an EA anymore. She has many other stronger identities.
In this story, with different recommendations from the EA movement and different choices from Alice, she could have ended up doing earning to give and donating a large percentage long term or working with an EA org long term, but instead she “value drifted”.
Many aspects of this story sound kinda like things that have happened to me to make me less hardcore. I definitely still strongly affiliate with EA, donate ~15% / $30K, and spend about 20hrs/week on EA projects, but my college EA idealistic self expected me to donate ~$100K/yr by now or work full-time 60hrs/week on EA projects. I'm unsure how "bad" of a "value drift" this is, but definitely short of my full potential.
Maybe your college EA idealistic self's expectations were never that likely, so you shouldn't beat yourself up about them.
Thanks. I don't feel guilty about it. I just chose a different life. EA is still very important to me, but not as important as it once was. I think a lot of it is, like Joey said, the slow build up of small path changes over time.
If you feel you've become much less EA, I wonder what many others who were very into it must feel. From the outside you seem extremely involved - .impact/Rethink Charity do a huge amount with limited resources, and it seems like you do substantial volunteering with them, which doesn't seem like putting little of yourself into EA. Thanks for what you do.
Ah, that's great. Thanks very much for that. I think "dating a non-EA" is a particularly dangerous(/negative impact?) phenomenon we should probably be talking about more. I also know someone, A, whose non-EA-inclined partner, B, was really unhappy that A wasn't aiming to get a high-paying professional job and it really wrenched A from focusing on trying do the most useful stuff. Part of the problem was B's family wanted B's partner to be dating a high earner.
This comment comes across as a tad cult-y.
I did think that while writing it, and it worried me too. Despite that, the thought doesn't strike me as totally stupid. If we think it's reasonable to talk about commitment devices in general, it seems like one we ought to talk about in particular in one's choice of partner. If you want to do X, finding someone who supports you towards your goal of achieving X seems rather helpful, whereas a partner who discourages you from achieving X seems unhelpful. Nevertheless, I accept that one of the obvious warning signs of being in a cult is the cult leaders telling you to date only people inside the cult lest you get 'corrupted'...
A particular word choice that put me at unease is calling "dating a non-EA" "dangerous" without qualifying this word properly. It is more precise to say that something is "good" or "bad" for a particular purpose than to just call it "good" or "bad"; just the same with "dangerous". If you call something "dangerous" without qualification or other context, this leaves an implicit assumption that the underlying purpose is universal and unquestioned, or almost so, in the community you're speaking to. In many cases it's fine to assume EA values in these sorts of statements -- this is an EA forum, after all. Doing so for statements about value drift appears to support the norm that people here should want to stay with EA values forever, a norm which I oppose.
haha yeah that was my take. I think the best norm to propagate is "go out with whoever makes you happy"
I think that there should be no norm here and we should simply consider the fact that dating a non-EA may cause a value drift before making decisions. Being altruistic sometimes means making sacrifices to your happiness. If having less money, less time and no children can be amongst the possible sacrifices, I see no reason why limiting the set of possible romantic partners could not be one of possible sacrifices as well. People are diverse. Maybe someone would rather donate less money but abstain from dating non-EAs, or even abstain from dating at all. One good piece of writing related to the subject is http://briantomasik.com/personal-thoughts-on-romance/
Males having a “dating EAs only” rule is also dangerous (for the health of the community) when 70% of the community identifies as male and only 26% as female. It’d promote unhealthy competition. What is more, communities are not that big in many of the cities which for many people would make the choice very limited. Especially since we should probably avoid flirting with newcomers because that might scare them away.
Maybe the partner doesn't have to be an EA to prevent the value drift, maybe the important thing is that the partner is supportive of EA-type sacrifices. I'll put this as a requirement in my online dating profiles. I think that people who are altruistic (but not necessarily EAs) are especially likely to be supportive.
This is a useful analysis, I expect it will be incorporated into our discussion of discount rates in the career guide.
Perhaps I missed it but how many of the 7 who left the 50% category went into the 10% category rather than dropping out entirely?
I think this is a direction Julia and I could have gone around 2011. We didn't donate for a year (Julia was in grad school, I took a pay cut to work at a startup trying to maximize risk neutral returns) and it would have been easy to drift away.
This also fits my experience.
A few other implications if value drift is more of a concern:
Indeed. I think there are a whole set of implications of value drift when it comes to movement building, particularly recruiting younger people who will not create huge amounts of good for a while.
Upvoted because this is an important topic I've seen little discussion of. Although you take pains to draw attention to the limitations of this data set, these caveats aren't included in the conclusion, so I'd be wary of anyone acting on this verbatim. I'd be interested in seeing drop out rates in other social movements to give a better idea of the base rate.
I agree. Other movement data would be interesting. The most relevant data I have seen is various veg rate studies (which generally shows like 80% dropout overall or the average person staying veg ~4 years). e.g. https://animalcharityevaluators.org/research/dietary-impacts/vegetarian-recidivism/
Very interesting. As you say, this data is naturally rough, but it also roughly agrees with my own available anecdata (my impression is somewhat more optimistic, although attenuated by likely biases). A note of caution:
The framing in the post generally implies value drift is essentially value decay (e.g. it is called a 'risk', the comparison of value drift to unwanted weight gain/poor diet/ etc.). If so, then value drift/decay should be something to guard against, and maybe precommitment strategies/'lashing oneself to the mast' seems a good idea, like how we might block social media, don't have sweets in the house, and so on.
I'd be slightly surprised if the account of someone who 'drifted' would often fit well with the sort of thing you'd expect someone to say if (e.g.) they failed to give up smoking or lose weight. Taking the strongest example, I'd guess someone who dropped from 50% to 10ish% after marrying and starting a family would say something like, "I still think these EA things are important, but now I have other things I consider more morally important still (i.e. my spouse and my kids). So I need to allocate more of my efforts to these, and thus I can provide proportionately less to EA matters."
It is much less clear whether this person would think they've made a mistake in allocating more of themselves away from EA, either at t2-now (they don't regret they now have a family which takes their attention away from EA things), or at t1-past (if their previous EA-self could forecast them being in this situation, they would not be disappointed in themselves). If so, these would not be options that their t1-self should be trying to shut off, as (all things considered) the option might be on balance good.
I am sure there are cases where 'life gets in the way' in a manner it is reasonable to regret. But I would be chary if the only story we can tell for why someone would be 'less EA' are essentially greater or lesser degrees of moral failure, disappointed if suspicion attaches to EAs starting a family or enjoying (conventional) professional success, and caution against pre-commitment strategies which involve closing off or greatly hobbling aspects of one's future which would be seen as desirable by common-sense morality.
You discuss a case where there is regret from the perspective of both t1 and t2, and a case where there is regret from neither perspective. These are both plausible accounts. But there's also a third option that I think happens a lot in practice: Regret at t1 about the projected future in question, and less/no regret at t2. So the t2-self may talk about "having become more wise" or "having learned something about myself," while the t1-self would not be on board with this description and consider the future in question to be an unfortunate turn of events. (Or the t2-self could even acknowledge that some decisions in the sequence were not rational, but that from their current perspective, they really like the way things are.)
The distinction between moral insight and failure of goal preservation is fuzzy. Taking precautions against goal drift can look like a form of fanaticism, and commonsense heuristics speak against that. OTOH, not taking precautions seems like not taking the things you currently care about seriously (at least insofar as there are things you care about that go beyond aspects related to your personal development).
Unfortunately I don't think there is a safe default. Not taking precautions is tantamount to making the decision to be okay with potential value drift. And we cannot just say we are uncertain about our values, because that could result in mistaking uncertainty for underdetermination. There are meaningful ways of valuing further reflection about one's own values, but those "indirect" values, where one values further reflection, can also suffer from (more subtle) forms of goal drift.
What percent of those who drifted from the 50% category ended up in the 10% category instead of out of the movement entirely?
And would the graph of the number of people remaining in the 50% category over time look roughly linear or was drifting concentrated at the beginning or near the end? What about for the 10% category?
I did not break down the data that way when I made it, but a quick look would suggest ~75% moved from 50% to 10% and drifting was mildly concentrated at the beginning.
So, to confirm, are you saying that maybe 5 out of the 7 people who moved out of the 50% category moved into the 10% category? I think it's important to get clarity on this, since until encountering this comment I was interpreting your post (perhaps unreasonably) as saying that those 7 people had left the EA community entirely. If in fact only a couple of people in that class left the community, out of a total of 16, that's a much lower rate of drift than I was assuming, and more in line with anonymous's analysis of value drift in the original CEA team.
This is so necessary and helpful. This is a significant update for me toward a donor advised fund (and also reinforces my current practice of donating regularly rather than saving to donate).
This data to me suggests that the EA community may have made some mistakes in modeling our decisions as more rational than they are. Specifically, whether broad career capital makes sense depends a lot on whether we are rational and will optimize or whether we need commitment devices. Maybe we all need more of a behavioral econ update.
I think if people promise you that they'll do something, and then they don't answer when you ask if they did it, it's quite probable they did not do the thing.
Do you have any opinion on the role of community or social ties in preventing value drift, in addition to individualized commitment mechanisms like the GWWC Pledge?
Social ties seem quite important, particularly close ones (best friends, partners, close co-workers).
The social circle thing might interact in an interesting way with the apparently common insecurity of not being "EA enough". Suppose I think of myself as an EA, but due to random life fluctuation I find myself not being "EA enough" for some time. This makes me feel like an imposter at EA events, which makes me go to them less, which decreases my social ties to other EAs, which decreases my motivation for EA work, which makes me do less EA work, which makes me feel like more of an imposter at EA events. This feedback loop theory suggests that drifting out of EA social circles and having one's values drift are often intertwined phenomena.
Of course, that's just a guess. It seems like it would be valuable to get some anonymized stories from ex-EAs to see what is really going on.
Anyway, I think commenting on forums like this one can be good. Reading what other EAs are working on gets me excited about EA stuff, and leaving comments is a low-effort way to feel helpful. I don't typically feel like an imposter when I do this, because it usually seems like sharing my perspective would be valuable even if I was a complete non-EA.
What's your impression of how positively correlated close social ties are with staying altruistic among those individuals you surveyed?
My anecdata is that it's very high, since people are heavily influenced by such norms and (imagined) peer judgement.
Cutting the other way, however, people who are brought into EA _by_ such social effects (e.g. because they were hanging around friends who were EA, so they became involved in EA too rather than in virtue of having (always had) intrinsic EA belief and motivation) would be much more vulnerable to value drift once those social pressures change. I think this is behind a lot of cases of value drift I've observed.
When I was systematically interviewing EAs for a research project, this distinction, between social-network EAs and always-intrinsic EAs, was one of the clearest and most important distinctions that arose. I think one might imagine that social-network EAs would be disproportionately less involved, more peripheral members, whereas the always-intrinsic EAs would be more core, but actually the tendency was roughly the reverse. The social-network EAs were often very centrally positioned in higher staff positions within orgs, whereas the always-intrinsic EAs were often off independently doing whatever they thought was most impactful, without being very connected.
It appears the best of both worlds might be to seed local EA presences where the initial social network is composed of individuals who were always intrinsically motivated by EA and who were also friends. I wouldn't be surprised if that's the story behind many local EA communities which became well-organized independent of one another. Of course, if this is the key to building local EA presences as social networks which tend toward lower rates of value drift, the kind of data we're collecting so far won't be applicable to what we want to learn for long. The anecdata of EAs who have been in the community since before there was significant investment in and direction of movement growth won't be relevant when we're trying to systematize that effort in a goal-driven fashion. As EA enters another stage of organization, it will be a movement structured fundamentally differently from how it organically emerged from a bunch of self-reflective do-gooders finding each other on the internet 5+ years ago.
Does a donor-advised fund let you deduct money you put into the fund from your taxes? If so, that is a huge reason to use them.
Yes it does, and indeed that is another huge pro of them when compared to a normal savings fund. There are some cons: they are often cumbersome to first set up and require a fairly large minimum deposit. But overall they're something I wish more EAs considered.
The other disadvantage of donor advised funds is that they often have restricted investment options. However, my financial advisor finally found one with investment freedom, called the Community Foundation in Boulder (you don't have to be in Boulder, Colorado to use it, but you would need to be in the US to get the tax deduction).
Thank you very much for this important work. This should be an important consideration for everyone and an important factor in career planning. I'll make sure to say something about that in our local EA group at some point.
First of all, thanks for this post -- I think it's really valuable to get a realistic sense of how these beliefs play out over the long term.
Like others in the comments, though, I'm a little critical of the framing and skeptical of the role of commitment devices. In my mind, we can view commitment devices as essentially being anti-cooperative with our future selves. I think we should default to viewing these attempts as suspicious, similarly to how we would caution against acting anti-cooperatively towards any other non-EAs.
Implicit is the assumption that if we change, it must be for "bad" reasons. It's natural enough -- clearly we can't think of any good reasons, otherwise we would already have changed -- but it lacks imagination. We may learn of a reason why things are not as we thought. Limiting your options according to your current knowledge or preferences means limiting your ability to flourish if the world turns out to be very different from your expectations.
More abstractly, imagine that you heard about someone who believed that doing X was a really good idea, and then three years later, believed that doing X was not a good idea. Without any more details, who do you think is most likely to be correct?
(At the same time, I think we're all familiar with failing to achieve goals because we failed to commit to them, even as we knew they were worth it, so there can be value in binding yourself. It's also good signalling, of course. But such explanations or justifications need to be strong enough to overcome the general prior based on the above argument.)
GWWC says 4.8% per year attrition. If we say the OP data implies a half-life of 5 years with exponential decay, that is 13% attrition per year. That would mean an expected duration of being an EA of eight years. I think I remember reading somewhere that GWWC was only assuming three years of donations, so eight years sounds a lot better to me. Another thought is that the pledge has been compared with marriage, so we could look at the average duration of marriages. When I looked into this, it appeared to be fairly bimodal, with many ending relatively quickly, but many lasting till death do they part. GWWC argues that consistent exponential decay would be too pessimistic. If we believe the 13% per year attrition, that means we need to recruit 13% more people each year just to stay the same size.
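A quick sanity check of the arithmetic above, under the stated assumptions (exponential decay, a 5-year half-life read off the OP data; the half-life figure itself is a rough estimate, not something the OP computes):

```python
# Sanity-check the attrition arithmetic: a 5-year half-life under
# exponential decay implies ~13% annual attrition and an expected
# duration of roughly 8 years.
half_life = 5  # years, a rough reading of the OP data

# Annual retention implied by the half-life: 0.5^(1/5)
retention = 0.5 ** (1 / half_life)
attrition = 1 - retention  # ~0.129, i.e. ~13% per year

# Expected duration under constant annual attrition is 1 / attrition.
expected_years = 1 / attrition  # ~7.7, i.e. roughly 8 years

print(f"annual attrition:  {attrition:.1%}")
print(f"expected duration: {expected_years:.1f} years")
```

This reproduces both figures in the comment: ~13% annual attrition and an expected duration just under 8 years.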
Good work, it's great to have any numbers on this at all. Given these are acquaintances, I wonder if you could follow up to try to get some reasons from the drifters. I would like to be able to classify the reasons for changes in behaviour into one of the following two buckets: 1) I am lazier and more self-centred than in the past; 2) I was young and naive, and I know better now.
In combating future potential value drift, we are considering tactics to essentially coerce our future selves. If we are confident this is because of 1), then I think this coercion is merited, but if it's 2), then maybe we are compounding an error?
Excellent post! I think value drift is one of the largest challenges for local groups: many people who seemed enthusiastic don't show up after a couple of times, and it's hard to keep them motivated to keep pursuing the highest-expected-value option over the long term.
The thing is, how do you communicate the risk of value drift to others who are at risk? There is the problem of base rate neglect/bias blind spot: people think the risk does not apply to them. For example, multiple people have expressed that they don't understand why I took the giving pledge to commit my future self, while I believe I might otherwise not act on my (current) values.
Thanks for this, Joey!
I'd be very keen to see more thorough data on this, for example:
I think this is something we may look at with the 2018 EA Survey, hopefully in cooperation with GWWC and 80K and leveraging their data as well.
summary: changes in people's "values" aren't the same as changes in their involvement in EA and this analysis treats the two as the same thing; also, some observations from my own friendgroup on values changes v. retention
It sounds like no differentiation between "lowered involvement in EA and change in preferences" and "lowered involvement in EA while remaining equally altruistic" was made here, given the wording used in "The Data" section.
I can think of 3 people I've known who were previously EAs (by your six-months involvement definition) and then left, but who remained as altruistic as before, and two more who really liked the core ideas but bounced off within the first month and remained as altruistic as before. There's 2 I know (both met the six months involvement measure) who left and ended up being less altruistic afterwards.
Which, really, is irrelevant, since you'd need a much more systematic effort at data collection to reach any serious conclusions about 'value drift', but changes in people's "values" aren't the same as changes in their involvement in EA. I'm sure there's some non-EA literature on values and changes in values you'd benefit from engaging with.
(The two who became less altruistic were riding a surge of hope related to transhumanism that died down with time, and they left when that went out; the other five left for some mix of disliking the social atmosphere of EA and thinking it ineffective at reaching its stated goals. These are very different types of reasons to leave EA! I put scare quotes around values and value drift because I find it more informative to look at what actions people take rather than what values are important to them.)
I'd like to see this. I have some data on this from the EA Survey and intend to follow up on something similar later this year.
Please do share that data when you get a chance. You guys have a lot of fascinating data in those survey results, and while I understand you have limited time/resources, it would be a shame to see them go untapped.
Thanks. Not publishing what I have on this is a 2017 regret of mine and I hope not to repeat it in 2018.
I think the 10% versus 50% descriptions are useful, and I'm surprised I have not seen them before on the forum, except for my comment here. In that comment, I was arguing that free time could be defined as 40 hours a week, so if you volunteer effectively four hours a week, that would make you a 10% EA. But this also means if you donate 50% and spend 50% of your free time effectively (like I try to do), you would be a 100% EA. Another way is having an EA job (which is typically half of market salary, so it is like donating 50%) that is nominally 40 hours a week, but actually working 60 hours a week, so it is like you are volunteering half of your "free" time. Then it would be nice clean orders of magnitude. But 100% is not very common, and it could be misleading, so 50% is ok.
If you gave 60% of your income would that make you a 110% EA? If so, I think that mostly just highlights that this metric should not be taken too seriously. (I was going to criticize it on more technical grounds, but I think to do so would be to give legitimacy to the idea that people should compare their own "numbers" with each other, which seems likely to be a bad idea.)
Correct - to make this physically realistic (not able to exceed 100%), you would need to say that someone who donates 10% of money and does no volunteering is dedicating 5% of their total "potential effort." But it is more intuitive to say that GWWC is a "10%" EA.
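To make the "potential effort" normalization above concrete, here is a toy sketch. The 40-hour free-time baseline and the averaging of the two fractions are my reading of the thread, not a canonical metric, and the cap at 100% reflects the "physically realistic" constraint mentioned in the comment:

```python
def ea_fraction(donation_fraction, volunteer_hours_per_week,
                free_hours_per_week=40):
    """Toy 'potential effort' metric from this thread: the average of
    the fraction of income donated and the fraction of free time
    volunteered, capped at 1.0 so it cannot exceed 100%."""
    time_fraction = volunteer_hours_per_week / free_hours_per_week
    return min((donation_fraction + time_fraction) / 2, 1.0)

# A GWWC pledger donating 10% with no volunteering dedicates
# 5% of total "potential effort" (though is called a "10%" EA):
print(ea_fraction(0.10, 0))

# Donating 50% and volunteering 20 of 40 free hours -> 0.5:
print(ea_fraction(0.50, 20))
```

Under this normalization, donating 60% with no volunteering gives 30%, not 110%, which sidesteps the objection in the sibling comment at the cost of being less intuitive.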
For people reading this post now as part of the decade review, I think this article was useful to get people thinking about this issue, but the more comprehensive data in this later post is more useful for actually estimating the rate of drop out.
I don't think the source of value drift is mysterious.
Can we get the 1-3 sentence summary before committing to 43min of talks?
Oh, I meant those as references, thinking many had already seen them, rather than "watch these". They are both talks about the motivations for altruism given at EAG 2014-2015 by economists. tl;dr: if you're surprised that altruistic intentions tend to drift with age, or by other types of predictable value shifts, your model is probably not taking into account some things known by economists.
This doesn't add much to the conversation. Obviously people get over-excited by EA, and the personal and philosophical opportunities it provides to make an impact will lead lots of people to be overconfident in their long-term commitment; they'll turn out not to be as altruistic as they think. The OP is already concerned about a default state of people becoming less altruistic over time, and focuses on how we can keep ourselves more altruistic than we'd otherwise tend to be, long-term, through things like commitment mechanisms. So theories of psychology which don't specify the mechanisms by which commitment devices fail aren't precise enough to be useful in answering the question of what to do about value drift to our satisfaction.
I wasn't commenting on the overall intention but on enumerations of causal levers outlined by economists in the talks given. I was objecting to the frame that these causal levers are obfuscated. I think presenting them as such is a way around them being low status to talk about directly.
Thanks for the context. That makes a lot of sense. I've undone my downvote on your parent comment, upvoted it, and also upvoted the above. (I think it's important, as awkward as it might be, for rationalists and effective altruists to explicate their reasoning at various points throughout their conversation, and how they update at the end, to create a context of rationalists intending their signals to be clear and received without ambiguity. It's hard to get humans to treat each other with excellence, so if our monkey brains force us to treat each other like mere reinforcement learners, rationalists might as well be transparent and honest about it.)
It would appear the causal levers aren't obfuscated. Which ones do you expect are the most underrated?