Recently there has been a spate of discussion on the EA Forum and elsewhere about increased spending in EA and its potential negative consequences.
There are various potential concerns one might have about this, and addressing all of them would require a much longer discussion. But it seems like one common worry is something like:
- Having a frugal EA movement has positive selection effects.
- Living frugally is a costly signal.
- This ensures people will only want to join if they’re very altruistically motivated.
- Conversely, spending more on community building, EA salaries, etc. has negative selection effects.
- It will attract people who are primarily motivated by money rather than altruism.
I think this argument conflates two separate questions:
- How demanding should EA be?
- How should we value EA time compared to money?
These two questions seem totally separable to me. For instance, say A works at an EA org:
- His work produces $500/hour of value.
- He gets paid $50/hour by his employer.
- He has a fraudulent charge of $100 on a card that he could dispute.
- This requires him to spend 1 hour on the phone with customer service.
- He is indifferent between this and spending an hour doing a relatively unpleasant work task.
As things currently stand, he might spend the hour to recover the $100. But I think it would clearly be better if someone paid him $100 to spend an hour doing the unpleasant work task for his organization rather than trying to recover the money. It would keep his utility (and thus demandingness) constant, while resulting in $400 of surplus value created.
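To spell out the arithmetic (using only the illustrative numbers above): A ends up with the $100 and spends the hour on a task he's indifferent between either way, so his utility is unchanged, and the only difference is what his hour produces:

\[
\underbrace{\$500}_{\text{value of A's hour of work}} - \underbrace{\$100}_{\text{payment to A}} = \$400 \text{ of surplus.}
\]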
I think in the current EA movement:
- I feel unsure about whether it would be better to increase or decrease demandingness.
- It seems like a tough tradeoff.
- Increasing demandingness pushes people to do and sacrifice more, and it selects for altruistically motivated people.
- On the other hand, it may exclude people who could make valuable contributions but aren't as dedicated, and it can lead to demotivation and burnout.
- I do think it would be better, on average, to increase the monetary value we assign to the time of people doing EA work.
- Given the current stock of funding vs human capital in EA, I think the time of the highest performing EAs is worth a lot.
- I suspect the current artificially low salaries in EA often lead to people making inefficient time/money tradeoffs.
I think many people have an intuitive worry that paying EAs more will cause EA to lose its edge. Initially EA was a scrappy movement of people who really cared, and they worry that giving people more money will make it soft and cushy.
I’m sympathetic to that, but I think there are a lot of ways EA can be demanding that don’t rely on frugality. We could expect EAs to:
- work 7 days a week
- prioritize work over socializing or hobbies
- work long hours and be constantly available/responsive outside work hours
- leave a fun and/or high-status job for an unpleasant and/or low-status one
- do valuable job tasks even when they don’t enjoy them
- move to a location they don’t like for work
- prioritize their impact over relationships with family, friends, romantic partners, or children
- admit when they were wrong even if it’s painful
- train skills they think are valuable even if it feels unnatural and hard
- act in a way that represents EA well, even if they’d rather be petty and uncharitable
- practice the virtue of silence
There are probably many other things I’m not thinking of here that are both demanding and potentially quite valuable. Many of the most effective EAs I know have done and continue to do a bunch of these things, and I think they’re pretty awesome and hardcore for doing so.
I think these are all more efficient costly signals than frugality, but my impression is that they tend to be regarded by people (both inside and outside EA) as worse signals of altruism, and I’m wondering why that is.
Thanks Caroline for writing this! I think it's a really rich vein to mine because it pulls together several threads I've been thinking a lot about lately.
One issue it raises is whether we should care about the "altruist" in effective altruists. If someone is doing really useful things because they think FTX will pay them a lot of money or fund their political ambitions, is this good because useful things happen, or bad because they won't be a trustworthy agent for EA when put into positions of power? My instinct is to prefer giving people good incentives over selecting for people who are virtuous: I think virtue tends to be very situationally dependent, and even very admirable people can do bad things and self-deceive if it's in their interest to do so. But it's obviously not either-or. I also tend to have fairly bourgeois personal preferences and think EA should aspire to universality, such that lots of adherents can be materially prosperous and conventionally successful and either donate ~20% of their income or work/volunteer for a useful cause (a sort of prosperity-gospel form of EA amenable to wide swathes of the professional and working class, rather than a self-sacrifice form that could be purer).
A separate issue is one of community health. On an individual level it might be fine if people join EA because the retreats are lit and the potential for power and status is high, but as a group there may be something like a tipping point where people's self-identity changes as the community in fact prizes the perks and status over results. This could especially be a concern insofar as 1. goals that are far off make it easy to self-deceive about progress and 2. building the EA community can be seen as an end in itself in a way that risks circularity and self-congratulation. You can say the solution here is to really elevate people who do in fact achieve good results (because achieving good things for the world is what we care about), but lots of results take a long time to unfold (even for "near-termist" causes) and are uncertain (e.g. Open Phil's monetary policy and criminal justice reform work, both of which I admire and think have been positive). For example, while I've been in the Bahamas, people have been very complimentary of 1Day Sooner (where I work and which I think EAs tend to see as a success story). I'm proud of my work at 1Day and hopeful that what we've already done is expanding the use of challenge studies to develop better vaccines, but despite some intermediate procedural successes (positive press coverage, some government buy-in and policy choices, some academic and bioethics work), I think the jury is very much still out on what our impact will end up being, and most of our impact will likely come from future work.
The point about self-identity and developing one's moral personhood really drives me in a direction of wanting to encourage people to make altruist choices that are significant and legible to themselves and others. For example, becoming a kidney donor made me identify myself more with the desire to have an impact which led me further into doing EA types of work. I think the norm of donating a significant portion of your income to charity is an important one for this reason, and I've been disappointed to see that norm weaken in recent years. I do worry that some of the types of self-sacrificing behavior you mention aren't legible enough or state change-y enough to have this permanent character/self-identity building effect.
There's an obvious point here about PR, and I do think committing to behavior that we're proud to display in public is an important principle (though not one that I think necessarily cuts against paying EAs a lot). First, public display is epistemically valuable because (a) it unearths criticisms and ideas an insular community won't necessarily generate and (b) views that have overlapping consensus among diverse audiences are more likely to be true. Second, hiding things isn't a sustainable strategy and also looks bad on its own terms.
A last thought, imperfectly related: I do think there may be a flaw in EA considering meta-level community building on the same plane as object-level work, and this might be driving a bit of inflation in meta-level activities that manifests itself in opulent EA college resources (and maybe some other things) that are intuitively jarring even when they can seem intellectually justified. If you consider object- and meta-level stuff on the same plane, the $1 invested in recruiting EAs who then eventually spend $10 and recruit more EAs seems like an amazing investment (way better than spending that $1 on an actual object-level EA activity). But this intuitively seems to me like it's missing something, and discounting the object-level $ by the $ spent on the meta-level fundraising needed to generate it doesn't seem to resolve the problem. I'm not sure, but I think the issue (and this also applies to other power-seeking behavior like political fundraising) is that the community building is self-serving (not "altruistic") and from a view outside of EA does not seem morally praiseworthy. We could take the position that that outside view is simply wrong insofar as it doesn't take into account the possibility that we are in fact right about our movement being right. The Ponzi-ishness of the whole thing doesn't quite sit well with me, but I haven't come to a well-reasoned view.
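To make that circularity worry concrete, here's a toy model (the multiplier m and reinvestment fraction f are made-up parameters for illustration, not estimates): suppose every dollar spent on community building brings in m dollars of new EA resources, and a fraction f of all incoming resources is always reinvested in more community building. Then one initial meta-level dollar eventually produces

\[
\text{object-level \$} \;=\; (1-f)\,m \sum_{k=0}^{\infty} (f m)^k \;=\; \frac{(1-f)\,m}{1 - f m}, \qquad \text{provided } f m < 1.
\]

On this caricature, a $1-to-$10 multiplier looks amazing so long as fm < 1, but if fm ≥ 1 the sum never converges: the money keeps chasing more recruiting and never cashes out at the object level, which is exactly the Ponzi-ish failure mode. The raw multiplier isn't what matters; what matters is whether the reinvestment loop terminates in object-level work.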