

This post is aimed at those working in jobs funded by EA donors who might be interested in voluntarily earning less. It isn't aimed at influencing pay scales at organisations, nor at those not interested in earning less.

When the Future Fund was founded in 2022, there was simultaneous upward pressure on both ambitiousness and net earnings in the wider EA community. The pressure to be ambitious pushed EAs to really consider the opportunity cost of key decisions. Meanwhile, the discussions around why EAs should consider ordering food or investing in a new laptop pointed towards a common solution: EAs in direct work earning more.

The funding situation has shifted significantly since then, as has the supply-demand curve for EA jobs. This should put deflationary pressure on EAs' salaries, but I'd argue we largely haven't seen this effect, likely because people's salaries are "sticky".

One result of this is that there are a lot of impactful projects which are unable to find funding right now, and in a similar vein, there are a lot of productive potential employees who are unable to get hired right now. There's even a significant proportion of current employees who will be made redundant.
 

This seems a shame, since there are no good reasons for salaries to be sticky. It seems especially bad if we do in fact see significant redundancies, since under a "veil of ignorance" the optimal behaviour would be to voluntarily lower your salary (assuming you could get your colleagues to do the same). Members of German labour unions quite commonly do something similar (Kurzarbeit) during economic downturns, to avoid layoffs and enable faster growth during an upturn.



Some Reasons You Might Want to Earn Less:

  • You want to do as much good as possible, and suspect your organisation would do more good if it had more money at hand.
  • Your organisation is likely to make redundancies, which could include you.
  • You have short timelines, and you suspect that by earning less, more people could work on alignment.
  • You can consider your voluntary pay-cut a donation, which you can report on your GWWC account. (The great thing about pay-cut donations is you essentially get a 100% tax refund, which is particularly nice if you live somewhere with high income tax; see the sketch after this list.)
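To make the "100% tax refund" point concrete, here is a minimal sketch with purely hypothetical numbers (a flat 40% combined marginal rate, and a donation that would not be tax-deductible); actual tax treatment varies by country and circumstance:

```python
# A minimal sketch (hypothetical numbers, not tax advice) comparing a
# voluntary pay cut with donating the same take-home amount post-tax.

MARGINAL_TAX_RATE = 0.40  # assumed combined marginal income/payroll rate
cut = 10_000              # hypothetical pre-tax pay cut

# Option A: take the pay cut. The org keeps the full budgeted amount,
# while your take-home pay only falls by the post-tax amount.
org_gain_paycut = cut
takehome_cost_paycut = cut * (1 - MARGINAL_TAX_RATE)

# Option B: keep the salary and donate the same take-home amount,
# assuming no deduction is available for the donation.
donation = takehome_cost_paycut
org_gain_donation = donation

print(f"Pay cut:  org gains {org_gain_paycut}, you lose {takehome_cost_paycut:.0f}")
print(f"Donation: org gains {org_gain_donation:.0f}, you lose {donation:.0f}")
# Same personal cost, but the pay cut delivers 1/(1 - t) times as much
# to the org when the donation isn't deductible (~1.67x at t = 40%).
```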

 

Some Reasons You May Not Want to Earn Less:
 

  • It would cause you financial hardship.
  • You would experience a significant drop in productivity.
  • You suspect it would promote an unhealthy culture in your organisation.
  • You expect you're much better than the next-best candidate, and you'd be less likely to work in a high impact role if you had to earn less.
Comments (11)



To put my money where my mouth is, I will be cutting my salary back to "minimum wage" in October.

I don’t believe this is an unbelievably terrible idea; it makes sense to do this in some circumstances. That said, take resentment buildup seriously! If you feel that you are the sort of person who has even a small chance of feeling resentful about this choice later on, it is probably not worth it. You need to feel unambiguously good about this decision in the short and long term.

More on the trade-offs around voluntary salary reduction: Passing up Pay.

You can consider your voluntary pay-cut a donation, which you can report on your GWWC account.

When I asked them about this last year, they said you could count it if it was voluntary and easily reversed.

Yep! A few of our team members have chosen to do this (including myself).

(Note: it's always been initiated by the team member themselves, and there isn't any expectation from the organisation, which I think would be a problem.)

I do still donate to other things too. I think that beyond the direct impact of those donations it helps to be able to advocate to a broader audience when some of my impact-focused donations are more legible and relatable.

Also, at GWWC we budget salaries at the full amount (calculated by a formula), since salary sacrifices made as donations are voluntary and reversible, and we want to ensure we've budgeted for the cost of replacing someone using the same salary formula (e.g. if a researcher at our 3B level based in Oxford with 5 years' experience leaves, we'd want to have budgeted to replace them). For cost-effectiveness calculations, the salary-sacrificed donations are ideally counted as income and the budgeted/offered salary counted as costs.
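A minimal sketch of that budgeting convention, with purely hypothetical figures:

```python
# A minimal sketch (hypothetical figures) of the convention described
# above: budget the full formula-based salary as a cost, and count any
# voluntarily sacrificed portion as donation income.

budgeted_salary = 60_000  # hypothetical full salary from the formula
salary_taken = 45_000     # what the employee actually chooses to draw

sacrificed = budgeted_salary - salary_taken  # counted as donation income
cost_for_ce_calcs = budgeted_salary          # costs use the offered salary

print(f"Cost side:   {cost_for_ce_calcs}")
print(f"Income side: {sacrificed} (voluntary, reversible salary sacrifice)")
```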

A reason that is missing from the "contra" list: You could stay at a higher salary and donate the difference to a more cost-effective org than the one you work for.

I would expect that most people who work in EA do not work for the org that they consider to have the highest marginal impact for an additional dollar (although certainly some do).

Accepting a lower salary can be more tax-efficient than donating if the donation is not tax-deductible. But if you think that cost-effectiveness follows a power law, then it's quite possible that there is an org that is more than twice as cost-effective as your current employer.

Although if you think the major funders do a good job of allocating resources, it seems that the marginal additional dollar at org X should be roughly equal in effectiveness to the marginal additional dollar at your org. Given that the tax disadvantage of earn-then-donate in the US will generally range from about 20% (SS + Medicare + state taxes; no federal income tax due to itemizing) to 42%,[1] you'd need to think the funders made at least a moderate miss to outweigh the loss in tax advantages (a rough sketch of this break-even arithmetic follows the footnote). Moreover, your contributions to the more cost-effective org could be significantly funged, which would reduce the benefit of you making a better allocation decision.

  1. ^

    I'm assuming that anyone in a marginal bracket over the 22% bracket is itemizing.
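A rough sketch of the break-even arithmetic above: if earning-then-donating loses a fraction d to taxes, the destination org needs to be at least 1/(1 − d) times as cost-effective as your employer for donating to beat a pay cut (illustrative only; the 20% and 42% figures are the estimates from the comment above):

```python
# Break-even multiplier for earn-then-donate vs. a pay cut, given a
# fractional tax loss d on the earn-then-donate route.

for d in (0.20, 0.42):  # the comment's low and high US tax-loss estimates
    breakeven = 1 / (1 - d)
    print(f"tax loss {d:.0%}: other org must be >= {breakeven:.2f}x as cost-effective")
```

At a 20% tax loss the other org must be at least 1.25x as cost-effective; at 42%, roughly 1.72x.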

Thank you for the inspiration! I reduced my salary from now until the end of the year and will be able to contribute a mid-four-figure amount more, due to saved social security costs, than if I had donated the money instead. Additionally, it helps save taxes, as only 20% of salary is tax-deductible in Germany.

This post was pretty impactful!

One observation: if some but not all employees do this in an organization under financial pressure, it could change how and where any redundancies are applied. Salary reduction is sort of like giving the organization a grant that can only be used to employ a specific individual. If the org only has to pay half your salary, it's much less likely to lay you off to conserve money vs. laying off a somewhat more productive employee at full salary. On the other hand, lower payroll costs mean fewer layoffs are needed, which is good.

One crux might be whether you think the organization's financial pressure is the new normal vs. a blip. If the latter, is there a greater risk that salary reduction could cause the org to lay off the "wrong" employees to minimize short-term pain?

I completely agree and the argument is compelling. I hope we see uptake on this suggestion.

I know it's not part of the core reason for doing something like this, but taking a bit of a personal hit can also signal to those both inside and outside the movement how serious we are about trying to do the most good we can, potentially motivating others to take EA more seriously.

I guess it's also just a different way to give. Some orgs have salary sacrifice schemes for this reason.

Related: Advantages of Cutting Your Salary

In all seriousness, I think this is a good point.
