Few people think of finance as an ethical career choice. Top undergraduates who want to “make a difference” are encouraged to forgo the allure of Wall Street and work in the charity sector. And many people in finance have a mid-career ethical crisis and switch to something fulfilling.

The intentions may be good, but is it really the best way to make a difference? I used to think so, but while researching ethical career choice, I concluded that it’s in fact better to earn a lot of money and donate a good chunk of it to the most cost-effective charities—a path that I call “earning to give.” Bill Gates, Warren Buffett and the others who have taken the 50% Giving Pledge are the best-known examples. But you don’t have to be a billionaire. By making as much money as we can and donating to the best causes, we can each save hundreds of lives.

There are three considerations behind this. First is the discrepancy in earnings between the different career paths. Annual salaries in banking or investment start at $80,000 and grow to over $500,000 if you do well. A lifetime salary of over $10 million is typical. Careers in nonprofits start at about $40,000, and don’t typically exceed $100,000, even for executive directors. Over a lifetime, a typical salary is only about $2.5 million. By entering finance and donating 50% of your lifetime earnings, you could pay for two nonprofit workers in your place—while still living on double what you would have if you’d chosen that route.
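To make the arithmetic explicit, here is a rough back-of-envelope sketch in Python using the illustrative figures above (these are the article's estimates, not precise data):

```python
# Back-of-envelope version of the comparison above, using the article's illustrative figures.
finance_lifetime = 10_000_000    # typical lifetime earnings in finance (article's estimate)
nonprofit_lifetime = 2_500_000   # typical lifetime earnings in the nonprofit sector

donated = 0.5 * finance_lifetime                 # give away half: $5,000,000
workers_funded = donated / nonprofit_lifetime    # 2.0 nonprofit careers funded in your place
kept = finance_lifetime - donated                # $5,000,000 left to live on
kept_vs_nonprofit = kept / nonprofit_lifetime    # 2.0x what a nonprofit career would have paid

print(workers_funded, kept, kept_vs_nonprofit)   # 2.0 5000000.0 2.0
```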

The second consideration is that “making a difference” requires doing something that wouldn’t have happened anyway. Suppose you come across a woman who’s had a heart attack. Luckily, someone trained in CPR is keeping her alive until the ambulance arrives. But you also know CPR. Should you push this other person out of the way and take over? The answer is obviously “no.” You wouldn’t be a hero; you wouldn’t have made a difference.

So it goes in the charity sector. The competition for not-for-profit jobs is fierce, and if someone else takes the job instead of you, he or she likely won’t be much worse at it than you would have been. So the difference you make by taking the job is only the difference between the good you would do, and the good that the other person would have done.

The competition for finance jobs is even more fierce than for nonprofits, but if someone else gets the finance job instead of you, he or she would not likely donate as much to charity. The average donation from an American household is less than 5% of income—a proportion that decreases the richer the household. So if you are determined to give a large share of your earnings to charity, the difference you make by taking that job is much greater.

The third and most important consideration is that charities vary tremendously in the amount of good they do with the money they receive. For example, it costs about $40,000 to train and provide a guide dog for one person, but it costs less than $25 to cure one person of sight-destroying trachoma. For the cost of improving the life of one person with blindness, you can cure 1,000 people of it.
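Stated as a simple ratio, and again using only the article's own figures, the gap looks like this (the 1,000 figure in the text is a conservative rounding of this ratio):

```python
# Cost-effectiveness ratio implied by the article's figures.
guide_dog_cost = 40_000    # train and provide a guide dog for one person
trachoma_cure_cost = 25    # cure one person of sight-destroying trachoma (article's upper bound)

cures_per_guide_dog = guide_dog_cost / trachoma_cure_cost
print(cures_per_guide_dog)   # 1600.0 -- comfortably above the conservative 1,000 cited in the text
```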

This matters because if you decide to work in the charity sector, you’re rather limited. You can only change jobs so many times, and it’s unlikely that you can work for only the very best charities. In contrast, if you earn to give, you can donate anywhere, preferably to the most cost-effective charities, and change your donations as often as you like.

Not many people consider “earning to give” as a career path. But among those who hear about it, it’s proving popular. A good number of students to whom I’ve presented this idea have pursued it. One student, convinced by these arguments, now works at Jane Street, the trading firm, and gives 50% of his income; he can already pay the wages of several people to do the not-for-profit work he might otherwise have done himself.

In general, the charitable sector is people-rich but money-poor. Adding another person to the labor pool just isn’t as valuable as providing more money so that more workers can be hired. You might feel less directly involved because you haven’t dedicated every hour of your day to charity, but you’ll have made a much bigger difference.



Comments (5)



This looks at the tip of the iceberg above the surface but ignores the deeper systems below the surface that Wall Street sits on top of. Namely, Wall Street perpetuates wealth inequality and oftentimes deprives the poor simply by incentivizing the attention economy rather than the altruism economy.

I hate wasting time posting in the comments section, so the TL;DR is that a John Doe or Sallie Mae who takes a corporate finance job to "earn to give" perpetuates the very issues they aim to solve with their donations. Lastly, I'll just echo the most poignant counterpoints I read in the comments below:


"having wealth makes it easier to get more wealth - taking with their left hands, before giving with their right"


"Imagine if Einstein or Michelangelo had taken up a corporate job to benefit others"

Doh, this is so wrong, and I note in your criticism of Howard Buffett that you simply don't understand why.

Simply, the global system of capitalism is unfair.

The winners take order-of-magnitude multiples of reward, completely disproportionate to their efforts, compared to the poor. It's a power law, and part of that is the self-reinforcing effect that having wealth makes it easier to get more wealth. The only way this can happen is that wealth is shifted from the poor to the rich, and of course Wall Street is the very pinnacle of this system.

Like in sports. The best players are multi-millionaires, the worst get nothing at all, indeed they end up contributing towards the winners by buying sponsored sports equipment, paying for training, seminars, etc, attending sporting events, and often volunteering for free.

That might be fine for sports. But it's not what we want in life.

By proposing that do-gooders work on Wall Street, you're advocating that people do exactly what Howard Buffett talks about: taking with their left hands before giving with their right.

The Aid industry is just a salve on the problem, and though it obviously helps individuals and saves lives, it doesn't address the underlying problem. Indeed it can make the most glaring of the symptoms disappear to a point that to some people the problem doesn't appear to need a solution.

Thanks for raising this topic. Your position probably captures what very many people think when hearing about "earning to give" for the first time. It's difficult to engage with most of your points, though, because in the last sentence of your reply you seem to be favoring a situation where conditions deteriorate, rather than improve, for impoverished people, so that political changes will take place that you believe will be ultimately beneficial. That's probably correct under some circumstances, but in general the burden of proof would be on you.

Alternatively, if you think political change is the way to ultimately help people, wouldn't you want high-earning persons to support efforts at political change, if there are advocacy organizations in need of funds? Would you agree that, in principle, the positive effects of that investment could outweigh negative effects from the marginal usefulness of that person to their employer, above their next-best-qualified potential employee?

The fact that so many people do this is exactly why we have harmful charities like AMF ranked as the top charities, because the people involved in the EA movement are not on the ground. They do not see the long-term harm that these charities do to communities. They fail to realize that they are actually increasing the amount of money that is destroying jobs, limiting freedom, and increasing dependence. When you are not on the ground you don't realize how harmful the charities promoted here are. I live in Africa and I have seen for years the long term harm that charities like AMF do. They are not merely ineffective, they do more harm than good. Here's the full argument

I have often heard on flights that in case of emergency you should first help yourself and then help others. The concept of self has always had preference over the personhood of others. Having said that, the author's argument seems to suffer from some fundamental flaws. Its premise rests on the assumption that the work done by an individual is independent of any utility she may derive from her work. It further assumes that the utility derived from giving to charity can substitute for the utility one may derive from one's work. Both assumptions are interconnected: it is difficult to say how long the utility derived merely from charity would keep A putting up with a corporate job which she doesn't enjoy. This is an economic argument.

There are social and moral drawbacks too. If one does work merely to benefit others and not because one enjoys it, then society may be deprived of works of excellence, and this drag may impede the development of society at large. Imagine if Einstein or Michelangelo had taken up a corporate job to benefit others.

The third and most important argument against this approach is moral. We all have the autonomy to choose what we want to do. It is certainly a choice one can make to take up a high-paying job in order to benefit others. But the fundamental point of exercising autonomous choice is to find one's meaning in life; as Greek philosophy has it, the purpose of life is to find one's own destiny. I would place certitude and conviction as of paramount importance in making one's choice. If one's experience, culture and other influences lead to that certitude and conviction about one's decision to take up a high-paying job and give to charity, then that is fine; otherwise one may find herself stuck in a drag of a situation she thoroughly despises, sailing against the fulfillment of one's destiny.
