Effective altruism is compatible with most moral viewpoints, and there is nothing fundamental about effective altruism that requires it to be a near-exclusive focus of one's life. There is, however, a style of analysis that seems to implicitly disagree, smuggling in a near-complete focus on effectiveness through its assumptions. I think this type of analysis, which many people I have spoken to in the community (incorrectly) take for granted, is not a necessary conclusion, and in fact runs counter to the assumptions behind the fiscal side of effective altruism, i.e. giving a fixed share of one's income. Effective careers are great, but making them the only "real" way to be an Effective Altruist should be strongly rejected.

The mistaken analysis goes as follows: if we are balancing priorities and take a consequentialist view, we should prioritize our decisions on the basis of overall impact. However, effective altruism has shown that different interventions differ in their impact by orders of magnitude. Therefore, if we give any non-trivial weight to improving the world, the impact term is so large that it overwhelms every other consideration.

This can be illustrated with a notional career-choice model. In this model, someone has several different goals. Perhaps they wish to have a family, and think that the impact on their family is almost half of the total reason to pick a career, while their personal happiness is another almost-half. Finally, in line with the financial commitment to give 10% of their income, they “tithe” their career choice, assigning 10% of the weight to their positive impact on the broader world. Now they must choose between an “effective career” and a typical office job.

| Factor | Family (45%) | | Personal Happiness (45%) | | Beneficence (10%) | | Overall (100%) | |
|---|---|---|---|---|---|---|---|---|
| | Rating | Impact | Rating | Impact | Rating | Impact | Rating | Impact |
| Effective Career | 3/10 | 3 | 2/10 | 2 | 9/10 | 1000 | 3.15 | 102.5 |
| Office Job | 9/10 | 9 | 9/10 | 9 | 1/10 | 1 | 8.2 | 8.2 |

As the table illustrates, there are two ways to weigh the options: rating how much the person prefers each option, or assessing each option's impact. The office job effectively only has impact via donations, while the effective career addresses a global need. The first method leads to choosing the office job, the second to choosing the effective career. In this way, even a modest weight on impact becomes overwhelming. (I will note that this analysis often fails to account for another issue that effective altruism used to focus on more strongly, namely replaceability. But even assuming a neglected area, where replaceability is negligible, the totalizing conclusion still goes through.)
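To make the arithmetic explicit, here is a minimal sketch of the weighted-sum scoring behind the table. The weights and per-factor numbers are the purely notional ones from the example above, not real estimates; the two methods differ only in which per-factor values they aggregate (preference ratings or impact figures).

```python
# Minimal sketch of the notional career-choice model (hypothetical numbers
# taken from the table above). Each option is scored two ways: once from
# 0-10 preference ratings, once from "impact" values where the beneficence
# term is orders of magnitude larger for the effective career.

weights = {"family": 0.45, "happiness": 0.45, "beneficence": 0.10}

ratings = {
    "effective career": {"family": 3, "happiness": 2, "beneficence": 9},
    "office job":       {"family": 9, "happiness": 9, "beneficence": 1},
}

impacts = {
    "effective career": {"family": 3, "happiness": 2, "beneficence": 1000},
    "office job":       {"family": 9, "happiness": 9, "beneficence": 1},
}

def weighted_score(values: dict) -> float:
    """Weighted sum of the per-factor values."""
    return sum(weights[factor] * value for factor, value in values.items())

for option in ratings:
    print(f"{option}: rating score = {weighted_score(ratings[option]):.2f}, "
          f"impact score = {weighted_score(impacts[option]):.2f}")

# Rating scores favour the office job (8.20 vs 3.15); impact scores favour
# the effective career (~102 vs 8.20), because the 10% weight on beneficence
# is multiplied by a value three orders of magnitude larger.
```

Nothing about the model itself forces the second method; the flip comes entirely from letting one factor's scale dwarf the others.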

The equivalent fiscal analysis certainly fails: committing 10% of your money to effective causes does not imply that, if the cause is very effective, you are required to give more than 10%. This is not to say that the impact-based analysis is confused, but it does require accepting that under a sufficiently utilitarian viewpoint, where your decisions weigh your benefit against others', even greatly prioritizing your own happiness creates a nearly totalizing obligation to others. And that is not what effective altruism generally suggests.

And to be clear, my claim is not particularly novel. To quote a recent EA Forum post from 80,000 Hours: “It feels important that working to improve the world doesn’t prevent me from achieving any of the other things that are really significant to me in life — for example, having a good relationship with my husband and having close, long-term friendships.”

It seems important, however, to clarify that in many scenarios an effective career simply does not require anything like the degree of sacrifice that the example above implies. While charities and altruistic endeavors often pay less than other jobs, the difference is usually a modest fraction of income, not an order of magnitude. And charitable organizations are often as good as or better than commercial enterprises in terms of collegiality, flexibility, and job satisfaction. Differences in income certainly matter for personal satisfaction, but for many people, effective careers should be seen as a reasonable trade-off, not as either the only morally acceptable choice or an obviously inferior one.

I think that many people who are new to EA, and those who are very excited about it, sometimes make a mistake in how they think about prioritizing, and don't pay enough attention to their own needs and priorities for their careers. Having a normal job and giving 10% of your income is a great choice for many Effective Altruists. Having a job at an effective organization is a great choice for many other Effective Altruists. People are different, and the fact that some work at EA orgs certainly doesn't prove they are more committed to the cause, or better people. It just means that different people do different things, and in an inclusive community focused on effectiveness and reasoning, we should be happy with the different ways that different people contribute. 

Comments (8)



I don't really disagree with you (ex: 2016, 2022) but have you seen EA writing or in-person discussion advocating choosing an impactful job where you'd rate your happiness 2/10 over a less impactful one where you'd rate it 9/10?

I have seen a few people in EA burn out at jobs they dislike because they feel too much pressure and don't prioritize themselves at all, and I've seen several people trying to find work in AI safety because it's the only effective thing to do on the issue that they were told was most important, despite not enjoying it. Neither of those is as extreme as the notional example, but both seem to be due to this class of reasoning.

(never spoke about this with anyone, but) I think about this like the classic balance between utilitarianism and deontology, "Go three-quarters of the way from deontology to utilitarianism and then stop".

I mean: Yeah, having a high impact is important, but don't throw out "enjoying life" and other things that might not be easily quantifiable. We're only humans, if we try to forcefully give one consideration 1000x the weight of another, it totally might mess up our judgement in some bad way.

[this doesn't feel so well phrased, hopefully it still makes some sense. If not, I'll elaborate]

I think this is roughly right.

(That said, it's a balance, and three-quarters of the way to 100% EA dedicate-ism will sometimes feel and look quite a lot like crazy sacrifice, IMO.)

Yeah I have no idea if 75% is the correct constant. I mainly read this as "definitely not 100% and also not 99%"

[not a philosopher]

Yes - this post came out of drafting a larger post I'm writing that tries to deconstruct EA ideas more generally, and the way that EA really isn't the same as utilitarianism.

For EAs starting out, there should be some focus on just doing good and not necessarily trying to aggressively optimize for doing good better, especially if you don't have a lot of credibility in that space.

Also, at the end of the day EA is just a principle/value system which you can rely on in pretty much any career you end up in. The part about EA being a support system and a place to develop your values is often left out, and as a result a lot of early-stage excited EAs just want to "get into" or "get stuff out of" EA.

I think that "focus on just doing good and not necessarily trying to aggressively optimize for doing good better" is the wrong approach. Doing something to feel like you did something without actually trying is, in some ways, far worse than just admitting you're not doing good at present, and considering whether you want to change that.

And "A is a just a principle/value system which you can rely on in pretty much any career you end up making" sounds like it's missing the entire point of career choice.
