
In theory, the core principles of EA ("using reason and evidence to do as much 'good' as possible," for some definition of "good") can be applied to moral philosophies besides utilitarianism. What types of moral systems can be combined with EA?

Motivation: I would like to see EA being adapted to more different belief systems so it can appeal to more people; many of us already in the movement are not fully utilitarian anyway. Right now, it seems like most EA cause prioritization efforts use utilitarian reasoning, which limits how many people can apply them without doing the hard work of adapting them to their own moral frameworks.

6 Answers

The following paper is relevant: Pummer & Crisp (2020). Effective Justice, Journal of Moral Philosophy, 17(4):398-415.

From the abstract:

"…Effective Justice, a possible social movement that would encourage promoting justice most effectively, given limited resources. The latter minimal view reflects an insight about justice, and our non-diminishing moral reason to promote more of it, that surprisingly has gone largely unnoticed and undiscussed. The Effective Altruism movement has led many to reconsider how best to help others, but relatively little attention has been paid to the differences in degrees of cost-effectiveness of activities designed to decrease injustice."

In "The Definition of Effective Altruism", William MacAskill writes that 

"Effective altruism is often considered to simply be a rebranding of utilitarianism, or to merely refer to applied utilitarianism...It is true that effective altruism has some similarities with utilitarianism: it is maximizing, it is primarily focused on improving wellbeing, many members of the community make significant sacrifices in order to do more good, and many members of the community self-describe as utilitarians.

But this is very different from effective altruism being the same as utilitarianism. Unlike utilitarianism, effective altruism does not claim that one must always sacrifice one’s own interests if one can benefit others to a greater extent. Indeed, on the above definition effective altruism makes no claims about what obligations of benevolence one has.

Unlike utilitarianism, effective altruism does not claim that one ought always to do the good, no matter what the means; indeed...there is a strong community norm against ‘ends justify the means’ reasoning.

Finally, unlike utilitarianism, effective altruism does not claim that the good equals the sum total of wellbeing. As noted above, it is compatible with egalitarianism, prioritarianism, and, because it does not claim that wellbeing is the only thing of value, with views on which non-welfarist goods are of value.

In general, very many plausible moral views entail that there is a pro tanto reason to promote the good, and that improving wellbeing is of moral value. If a moral view endorses those two ideas, then effective altruism is part of the morally good life."

This project might be of interest. Its authors tried to answer the following questions:

1. How can people with non-utilitarian ethical views, such as egalitarians and justice-oriented individuals, find a place in the effective altruism community?

2. Are effective altruism methods helpful when we seek to reduce systemic inequalities and social injustices?

They also tried to find the best charities to donate to for these goals.

There are chapters on Buddhism, Orthodox Judaism, and Christianity in this book on religion and EA.

I think there is a simple reason why EA is compatible with many moral views: increasing welfare is an important element of any sensible moral view. Utilitarianism is just the view that this is the only element that matters; any other sensible moral view will acknowledge that increasing welfare matters at least alongside other considerations.

Moreover, this element has become more practically relevant over the past three to four decades, because our opportunities for increasing welfare have grown enormously compared with the rest of human history. And since EA helps us exploit these opportunities, EA matters according to any sensible moral view.

Here is my rambling answer to your question.

I like virtue ethics, and I see it as compatible. I think EA would be a slightly better movement if the level of utilitarianism were reduced by 5% and the level of virtue ethics increased by 5%. My rough sense is that while I am influenced by various ethical ideas and schools of thought, I tend to be slightly less of a utilitarian and slightly more of a virtue ethicist than the EA-aligned people I see (admittedly a very small and non-representative sample).

I view "being a good person" not merely as "having a large positive impact" but also as conducting oneself properly. Thinking critically, treating people respectfully, and being honest are things I value, not as rules in a deontological sense, but as aspirations for the type of person I want to be. Stoicism has been very influential on me, and the classic grouping of wisdom, justice, courage, and moderation lines up nicely with the type of person I want to be. This is, of course, aspirational; I am still very far from those ideals.

My rough impression (again, from a small and non-representative sample) is that EAs tend not to place much value on justice, "proper conduct," or wisdom. I find it strange to see people act disrespectfully toward others, fail to be gentle or kind or welcoming, or remain unaware of what causes happiness in themselves.

Honesty also seems undervalued among EAs. I dislike seeing people use inflated or exaggerated titles and descriptions, such as calling themselves "director" or "president" when in reality they are the working manager of one person at an organization they founded, or saying they were "invited to speak at Cambridge" when it was really EA Cambridge inviting them to present to a student group on a Zoom call (not real examples). Maybe these people have more impact as a result of this polishing/deception, but I wish these EAs were more virtuous.

In a simplistic toy example, I find it odd that the person who turned $100 into $200 is lauded, while the person who turned $10 into $50 is ignored. I often think less about "what has this person accomplished" and more about "what choices has this person made, given what they accomplished, what they started with, and all the other challenges and assistance they had."
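For concreteness, here is a minimal sketch (hypothetical numbers, simply restating the toy example above) of how the two framings rank the same people differently:

```python
# Toy example: ranking by absolute gain vs. by multiplier relative to the
# starting point. Numbers are hypothetical, taken from the example above.

people = {
    "turned $100 into $200": (100, 200),
    "turned $10 into $50": (10, 50),
}

for label, (start, end) in people.items():
    absolute_gain = end - start  # what the "lauded vs. ignored" framing tracks
    multiplier = end / start     # what "choices given the starting point" tracks
    print(f"{label}: gain = ${absolute_gain}, multiplier = {multiplier:.1f}x")

# Output:
# turned $100 into $200: gain = $100, multiplier = 2.0x
# turned $10 into $50: gain = $40, multiplier = 5.0x
```

The first person wins on absolute gain, the second on performance relative to starting resources; which one we celebrate depends entirely on which measure we pick.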

The above thoughts are some of the reasons why I like Julia Wise and her writings so much. I don't know anything about her family background, but her writing strikes me as much more humble and pragmatic, and it is not littered with the "I went to an expensive school" and "look how impressive I am" signaling I see elsewhere.

To simplify heavily, EA is largely built on two key aspects:

  1. An ethical/normative view that “Maximizing [good] is ideal/desirable”;
  2. An epistemic and methodological emphasis on things like open-mindedness, favoring critical thinking over passionate impulses, emphasizing the importance of research, thinking at the margin (debatably), etc.

The first aspect in my view is open to a lot of interpretation around the word “good,” and is the only aspect that should matter here, I think. Utilitarianism defines good in terms of consequences (either pleasure vs. suffering or preferences, depending on your flavor of util). Deontology defines good in terms of rights, duties, categorical imperatives, etc. Virtue ethics focuses on virtue… and so on. This shouldn’t pose any problem for alternative ethical theories.

I know that some ethical theories (somewhat strangely/artificially, in my view) have an in-built claim that “you don’t need to maximize goodness; some good actions are just supererogatory” (@Deontology). This might seem to pose issues for compatibility, but to head off this rabbit trail (which I can explain in more detail if anyone is curious enough to ask), I don’t think it is an issue.

The second aspect of EA is irrelevant to any (legitimate?) moral theory in my view, since I don’t think that “moral theories” should (definitionally speaking) go beyond identifying what is “good”/what makes one world better than an alternative world. (You could theoretically bundle a bunch of epistemic or prescriptive claims like “you should emphasize listening to women/marginalized groups” together with an ethical theory and call the whole bundle an ethical theory, but that would presumably be misleading.)

However, the parenthetical above does hit on a potential key issue, which is that I think different ethical theories probably tend to be associated with different epistemic/etc. worldviews.

4 Comments

This isn't a substantive answer like those above, but I think you can get a lot of Effective Altruism off the ground with two premises that are widely accepted by most moral philosophies but generally under-attended:
1) Consequences matter (which any moral philosophy worth its salt accepts, although they vary on what else matters).
2) Pay attention to scope, i.e. saving 100 lives is way, way better than saving one life.

There's a lot more complexity and nuance to views in Effective Altruism, but I think this is a common core (along with lives having equal moral value, etc.) that is robust across almost all plausible ethical approaches.

Richard Ngo writes about this here.

Thanks for linking that! I couldn't remember where I had first read that framing.

In 80,000 Hours' What is social impact? A definition, under the subheading "Is this just utilitarianism?", Ben Todd wrote:

No. Utilitarianism claims that you’re morally obligated to take the action that does the most to increase wellbeing, as understood according to the hedonic view.

Our definition shares an emphasis on wellbeing and impartiality, but we depart from utilitarianism in that:

  • We don’t make strong claims about what’s morally obligated. Mainly, we believe that helping more people is better than helping fewer. If we were to make a claim about what we ought to do, it would be that we should help others when we can benefit them a lot with little cost to ourselves, which is much weaker than utilitarianism.
  • Our view is compatible with also putting weight on other notions of wellbeing, other moral values (e.g. autonomy), and other moral principles. In particular, we don’t endorse harming others for the greater good.
  • We’re very uncertain about the correct moral theory and try to put weight on multiple perspectives.

Read more about how effective altruism is different from utilitarianism.

Overall, many members of our team don’t identify as being straightforward utilitarians or consequentialists.

Our main position isn’t that people should be more utilitarian, but that they should pay more attention to consequences than they do — and especially to the large differences in the scale of the consequences of different actions.

If one career path might save hundreds of lives, and another won’t, we should all be able to agree that matters.

In short, we think ethics should be more sensitive to scope.

So this mirrors Ben's comment above. 

I'm personally quite glad to see this made explicit in an introductory high-traffic article like this.
