I'm reading tea leaves here. Proceed with caution.

Firstly, some quotes:

“Of course, I'm stressed, I'm a utilitarian.”

“Mental health is important.”

“In Expected Value terms, it was a good life.”

“Leading a good life becomes more high stakes, the more possibility for good you have.”

“Getting a good night’s rest is also consequentialism.”

And have you read this “Definition of Effective Altruism” carefully? Look at the best and most influential writings of the past two years. What do you see?

The beauty of Effective Altruism is that it is humanity’s first serious attempt to smash utilitarianism against the rocky shores of reality to see what sticks and what works. EA was originally a project to see what works for a community that is trying to do good within a consequentialist framework.

Another way to say it is that EA is an empirical attempt at moral philosophy, a lived midrash, an attempt at a philosophical way of doing good. It is not entirely a theoretical imposition of a moral philosophy on life, though many come into it by that door. This is why the wonderful members within the movement and the disaffected post-EAs feel so comfortable openly critiquing and amending what the movement says and how it frames its activities. As a community, we are working out what works for each of us in doing good well.

Here are some indicators that EA has, in practice, surpassed utilitarianism and has created the potential for a new moral theory:

  1. In any attempt to do good, not actual consequences, but Expected Value matters. While this view can certainly be accommodated within traditional theory, it is a modern twist (see the sketch after this list).
  2. But how do we get people to perform acts that might not pan out? We celebrate, encourage, and offer social support to each other. This is the payoff for attempting good: for the individual, community and belonging are frequently sufficient.
  3. Why? Because it is not merely the external consequences that matter, but the internal consequences of the type of person you become as a result.
  4. Your abstracted actions are not the only thing that matters; your motivations matter too. For many, once you understand the opportunities available, the motivation to seize the opportunity to do good overpowers the idea of a moral imperative. Intrinsic motivation lasts and is important.
  5. Sometimes EA leaders encourage actions not for their direct positive consequences, but for what they signal about internal disposition (consider large GWWC pledges and questions about how to spend money internally).
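
To make the first point concrete, here is a minimal sketch with made-up numbers; the two projects and their figures are purely hypothetical illustrations, not drawn from any real EA analysis. The point is structural: what gets maximized is a probability-weighted estimate, so a long shot that will usually accomplish nothing can still come out ahead of a safe bet.

```python
# Minimal sketch with hypothetical numbers: expected value is the
# probability-weighted sum of an action's possible outcomes.

def expected_value(outcomes):
    """outcomes: list of (probability, value) pairs for mutually exclusive results."""
    return sum(p * v for p, v in outcomes)

# A safe bet: certain but modest good.
certain_project = [(1.0, 10)]           # 10 units of good, guaranteed

# A long shot: usually achieves nothing, occasionally does enormous good.
long_shot = [(0.95, 0), (0.05, 500)]    # 5% chance of 500 units of good

print(expected_value(certain_project))  # 10.0
print(expected_value(long_shot))        # 25.0 -- the long shot wins in EV terms
```

Whether you celebrate the person who took the long shot and watched it fail is exactly where the second point comes in.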

Through a very expansive consequentialism, we are no longer dealing with traditional utilitarianism. And, like a typecast teenager, some members of the movement are having a crisis of faith; no names are required here. We all feel it, I think. Yet, for all the hand-wringing about the trajectory of the movement, a stable reworking of foundational principles is imminent.

Giving between 10% and 80,000 hours of your life to doing good is not the only goal of members in the movement, and that fact wags its spiky tail on the forum and in conversation incessantly. People are after some form of sound guidance about when to rest and when to work: “What good actions will be most harmonious to my circumstances now and in the future?”, not “How can I do as much good as possible?” (utilitarianism having been relegated to an ‘on-the-margin’ philosophy). At the forefront of EA practice, the use of evidence and careful reasoning to try to improve the world remains top dog; in the background, prudence with regard to the individual good and development of practitioners keeps increasing in salience as the movement grows. Both, though, fall within a prudential framework of evidence and careful reasoning and can be managed.

Having read most of Will's academic and internet output, I notice a shift coming around the time he wrote Moral Uncertainty. I will not detail the long history or my methods of exegesis right now; such a task would be tedious for me to write, likely tedious for you to read, and so debatable at each turn that only the already convinced would find the treatise useful, while those who don't wear the same glasses as me would just be confused. But here is the conclusion: Will seems to have seen that classical utilitarianism couldn't be the foundation-stone for the life of the movement. Moral Uncertainty tried to bridge that gap by working out how to act when you are unsure which moral theory is correct. The book did not find a satisfactory formula for EA. But Will, to his credit, does not let the lack of grounding EA in a totalizing moral philosophy get in the way of doing good better. And so here we are.

As the notorious Yud once joked, “The rules say we must use consequentialism, but good people are deontologists, and virtue ethics is what actually works.” Call it what you want: broad-scope consequentialism or prudence-first virtue ethics, we have outgrown strict utilitarianism.

Comments
[anonymous]:

I don't understand why this was downvoted. I really liked this post but the title was misleading.

I strong-downvoted because the post gave me the false impression that "Of course, I'm miserable, I'm a utilitarian" was a quote from Will (I haven't read the OP beyond that). [EDIT from 2023-11-10: in reply to this comment the author has claimed that the above is in fact a quote from Will; so far I have failed to verify that claim, but it may be true. The OP was published a long time ago, but I think what probably happened was that it gave me the false impression that that quote was taken from a certain linked interview, and when I found out that it was not, I felt misled.]

I don't believe expected value is a new twist on utilitarianism. Utility is impossible to maximize under uncertainty unless you have a definition of utility that accounts for uncertainty. That's why expected value has been a part of the canon of utilitarianism at least since Harsanyi's 1955 "proof" of utilitarianism using expected value. (https://forum.effectivealtruism.org/posts/v89xwH3ouymNmc8hi/harsanyi-s-simple-proof-of-utilitarianism)

The other aspects that you point out seem to be pretty standard fare for movements/communities, because they work for achieving a collective goal. That seems not really to be part of a moral theory.

"In any attempt to do good, not actual consequences, but Expected Value matters."

You ascribe probabilities to the outcomes of your actions in your expected value model.

Accordingly, you can only be certain in your own mind that the consequences of your actions were altruistic when you believe that you know the actual consequences.

 You might believe that you only know the actual consequences:

  • through retrodiction (understanding your actions' past consequences from a present perspective).
  • through prediction (having certainty about your actions' future consequences).
  • through observation (observing the consequences occur).
  • through a thought experiment (you'll never know the real consequences of your actions).
  • through control (controlling what happens through your actions).
  • through real-time involvement (interacting with what happens as it occurs during your actions).
  • by some other mechanism (for example, moralist prescriptions of specific actions and predictions of consequences).

Do you believe that you do good for others through your actions before you believe that you know the actual consequences of those actions?

Do you believe that some actions you choose among cause good consequences at the time that you choose among them?

If you hold those beliefs, then why do Expected Value calculations matter to your doing good for others?
