
I wrote this post for my personal blog, Sunyshore, about why I think total utilitarianism is closest to the correct ethical theory and its implications for society. Since my blog is written for a general audience, this post explains a lot of basic concepts that most users of the EA Forum may be familiar with, and it's written in a more casual tone than I'd write for the EA Forum.

I've pasted the text of my blog post below, but I encourage you to check out the original version with images on Substack, and to subscribe so you'll receive my next post as soon as I publish it.


Happy new year, and welcome back to Sunyshore!

This is the first in a new series of posts about the foundations of my ethical and political worldview. Currently, I support effective altruism, which uses reason and evidence to benefit humans and other sentient beings as much as possible. At the level of public policy, I identify foremost as a social liberal—I support liberal democracy and mostly free markets, together with government intervention to reduce inequalities and provide public goods.

I expect my beliefs to change over time—they fluctuate from day to day depending on what I learn and experience—and this post is just a snapshot of my beliefs in the present moment. It may not reflect my beliefs a year from now.

I intend to cover a lot of topics in this series, ranging from economic systems to technological progress. In this first post, I will discuss utilitarianism and the foundations of my worldview.

So, let’s get to it!

Moral agents and moral patients

First, let me explain what I mean by “moral agent” and “moral patient,” since I will be using these terms throughout this post and future posts on ethics. These terms are seldom used outside of moral philosophy and are often conflated into the single concept of “moral personhood.”

  • Moral patients are beings whose welfare (pleasure minus suffering) is morally relevant. To me, moral patienthood requires both sentience (the ability to have feelings) and qualia (conscious experience).
  • Moral agents are beings whose actions are morally relevant. Moral agency requires the ability to reason about one’s actions, so that one can be held morally responsible for them.

Humans are moral patients because they can experience feelings such as pleasure and pain, and moral agents because they can reason about and take moral responsibility for their actions. Autonomous robots are moral agents because they reason about the effects of their actions on the real world, but they are not moral patients because they lack sentience and conscious experience. By contrast, some non-human animals, such as chickens and cattle, are moral patients because they experience pleasure and pain, but they are not moral agents because they cannot meaningfully be held responsible for their actions (humans have no way to communicate moral expectations to them).

The veil of ignorance

In this section, I present an argument for why I believe total utilitarianism—which aims to maximize the total well-being of all moral patients—is closest to the correct ethical theory.

The original position is a well-known thought experiment in ethics, in which members of a society are given a chance to decide how that society should work, like a role-playing video game in which players decide on the game mechanics before they start playing. The players deliberate behind a veil of ignorance, in which they don’t know ahead of time anything about who they will be—including their social status, race, ethnicity, gender, or where and when they will be born. Because players negotiate from a position of ignorance about their specific stations in the resulting society, they must deliberate impartially, as if any of them could end up as the richest person or the poorest person; a light- or dark-skinned person; an able-bodied or disabled person; a person born with male, female, or intersex reproductive traits.

The most famous version of the veil of ignorance was developed by the philosopher John Rawls in his book A Theory of Justice (1971). However, Rawls borrowed this concept from earlier thinkers, including the philosopher Immanuel Kant and the economist John Harsanyi. Harsanyi believed that people deliberating behind the veil of ignorance would design their society in a way that maximizes their expected, or average, utility.

But wait. Does expected utility really mean average utility? Average utility refers to the average welfare of moral patients, whereas total utility also depends on the number of patients that exist. Depending on the society chosen by our players in the original position, different numbers of people will be instantiated. For example, if humanity goes extinct by 2100, then anyone slated to be born after 2100 will not exist. If everyone prefers to exist, then those people will prefer a world in which humanity survives past 2100. In general, each person's expected utility depends on both the probability that they will exist and the average utility of the people who do exist, and this product tracks total utility rather than average utility (see the sketch below).
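To make this concrete, here is a toy calculation of my own (an illustration of Harsanyi-style reasoning, not something Harsanyi himself wrote). Suppose there are M possible people behind the veil, of whom N will actually be born, and suppose the people who are born enjoy a total welfare of U. If every possible person is equally likely to be born and non-existence counts as zero welfare, then each person's expected utility is

$$\mathbb{E}[u_i] = \Pr(i \text{ exists}) \times \text{(average utility)} = \frac{N}{M} \times \frac{U}{N} = \frac{U}{M}.$$

Since the number of possible people M is fixed from behind the veil, maximizing each person's expected utility is the same as maximizing the total utility U. That is exactly the total view.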

Similarly, we can show that the players will want to maximize the utility of all moral patients, not just human beings. Even though the players are capable of reasoning (and thus moral agency) while in the original position, they could be instantiated as humans or non-human animals, with or without moral agency. All players have a stake in the decision-making process whether or not they end up as moral agents.

Utilitarianism in the real world

Based on my (non-expert) knowledge of the social sciences, especially economics and political science, I think that a society with maximum total utility would have the following characteristics:

  • It would avoid unnecessary suffering and violence. Thus, it would provide for everyone’s safety while avoiding excessive or discriminatory punishments, and it would be free from war and armed conflict.
  • It would tolerate various ways of living in terms of religion, political belief, culture, sexuality, and so on; and it would be free from prejudice and discrimination based on morally irrelevant features like race and gender.
  • It would have a globalized, free-market economy (to promote economic efficiency and growth) with an effective welfare state (to limit inequality). Both markets and government intervention would work together to eliminate poverty and create wealth for all.
  • It would protect non-human animals and the natural and built environments, since everyone benefits from clean air, clean water, and a good climate.
  • It would protect humanity from existential risks, such as biological and nuclear weapons, so that humanity can survive and flourish for thousands, if not millions, of years.

  • It would have mechanisms to make progress and address new challenges. Thus, it would have inclusive, democratic institutions, as well as freedom of speech and assembly, so that people can openly propose and debate ideas for improvement.

In short, such a society would embrace economic, social, and political liberalism. It would be an open society in which everyone can fully participate, free from discrimination and violence. But the real world is full of suffering and injustice, even as it has improved so much in the last 200 years. How can we build a better world?

Countless intellectuals and social movements have dedicated themselves to improving the world. One such movement is liberalism, a diverse political movement that aims to achieve such goals as securing civil and political rights and creating shared prosperity through the reform of political and economic institutions. Liberalism came of age in the early 19th century, and it has come to dominate modern politics. More recently, the effective altruism movement has been applying careful reasoning and evidence to figure out how to help others as effectively as possible. Its successes include GiveWell, Open Philanthropy, and the Gates Foundation.

I plan to write more posts about how we can improve the world, drawing from both the liberal and effective altruist traditions—be sure to subscribe so you’ll receive them. Also, if you like this post, please share it with your friends.

In the meantime, you can learn more about utilitarianism at Utilitarianism.net, a website co-written by William MacAskill, a philosophy professor at Oxford and one of the founders of effective altruism, and Darius Meissner, an Oxford student and fellow member of the EA community.

Take care!

Comments



I am attracted to utilitarianism, but I also find some of its possible implications off-putting. And there are some objections I have from first principles.

One objection is that any numbers we use in practice just have to be made up. (This objection might be especially serious if we take animals into account, which I think we should.) So maybe utilitarianism is the "correct" theory, but if I don't have access to the correct utilities, it is not clear whether I should use made-up numbers to do the expected utility calculations. One might compare with theorems saying that individual rational choice is equivalent to maximizing a von Neumann-Morgenstern utility function. Yet very few people, even economists, try to do that in practice, and it is not clear that people would be less irrational if they tried to do explicit expected-utility calculations in various circumstances.

A second theoretical objection I have is that if we suppose there is any chance that humanity, or sentient life, will survive forever, then the universe will contain infinite amounts of pain and pleasure, all calculations become divergent, and the theory gives no guidance at all. You might object that this is impossible under current scientific theories, but the conclusion goes through no matter how small the probability is. Surely there is at least a 1/Ackermann(1000) chance that our current understanding of physics is wrong?
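[To spell out the divergence worry in symbols—this is my own sketch of the commenter's argument, not their words: let p > 0 be the probability that sentient life survives forever and accrues unbounded welfare, and let U_finite be the expected total welfare otherwise. Then

$$\mathbb{E}[U] = p \cdot \infty + (1 - p) \cdot U_{\text{finite}},$$

which is infinite—or undefined, if both pain and pleasure grow without bound—for any p > 0, however small. Comparing actions by expected total utility then gives no guidance.]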
