
Shall we all re-read Parfit’s ‘Overpopulation and the Quality of Life’ before taking certain assumptions for granted?

I am motivated to write this post in response to @Rafael Ruiz’s recent publication, but these are thoughts I’ve been contemplating for some time. In particular, I’ve noticed how various members of the EA community not only engage in deep discussions about the hypothetical value of life for different arthropods and invertebrates but also experience genuine distress over the need to remove ants or moths from their homes. Similarly, I’ve read here about the ‘anxious, dizzying existential crisis I’m facing recently’, which is at least partly triggered by adhering to classical utilitarianism as a primary moral framework. Sometimes this perspective is presented as if utilitarianism, as an ethical stance, were both obvious and unavoidable.

As the author points out, and as we all know, utilitarianism, particularly in its classical form, can indeed lead us to several ‘crazy towns’, and, prima facie, one shouldn’t get off these ‘crazy trains’ if one considers oneself intellectually and morally honest. I somewhat agree in a trivial sense, but it often seems as though we completely gloss over the crucial initial step of adopting classical utilitarianism as our main moral view in the first place—an assumption that is frequently taken for granted in many of these forum discussions.

I just find it surprising to encounter people who assert so categorically and nonchalantly that they are adherents of classical utilitarianism. Given the caliber of this forum, I assume that the most obvious problems with utilitarianism have been discussed extensively, so I have no intention of reiterating those discussions. Personally, I find Derek Parfit’s analysis particularly illuminating; he clearly identifies what is arguably the central problem of utilitarianism: the Repugnant Conclusion. His famous essay ‘Overpopulation and the Quality of Life’ not only clarifies the issue but also suggests a possible solution with his concept of ‘Perfectionism’—a concept which, as far as I remember, he does not fully develop in ‘Reasons and Persons’. This is not the only objection to the moral view, but I find it a particularly robust one, which makes it sufficient as the primary example here.
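For readers who want the structure of the problem spelled out, here is a minimal schematic of how the Repugnant Conclusion arises under a simple total view. The symbols (N, Q, M, ε, V) are my own shorthand, not Parfit’s notation:

```latex
% A simple total view: the value of an outcome is the sum of its
% inhabitants' welfare levels.
% World A: N people, each at a high welfare level Q > 0.
% World Z: M people, each at a barely positive welfare level \epsilon > 0.
\[
  V(A) = N \cdot Q, \qquad V(Z) = M \cdot \epsilon .
\]
% For any fixed N and Q, choosing M > N Q / \epsilon gives
\[
  M \cdot \epsilon > N \cdot Q \;\Longrightarrow\; V(Z) > V(A),
\]
% so the total view must judge Z better than A: a vast population of
% lives barely worth living beats a smaller, flourishing one.
```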

‘Intellectual and moral honesty’ is just as important, if not more so, when addressing the problems of utilitarianism

I sincerely believe that addressing these aspects of utilitarianism is crucial for anyone striving to be ‘intellectually and morally honest’, as it feels unfair to imply that anyone who gets off the train does so only out of intellectual or moral dishonesty. Neglecting these issues is equally indefensible, considering the significant consequences if utilitarianism is, even partially, flawed. A realistic possibility of carelessly adhering to a utilitarian stance might be an EA member inadvertently influencing a future AI to transform the universe into ‘happy ants’ (to continue the theme of the referenced post). If the Repugnant Conclusion poses a real problem, and if Parfit’s notion of Perfectionism (or any similar concept addressing utilitarianism’s current challenges) holds true, then EA could end up being responsible for an immensely catastrophic act of evil. This would render incidents like the FTX debacle irrelevant in comparison, and would become the ultimate example of how ‘the road to hell is paved with good intentions’.

Again, the author expresses confusion, saying, ‘I’m baffled, I feel somewhat anxious, uneasy. I feel bad about any time I waste that isn’t related to astronomical numbers. It is just plain weird that alien invertebrates might be by far the most important moral subjects in the universe.’ What I want to point out is that such anxiety may not stem from being ‘intellectually and morally honest’, but instead might arise from failing to confront the serious issues within utilitarianism with the same level of intellectual and moral scrutiny. You might fully understand the implications of the Repugnant Conclusion and other critiques of utilitarianism and choose to ‘bite the bullet anyway.’ But that would be your prerogative; don’t presume it to be the obvious course of action. This stance is highly personal and subjective, not universally held among other thoughtful individuals. I am continually surprised by how frequently this oversight occurs in many discussions, to the point that the issue seems entirely forgotten.

A question about EA’s approach to metaethical issues

At this point, in my view, if longtermism is indeed compelling, its primary value lies in buying us time to resolve ethical dilemmas (a point that has been made previously in several discussions). But we should not jump the gun before we have truly arrived at solutions, as the potential consequences could be cosmically severe.

Reflecting on the previous discussion, I would like to pose a more specific question regarding the EA approach to this topic. I understand that within EA there should also be a dedicated area of funding or research aimed at addressing these important questions—deepening and clarifying the challenges associated with utilitarianism. What are these initiatives? Or is EA currently just accepting utilitarianism as the default normative ethic and limiting its efforts to merely optimizing ‘concrete implementations’ across various institutions and research endeavors?

I found @Wei Dai’s comment in the previous post particularly interesting, as it highlights a neglected area—surprising, since I believe this should be a priority to clarify before we proceed. He notes,

‘As far as actionable points, I’ve been advocating for work on metaphilosophy or AI philosophical competence as a means to accelerate philosophical progress in general—ensuring it keeps pace with other forms of intellectual progress, such as scientific and technological advancements, which are likely to be accelerated by AI development. This would also improve the chances that human-descended civilizations will eventually arrive at correct conclusions on important moral and philosophical issues, and be motivated and guided by those conclusions.’ 

And this is the kind of work I mean: I am keen to learn whether anyone is aware of specific initiatives aimed at clarifying the metaethical landscape at this juncture (beyond simply employing longtermism to eventually ‘have time to resolve metaethics’). From a personal viewpoint, I believe the next significant step in our ethical and moral advancement will occur once we fully address the ‘easy problem of consciousness’ and better understand the translation from the physical to the phenomenal (though, to reference another popular recent post in the forum, there is still not even consensus on whether phenomenal consciousness is relevant for moral status here. Again, we are at such an early stage, and our focus needs to be on enhancing our moral understanding.)

Finishing with a thought experiment: the Countless Ants Trolley, or how I would sincerely save your lovely child from the arthropods

Lastly, I want to present a personal thought experiment that I find useful for linking the repugnant conclusion with the invertebrate issue that Rafael mentions. Although there’s nothing fundamentally new here—it’s essentially a reiteration of the repugnant conclusion—I still find it illuminating. If anyone has a catchy name for this, I’m all ears; perhaps something like the ‘Countless Ants Trolley’, as I say in the title (heh), though I suspect similar descriptions already exist.

The scenario involves a classic trolley problem: on one side of the track stands a bright and kind human child, perhaps your own son or daughter (I literally picture this while thinking about other people’s children, not my own). This child is capable of deep love and compassion, is curious about mathematics and philosophy, and has whatever other traits you find meaningful and important. They have the potential to push the boundaries of human knowledge and to exhibit the best virtues of the human condition—traits that are rare but do exist. This child not only brings unique and profound happiness to their parents but also shares in this deep joy. Moments like tucking them in at night, or embracing after a day filled with shared discoveries and growth, foster a mutual bond and contentment that material means alone could never achieve. Some parents here may know exactly what I’m talking about; yet I fear that I am too clumsy with words to fully capture this experience. Perhaps words themselves might never truly suffice.

Anyway, on the opposite track, imagine we must decide at what point the number of small invertebrates—be it ants, cockroaches, moths, or any sentient being deemed to have the minimal positive value of experience—would justify diverting the trolley away from the child. Assume the trolley, initially headed towards this track, would instantly and painlessly eliminate these insects, causing no explicit suffering. So here is the thing: I firmly believe that if I were actually in such a situation, no finite number of ants could compel me to sacrifice your son or daughter. The value of that child’s life is immeasurable compared to the sum of the ants. They simply belong to different dimensions of valuation; they are of a different nature; one is not the result of any kind of aggregation of the other; they are incommensurable. As I often summarize to myself, ‘You just can’t always do arithmetic with phenomenology’—which is what I understand Parfit truly revealed with the repugnant conclusion.
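To make the clash explicit, here is a minimal sketch, in my own informal notation (the symbols V_c, ε, and n are illustrative placeholders), of the two positions the trolley case pulls apart:

```latex
% Let V_c be the value of the child's life and \epsilon > 0 the value
% assigned to one ant's minimally positive experience.
% Aggregative (total) view: some finite number of ants outweighs the child,
\[
  \exists\, n \in \mathbb{N} :\; n \cdot \epsilon > V_c ,
\]
% so for a large enough n the trolley should be diverted onto the child.
% The incommensurability (lexical) view I am gesturing at denies this:
\[
  \forall\, n \in \mathbb{N} :\; n \cdot \epsilon < V_c ,
\]
% no finite sum of minimal positive experiences ever outweighs the child's
% life, because the two values do not share a common additive scale.
```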

In any case, this may all just be a meandering mess I’ve dabbled in as a gut reaction, compounded by the fact that English is not my native language, so please forgive my lack of precision and insight. I do understand that most people in these forums are already trying harder, and with more sincerity, than most. However, I can’t shake the nagging feeling that concepts like Parfit’s Perfectionism should be more thoroughly considered, or at least more present, before we make certain assumptions about our starting moral stance. Love to everybody.



Comments (5)



You might like my 'Nietzschean Challenge to Effective Altruism':

The upshot: I’ll argue that there’s some (limited) overlap between the practical recommendations of Effective Altruism (EA) and Nietzschean perfectionism, or what we might call Effective Aesthetics (EÆ). To the extent that you give Nietzschean perfectionism some credence, this may motivate (i) prioritizing global talent scouting over mere health interventions alone, (ii) giving less priority to purely suffering-focused causes, such as animal welfare, (iii) wariness towards traditional EA rhetoric that’s very dismissive of funding for art museums and opera houses, and (iv) greater support for longtermism, but with a strong emphasis on futures that continue to build human capacities and excellences, and concern to avoid hedonistic traps like “wireheading”.

P.S. I think you mean to talk about 'ethical theory'. 'Metaethics' is a different philosophical subfield entirely.

Thanks @Richard Y Chappell🔸, I truly enjoyed that one (you’re right that all this leans more towards ethical theory or normative ethics than metaethics; my apologies for the slip). I particularly resonated with:

That said, I do think the view contains some under-appreciated insights that are worth taking on board, at least under the remit of “moral uncertainty”. For those concerned about the Repugnant Conclusion, I think perfectionism at least offers a better alternative than bleak “negative” views that deny any positive value to our existence.

Moreover, I find the implicit critique of hedonism extremely compelling, and find that reflecting on Nietzschean perfectionism moves me more strongly towards some form of objective list theory of well-being. I think welfare objectivism is a view that EAs ought to take very seriously, and it especially ought to lead us to want to (i) rule out wireheading and other “cheap” hedonistic futures as involving unacceptable axiological risk, given how poorly such futures score on plausible non-hedonistic views

I completely agree that moving towards an objective list theory may not only be plausible but crucial, given the risks of overlooking the possibility that it may be closer to the truth.

In any case, this is precisely the type of topic and nuance that I find lacking in most EA discussions; I find it surprising that considering such important questions is often not even seen as a possibility.

Are posts like this, then, a rarity within the EA context? Are there any sub-communities, study groups, or institutions that focus seriously on these types of issues? (I assume there aren’t, as you likely would have mentioned them, but I remain surprised)

Additionally, if you have any other references to essays or articles that explore different types of perfectionism as a potential solution to some of the challenges posed by the repugnant conclusion, I would greatly appreciate it.

Thanks again!

Thanks for the post. There are some writings out of the Center for Reducing Suffering that may interest you. They tend to take a negative utilitarian view of things, which has some interesting implications, in particular for the repugnant conclusion(s).

I've been trying to come up with my own version of utilitarianism that I believe takes better account of the effects of rights and self-esteem/personal responsibility. In doing so, it's become more and more apparent to me that our consciences are not naturally classical utilitarian in nature, and this is likely where some apparent disagreements between utilitarian implications and our moral intuitions (as given by our consciences) arise. I'm planning on writing something up soon on how we might go about quantifying our consciences so that they could be used in a quantitative decision-making process (as by an AI), rather than trying to make a full utilitarian framework into a decision-making framework for an AI. This has some similarities to what is often suggested by Richard Chappell, i.e., that we follow heuristics (in this case, our consciences) when making decisions rather than some "utilitarian calculus."

Thank you very much, Sean, for your response. I especially found "Minimalist extended very repugnant conclusions are the least repugnant" interesting, though I feel it still somewhat misses the broader point (or bites the wrong bullet) about how you can't really do "arithmetic with phenomenology" to begin with—the point I think Parfit makes apparent in 'Overpopulation and the Quality of Life'.

Good luck with your plans for quantifying our consciences so that they could be used in a quantitative decision-making process, though I'm afraid that anything close to that is going to be very hard until we somehow solve the "easy problem" of consciousness (and I'm not sure even then...).

Executive summary: The author argues that the EA community often takes classical utilitarianism for granted without sufficiently addressing its theoretical challenges, which could lead to potentially catastrophic outcomes if flawed utilitarian assumptions influence future AI systems.

Key points:

  1. Many EA discussions assume classical utilitarianism without critically examining its foundations and issues like the repugnant conclusion.
  2. Derek Parfit's work on population ethics and "Perfectionism" offers important critiques of utilitarianism that deserve more attention.
  3. Uncritically adhering to utilitarianism could lead to disastrous outcomes, like an AI optimizing for "happy ants" instead of human flourishing.
  4. The author questions whether EA is funding research to address utilitarianism's challenges or just accepting it as the default ethical framework.
  5. A thought experiment ("Countless Ants Trolley") illustrates how some values may be incommensurable, challenging utilitarian aggregation.
  6. The author calls for more rigorous examination of metaethical issues before proceeding with longtermist efforts that could have cosmic consequences.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
