Article published in the first volume of the philosophy journal Aperto Animo. Available here.

Abstract:

Let Reductionism be the correct account of personal identity. How does that change, or constrain, our views in ethical theory? In this paper, I presuppose a popular version of Reductionism, the psychological criterion of personal identity, and explore its implications for ethical theory. First, I argue that this view most plausibly implies the extreme view, according to which the ethically significant metaphysical units are momentary experiences. I then argue that the extreme view appropriately responds to the nonidentity problem by rejecting the person-affecting view. Next, I argue that the extreme view lends support to utilitarianism. The extreme view also yields the so-called Repugnant Conclusion, which says that for any population with very high welfare, there is a population containing more individuals, with lives that are barely worth living, whose existence, all else equal, is better. I close by defending the extreme view's plausibility in the face of this result.
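In symbols, and assuming a totalist aggregation over welfare (a sketch in my own notation, not the paper's formalism), the Repugnant Conclusion amounts to the following:

```latex
% Total value of a uniform population: N individuals at welfare level w.
% Under the extreme view, the "individuals" are momentary experiences.
\[
V(A) = N \cdot w_A, \qquad V(Z) = M \cdot w_Z
\]
% Repugnant Conclusion: for any high w_A > 0 and any small w_Z with
% 0 < w_Z < w_A (lives barely worth living), a large enough M makes
% Z better than A:
\[
M > N \cdot \frac{w_A}{w_Z} \;\Longrightarrow\; V(Z) > V(A)
\]
```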

Comments (2)

I agree that the extreme view is inconsistent with "the person-affecting view" (the specific principle), or at least that conscious wellbeing would no longer be action-guiding under it. But there are other "person-affecting views" that would still be action-guiding and would also avoid the repugnant conclusion when applied to "atoms", "person-moments", or individual experiences.
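To fix terms, the specific principle I have in mind can be put roughly as follows (my notation; standard formulations vary):

```latex
% Person-affecting view: one outcome can only be better than another
% if it is better for some individual.
\[
A \succ B \;\Longrightarrow\; \exists p : w_p(A) > w_p(B)
\]
% Under the extreme view the individuals p are momentary experiences,
% which plausibly exist in at most one of A and B, so the condition
% can rarely if ever be met.
```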

It's possible both to satisfy the procreation asymmetry and to "solve" the nonidentity problem with a wide asymmetric person-affecting view, at the cost of the independence of irrelevant alternatives.

Alternatively, with a narrow asymmetric person-affecting view (being indifferent when both experiences would be "positive", but choosing against the worse-off one when it would be negative), only negative states would matter. Negative utilitarianism could be founded on such a view. See also tranquilism, and this thread on my shortform.
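A compact way to state the asymmetry appealed to here (again my notation, and only a sketch):

```latex
% Procreation asymmetry, for an added experience e with welfare w(e),
% holding all other experiences in outcome X fixed:
\[
w(e) < 0 \;\Longrightarrow\; V(X \cup \{e\}) < V(X)
\qquad \text{(adding a negative experience is worse)}
\]
\[
w(e) > 0 \;\not\Longrightarrow\; V(X \cup \{e\}) > V(X)
\qquad \text{(adding a positive experience is not thereby better)}
\]
```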

You could also have something like a wide necessitarianism, which would "solve" the nonidentity problem but have nothing to say about whether adding extra experiences is good or bad (assuming no effects on others), regardless of their welfare. At the level of persons, I think Teruji Thomas' paper discusses this, and Christopher Meacham's approach could probably be modified for this, too. See also Dasgupta's approach, described in "The welfare economics of population" by John Broome, which could be combined with something like Meacham's counterpart relations.
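For concreteness, here is a rough sketch of the narrow necessitarian comparison at the level of experiences; the wide variant would match counterparts across options rather than requiring strict identity:

```latex
% Narrow necessitarian comparison over options O_1, ..., O_k: only the
% welfare of experiences that occur in every option counts, where
% pop(O_j) is the set of experiences occurring in option O_j.
\[
V(O_i) \;=\; \sum_{e \,\in\, \bigcap_{j=1}^{k} \mathrm{pop}(O_j)} w_e(O_i)
\]
% An extra experience (one absent from some option) contributes to no
% V(O_i), so the view is silent on whether adding it is good or bad.
```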

Thank you very much for this great reply! I'll certainly check them out once I get back to thinking about population ethics.

I do want to make clear that the approach I took in this paper was one of seeing what views and implications would "flow" from reductionism, rather than one of finding the best theory to accommodate our existing moral intuitions in light of some new fact.
