I would say that I'm most sympathetic to consequentialism and utilitarianism (if understood to allow aggregation in other ways besides summation). I don't think it's entirely implausible that the order in which harms or benefits occur can matter, and I think this could have consequences for replacement, but I haven't thought much about this, and I'm not sure such an intuition would be permitted in what's normally understood by "utilitarianism".
Maybe it would be helpful to look at intuitions that would justify replacement, rather than a specific theory. If you're a value-monistic consequentialist who treats the order of harms and benefits as irrelevant (the case for most utilitarians, I think), and you
1. accept that separate personal identities don't persist over time and accept empty individualism or open individualism (reject closed individualism),
2. take an experiential account of goods and bads (something can only be good or bad if there's a difference in subjective experiences), and
3. accept either 3.a. or 3.b., according to whether you accept empty or open individualism:
3.a. (under empty individualism) accept that it's better to bring a better off individual into existence than a worse off one (as in the nonidentity problem), or
3.b. (under open individualism) accept that it's better to have better experiences,
then it's better to replace a worse off being A with a better off one B than to leave A, because A, even if left alone, wouldn't remain the same A anyway. In empty individualist terms, there's A1, who will soon cease to exist regardless of our choice, and we're really deciding between A2 and B.
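To make the comparison concrete, here's a minimal sketch in my own notation (not anything from the original argument), where w(·) stands for the total value of a person-stage's experiences:

```latex
% Replacement under empty individualism (illustrative notation).
% A1 exists and soon ceases to exist on either option; the real choice is A2 vs. B.
\[
V_{\text{leave}} = w(A_1) + w(A_2), \qquad
V_{\text{replace}} = w(A_1) + w(B).
\]
% If w(B) > w(A_2), then V_replace > V_leave,
% so replacement is better, all else equal.
```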
A1 need not experience the harm of death (e.g. if they're killed in their sleep), and the fact that they might have wanted A2 to exist wouldn't matter either, since A1 never experiences the satisfaction or frustration of that preference, so it could never have counted anyway.
For open individualism, rather than A and B, or A1, A2 and B, there's only one individual, and we're just considering different experiences for that individual.
I don't think there's a very good basis for closed individualism (the persistence of separate identities over time), and it seems difficult to defend a nonexperiential account of wellbeing, especially if closed individualism is false, since I think we would then have to apply such an account to individuals who have long been dead, and their interests could, in principle, outweigh the interests of the living. I don't have a general proof of this last claim, and I haven't spent a great deal of time thinking about it, so it could be wrong.
Also, this is "all else equal", of course, which is not the case in practice; you can't expect attempting to replace people to go well.
Ways out for utilitarians
Even if you're a utilitarian but reject 1 above, i.e. you believe that separate personal identities do persist over time, and you take a timeless view of individual existence (an individual still counts toward the aggregate even after they're dead), then you can avoid replacement by aggregating wellbeing over each individual's lifetime before aggregating across individuals in certain ways (e.g. average utilitarianism or critical-level utilitarianism, which of course have other problems); see "Normative population theory: A comment" by Charles Blackorby and David Donaldson.
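As a rough sketch of what "aggregate within lives first" means, in my own notation (not taken from the Blackorby and Donaldson paper):

```latex
% Lifetime wellbeing of individual i, summed over the times t of their life
% (on the timeless view, the dead still count):
\[ u_i = \sum_{t} u_{i,t} \]
% Then aggregate across all n individuals who ever live, e.g.
\[ V_{\text{avg}} = \frac{1}{n} \sum_{i=1}^{n} u_i
   \qquad\text{or}\qquad
   V_{\text{CL}} = \sum_{i=1}^{n} (u_i - c), \]
% average utilitarianism and critical-level utilitarianism,
% with c > 0 the critical level.
```

Roughly, since A still counts toward the aggregate after death, killing A leaves a permanent shortfall in u_A that a replacement's contribution doesn't straightforwardly offset under these aggregation rules.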
Under closed individualism, you can also believe that killing is bad if it prevents individual lifetime utility from increasing, but also believe there's no good in adding people with good lives (or that this good is always dominated by increasing an individual's lifetime utility, all else equal), so that the killing which prevents individuals from increasing their lifetime utilities would not be compensated for by adding new people, since they add no value. However, if you accept the independence of irrelevant alternatives and that adding bad lives is bad (together with the claim that adding good lives isn't good, this is the procreation asymmetry), then I think you're basically committed to the principle of antinatalism (but not necessarily the practice). Negative preference utilitarianism is an example of such a theory. "Person-affecting views and saturating counterpart relations" by Christopher Meacham describes a utilitarian theory which avoids antinatalism by rejecting the independence of irrelevant alternatives.
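One simple way to formalize the procreation asymmetry (my sketch, not Meacham's formulation): let the contribution of adding a new person with lifetime utility u be

```latex
\[ v(u) = \min(u, 0) \]
% Adding a bad life is bad: u < 0 implies v(u) < 0.
% Adding a good life adds nothing: u >= 0 implies v(u) = 0,
% so new good lives can't compensate for cutting existing lives short.
```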
If we are concerned with how vulnerable moral theories such as traditional total act-utilitarianism and various other forms of consequentialism are to replacement arguments, I think much more needs to be said. Here are some examples.
1. Suppose the agent is very powerful, say, the leader of a totalitarian society on Earth that can dominate the other people on Earth. This person has access to technology that could kill and replace either everyone on Earth or perhaps everyone except a cluster of the leader’s close, like-minded allies. Roughly, this person (or the group of like-minded people the leader belongs to) is so powerful that the wishes of others on Earth who disagree can essentially be ignored from a tactical perspective. Would it be optimal for this agent to kill and replace either everyone or, for example, at least everyone in other societies who might otherwise get in the way of the maximization of the sum of well-being?
2. You talk about modifying one’s ideology, self-binding, and committing, but there are questions about whether humans can do that. For example, if some agent in the future would be about to be able to kill and replace everyone, can you guarantee that this agent wi... (read more)
It is worth noting that this is not, as it stands, a reply available to a pure traditional utilitarian.
But a relevant question here is whether that also holds true given a purely utilitarian view, as opposed to... (read more)
That's why the very first words of my comment were "I don't identify as a utilitarian."
I think I, and maybe others, are still confused about the point of your top-level comment. Simon Knutsson's argument is against utilitarianism, and I think Richard Ngo wanted to see if there was a good counter-argument against it from a utilitarian perspective, or if a utilitarian just has to "bite the bullet". It seems like the motivation for both people was to try to figure out whether utilitarianism is the right moral philosophy / correct normative ethics.
Your reply doesn't seem to address their motivation, which is why I'm confused. (If utilitarianism is the right moral philosophy, then it would give the right action guidance even if one were 100% sure of it and other considerations such as contractarianism didn't apply, so it seems beside the point to talk about contractarianism and overconfidence.) Is the point that utilitarianism probably isn't right, but some other form of consequentialism is? If so, what do you have in mind?
Okay, thanks. So I guess the thing I'm curious about now is: what heuristics do you have for deciding when to prioritise contractarian intuitions over consequentialist intuitions, or vice versa? In extreme cases where one side feels very strongly about it (like this one), that's relatively easy, but do you have any thoughts on how to extend those heuristics to more nuanced dilemmas?