I would say that I'm most sympathetic to consequentialism and utilitarianism (if understood to allow aggregation in other ways besides summation). I don't think it's entirely implausible that the order in which harms or benefits occur can matter, and I think this could have consequences for replacement, but I haven't thought much about this, and I'm not sure such an intuition would be permitted in what's normally understood by "utilitarianism".
Maybe it would be helpful to look at intuitions that would justify replacement, rather than at a specific theory. If you're a value-monistic consequentialist, you treat the order of harms and benefits as irrelevant (the case for most utilitarians, I think), and you
1. accept that separate personal identities don't persist over time, i.e. accept empty individualism or open individualism (and reject closed individualism),
2. take an experiential account of goods and bads (something can only be good or bad if there's a difference in subjective experiences), and
3. accept either 3.a. or 3.b., according to whether you accept empty or open individualism:
3. a. (under empty individualism) accept that it's better to bring a better off individual into existence than a worse off one (the nonidentity problem), or
3. b. (under open individualism) accept that it's better to have better experiences,
then it's better to replace a worse off being A with a better off one B than to leave A, because the being A, if not replaced, wouldn't be the same A anyway. In terms of empty individualism, there's A1, who will soon cease to exist regardless of our choice, and we're deciding between A2 and B.
A1 need not experience the harm of death (e.g. if they're killed in their sleep), and the fact that they might have wanted A2 to exist wouldn't matter, since A1 never experiences the satisfaction or frustration of that preference, so it could never have counted anyway.
For open individualism, rather than A and B, or A1, A2 and B, there's only one individual and we're just considering different experiences for that individual.
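To make the comparison explicit (this is just my own notation for the argument above): let $W(\cdot)$ be the total value of the experiences a future contains. With everything up to the choice held fixed, we're comparing a future containing $A_2$ with one containing $B$, so

$$V(\text{replace}) - V(\text{don't replace}) = W(B) - W(A_2) > 0$$

whenever B is better off than $A_2$. The same subtraction goes through under open individualism, reading $A_2$ and $B$ as different experiences for the one individual.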
I don't think there's a very good basis for closed individualism (the persistence of separate identities over time), and it seems difficult to defend a nonexperiential account of wellbeing, especially if closed individualism is false, since I think we would then also have to apply that account to individuals who have long been dead, and their interests could, in principle, outweigh the interests of the living. I don't have a general proof of this last claim, and I haven't spent a great deal of time thinking about it, so it could be wrong.
Also, this is "all else equal", of course, which is not the case in practice; you can't expect attempting to replace people to go well.
Ways out for utilitarians
Even if you're a utilitarian but reject 1 above, i.e. you believe that separate personal identities do persist over time, and you take a timeless view of individual existence (an individual still counts toward the aggregate even after they're dead), then you can avoid replacement by aggregating wellbeing over each individual's lifetime before aggregating across individuals in certain ways (e.g. average utilitarianism or critical-level utilitarianism, which of course have other problems); see "Normative population theory: A comment" by Charles Blackorby and David Donaldson.
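To spell out the order of aggregation (my own notation, not Blackorby and Donaldson's): write $u_{i,t}$ for individual $i$'s wellbeing at time $t$ and $U_i = \sum_t u_{i,t}$ for their lifetime total, counted timelessly over all $n$ individuals who ever exist. Then

$$\text{total: } \sum_i U_i, \qquad \text{average: } \frac{1}{n} \sum_i U_i, \qquad \text{critical-level: } \sum_i (U_i - c) \text{ for some } c > 0.$$

Because the killed individual's truncated lifetime total still counts on the timeless view, replacement adds a low $U_i$ to the aggregate rather than deleting it, which is what can make replacement come out worse under the average and critical-level views.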
Under closed individualism, you can also believe that killing is bad when it prevents an individual's lifetime utility from increasing, while also believing there's no good in adding people with good lives (or that this good is always dominated by increasing an existing individual's lifetime utility, all else equal), so that the killing that prevents individuals from increasing their lifetime utilities would not be compensated for by adding new people, since they add no value. However, if you accept the independence of irrelevant alternatives and that adding bad lives is bad (together with the claim that adding good lives isn't good, this is the procreation asymmetry), then I think you're basically committed to the principle of antinatalism (but not necessarily the practice). Negative preference utilitarianism is an example of such a theory. "Person-affecting views and saturating counterpart relations" by Christopher Meacham describes a utilitarian theory which avoids antinatalism by rejecting the independence of irrelevant alternatives.
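One way to write down this kind of view (a toy formalization of my own, not from the papers cited): let $\Delta U_i$ be the change in an existing individual $i$'s lifetime utility relative to the alternative act, and $U_j$ the lifetime utility of each added individual. Then

$$V = \sum_{i \in \text{existing}} \Delta U_i + \sum_{j \in \text{added}} \min(U_j, 0).$$

Killing lowers the first sum (it prevents someone's lifetime utility from increasing), and since the second sum is never positive, adding new people can't compensate; at the same time, adding bad lives is bad, which is the asymmetry that, combined with the independence of irrelevant alternatives, yields the antinatalist conclusion in principle.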
Sure. I’ll use traditional total act-utilitarianism, defined as follows, as the example here so that it’s clear what we are talking about:
Traditional total act-utilitarianism: An act is right if and only if it results in a sum of well-being (positive well-being minus negative well-being) that is at least as great as that resulting from any other available act.
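In symbols (just restating this definition; the notation is mine): writing $w_i(a)$ for the net well-being (positive minus negative) that individual $i$ receives if act $a$ is performed, an act $a^*$ is right if and only if

$$\sum_i w_i(a^*) \ge \sum_i w_i(a) \quad \text{for every available act } a.$$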
I gather the metaethical position you describe is something like one of the following three:
(1) When I say ‘I think utilitarianism is right’ I mean ‘I think that after I reach reflective equilibrium I will think that any act I perform is right if and only if it results in a sum of well-being (positive well-being minus negative well-being) that is at least as great as that resulting from any other available act.’
Formulation (1) was about which of your actions will be right. Alternatively, the metaethical position could be as follows:
(2) When I say ‘I think utilitarianism is right’ I mean ‘I think that after I reach reflective equilibrium I will think that any act anyone performs is right if and only if it results in a sum of well-being (positive well-being minus negative well-being) that is at least as great as that resulting from any other available act.’
Or perhaps a formulation in terms of wants or preferences instead of rightness, like the following, better describes your metaethical position (still using utilitarianism just as an example):
(3) When I say ‘I think utilitarianism is right’ I mean ‘I think that after I reach reflective equilibrium I will want or have a preference that everyone act in a way that results in a sum of well-being (positive well-being minus negative well-being) that is at least as great as that resulting from any other available act.’
My impression is that in the academic literature, metaethical theories/positions are usually (always or almost always) formulated as general claims about what, for example, statements such as ‘one ought to be honest’ mean; the metaethical theories/positions do not have the form ‘when I say “one ought to be honest” I mean …’. But, sure, talking, as you do, about what you mean when you say ‘I think utilitarianism is right’ sounds fine.
The new version of your thought experiment sounds fine, which I gather would go something like the following:
Suppose almost all humans adopt utilitarianism as their moral philosophy and fully colonize the universe, and then someone invents the technology to kill humans and replace them with beings of greater well-being. (Assume it would be optimal, all things considered, to kill and replace humans.) Utilitarianism seems to imply that at least the humans who are utilitarians should commit mass suicide (or accept being killed) in order to bring the new beings into existence, because that's what utilitarianism implies is the optimal and hence morally right action in that situation.