I would say that I'm most sympathetic to consequentialism and utilitarianism (if understood to allow aggregation in ways other than summation). I don't think it's entirely implausible that the order in which harms or benefits occur can matter, and this could have consequences for replacement, but I haven't thought much about it, and I'm not sure such an intuition would be permitted by what's normally understood as "utilitarianism".
Maybe it would be helpful to look at intuitions that would justify replacement, rather than at a specific theory. If you're a value-monistic consequentialist who treats the order of harms and benefits as irrelevant (the case for most utilitarians, I think), and you
1. accept that separate personal identities don't persist over time and accept empty individualism or open individualism (reject closed individualism),
2. take an experiential account of goods and bads (something can only be good or bad if there's a difference in subjective experiences), and
3. accept either 3.a. or 3.b., according to whether you accept empty or open individualism:
3.a. (under empty individualism) accept that it's better to bring a better-off individual into existence than a worse-off one (cf. the nonidentity problem), or
3.b. (under open individualism) accept that it's better to have better experiences,
then it's better to replace a worse-off being A with a better-off one B than to leave A, because A, if not replaced, wouldn't be the same A anyway. In empty-individualist terms, there's A1, who will soon cease to exist regardless of our choice, and we're deciding between A2 and B.
A1 need not experience the harm of death (e.g. if they're killed in their sleep), and the fact that they might have wanted A2 to exist wouldn't matter, since A1 never experiences the satisfaction or frustration of that preference, so on an experiential account it could never have counted anyway.
Under open individualism, rather than A and B (or A1, A2 and B), there's only one individual, and we're just considering different experiences for that individual.
I don't think there's a very good basis for closed individualism (the persistence of separate identities over time), and it seems difficult to defend a nonexperiential account of wellbeing, especially if closed individualism is false, since I think we would then have to apply such an account to individuals who have long been dead, and their interests could, in principle, outweigh the interests of the living. I don't have a general proof of this last claim, and I haven't spent much time thinking about it, so it could be wrong.
Also, all of this is "all else equal", which of course doesn't hold in practice; you can't expect attempts to replace people to actually go well.
Ways out for utilitarians
Even if you're a utilitarian but reject 1 above, i.e. you believe that separate personal identities do persist over time, and you take a timeless view of individual existence (an individual still counts toward the aggregate even after they're dead), then you can avoid replacement by aggregating wellbeing over each individual's lifetime before aggregating across individuals in certain ways, e.g. average utilitarianism or critical-level utilitarianism (which of course have other problems); see "Normative population theory: A comment" by Charles Blackorby and David Donaldson.
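To make the order of aggregation concrete, here's a minimal sketch (my notation, not Blackorby and Donaldson's): let $u_{i,t}$ be the wellbeing of individual $i$ at time $t$, and first form lifetime totals

$$U_i = \sum_t u_{i,t},$$

then aggregate across the $n$ individuals who ever exist, e.g.

$$V_{\text{avg}} = \frac{1}{n}\sum_{i=1}^{n} U_i \quad \text{(average)} \qquad \text{or} \qquad V_{\text{CL}} = \sum_{i=1}^{n} (U_i - c) \quad \text{(critical-level, with critical level } c > 0\text{)}.$$

Because each individual's lifetime is totalled before the second step, cutting a good life short and adding a replacement can lower the aggregate (one truncated $U_i$ plus an extra person counted against $c$, or a lower average) in a way that a flat sum over experiences wouldn't register.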
Under closed individualism, you can also believe that killing is bad when it prevents an individual's lifetime utility from increasing, while also believing there's no good in adding people with good lives (or that this good is always dominated by increasing an existing individual's lifetime utility, all else equal). Then the killing, which prevents individuals from increasing their lifetime utilities, would not be compensated for by adding new people, since they add no value. However, if you accept the independence of irrelevant alternatives and that adding bad lives is bad (which, together with the claim that adding good lives isn't good, is the procreation asymmetry), then I think you're basically committed to the principle of antinatalism (but not necessarily the practice). Negative preference utilitarianism is an example of such a theory. "Person-affecting views and saturating counterpart relations" by Christopher Meacham describes a utilitarian theory which avoids antinatalism by rejecting the independence of irrelevant alternatives.
My first objection is that you're using a different sense of "should" than the standard one. My preferred interpretation of "X should do Y" is that it's equivalent to "I endorse some moral theory T, and T endorses X doing Y". (Or, more simply, "according to utilitarianism, X should do Y" is equivalent to "utilitarianism endorses X doing Y".) In this case, "should" feels like it's saying something morally normative.
You, on the other hand, seem to be using "should" as in "a person who has a preference X should act on X". In this case, "should" feels like it's saying something epistemically normative. You may think these are the same thing, but I don't, and either way it's confusing to build that assumption into our language. I'd prefer to replace this latter meaning of "should" with "it is rational to". So then we get:
"it is rational for humans who are utilitarians to commit mass suicide in order to bring the new beings into existence, because that's what utilitarianism implies is the right action."
My second objection is that this is only the case if "being a utilitarian" is equivalent to "having only one preference, which is to follow utilitarianism". In practice, people have both moral preferences and personal preferences. I'd still count someone as a utilitarian if they follow their personal preferences instead of their moral preferences some (or even most) of the time. So it's not clear whether it's rational for a human who is a utilitarian to commit suicide in this case; it depends on the contents of their personal preferences.
I think we avoid all of this mess just by saying "Utilitarianism endorses replacing existing humans with these new beings." This is, as I mentioned earlier, a claim similar to "ZFC implies that 1 + 1 = 2", and it allows people to have fruitful discussions without agreeing on whether they should endorse utilitarianism. I'd also be happy with Simon's version above, "Utilitarianism seems to imply that humans should...", although I think it's slightly less precise than mine, because it introduces an unnecessary "should" that some people might take to be a meta-level claim rather than merely a claim about the content of the theory of utilitarianism (a minor quibble, though; the analogous phrasing would be "ZFC implies that '1 + 1 = 2' is true").
Anyway, we have pretty different meta-ethical views, and I'm not sure how much we're going to converge, but I will say that from my perspective, your conflation of epistemic and moral normativity (as I described earlier) is a key component of why your position seems confusing to me.