I would say that I'm most sympathetic to consequentialism and utilitarianism (if understood to allow aggregation in other ways besides summation). I don't think it's entirely implausible that the order in which harms or benefits occur can matter, and I think this could have consequences for replacement, but I haven't thought much about this, and I'm not sure such an intuition would be permitted in what's normally understood by "utilitarianism".
Maybe it would be helpful to look at intuitions that would justify replacement, rather than a specific theory. If you're a value-monistic consequentialist, you treat the order of harms and benefits as irrelevant (the case for most utilitarians, I think), and you:
1. accept that separate personal identities don't persist over time, i.e. accept empty individualism or open individualism (and reject closed individualism),
2. take an experiential account of goods and bads (something can only be good or bad if there's a difference in subjective experiences), and
3. accept either 3.a. or 3.b., according to whether you accept empty or open individualism:
3.a. (under empty individualism) accept that it's better to bring a better-off individual into existence than a worse-off one (the nonidentity problem), or
3.b. (under open individualism) accept that it's better to have better experiences,
then it's better to replace a worse-off being A with a better-off one B than to leave A, because the being A, even if left alone, wouldn't be the same A anyway. In terms of empty individualism, there's A1, who will soon cease to exist regardless of our choice, and we're deciding between A2 and B.
A1 need not experience the harm of death (e.g. if they're killed in their sleep), and the fact that they might have wanted A2 to exist wouldn't matter, since that preference could never have counted anyway: A1 never experiences its satisfaction or frustration.
Under open individualism, rather than A and B, or A1, A2 and B, there's only one individual, and we're just considering different experiences for that individual.
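To make the empty individualist version concrete, here's a toy comparison (the numbers are made up, purely for illustration): say A2's experiences would have value 5 and B's would have value 7, while A1's experiences, including a painless death, are the same either way.

$$
\underbrace{v(A_1)}_{\text{same either way}} + \underbrace{v(A_2)}_{=\,5} \quad\text{vs.}\quad \underbrace{v(A_1)}_{\text{same either way}} + \underbrace{v(B)}_{=\,7}
$$

Given 2 and 3.a, the second option is better by 2, so replacement wins, all else equal.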
I don't think there's a very good basis for closed individualism (the persistence of separate identities over time), and it seems difficult to defend a nonexperiential account of wellbeing, especially if closed individualism is false, since I think we would then also have to apply such an account to individuals who have long been dead, and their interests could, in principle, outweigh the interests of the living. I don't have a general proof of this last claim, and I haven't spent a great deal of time thinking about it, though, so it could be wrong.
Also, this is "all else equal", of course, which is not the case in practice; you can't expect attempting to replace people to go well.
Ways out for utilitarians
Even if you're a utilitarian, if you reject 1 above (i.e. believe that separate personal identities do persist over time) and take a timeless view of individual existence (an individual still counts toward the aggregate even after they're dead), then you can avoid replacement by aggregating wellbeing over each individual's lifetime before aggregating across individuals in certain ways (e.g. average utilitarianism or critical-level utilitarianism, which of course have other problems); see "Normative population theory: A comment" by Charles Blackorby and David Donaldson.
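For concreteness, here's a rough sketch of the kind of two-step aggregation I mean, where $u_i$ is the lifetime wellbeing of individual $i$ among the $n$ individuals who ever exist (my notation, not Blackorby and Donaldson's):

$$
\text{total: } \sum_{i=1}^{n} u_i \qquad \text{average: } \frac{1}{n}\sum_{i=1}^{n} u_i \qquad \text{critical-level: } \sum_{i=1}^{n} (u_i - c), \; c > 0
$$

Replacement splits what would have been one completed life into two lives, each with lower lifetime wellbeing; even if the total is unchanged, the average falls (the same total spread over more people) and the critical-level sum pays an extra $-c$ for the added person, which is roughly how these views can block replacement.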
Under closed individualism, you can also believe that killing is bad when it prevents an individual's lifetime utility from increasing, while also believing there's no good in adding people with good lives (or that this good is always dominated by increasing an individual's lifetime utility, all else equal), so that the killing which prevents individuals from increasing their lifetime utilities would not be compensated for by adding new people, since the new people add no value. However, if you accept the independence of irrelevant alternatives and that adding bad lives is bad (together with the claim that adding good lives isn't good, this is the procreation asymmetry), then I think you're basically committed to the principle of antinatalism (but not necessarily the practice). Negative preference utilitarianism is an example of such a theory. "Person-affecting views and saturating counterpart relations" by Christopher Meacham describes a utilitarian theory which avoids antinatalism by rejecting the independence of irrelevant alternatives.
I originally wrote a different response to Wei's comment, but it wasn't direct enough. I'm copying the first part here since it may be helpful in explaining what I mean by "moral preferences" vs "personal preferences":
Each person has a range of preferences, which it's often convenient to break down into "moral preferences" and "personal preferences". This isn't always a clear distinction, but the main differences are:
1. Moral preferences are much more universalisable and less person-specific (e.g. "I prefer that people aren't killed" vs "I prefer that I'm not killed").
2. Moral preferences are associated with a meta-preference that everyone has the same moral preferences. This is why we feel so strongly that we need to find a shared moral "truth". Fortunately, most people are in agreement in our societies on the most basic moral questions.
3. Moral preferences are associated with a meta-preference that they are consistent, simple, and actionable. This is why we feel so strongly that we need to find coherent moral theories rather than just following our intuitions.
4. Moral preferences are usually phrased as "X is right/wrong" and "people should do right and not do wrong" rather than "I prefer X". This often misleads people into thinking that their moral preferences are just pointers to some aspect of reality, the "objective moral truth", which is what people "objectively should do".
When we reflect on our moral preferences and try to make them more consistent and actionable, we often end up condensing our initial moral preferences (aka moral intuitions) into moral theories like utilitarianism. Note that we could do this for other preferences as well (e.g. "my theory of food is that I prefer things which have more salt than sugar") but because I don't have strong meta-preferences about my food preferences, I don't bother doing so.
The relationship between moral preferences and personal preferences can be quite complicated. People act on both, but often have a meta-preference to pay more attention to their moral preferences than they currently do. I'd count someone as a utilitarian if they have moral preferences that favour utilitarianism, and these are a non-negligible component of their overall preferences.