A popular notion among some people interested in effective altruism is moral uncertainty[1].
There is much disagreement among intelligent people over what it is right to do, and while we each have our own beliefs about the matter, it would be foolish to be 100% certain that we are correct while all these others are not. Until further evidence is in, or until much wider consensus is reached, the only rational approach is to spread your degrees of belief across different ethical theories.
I find this notion unconvincing because I view morality as subjective. For example, if I believe bribery is wrong and someone else believes it's morally acceptable, I think neither of us is correct or incorrect. Instead, we simply differ in our moral preferences, much as I might think apples are tasty while someone else thinks they're gross. This view is sufficient to reject the notion of moral uncertainty described above, but there are at least two ways we can plausibly attempt to preserve the conclusion that we should accommodate alternative moral frameworks in our decision-making.
First, although it seems to me very unlikely, and even incomprehensible, that there is a correct[2] morality, one that is objectively true independent of whose perspective we use, the existence of a correct morality (this belief is often called moral realism) is an empirical claim, so I do accept a modesty argument[1] for its possibility.
Start with the premise of
Nonrealist Nihilism: If moral realism is false, then nothing matters.
Now, suppose you think the probability of moral realism is P. Then when you consider taking some action, the expected value of the action is
P * (value if moral realism is true) + (1-P) * (value if moral realism is false)
= P * (value if moral realism is true) + (1-P) * 0
where the substitution of 0 follows from the Nonrealist Nihilism premise. Since the second term is zero no matter which action you take, for any P > 0 the expected value is determined entirely by how good the action is if moral realism is true. Therefore, we can prudentially assume that moral realism is true.
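To make the arithmetic concrete, here is a minimal sketch in Python. The probabilities and action values are hypothetical, chosen only to show that the ranking of actions does not depend on P as long as P is positive; none of these numbers come from the argument itself.

```python
# Illustrative sketch of the expected-value argument above.
# Because the nonrealist branch contributes 0, the ranking of actions is
# fixed by their value under moral realism, whatever P > 0 happens to be.

def expected_value(p_realism, value_if_realism_true):
    value_if_realism_false = 0  # Nonrealist Nihilism: if realism is false, nothing matters
    return (p_realism * value_if_realism_true
            + (1 - p_realism) * value_if_realism_false)

actions = {"action A": 10, "action B": 3}  # hypothetical values under moral realism

for p in (0.9, 0.1, 0.001):
    ranking = sorted(actions, key=lambda a: expected_value(p, actions[a]), reverse=True)
    print(p, ranking)  # the ranking is identical for every positive p
```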
In the write-up linked above, Brian Tomasik points out some good reasons for rejecting this argument. But even granting that the argument holds, moral realism seems so weird and incomprehensible that I see no reason to prefer any possible moral realism (e.g. utilitarianism, libertarianism) over any other. Each possible realism is therefore canceled out by an equally likely opposite realism (e.g. utilitarianism says happiness is good, but anti-utilitarianism says happiness is bad, so the two cancel out), and the argument fails.
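Again purely as an illustration, with made-up symmetric credences and values, the cancellation looks like this:

```python
# Illustrative sketch of the cancellation argument: if every candidate
# moral realism is paired with an equally likely "opposite" realism that
# negates the value it assigns to an action, their contributions to the
# expected value sum to zero. All numbers are hypothetical.

candidate_realisms = {
    "utilitarianism":      {"prob": 0.25, "value": +10},
    "anti-utilitarianism": {"prob": 0.25, "value": -10},
    "theory X":            {"prob": 0.25, "value": +4},
    "anti-theory X":       {"prob": 0.25, "value": -4},
}

expected = sum(r["prob"] * r["value"] for r in candidate_realisms.values())
print(expected)  # 0.0 -- contributions from symmetric possibilities cancel
```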
The second rescue of moral uncertainty is more tenable and, as far as I know, was first formalized by Ruairí Donnelly. We can call it anti-realist moral uncertainty. This is how I'd describe this idea:
I accept some uncertainty about what my future self will morally value. If I am a utilitarian today, I accept that I might not be a utilitarian tomorrow. Therefore, I choose to give some weight to these future values.
I personally choose not to account for the values of my future self in my moral decisions. If I did, I might expect my future values to be more similar to the average human morality. So it seems to be simply a matter of personal preference. I think this reframing of moral uncertainty makes sense, even if I see no reason that I should adopt it.
Lastly, it is important to note that even without accounting for moral uncertainty, there are still good reasons to cooperate with people whose value systems differ from our own.
[1] I am not sure who first came up with these two terms (moral uncertainty and the modesty argument), so I simply used the most widely read write-ups I know of for each term.
[2] By correct, I mean correct in the way it is correct that the sky is blue.
You don't account for the values of your future self, but do you account for the values of a version of yourself that is idealized in some appropriate way? E.g. more rational, having thought about morality for longer, smarter, etc. Whether this would have a significant impact on your values is an open question, which also depends on how you'd 'idealize' yourself. By the way, I'd be very interested in thoughts on how much we should expect our moral views to change upon further deliberation.
On moral realism, I assume you mean that we have absolutely no evidence about the truth of either utilitarianism or anti-utilitarianism, so we should apply a principle of indifference as to which one is more likely? I think I agree with that idea, but there still remains a slightly higher chance that utilitarianism is true, simply because more people think it is, even if we find their evidence for that questionable. Then of course there's still the question of why one should care about such an objective morality anyway; my approach would be to evaluate whether I'm an agent whose goal is to do what's objectively moral or whose goal is to do some other thing that I find moral.