A couple comments:
1. Evolution does not imply that every organism has an "intrinsic desire for survivability and reproduction." Rather, it implies that organisms will tend to act in ways that would lead to survival and reproduction in their ancestral environment, but these actions need not be motivated by a drive to survive and reproduce. In slogan form: We are adaptation executors, not fitness maximizers.
For example, the reason people nowadays eat Twinkies is not because we want to survive or reproduce, but because we like the taste! This preference for sugary foods would have been fitness enhancing in our ancestral environment, but it is maladaptive in our modern one. Yet people continue eating Twinkies anyway.
2. The "core" of your argument doesn't seem sound to me. You say that hyper-sentient (V1) aliens wouldn't eat humans because other super duper sentient (V2) aliens might eat them. And V3 aliens might eat the V2 aliens, and so on. But... the mere possibility of other aliens is not a strong reason to do anything. After all, it's hypothetically possible that the V2 aliens would be positively elated that the V1 aliens are eating humans and reward the V1 aliens even more. What matters though is not what's possible but rather about what the expected effects of one's actions are.
If V2 aliens don't actually exist and the V1 aliens know this, what other prudential reason would the V1 aliens have for refraining from eating humans? I don't see any.
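To make the expected-value point concrete, here is a minimal sketch in Python. All probabilities and utilities are made-up numbers chosen purely for illustration, not anything from your thought experiment: unless the V1 aliens assign a non-trivial credence to punishing V2 aliens actually existing, the bare possibility of them hardly moves the calculation.

```python
# Toy sketch of the expected-value point (all numbers are hypothetical):
# the mere possibility of punishing V2 aliens barely changes the expected
# value of the V1 aliens' choice unless it gets non-trivial probability.

def expected_value(outcomes):
    """outcomes: iterable of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

p_v2 = 1e-6  # hypothetical credence that punishing V2 aliens exist

eat_humans = expected_value([
    (p_v2, -1_000_000),  # V2 aliens exist and punish the V1 aliens
    (1 - p_v2, 100),     # no V2 aliens; the V1 aliens enjoy the meal
])
refrain = expected_value([(1.0, 0)])  # baseline: nothing gained or lost

print(eat_humans, refrain)  # ~99 vs 0: the bare possibility isn't decisive
```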
A couple thoughts on this:
1. Perhaps it's true that elections are mostly sold to the highest bidder in poor developing countries. (I'm not familiar with the research on this, and I'd be reluctant to simply trust your Wikipedia link.) Should EAs help the "better" candidate buy their way to power? It seems like this risks undermining the legitimacy of their elections.
2. It's not clear to me that it's easy to figure out who the better candidate is. In one's own country that can often be difficult. Understanding the politics of a foreign country would be even harder. And I'm skeptical that we can just defer to whatever the majority of a country wants because a) it won't always be clear what the majority wants and b) there are reasons to think the majority will be mistaken due to bias or ignorance.
And I don't see how the footnote you cite on this point supports your position. It summarizes research about the effects of information dissemination on voters' choices. It finds that in some cases voters change their decisions after receiving information about social policies or political candidates. In other words, it shows that the citizens were ignorant--they did not know what was happening in politics. As the researchers note:
Voters may lack information about the qualifications and policy positions of candidates, making it difficult to make an informed vote choice.
Moreover, the upshot of the research is that sometimes people change their minds when given information about policies or candidates. This doesn't show that they ended up choosing the better policy or candidate.
I think the standard of evidence needs to be much higher before EAs get involved in foreign countries' political affairs.
I second Hay's suggestion of making a more formal argument. The unstructured sections of this post made it unclear which propositions you took to support which.
I'd also note that your definition of "objectivity" at the beginning makes it trivially true that morality is sometimes subjective, since people are surely at least sometimes biased by their emotions when discussing morality.
An alternative definition of "objectivity" that is pretty standard within meta-ethics goes something like this: X is objective if it is not constitutively dependent on the attitudes/reactions of observers. The funniness of a comedian is subjective because it is constituted by how amused the comedian makes people feel. In contrast, the solidity of a table is objective because it does not depend on anyone's reactions.
I don't have a problem with this in principle—I think immigration restrictions in the US are unjustly restrictive. But I think there are many problems in practice. For example:
You say that this "plainly pencils out as optimal," but you don't provide the penciling. I think a full accounting of this decision would show that it's probably unwise.
Regarding 2: It can still be worth talking with someone who isn't willing to change their mind. For example, I know nothing about physics, so I wouldn't expect a physicist to seriously entertain the speculations that I have about the subject. But I think that I could learn a great deal about physics by talking to a physicist, so it makes the conversation worthwhile.
Also, in certain contexts, it can sound somewhat rude to ask someone if they are willing to change their mind. It can implicitly suggest that you think the other person is closed-minded, so it doesn't make sense to ask it explicitly. (Likewise, I think each of the questions in your post is a useful rhetorical tool in the right context, but they don't all need to be asked explicitly, even in an ideal conversation.) An analogy: If you ask, "Did you smoke anything when you came up with that thought?" it implies that you have a low opinion of the other person's intelligence.
Regarding 5: It seems false to say other people are never evil. Sometimes people do genuinely hold different values from us. And if they got what they wanted, it would be a significant setback to our own values. For example, some people place no weight on the welfare of humans in other countries or of non-human animals.
I'm not understanding the distinction you're making between the "experience" and the "response." In my example, there is a needle poking someone's arm. Someone can experience that in different ways (including feeling more or less pain depending on one's mindset). That experience is not distinct from a response; it just is a response.
And again, assuming the experience of pain is inescapable, why does it follow that it is necessarily bad? It can't just be because the experience is inescapable. My example of paying attention to my fingers snapping was meant to show that merely being inescapable doesn't make something good or bad.
I agree that many of the goals that people pursue implicitly suggest that they believe pleasure and the avoidance of pain are "value-laden". However, in the links I included in my previous comment, I suggested there are people who explicitly reject the view that this is all that matters (a view known as hedonism in philosophy, not to be confused with the colloquial definition that prioritizes short-term pleasures). And you've asserted that hedonism is true, but I'm not sure what the argument for it has been.
So just to clarify, I see you as making two points:
I'm looking for arguments for these two points.
The foundational claim of inescapably value-laden experiences is that we do not get to choose how something feels to us
Well... this isn't quite right. A stimulus can elicit different experiences in a person depending on their mindset. Someone might experience a vaccine with equanimity or they might freak out about the needle.
But regardless, even if some particular experience is inescapable, I don't see how it would follow that it's inherently value-laden. Like, if I snap my fingers in front of someone's face, maybe they'll inescapably pay attention to me for a second. It doesn't follow that the experience of paying attention to me is inherently good or bad.
I challenge you to think about values we would agree are moral and see if you can derive them from pleasure and suffering
Some people explicitly reject the hedonism that you're describing. For example, they'd say that experiencing reality, the environment, or beauty are valuable for their own sake, not because of their effect on pleasure and suffering. I don't think you've given a reason to discard these views.
There's a common criticism made of utilitarianism: Utilitarianism requires that you calculate the probabilities of every outcome for every action, which is impossible to do.
And the standard response to this is that, no, spending your entire life calculating probabilities is unlikely to lead to the greatest happiness, so it's fine to follow some other procedure for making decisions. I think a similar sort of response applies to some of the points in your post.
For example, are you really going to do the most good if you completely "set aside your emotional preferences for friends and family"? Probably not. You might get a reputation as someone who's callous, manipulative, or traitorous. Without emotional attachments to friends and family, your mental health might suffer. You might not have people to support you when you're at your low points. You might not have people willing to cooperate with you to achieve ambitious projects. Etc. In other words, there are many reasons why our emotional attachments make sense even under a utilitarian perspective.
And what if we're forced to make a decision between the life of our own child and the lives of many others? Does utilitarianism say that our own child's death is "morally agreeable"? No! The death of our child will be a tragedy, since presumably they could have otherwise lived a long and happy life if not for our decision. The point of utilitarianism is not to minimize this tragedy. Rather, a utilitarian will point out that the death of someone else's child is just as much a tragedy. And 10 deaths will be 10 times as much a tragedy, even if those people aren't personally related to you. This seems correct to me.
Scott Alexander discusses this in his post here. I'm skeptical that humans will be able to align AI with morality anytime soon. Humans have been disagreeing about what morality consists of for a few thousand years. It's unlikely we'll solve the issue in the next 10.