For me, basically every other question around effective altruism is less interesting than this basic one of moral obligation. It’s fun to debate whether some people/institutions should gain or lose status, and I participate in those debates myself, but they seem less important than these basic questions of how we should live and what our ethics should be.
Prompted by this quote from Scott Alexander's recent Effective Altruism As A Tower Of Assumptions, I'm linking a couple of my old LessWrong posts that speak to "these basic questions". They were written and posted before or shortly after EA became a movement, so perhaps many in EA have never read them or heard of these arguments. (I have not seen these arguments reinvented/rediscovered by others, or effectively countered/refuted by anyone, but I'm largely ignorant of the vast academic philosophy literature, in which the same issues may have been discussed.)
The first post, Shut Up and Divide?, was written in response to Eliezer Yudkowsky's slogan of "shut up and multiply", but I think it also works as a counter to Peter Singer's Drowning Child argument, which many may see as foundational to EA. (For example, Scott wrote in the linked post, "To me, the core of effective altruism is the Drowning Child scenario.")
The second post, Is the potential astronomical waste in our universe too small to care about?, describes a consideration through which someone who starts out with relatively high credence in utilitarianism (or utilitarian-ish values) may nevertheless find it unwise to devote many resources to utilitarian(-like) pursuits in the universe we find ourselves in.
To be clear, I continue to have a lot of moral uncertainty and do not consider these to be knockdown arguments against EA or against caring about astronomical waste. There are probably counterarguments to them that I'm not aware of (either in the existing literature or in platonic argument space), and we are probably still ignorant of many other relevant considerations. (For one such consideration, see my Beyond Astronomical Waste.) I'm drawing attention to them because many EAs may place too much trust in the foundations of EA, in part because they're not aware of these arguments.
It seems empirically false and theoretically unlikely (cf. kin selection) that our emotions work this way. I mean, if it were true, how would you explain things like dads who care more about their own kids (whom they've never seen) than about strangers' kids; (many) married couples falling out of love and caring less about each other over time; or the Cinderella effect?
So I find it very unlikely that we can "level up" all the way to impartiality this way, but maybe there are other versions of your argument that could work (implying not utilitarianism/impartiality, but just that we should care a lot more about humanity in aggregate than many of us currently do). Before going down that route, though, I'd like to better understand what you're saying. What do you mean by the "intrinsic features" of the other person that make them awesome and worth caring about? What kind of features are you talking about?