For me, basically every other question around effective altruism is less interesting than this basic one of moral obligation. It’s fun to debate whether some people/institutions should gain or lose status, and I participate in those debates myself, but they seem less important than these basic questions of how we should live and what our ethics should be.
Prompted by this quote from Scott Alexander's recent Effective Altruism As A Tower Of Assumptions, I'm linking a couple of my old LessWrong posts that speak to "these basic questions". They were written and posted before or shortly after EA became a movement, so perhaps many in EA have never read them or heard of these arguments. (I have not seen these arguments reinvented/rediscovered by others, or effectively countered/refuted by anyone, but I'm largely ignorant of the vast academic philosophy literature, in which the same issues may have been discussed.)
The first post, Shut Up and Divide?, was written in response to Eliezer Yudkowsky's slogan "shut up and multiply," but I think it also works as a counter to Peter Singer's Drowning Child argument, which many may see as foundational to EA. (For example, Scott wrote in the linked post, "To me, the core of effective altruism is the Drowning Child scenario.")
The second post, Is the potential astronomical waste in our universe too small to care about?, describes a consideration under which someone who starts out with relatively high credence in utilitarianism (or utilitarian-ish values) may nevertheless find it unwise to devote many resources to utilitarian(-like) pursuits in the universe we find ourselves in.
To be clear, I continue to have a lot of moral uncertainty and do not consider these to be knockdown arguments against EA or against caring about astronomical waste. There are probably counterarguments to them that I'm not aware of (either in the existing literature or in platonic argument space), and we are probably still ignorant of many other relevant considerations. (For one such consideration, see my Beyond Astronomical Waste.) I'm drawing attention to these posts because many EAs may place too much trust in the foundations of EA, in part because they're not aware of these arguments.
You're right, I misrepresented your point here. This doesn't affect the broader idea that the apparent symmetry only exists if you have strange ethical intuitions, which are left undefended.
I stand by my claim that 'loving non-kin' is a stable and fundamental human value, that over history almost all humans would include it (at least directionally) in their personal utopias, and that it only grows stronger upon reflection. Of course there's variation, but when ~all of religion and literature has been saying one thing, you can look past the outliers.
I'm not explaining myself well. What I'm trying to say is that the symmetry between dividing and multiplying is superficial: both are consistent, but one also fulfills a deep human value (which I'm trying to argue for with the utopia example), whereas the other ethically 'allows' the circumvention of this value. I'm not saying that this value of loving strangers, or being altruistic in and of itself, is fundamental to the project of doing good; on that we agree.