Some excerpts:
Philosophical discussion of utilitarianism understandably focuses on its most controversial features: its rejection of deontic constraints and the "demandingness" of impartial maximizing. But in fact almost all of the important real-world implications of utilitarianism stem from a much weaker feature, one that I think probably ought to be shared by every sensible moral view. It's just the claim that it's really important to help others—however distant or different from us they may be. [...]
It'd be helpful to have a snappy name for this view, which assigns (non-exclusive) central moral importance to beneficence. So let's coin the following:
Beneficentrism: The view that promoting the general welfare is deeply important, and should be amongst one’s central life projects.
Clearly, you don't have to be a utilitarian to accept beneficentrism. You could accept deontic constraints. You could accept any number of supplemental non-welfarist values (as long as they don't implausibly swamp the importance of welfare). You could accept any number of views about partiality and/or priority. You can reject 'maximizing' accounts of obligation in favour of views that leave room for supererogation. You just need to appreciate that the numbers count, such that immensely helping others is immensely important.
Once you accept this very basic claim, it seems that you should probably be pretty enthusiastic about effective altruism. [...]
Even if theoretically very tame, beneficentrism strikes me as an immensely important claim in practice, just because most people don't really seem to treat promoting the general welfare as an especially important goal.
From the links you posted, the most powerful argument for effective altruism to me was this:
"(Try completing the phrase "no matter..." for this one. What exactly is the cost of avoiding inefficiency? "No matter whether you would rather support a different cause that did less good?" Cue the world's tiniest violin.)"
Unless someone had a kind of limited egoism (perhaps favoring only themselves and their friends, or themselves and their family, or themselves and their country, etc.), or were a sadist, I don't see how they could disagree that making the world a better place in the best way possible is the moral thing to do.
Here is one criticism of EA that I have found powerful:
"Since many people are driven by emotion when donating to charity, pushing them to be more judicious might backfire. Overly analytical donors might act with so much self-control that they end up giving less to charity."
However, many of the charities that an analytical donor would screen out might have been harmful. So while one might miss some opportunities by being analytical, one would also avoid mistakes. Moreover, it would be desirable to know which actions are helpful and why, so that those actions can be sustained deliberately rather than happening only occasionally by chance. Sustaining them would do more good over the long term.