Some excerpts:
Philosophical discussion of utilitarianism understandably focuses on its most controversial features: its rejection of deontic constraints and the "demandingness" of impartial maximizing. But in fact almost all of the important real-world implications of utilitarianism stem from a much weaker feature, one that I think probably ought to be shared by every sensible moral view. It's just the claim that it's really important to help others—however distant or different from us they may be. [...]
It'd be helpful to have a snappy name for this view, which assigns (non-exclusive) central moral importance to beneficence. So let's coin the following:
Beneficentrism: The view that promoting the general welfare is deeply important, and should be amongst one’s central life projects.
Clearly, you don't have to be a utilitarian to accept beneficentrism. You could accept deontic constraints. You could accept any number of supplemental non-welfarist values (as long as they don't implausibly swamp the importance of welfare). You could accept any number of views about partiality and/or priority. You can reject 'maximizing' accounts of obligation in favour of views that leave room for supererogation. You just need to appreciate that the numbers count, such that immensely helping others is immensely important.
Once you accept this very basic claim, it seems that you should probably be pretty enthusiastic about effective altruism. [...]
Even if theoretically very tame, beneficentrism strikes me as an immensely important claim in practice, just because most people don't really seem to treat promoting the general welfare as an especially important goal.
There was a bit of discussion on Twitter about this post. Rob Bensinger had a thread that included this comment:
One (maybe slightly boring) option would be something like "soft welfare-maximisation", where "soft" just means that it can be subject to various constraints.
Another term for a related concept is Richard Ngo's "scope-sensitive ethics" (or "scale-sensitive" as Ben Todd suggests), which he takes to capture "the core intuition motivating utilitarianism". However, that doesn't include any explicit reference to welfare or maximisation.