Thanks for your answer, Vera. I think there's a significant issue with how the article comes across to readers. While it's clear that your text addresses a specific type of EA (the utilitarian type), the article reads as though it's describing the entire EA community rather than a very particular subspecies of it. I also find the framing of "philosophical foundations of EA" problematic. These (or other) philosophies may have inspired or influenced some people in the movement, but the project of EA is not based on these "foundations" in the way your article suggests. If the article clearly signaled upfront that it's analyzing one particular philosophical approach within EA rather than EA as a whole, that would avoid much of this confusion (though it would be less engaging, I guess).
"Effective altruists subscribe to a version of utilitarianism according to which actions are to be judged by their consequences."
While many EAs subscribe to utilitarianism, many others don't. Andreas Mogensen is just one example. The movement doesn't officially endorse utilitarianism either, as you can see in the objections here.
New episode of La Bisagra de la Historia:
Jaime Sevilla on trends in artificial intelligence models
As I said last time, trying to quantify agreement/disagreement is much more confusing, both to determine and to read, than simply measuring how many millions of an extra $100m people would assign to global health versus animal welfare. The banner would go from 0 to 100, and whatever you vote, say 30m, would mean that $30m should go to one cause and $70m to the other. As it stands, to mention just one paradox, if I wholly disagree with the question, it means I think it wouldn't be better to spend the money on animal welfare than on global health, which in turn could mean a) I want all the extra funding to go to global health, or b) I don't agree at all with the statement because I think it would be better to allocate the money differently, say 10m/90m. And if you vote 90% agreement, it could mean b, or it could mean that you almost fully agree for other reasons, for example because you think there's a 10% chance that you are wrong.
I think I would prefer to strongly disagree, because I don't want my "half agree" to be read as if I agreed to some extent with the 5% statement. "Half agree" is ambiguous here: people could take it to mean 1) something around 2.5% of funding/talent, or 2) that 5% could be acceptable with some caveats. This should be clarified so that we can know what the results actually mean.
I guess phrases like the one cited above, or "I hope that by drawing attention to a central philosophical problem of the approach, it will encourage animal advocates to look for alternatives, both philosophical and strategic," made me think that your view was that EAs were already committed (in some way or another) to some utilitarian type of philosophical foundation. I'll reread your article more carefully to see whether I got it all wrong or whether there's some ambiguity at play here.