Thank you! That's very interesting re: Gates; that wasn't my impression at all, but to be honest I may very well be living in a bubble of my own making, and I'm sure I've missed plenty of criticism. That said, I'd still suggest there are two different kinds of criticism here: EA gets quite a bit of high-status criticism from fairly mainstream sources (academics, magazines, etc.); if Bill Gates's criticism comes more from conspiracy loons, then I'd suggest it's probably less damaging, even if it's more voluminous. (I think both have got a lot of flak from those development orgs who were quite enjoying being complacent about whether they were actually being successful or not.)
And yes, I completely agree re: longtermism & PR! I wrote something quite similar a couple of months ago. It seems to me that longtermism has an obvious open goal here and hasn't (yet) taken it.
Hello! Thank you for such a thoughtful comment. You're obviously right on the first point that Singer/Ord/MacAskill have tried to appeal to non-utilitarians, and I think that's great - I just wish, I suppose, that this was more deeply culturally embedded, if that's a helpful way to put it. (But the fact this is already happening is why I really don't want to be too critical!)
And I fully, completely agree that you can't do effective altruism without philosophy or making value-judgements. (Peter made a similar point to yours in a comment on my blog.) But what I'm trying to get at is something slightly different: at a very basic level, most moral theories can get on board with what the EA community wants to do, and while there might be disagreements between utilitarians and other theories down the line, there's no reason they shouldn't be able to share these common goals, nor any reason non-utilitarians' contributions to EA should be anything other than net-positive by a utilitarian standard. To me that's quite important, because a great benefit of effective altruism as a whole is how well it focusses the mind on making a positive marginal impact, and I would really like to see many more people adopt that kind of mindset, even if they ultimately make subjective choices much further down the line of impact-making that a pure utilitarian disagrees with. (And indeed such subjective and contentious moral choices already happen within EA, because utilitarianism doesn't straightforwardly tell you how to weight animal welfare, for example. So I really don't think this more culturally value-plural form of EA would encounter philosophical trouble any more than EAs already do.)
On Gates and Singer's philosophical similarities, I agree! But I think Gates wears his philosophy much more lightly than most effective altruists do, and has escaped some ire because of it, which is what I was trying to get at - though I realise this was probably unhelpfully unclear.