I worry a lot that I’m wrong, or that I’m overconfident. Why? Well, because most people are overconfident about most things.
If I’m overconfident about my ability to guess the total egg production of the US, that’s not that problematic. But if I’m wrong about the best ways of doing good, that would be a disaster.
So I take very seriously the extent of empirical and moral disagreement, and am attracted to causes that are valuable on a wide variety of moral theories - such as building a community of people who place high value on both altruism and on having the correct moral and empirical beliefs.
But, even then, I worry that I’m making some fundamental mistake - that some component of my thinking is so deeply wired that I struggle even to acknowledge it as a component. This is most worrying when I encounter common objections to EA activities that I don’t even understand. Let me explain.
If someone complains that earning to give is ‘double counting’, because it means that both the person earning to give and the charity worker get ‘credit’ for saving lives, then I understand the objection, recognise the mistake, and can refer the objector to Five Mistakes in Moral Mathematics in Derek Parfit’s Reasons and Persons.
But if someone complains that earning to give is wrong because it is supporting an unjust system, or because the very idea of trying to work out how much good you can do is mistaken, then I feel much less confident that I’ve accurately diagnosed their objection. I try to parse it into terms that I can understand and sympathise with, for example: “Are you saying that there’s a moral prohibition against being complicit in injustice, which can’t be violated even in pursuit of a greater good?” or “Are you saying that weighing expected costs and expected benefits is just so difficult that we’re almost certain to be misled by doing so, and it’s better to go with gut intuitive judgments?”
But, with my self-skeptical hat on, I get an uneasy sense that none of my translations is quite right - especially when they don’t convince the other person.
So far, there hasn’t been much in-depth discussion of effective altruism from people who have these sorts of objections, and I think a decent proportion of my worry could be resolved if I had more discussion with such people. So if any reader has a friend or colleague who understands effective altruism well, is well-informed about the relevant issues, but thinks that the whole idea is badly mistaken, I’d welcome them to write a post or series on this blog explaining why. (I’d welcome the same from people who deeply disagree with not just the specifics but the very project or spirit of Bayesian epistemology.)