Motte-and-Bailey Explanations of EA

by Bitton · 17th Feb 2015 · 3 comments


This is my submission for February's FGO topic, which was about explaining effective altruism, especially in person. (We aren't searching for new explanations so much as for meta-advice.)

I've barely ever tried to explain EA-ish ideas to anybody in person. The few times I have didn't go very well. I didn't receive any counterarguments; it was just clear that the people I was talking to were generally unconvinced. The main reason I don't tell anyone around me about effective altruism is that I expect these reactions.

The closest I'll come is telling some people that I plan to donate a lot of money in the future and that I want to be smart about how I do it. I think this substitute explanation of EA (one that doesn't actually explain EA at all but instead offers a socially acceptable proxy) is likely to get far better reactions. In my experience, people are supportive of philanthropy so long as you don't come off as weird or as some sort of fanatic.

This is similar to the motte-and-bailey fallacy: effective altruists tell each other that they're about, say, maximizing total expected utility according to a hedonistic utilitarian framework, but only tell "the public" that they want to donate a lot and make sure their donations count. The "true" explanation of EA - that is, the definition that most of us more or less actually agree with - is swapped for a more easily defensible, socially acceptable explanation.

Maybe this is just routine, common sense marketing.