Robert Wiblin recently wrote a good post with the self-explanatory title "Disagreeing about what's effective isn't disagreeing with effective altruism". At the end, however, there is one paragraph which I think concedes too much regarding the triviality of the EA message. He writes:

 Have I now defined ‘effective altruism’ to be so obvious that nobody could challenge it?

Here he seems to agree that if this were the case, then that would be a problem. However, he then goes on to argue that this isn't in fact the case - people do challenge the EA message, even if we understand it in his thin sense (as trying to do the most good you can, using analysis and evidence).

It's true that some people do challenge the EA message, but I don't agree that it would be a problem if no one did. It would be a problem if we were at an academic seminar, trying to argue for a philosophical thesis - philosophical theses shouldn't be trivial or obvious. We are not at an academic seminar, however; we are trying to change the world. And in the real world, lots of altruistic work isn't remotely effective. This includes lots of work done by people who, if asked, would agree that it is obviously true that you should be effectively altruistic. It is one thing to intellectually assent to effective altruism when someone puts the question to you; it is quite another to actually be effectively altruistic. To be so, I would guess, you need to be constantly reminded of the necessity of thinking about evidence and cost-effectiveness.

Here I think the EA movement has a very substantial role to play. My hypothesis is thus that the EA message is very fruitful even though it may be philosophically trivial. My anecdotal observations of EA members support this hypothesis - EAs seem to me to be highly effectively altruistic on average - but they're only anecdotal. Ultimately, this question needs to be settled through empirical research.

As Robert notes, much of the criticism of the EA movement is directed not against the central EA message - what Iason Gabriel, in a critical piece, calls "the thin version" - but against a number of "associated ideas", as Robert calls them (what Gabriel calls "the thick version"). These include criticisms of RCTs, of particular conceptions of what's valuable, and so forth. I think this is at least partly due to confusion over the triviality objection.

To see this, note how Gabriel motivates his choice to criticize the thick rather than the thin version of the EA movement:

This paper focuses on the thick version of effective altruism. Not everyone who identifies with the movement shares each individual belief, but, taken together, they capture much of what makes the approach *interesting and unique*. They also explain many of the moral judgments that effective altruists make.

My emphasis. Here Gabriel seems to imply that the thin version is trivial and not worth discussing. It may well be philosophically trivial, but again, the point isn't to be philosophically interesting - it is to improve the world. And as a matter of fact, the EA movement is the first social movement to define itself by the goal of improving the world in the most evidence-based and cost-effective way possible. Unlike other movements, the EA movement is thus not committed to any specific strategy, but rather to whatever strategy turns out to be most effective. As I said above, my guess is that this gives the EA movement an enormous potential for doing better than all other social movements, precisely because its members are constantly thinking about evidence and cost-effectiveness. Far from being uninteresting, the thin version of the EA movement is thus very interesting and unique indeed - it is one of the great innovations of the 21st century, as Steven Pinker has rightly said. Hence those who want to discuss the EA movement have every reason to focus on the thin version.

As a side note, it seems to me that the (thin) EA message has been explicitly designed to be trivial/obvious, at least to large swathes of the liberal, educated part of the population (someone who knows more about EA history could perhaps tell me whether this is indeed true). The fact that the EA movement is such a "broad tent" - that it includes organizations which work on poverty reduction, animal suffering, X-risk and meta causes - makes it easy for these kinds of people to find a place within the movement. To my mind, this is not a weakness but a strength.

[Slightly edited]

Comments

I do not think that even the thin version of EA is trivial at all. Perhaps to those who come from very rational, non-religious societies, the message appears obvious. But for many people, EA is a massive philosophical shift from everything they have been taught. 

If you ask many people about charity, they will focus much more on the giver than on the beneficiary. Christianity, for example, focuses very strongly on the value of sacrifice, and most Christians would naturally judge the value of a given charitable act more by how it impacted the donor than by how it impacted the recipient. An act which costs the donor greatly is valued highly even if the net impact on the recipient is minor.

But this isn't just a Christian or religious idea. Look at all the half-marathons all over the world where people run to support charities. The message is: if you're willing to suffer through 20 km of pain, it feels justified for me to give $20 to MSF - as if the suffering and commitment of the runner were related to the rightness of another person donating to a charity. Yet we find it perfectly natural.

With effective altruism, we are not making a trivial argument at all. Rather, we're asking people to take a dramatic philosophical jump: to focus not on the sacrifice but on the effect.

Perhaps "earning to give" is the most obvious case in point. Imagine an engineer earning $500K/year and donating $100K tax-deductibly to effective charities, at a net cost to herself of just $50K, which she doesn't even notice. She may feel like she's not doing enough, she still has a great lifestyle and wants for nothing. 

If she were to quit her job and volunteer to go work in Niger, making a great personal sacrifice, the vast majority of people would consider that a very altruistic act. They would focus on her - what she's sacrificing and why. Radio stations would interview her, journalists would write about her, etc.

But EA turns that logic on its head, and says "Listen, if you really want to do the most good possible, actually, your $100K is worth more to us than your presence in Niger. Please just stay in your job and keep your luxurious lifestyle and keep giving us the money."

This mentality is a radical philosophical shift for anyone educated in the Christian tradition, whether knowingly or otherwise. Christianity says that if giving $100K doesn't really cost you anything, doesn't make you suffer, then it doesn't count; it won't bring you closer to heaven. If you give up everything and devote your life to charity, that probably will get you into heaven. EA says the opposite: focus only on how to have the most impact. And if you can have the most impact without having to suffer, that is a win/win situation.

Here I think the EA movement has a very substantial role to play. My hypothesis is thus that the EA message is very fruitful even though it may be philosophically trivial.

 

It seems correct that, given EA's goals, its effectiveness should not be measured philosophically - instead it should be assessed practically. If EA fails, it will likely be because it becomes meta-discussion (like this one) and fails to make a difference in this world. (This is not intended as a dig at the present discussion.) My sense is that EA sometimes involves interested parties who are not directly involved in DOING the relevant activities in question; it is thus a kind of meta-discussion by its nature. I think this is fine. As an AI guy, I notice that practitioners rarely ask the hardest questions about what they are doing; as a former DARPA guy, I saw the same myopia in the defense sphere. So outsiders may well be the right ingredient to add.

Personally, I would assess EA on the basis of its subjectively/objectively-assessed movement of the Overton window for relevant decision makers - e.g. company owners, voters, activists, researchers, etc. The issues EA takes on are really quite large, and it seems hard to directly move that needle. Still, it seems plausible that EA could end up being transformative by changing the very thinking of humanity. And it seems possible that it gets wrapped up in its own sub-communities, whose beliefs end up diverging from humanity at large and are thus ignored by humanity at large.

When I look at questions around AGI safety, I think that, given the tiny amounts of human effort and money expended by EA, perhaps this can be counted as a "win": humanity's thinking is moving in directions that will affect large-scale policy. (On this particular issue I fall into the "too little too late" camp.) But still, I have to acknowledge the apparently real impact EA has had in legitimizing this topic in practically impactful ways.


 

Thanks Stefan, this is a very good point.


I agree with you on your central premise that philosophical triviality is okay if the idea is still valuable and important. But I think EA happens to be less trivial than even Rob says. It looks to me (from a cursory reading of Iason's paper) like the 'thick version' in Iason Gabriel's post is quite a bit thinner than the 'associated ideas' in Rob's post. The thick version involves assumptions that I think are pretty central to EA: the broadly consequentialist framework (which includes the erasure of the action/omission distinction common to many moral theories, and a morally egalitarian ethos with regard to categories like nationality and species) and confidence in scientific methodology. The associated ideas go further than that - the ideas that earning to give is highly effective or that RCTs are highly valuable are more specific than even the thick version. I think Iason's thick version is a better description of EA than the thin version (which is closer to what Rob uses in his post), and though it's more trivial than a version incorporating all of those associated ideas would be, it's significantly less trivial than the thin one.
