
What this post is about

Note: the article is here: https://www.theguardian.com/business/commentisfree/2023/oct/08/getting-rich-in-order-to-give-to-the-poor-the-jury-is-out-but-it-seems-morally-shaky

I've avoided referencing the headline, since that was likely written by a sub-editor rather than Malik himself, and it's not needed to understand what Malik's arguments actually are.

Introduction

British writer and public intellectual Kenan Malik recently published an EA-critical article in The Observer. This post is a consideration and refutation of the arguments it presents against Effective Altruism, both as a movement and as a broader philosophy.

I recently stated that speaking up against unfair/incomplete/bad EA criticism could be my contribution to the emerging EA 'do-ocracy'. So here is the first post in that spirit!

Why respond at all?

The last 12 months have been pretty challenging for EA. In the light of various scandals and controversies, there seems to be increased scepticism of the movement, especially in public debates. Some of this pushback is valid, and it is up to us as a community/movement to respond accordingly. However, a lot of the anti-EA takes I've seen this year have been quite poor, showing a lack of familiarity with the ideas and debates occurring within EA.

Despite this semi-hostile media landscape, my current understanding of what could be called 'EA comms strategy' is that all criticisms of this form are basically ignored, probably for reasons such as:

  • Responding to these criticisms is not a good use of time or effort
  • In fact, it may potentially backfire by bringing unwanted attention to the community/Forum
  • With small comms resources, it's better to communicate to specific people carefully (e.g., journalists at major newspapers)
  • Alternatively, it may be that there is no EA comms strategy, as nobody can speak for a decentralised movement (e.g., nobody can be a spokesperson for all of Feminism or Environmentalism)

I sympathise with a lot of these perspectives. And my intention is not to respond on behalf of EA philosophy as a whole, but as an individual EA. Nevertheless, I do think the current comms strategy[1] has not fared well over the last year. As criticism of EA has grown, certain misconceptions, falsehoods, and bad arguments have essentially been left to spread without much engagement or pushback.

It's true that most public debates end without anyone on the stage changing their minds, and I think the same is true for social media. However, there are far more people reading and referencing social media than those who contribute to it, and the arguments and ideas they encounter do matter. So this post (and maybe this sequence) is meant to be a way of redressing the balance.

Why this article?

A friend of mine[2] shared it on WhatsApp this morning and it provoked some interesting discussion between us, but I found the article itself lacking.

The Observer is the sister paper of The Guardian, the UK's leading left-of-centre newspaper. The column therefore has a large amount of potential reach.

It is also another example of EA being portrayed negatively in the left-of-centre-to-left media sphere. Counteracting the perception that such a split is inevitable, common, or necessary is another reason I wanted to start this series of posts.

 

Onto the Article

The Main Argument

As this is a newspaper column and not an academic paper, there isn't a structured, syllogistic argument laid out for us. However, one consistent theme emerged: Malik seemed to tie EA's role in the FTX scandal inevitably back to its philosophical commitments, hence his pessimistic view of the movement as a whole.

The points below are therefore my summary of the article, in paraphrase. I'd suggest reading it yourself to get a clearer sense of what Malik thinks:

  1. EA holds that moral consequences of actions must be quantified, and our moral decisions ought to be taken on that basis.
  2. This leads to problems of measurability bias and to treating only the 'symptoms' of world problems. Given its philosophical commitments, EA will ignore interventions that address root causes.
  3. As such, EA gives backing to those with power in the existing status quo, such as Silicon Valley billionaires or cryptocurrency whizzkids, and turns them into heroes.
  4. This philosophical cover in fact means that essentially any action of these powerful people can be justified under the EA framework.
  5. These structures and these people are actually causing harm, meaning that EA is unable to fulfil its goal of 'doing the most good'.

Suffice it to say, I disagree with a lot of this, though some parts more than others. I'd also want to reiterate that this is my impression of Malik's case after reading (and re-reading) the article and drafting this post. In his actual article it flows together less systematically, but I'll quote specifics in the next section.

 

The Errors

Factual Errors and Lack of Evidence

In a variety of cases, Malik makes claims that are either wrong and could have been checked, or that need more clarification. Again, it's a newspaper column rather than an academic paper, but here are some examples of where I think he should have done better.

1) 

MacAskill encourages students to become not relatively poorly paid aid workers or doctors, but bankers or traders who might earn millions.

This is referencing MacAskill's 2013 paper Replaceability, Career Choice, and Making a Difference. I also think MacAskill is somewhat standing in for 'EA thinking' as a whole here. But by 2015, institutional EA and even Will himself were moving away from 'Earning to Give', both philosophically and in terms of EA outreach.

Funnily enough, even in the 2015 Amia Srinivasan critique that Malik cites, she mentions that Will had already moved away from Earning to Give. Maybe 'students' actually means 'some students', but 'some' could mean 0.05%, so the criticism becomes a lot less focused and loses its force.

2)

Finally, while there are references to MacAskill and Singer in the article, Malik also makes claims about the EA movement and Effective Altruists as a whole without referencing any empirical evidence on what its members actually believe. For example, there have been good surveys shared on this Forum that give insight into how EA members have reacted to the FTX collapse, but Malik doesn't seem to have taken anything like this into account.

This is a frequent problem I have with EA criticism. A lot is made of the 'longtermist turn' in EA, but many people in the movement still focus on Global Health & Development, and that's still where the bulk of the movement's money goes. Now, I think there is merit to a criticism along these lines, but the criticism is strongest, I think, from a counterfactual or opportunity-cost perspective, rather than as a blanket statement about what the EA movement is doing as a whole.

 

Philosophical and Conceptual Confusion

Some ideas in EA are confusing, others are unintuitive, and some are both! This doesn't mean that all EA ideas are like this, though, which is why I'm puzzled when some critics assume there are simple philosophical criticisms that EAs simply haven't thought about. For example:

1)

Few people would dispute that the consequences of an act must play a major role in helping decide its moral worth. The trouble is, being merely “sensitive to numbers”, divorced from social context, can lead to perverse ends.

I found this criticism really odd to read. Malik is arguing against consequentialism here, but the grounds for the concern are 'perverse ends', which seems like just an alternate phrasing of 'bad consequences'. This is a 'straw Vulcan' criticism: a good consequentialist would be aware of the styles of thinking that lead to these bad consequences. This PhD thesis has a really good summary of different interpretations of consequentialism, and shows that as early as J.S. Mill, consequentialists were arguing that unifying ethical principles need not be invoked in every moral deliberation an agent undertakes. That PhD candidate's name? Toby Ord.

Note also how Malik shifts from quoting Singer saying Effective Altruists "are sensitive to numbers" to describing them as "merely sensitive to numbers". That sleight of hand passed me by on first reading, and it is unhelpful not just because it isn't what Singer (or any other EA) has said, but because it once again conflates criticism of EA as a philosophy with criticism of EA as a movement.

2)

He [SBF] might have scammed investors but at least he gave more money to charity than another scammer might have done. Most EA supporters, I imagine, would reject such arguments, but in so doing they reveal that there is more to morality than numbers adding up, and that concepts such as dignity or intrinsic worth may be as important as consequences.

Once again, the idea that EAs believe there is nothing more to morality than numbers is something Malik has written into existence here. There is infinitely more to morality than numbers, even for EAs, but quantitative analyses (or 'numbers adding up', as Malik phrases it) are what many EAs use for cause prioritisation and for deciding which problems to tackle. These numbers often come after deciding what is valuable, and what might be an amenable proxy for tracking it.

It is also confusing to me why someone wouldn't include 'dignity' or 'intrinsic worth' in their thoughts about how to do good. We could imagine two worlds which contain the same number of persons, who experience the same hedonic utility but vastly different levels of dignity.[3] My assumption is that the vast majority of EAs would say that the world with more dignity is the more moral one. Alternatively, even a hedonic utilitarian could argue that in the real world, dignity and a sense of intrinsic worth correlate so closely with states of the world that have high utility that in practice they will only come apart in convoluted thought experiments.

I think there is an underlying, unspoken objection to the whole premise of consequentialism here, and an assumption by Malik that morality is correctly viewed as a question of what makes for a moral character rather than what makes an action moral. But that's a whole other debate, and one Malik does little if anything to set up in this piece. In any case, one of the developments in EA moral thought over the last few years has been how to deal with questions of moral uncertainty.

In summary, I think that this line of argument would need a lot more work to become convincing, and also suffers from many of the same confusions about consequentialism raised in the previous point.

3)

Effective altruists tend also to target the symptoms rather than the causes of social problems, as it is easier to do so, and “easier” becomes translated as “more effective”

There is a big debate on systemic vs marginal action in EA, so I don't want to recap all of that here. But the first half of Malik's sentence already feels dicey unless you have an idea of what 'the causes of social problems' are. I think that's actually a much harder thing to be sure about than many people claim.[4] Do social problems have a single cause? Surely they are multicausal, and it would be plausible to say that interventions that seem 'symptomatic' actually feed back into society. If a child is saved by an EA-funded health intervention, are we assuming that this child will never have an impact on their society later? Perhaps this is a practical claim about where EAs send their money, but it doesn't really go through as a criticism unless you think that EAs treat symptomatic interventions as more effective simply because they are easier.

But again, this feels like it isn't telling the whole story, and is again neglecting a wide range of EA thought on this topic. Consider the good old I-T-N framework. Yes, marginal interventions are 'easier' than systemic changes in the sense that they are more tractable, but if systemic changes were neglected and important enough, they would end up having more impact (i.e., be more "effective"), and they would be supported to some degree by EA. That's basically where I'd place Animal Welfare. Getting humanity to treat animals better is an incredibly challenging task, but the alternative seems morally catastrophic, so it is worth trying, and it is worthwhile for EA to support this effort.

Anyway, there's a lot more to be said about this topic, and I can’t do justice to it in this post. But I just want to note that, again, there doesn’t seem to be any recognition by Malik that many EAs have thought carefully about these issues in a variety of ways.

 

Object-level Disagreements

There are also some points where I think Malik disagrees with EA on an object level, and where I think that EA is mostly correct (hence why I'm posting on the Forum, rather than in the Guardian comments section).

1)

I think EA as a philosophy is against using moral intuitionism to end a debate, as Malik does here, but I don't think it's against using intuition to start one. That's exactly what I take the 'expanding moral circle' to be. The intuition is that our moral concerns don't seem to end with ourselves, our families, our nations, or even our species as a whole. The question is then what the implications are for us, our actions, and our world. In general, I'm sceptical of how far moral intuitions should be used as cornerstones of our moral actions,[5] though I suspect they could be valuable as guardrails.

Again, the role of moral intuition in applied ethics and meta-ethical debate is another topic I can't cover here. But Malik throws this line in without seeming to think it through. The fact that strict consequentialism violates some of our moral intuitions in some cases is not a compelling argument against EA.

2)

Food banks address a pressing need, but not the underlying reasons for that need – poverty wages and abysmally low benefits. Many of us recognise the necessity both of providing immediate help for people failed by the system and of campaigning to transform that system, thereby removing the necessity for food banks.

The clearest piece by an EA arguing against systemic interventions that I can think of is Beware Systemic Change by Scott Alexander. I don't fully agree with it, but if you're interested in this debate, it's well worth reading. A very quick simplification of its argument: it's easy to claim you have a fool-proof way to solve a systemic problem (e.g., remove the need for food banks and guarantee that no citizen will go hungry), but in practice this is very difficult to achieve, and historically such attempts have led to a number of bad outcomes and moral atrocities. The current system has many flaws, but broadly it seems to have been a large part of how humanity has begun to eradicate poverty. To misquote Churchill: "Capitalism is the worst form of Economic System except for all those other forms that have been tried."

So, to return to Malik's example: if you don't think there is a campaign to hand that can transform the system and remove the need for food banks, and if a person were considering shutting down their food bank (and they were the only person who could provide it), I think there would be reasons to ask them to reconsider. An assumption that Malik, and other EA critics, make on this issue is that there is an oven-ready systemic change we can immediately invest in that will have a positive impact, but I think that's false, or at least not clearly true. A lot of EAs are very interested in supporting positive systemic changes, but in reality these turn out to be very difficult to design and implement successfully from the top down.

3)

Inevitably, it [EA] gives inordinate power to those with the biggest pockets, turning the likes of Sam Bankman-Fried into heroes

I think there's a version of this criticism that might have more bite (Srinivasan makes it more thoroughly and persuasively in her critique), but it doesn't work well as a concluding line in this piece. I don't think the 'inevitably' is justified at all. In fact, by arguing that we should count future generations and non-human animals as moral patients, many parts of EA extend moral worth to those without the biggest pockets far more than many other moral frameworks that exist today.

I think there are very valid criticisms of how EA institutions deal with power in practice, but Malik doesn't do anything to argue that this is an inevitable result of EA philosophy. A charitable reading would be something like 'the main argument' I summarised above, but even here EA's negative consequences are a result of other contingent social factors as well, and this contingency means that the claim of inevitability regarding EA's social consequences is not (and probably can't be) epistemically justified.

 

Final Thoughts

Another disappointing critique

As I read this article, I became increasingly disappointed by it. While I don't think a newspaper column should be held to the standards of an academic article, and Malik isn't writing for an EA-aware audience, it still comes off as a 'first-pass' criticism of EA that is also out of date in its perspective on the movement.

Of course, I'm biased towards EA here. But I don't ask people to take my word for it. I think that the 'main argument' implicit in Malik's piece is unsupported, and that the specific extracts I've presented show that his arguments are relatively weak, overly general, and not supported by specific examples or evidence. It may be that he has more worked-out versions of these arguments that do present a challenge to EA, but this article shouldn't make anyone think that EA is likely to be the wrong approach to doing good.

Why this matters

To reiterate, this post is the result of sticking to a new principle of mine: don't let bad criticisms of EA slide. My hope is that if people link to Malik's piece, or bring it up in a discussion as evidence against EA, then this piece is here as a response. In turn, I hope it will lead all involved to a better understanding of EA perspectives and of why EA is not as bad as our critics say it is. Failing that, I hope it will at least be useful in identifying where the key cruxes of our disagreement actually are, rather than having to fight an 'Eternal September' each time.

This isn't to say that there are no good criticisms of EA, or that we shouldn't be doing more to encourage good criticisms from outgroups. I think there is a lot in the last 12 months that hasn't fully been reckoned with. But this piece was a nice crystallisation of what I consider representative of a large amount of poor EA criticism.

What's next

I hope that there will be no more bad criticisms of EA, but reality is often disappointing. I'll continue this series if I have the time, and probably prioritise criticisms based on:

  • How large the reach of the criticism is.
  • How egregious the anti-EA claims are, either by overclaiming or presenting weak arguments as obviously true.[6]
  • Whether it covers new arguments that haven't been covered before.

As a brief preview, however: Professor Noah Giansiracusa recently posted another weak EA take, and as they've been doing this consistently this year,[7] they're probably a contender to be next on my list.

Anyway, that's it for now. Let me know if you liked this post, if you have another 'bad EA criticism' for me to tackle next, or if you desperately want me to end the entire enterprise. Wherever you are reading this, I hope your motivations are altruistic and your actions effective.

 

  1. ^

    de facto if not de jure

  2. ^

    A friend who is "EA-adjacent" and was a lot more sympathetic to the article than I was

  3. ^

    If such a world is plausibly imaginable

  4. ^

    The closest Malik gives us is saying that they are "social and economic forces", without saying more. He gives a specific example in the case of Food Banks, addressed later in this piece.

  5. ^

    I think almost every moral action a human has taken could be justified by some intuition, whether good or evil

  6. ^

    For me, this article sits in camp 2

  7. ^

    The big tweet storm was this; Haydn Belfield from the Leverhulme Centre shared a good response here

Comments (10)

I think doing this sort of thing is quite valuable, especially to be able to link it to acquaintances who mention the article being criticised.

However, this particular post has a bunch of typos and missing words that mean I am unlikely to share it with others. I also found sections 2 and 3 in 'Philosophical and Conceptual Confusion' and section 2 in 'Object-level Disagreements' to be somewhat weak/inaccurate.

I did like how you pointed out that moral circle expansion can be connected with moral intuitionism, and the point about adding 'merely'.

Hi Rebecca, thanks for your feedback, it was helpful :) I was definitely trying to strike 'while the iron was hot', but in future I'll try to take things slower to catch issues like this.

I've gone through the post and edited out most of the typos/spelling errors I could find, and listened to it with text-to-speech to try to prevent my brain from auto-completing missing text. I can't claim that I've fixed everything, but I think the piece is in a lot better place now than it was when you originally read it, and I think you might find that the piece is both more understandable and perhaps worth sharing with others if you decided to give it a re-skim.[1] If there are any particular factual inaccuracies you want to point out that still exist, I'm happy to retract or correct those too.

On the systemic vs marginal issue itself, I don't think I'm trying to argue for an established EA position on these issues, and in fact I'd love to see pushbacks on what I've written here. That's kind of what I want the debate for or against EA to be, recognising what arguments already exist and taking them into account, rather than a critic saying 'EA doesn't care about systemic change' as a general statement as if there hasn't been a large amount of debate about this very topic within the movement already. I think Malik drops the ball here in his piece, especially around EtG and non-naïve consequentialism, and I hope my revised post makes this point a bit more clearly.

  1. ^

    I totally understand if you don't though

I see the same recycled and often wrong impressions of EA far too often, so I appreciate you taking the time and doing this!

Thanks for posting this detailed and thoughtful review. I think it’s very valuable to have such responses, even just on this forum.

This may be evident from the starting parts of your post (and may illustrate my own naivety), but is there no EA press or comms unit that journalists contact before publishing such articles? I appreciate much of the criticism is focused on the works by MacAskill and Singer so maybe their office was contacted. I also accept it’s not desirable to have a big comms. function that speaks for EA and makes the community more formal than it is. However, I agree with your reflection that having no rebuttal or standard lines (from say CEA) means no way to counter the potential damaging effects from an inaccurate narrative.

“is there no EA press or comms unit that journalists contact before publishing such articles” — sometimes CEA or Forethought get asked for comment on pieces, but the vast majority of the time no one contacts us. It’s quite frustrating.

The closest would be CEA's communication team, but as you point out: "it’s not desirable to have a big comms. function that speaks for EA and makes the community more formal than it is."

I think it'd be challenging (and not in good taste) for CEA to craft responses on behalf of the entire EA community; it is better if individual EAs critique articles which they think misrepresent ideas within the movement.

Yeah, I almost made a post about this one because it was clickbaity enough to get me to click and read through, but then I came to the end and thought "wow, maybe I shouldn't talk about this any further because it feels like the prose is kind of designed to keep drawing you in with outrageous claims without ever giving you much to actually engage with."

I think having this up is fine, and I think there are benefits to collecting this sort of thing because it gives people a place to go to engage with it. But at the same time, I feel like these pieces are waging an attention war, and other posts highlighting them might suck in more people than is productive, so maybe reconsider creating a sequence with a bunch of different posts, which would essentially be boosting that effect. (Alternatively, you could make one post with that title and just add new pieces to the comments, maybe.)

Yeah, I think your reaction to it is what my position used to be, Tristan. But I think each article like this, even if clickbaity, provides another 'brick in the wall' against EA. Now, I think it's a flimsy wall given the weak nature of the arguments, but it should still be demolished before it becomes too thick imo.

The point about not engaging in an attention war is valid, so I'd be willing to consider alternative ways of writing these pieces: not as 'dunks on anti-EA people' but more as picking out particular arguments from these pieces that I think are common but unconvincing as anti-EA arguments.[1] In other words, being very clear about 'playing the ball and not the man'.

Would be great to hear your thoughts, but thanks for reading :)

  1. ^

    I use 'arguments' twice in this sentence, but I think I mean two different things:

    1) an argument as a piece of evidence, e.g. this thought experiment makes us doubt consequentialism

    2) an argument as a general theory about morality/behaviour, e.g. EA is wrong and here's why

    would welcome any suggestions readers have

I think there might be a place for responding to writers of those pieces privately, correcting the biggest misconceptions - have you sent it to Malik?

If anyone is after a good example of EA criticism, I cannot recommend strongly enough the Doing EA Better post by the ConcernedEAs group.
