Crosspost of this on my blog.   

I really like the Bulwark. Their articles are consistently funny, well-written, and sensible. But recently, Mary Townsend wrote a critique of effective altruism titled “Effective Altruism Is a Short Circuit” that seems deeply confused. In fact, I will go further and make a stronger claim: not a single argument in the entire article should cause anyone to be even slightly less sympathetic to effective altruism. Every claim in the article is either false or true but irrelevant to the content of effective altruism. The article is crafted in many ways to mislead, confuse, and induce negative affect in the reader, but it is light on anything of substance.

For instance, the article begins with a foreboding picture of the notorious EA fraudster Sam Bankman-Fried. This is not an explicit argument, of course—it’s just a picture. And it would not succeed as an argument: even if Bernie Madoff had given a lot of money to the Red Cross and had some role in planning its operations, that would do nothing to discredit the Red Cross; the same principle applies to EA. But when one is writing a smear piece, one doesn’t need to include real objections—one can simply include things that induce disdain in the reader, which the reader comes to associate with the object of the author’s criticism. Such is reminiscent of the flashing red letters that are ubiquitous in attack ads—good if one’s aim is propaganda, bad if one’s aim is truth.

The article spends its first few paragraphs on mostly unremarkable philosophical musings about how we often have an urge to do good and can choose what we do, filled with sophisticated-sounding references to philosophers and literature. Such musings help build the author’s ethos as a Very Serious Person but do little to provide an argument. After a few paragraphs of this, however, the author gets to the first real criticism:

That one could become good through monetary transactions should raise our post-Reformation suspicions, obviously. As a simple response to the stipulation of a dreadful but equally simple freedom, it seems almost designed to hit us at the weakest spots of our human frailty, with disconcerting effects.

Effective altruism doesn’t claim, like those who endorsed indulgences, that one can become good through donating. It claims that one can do good through donating and that one should do good. The second half of that claim is a trivially obvious moral claim—we should help people more rather than less—and the first half of the claim is backed by quite overwhelming empirical evidence. While one can dispute the details somewhat, the claim that we can save the lives of faraway people for a few thousand dollars is incontrovertible given the weight of the available evidence—there’s a reason that critics of EA never have specific criticisms of the empirical claims made by effective altruists.
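The “few thousand dollars per life” figure is an empirical claim, and the arithmetic behind it is simple enough to sketch. The numbers below are illustrative placeholders, not GiveWell’s actual estimates:

```python
# Back-of-the-envelope cost-effectiveness arithmetic.
# All figures below are illustrative placeholders, NOT real charity estimates.

cost_per_net = 5.0              # dollars to buy and distribute one bednet
deaths_averted_per_net = 0.001  # hypothetical lives saved per net distributed

cost_per_life_saved = cost_per_net / deaths_averted_per_net
print(f"${cost_per_life_saved:,.0f} per life saved")  # $5,000 per life saved
```

The point of the sketch is only that the claim is checkable: plug in empirically measured distribution costs and mortality reductions, and a concrete dollars-per-life figure falls out, which is exactly the kind of figure charity evaluators publish and defend.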

Once one acknowledges that those who give to effective charities can save hundreds of lives over the course of their own lives through fairly modest donations, a claim that even critics of such giving generally do not dispute, the claim that one should donate significant amounts to save the lives of people who would otherwise die of horrifying diseases ceases to be something that “raises our post-Reformation suspicions.” One imagines the following dialogue between Townsend and a starving child:

Child: Please, could I have five dollars? It would allow me to afford food today, so I wouldn’t go hungry.

Townsend: Sorry, I’d love to help, but that one could become good through monetary transactions should raise our post-Reformation suspicions. In addition, though your frail arms seem almost designed to hit us at the weakest spots of our human frailty, with disconcerting effects, I do not think this is an adequate response to the stipulation of a dreadful but equally simple freedom.

Such a response would be, of course, bizarre—when people whom we can help at minimal personal cost are struggling, we ought to help them. This is not intended as some grand solution to the fundamental human question—whatever that means—but as a basic and widely accepted ethical principle: we should help those we can. When children are dying who could be saved by forgoing some of the lucrative profits from the Bulwark and spending them on malaria nets rather than luxuries, one ought to forgo said luxuries.

The second claim seems equally bizarre. Effective altruism is not “a simple response to the stipulation of a dreadful but equally simple freedom,” nor is it “designed to hit us at the weakest spots of our human frailty, with disconcerting effects.” In fact, effective altruism seems to cut against our human frailty—it instructs us to take into account the interests of those whom we cannot see, whose screams we cannot hear, simply because they matter. This runs counter to every bias in the book and is thus followed by only a vanishingly small portion of the population; those who do follow it do so because they’re convinced that a faraway child dying of malaria or going blind from vitamin A deficiency is just as important as that child would be if nearer. The death and blindness of children whom we can save imposes on us stringent duties to act.

Townsend spends the next few paragraphs remarking on the fact that we generally want to do good effectively and describing this as the motivation for effective altruism. Notably, she doesn’t argue against it, or explain why anyone would accept these surprising-sounding conclusions—which can be derived from simple axioms—but instead simply sneers at the odd-sounding implications. She ignores that longtermism is just a small part of effective altruism, and that one can perfectly well donate to provide malaria nets without being a longtermist, which is exactly what those suspicious of longtermism should do. She next says:

At its heart, longtermism is only the dorky science-fiction version of the nineteenth-century classical utilitarianism that English philosophers John Stuart Mill.

This is false! You don’t have to be a utilitarian to be a longtermist. As long as you think that lives going well is one of the things that matters—which is quite trivial—then making sure that lives go well for literally quadrillions of beings is important. In fact, even if you don’t think that, you should think that the risk of annihilation of life on earth, which experts consistently rate as being not terribly improbable, often giving it above a 10% chance of happening in the next century, is quite significant and worth working on. Thus, you don’t even need to be a longtermist to be sympathetic to nearly everything done by longtermists.

After falsely claiming that effective altruism requires utilitarianism, Townsend attempts to provide an account of why people find utilitarianism attractive. Her account is totally implausible and amounts to little more than the claim that effective altruists just love numbers:

But it continues to sound reasonable to us long after Mill’s time because it appeals to our desire for solid ethical action in a simple and direct way: through insisting it can offer us mathematical proof that we have done good—and because it exploits our native wish to be generous by offering us the possibility of rationalizing our own desire for comfort and extending its mathematical possibility to limitless, nameless others.

Several points are in order. For one, the psychology of EAs doesn’t matter to the assessment of whether EA is good or bad—worthwhile or pointless. It could be that all EAs give to effective charities because they think Sauron from The Lord of the Rings told them to, and that would be sublimely irrelevant to a normative assessment of what effective altruism actually does. In assessing the desirability of a movement or group, one doesn’t need to know anything about the psychology of its members.

For another, this seems like a wildly implausible psychological account of why people are sympathetic to EA or utilitarianism. If you talk to actually existing effective altruists and ask them, for instance, why they give away a sizeable portion of their lifetime earnings to charity, the answer you will get is never “I just love arithmetic.” Instead, they’ll generally note that doing so allows them to help others a great deal, to do extraordinary amounts of good, and that such opportunities to save dozens or hundreds of lives shouldn’t be wasted. In fact, as anyone who witnessed the disaster of me taking calculus can attest, I have no great love of mathematics, yet I still call myself an effective altruist and donate a sizeable chunk of my earnings to charities.

Finally, Townsend gets to her core issue with effective altruism:

But here’s the problem: If your image of enacted goodness is entrusting resources to the person who is best at “running the numbers,” you’re assuming that person knows for sure what good to pursue with those resources. But it is impossible to simply “run the numbers” without being aware—or perhaps suppressing one’s awareness—that there are unanswered questions left hanging, somewhere. As a contributor to the official EA forum described this exact problem shortly before FTX imploded, “EA is about maximizing a property of the world that we’re conceptually confused about, can’t reliably define or measure, and have massive disagreements about even within EA.”

. . .

The thing is, a moral claim like forecasted changes in quality-adjusted life-years provide an objective measure of right action sounds good because we are used to giving our trust to math and what is measurable by it; “maximization” sounds good because if one number is good then two are better, ad infinitum. This is why the QALY sounds less fictional to us than more amorphous claims to goodness. To be sure, it offers more plausibly imagined numbers than people toss out when speculating about how many humans there will be in a million years. But the addition of mathematics to a problem does not make it more practically addressable. Rather, an invented or stipulated unit like QALY is a conceptual apparatus that makes something theoretical appear practical, where our sense of what is “effective” is simply a better and shinier theory; and this is how the basic deception is practiced.

But this problem besets any attempt to do good. Whenever one tries to better the world, there is a risk of not doing any good, or of not doing the most good one can. No one doubts that QALYs are an imperfect metric, and there can, of course, be mistaken judgments about the QALYs of some interventions.

But behind every QALY judgment is some intervention that saves or improves lives. It is hard to figure out which interventions help people most. But that is a reason to think hard before deciding what to do, not to jettison the project entirely. The response to the difficulty of precisely comparing goods cannot be to abandon effective giving entirely—and give to whatever charity you heard a nice story about on NPR.

Imagine this feat of illogic applied to any other domain. Suppose one were deciding how to treat one’s grandmother’s cancer. One notes that it’s really hard to know which cancer treatments are most effective. And so the solution is to ignore effectiveness entirely! After noting how difficult it is to decide what is best for her—how to trade off a happy life against a slightly sickened one—it would be sick and perverse to just throw up one’s hands and not try to do what’s best for her.

But the core insight of effective altruism is that we should do the same for strangers as we do for our loved ones. It may be hard to precisely calculate the effectiveness of various interventions, but when children are starving, dying of disease, and going blind, it is callous and cruel to just throw one’s hands up and not even try to figure out what will help people the most. Because the faraway children matter just as much to their loved ones as our own children do to us.

Unfortunately, there are fates worse than death, and goods that sit beyond merely staying alive longer. And almost every human good there is beyond mere accumulation of healthy days or years—for instance, the goods of justice, love, truth, and compassion—are not amenable to numbers, let alone predictable by dint of them. For a previous generation, the appeal of utilitarian comfort rested in part upon the conviction that human life was real while things like justice or dignity, not visible to the naked eye, were not. But this is not a supposition that we can afford these days. And the question of what happens to the life of the human you’ve saved—before and after they don’t die from malaria, if indeed they don’t happen to die from anything else—is not something we can just leave alone. If you send a mosquito net to someone whose primary threats to healthy living include political oppression and genocide, something more is going on there than has been dreamt of in your philosophy.

That’s the entire rationale for QALYs. Because there are things that make a person’s life worse without killing them, if we are going to try to do good, we will need some method of comparison. That’s why one of the top EA charities is primarily about averting blindness.
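The comparison QALYs enable can be sketched concretely. Here is a minimal illustration with made-up numbers (the costs, life-years, and quality weights are hypothetical, not drawn from any real charity evaluation):

```python
# A QALY is years of life gained times a 0-to-1 quality weight for those years.
# This is what lets an intervention that prevents blindness be compared with
# one that prevents death. All numbers here are hypothetical.

def qalys(years, quality_weight):
    """Quality-adjusted life-years for `years` lived at `quality_weight`."""
    return years * quality_weight

interventions = {
    # Averting a child's death from malaria: many full-quality years gained.
    "avert malaria death": {"cost": 5000, "qalys": qalys(40, 1.0)},
    # Averting blindness: no extra years, but each remaining year improves.
    "avert blindness": {"cost": 1000, "qalys": qalys(50, 0.2)},
}

for name, d in interventions.items():
    print(f"{name}: {1000 * d['qalys'] / d['cost']:.1f} QALYs per $1,000")
```

With these made-up numbers, the blindness intervention delivers more QALYs per dollar—precisely the kind of comparison the metric exists to make, imperfect as it is.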

The final swing at EA in the article is perhaps the most bizarre:

Latent fanaticism is present in any kind of morality that prioritizes “more” good over the courageous challenge of doing the good present at hand, that denigrates the beautifully good in its pursuit of the unearthliness of more. And any morality that prioritizes the distant, whether the distant poor or the distant future, is a theoretical-fanaticism, one that cares more about the coherence of its own ultimate intellectual triumph—and not getting its hands dirty—than about the fate of human beings: muddy, hairy, smelly, messy people who, unlike figures in a ledger or participants in a seminar, will not thank you for handing them a dollar, and indeed, might be likelier to cuss you out for it, for the perfectly reasonable sake of their own dignity.

But EA requires no fanaticism, and such fanatical reasoning is frequently criticized by effective altruists, including MacAskill. To think we should do more good rather than less, one need not think we should violate rights or conduct grossly immoral acts in pursuit of some ultimate aim.

The idea that caring about future people is fanatical is bizarre, and no justification for the claim is given in the article. Fanaticism is a matter of what one is willing to do in pursuit of what one finds valuable, not of what one finds valuable in the first place.

But the oddest idea is that caring about far-away people is fanatical. Any morality worth its salt will hold that where a person lives—whether far away or nearby—has nothing to do with our duty to help them. When a person boards a plane, my duties to them do not lessen the further away they fly. If we can help faraway people hundreds of times more than nearby ones, we ought to do so. For the faraway people are just as “muddy, hairy, smelly, messy,” and far less likely to thank you.

The reason EA urges giving up one’s money to help faraway people isn’t some fanatical devotion to mathematics. EA treats faraway people the way we treat nearby ones. EA is not about what one looks like when helping others; it is, as giving should be, about those we can help.

Throughout the entire article, there is not one mention of the people whose lives have been bettered by effective altruism: the upwards of 100,000 people who would otherwise have died of malaria but will not, because of effective altruism’s bednet distributions. It’s all very well to muse about abstract morality and take potshots at effective altruists one perceives to be annoying. But when children are dying of diseases that affluent Westerners can stop at minimal personal cost, self-righteous defenses of why one has no duty to lift a finger to save them start to ring hollow, especially when combined with scorn toward those who do anything to address these horrifying problems. The motivation for effective altruism has nothing to do with the aesthetic that so upsets Townsend, and everything to do with children like Abdul, who would be sicker and blinder, and in many cases dead, if EAs had done nothing; children to whom Townsend turns a blind eye while sanctimoniously declaring that any effort to help them would be “fanatical.”






Thanks for writing this. Whenever I have these kinds of discussions, I encourage the other party to be "serious," because the stakes here are really high. When you're on a public stage disparaging the idea of doing more good and being more evidence-based, you can harm a lot of people.

Some evidence of the moral stakes of media: there has been a spread of media guidelines that censor the word "suicide" (replacing it with "died"). There has been good evidence for a while that media coverage of suicides committed in public (for example, by subway) can increase the rate of future incidents, an effect called suicide contagion. This effect can be measured empirically, and journalists have begun to respond to it because, put succinctly, nobody wants to write an article that will literally kill someone.

Just want to quickly register that I disagree with your comment (and disagree-voted). This proposed policy reminds me too much of the original meaning of "political correctness" and "party line." My guess is that we should not have a higher bar for critical voices than complimentary ones, no matter how righteous our cause areas might be. 

I can see your perspective, and I recognise it's context dependent.

However, if a journalist is writing, or publishing, about deleterious effects from vaccines, they should be very careful to ensure that what they're writing is accurate, because we have a track record of such output being wrong, with irreversible effects. [1]

I suspect I could make a similar argument for a philosopher writing about an ethical or moral movement. It might take more time, but it would conclude in a similar place.


Do you think there is a symmetrical obligation for people writing positive things about vaccines? If vaccines were in fact not safe or effective then promoting them would also be very harmful.

You're trying to reason from first principles without factoring in people's cognitive biases. In a world without cognitive biases, a symmetrical obligation of "communication seriousness" would make sense.

However, I suspect that when I give the concrete example of vaccines above, you actually agree with the statement, because your brain is factoring in the negativity bias that exists toward vaccines.

Just to make this more concrete: an incredible amount of work is done to ensure vaccines are safe, and that people trust them. But a handful of viral social media posts can erode people's confidence. In this world, a symmetrical obligation does not make sense. 

Edit: added last paragraph.
