
The original article is here: https://www.ft.com/content/128f3a15-b048-4741-b3e0-61c9346c390b

Why respond to this article?

While browsing EA Twitter earlier this month, I saw someone whose opinions on EA I respect quote-tweet someone whose opinions I don't (at least on the topic of EA[1]). The subject of both tweets was an article published at the end of 2023 by Martin Sandbu of the Financial Times, titled "Effective altruism was the favoured creed of Sam Bankman-Fried. Can it survive his fall?" Given that both of these people seemed to broadly endorse the views, or at least the balance, found in the article, I thought it would be worth reading to see what a relatively mainstream commentator thinks about EA.

The Financial Times is one of the world's leading newspapers and needs little introduction, and Sandbu is one of its best-known columnists. What gets printed in the FT is often repeated across policy circles, not just in Britain but around the world, especially in wonky, policy-focused circles that have often been quite welcoming of EA, either ideologically or demographically.

As always, I encourage readers to read and engage with the original article itself to get a sense of whether you think my summarisation and responses are fair.

 

Reviewing Sandbu's Article

Having read the article, I think it mainly covers two separate questions related to EA, so I'll discuss them one at a time. This means I'll be jumping back and forth a bit across the article to group similar parts together and respond to the underlying points, though I've tried to edit Sandbu's points down as little as possible.

1) How to account for EA's historical success?

The first theme in the article is a historical account of EA's emergence, and an attempt by Sandbu to explain its unexpected success. Early in the article, Sandbu clearly states his confusion at how a movement with EA's background grew so much in such a short space of time:

"Even more puzzling is how quickly effective altruism rose to prominence — it is barely a decade since a couple of young philosophers at the University of Oxford invented the term ... nobody I knew would have predicted that any philosophical outlook, let alone this one, would take off in such a spectacular way."

He doesn't explicitly say so, but I think a reason behind this is EA's heavy debt to utilitarian thinkers and philosophy, which Sandbu sees as having been generally discredited over the 20th century:

"In the 20th century, Utilitarianism… progressively lost the favour of philosophers, who considered it too freighted with implausible implications."

The history of philosophy and the various 20th-century arguments around utilitarianism are not my area of expertise, but I'm not really sure I buy that argument, or even that it's a useful simplification (a potted history, as Sandbu says) of the actual trends in normative ethics.

First, utilitarianism faced plenty of criticism and counter-development well before the 20th century.[2] And even looking at the field of philosophy right now, consequentialism is just as popular as the two major alternatives in normative ethics.[3] I suspect that Sandbu is hinting at Bernard Williams' famous essay against utilitarianism, but I don't think that essay should be considered the final word on the subject.

In any case, Sandbu is telling a story here, trying to set a background against which the key founding moment of EA happens:

"Then came Peter Singer. In a famous 1972 article... [Singer] argued that not giving money to save lives in poor countries is morally equivalent to not saving a child drowning in a shallow pond... Any personal luxury, when excess income could combat hunger or poverty, would stand condemned. In my time in academia, Singer’s philosophical rigour was respected but also treated as a bit of a reductio ad absurdum. If your principles entail such all-consuming moral demands, then the principles are in need of revising."

I don't think being in conflict with a majority opinion should be dispositive at all. Given the very mixed record of moral intuition as a guide to right action throughout human history, I think it's perfectly acceptable to claim that in this case one man's modus ponens is another man's modus tollens, and that if we live in a morally demanding universe, so be it. Some might find the demandingness objection intuitive, but others would say that it just shows how much moral improvement humanity has left to do.

I'm not endorsing the fully-demanding position here, just pointing out that there's extra argumentation needed to break the symmetry in favour of the "reductio ad absurdum" reading rather than the "guidance for right action" one.

"A generation later, the seed planted by Singer found extraordinarily fertile soil. The founders of EA, MacAskill and Toby Ord, have both credited Singer’s article with their moral awakening in the mid-2000s. “Studying that paper and others, I wrote an essay for my BPhil at Oxford — ‘Ought I to forgo some luxury’,” Ord told me, which “forced me to think seriously about our moral situation with regard to world poverty”. Within a few years, Ord and MacAskill had founded Giving What We Can"

I think this paragraph is meant as the conclusion of Sandbu's history of EA's development, but it actually functions as the correct answer to his puzzle. Institutional EA developed because of the existence of EA ideas, starting with "Famine, Affluence, and Morality", as Sandbu points out earlier in the article. These ideas survived and spread in no small part because they were more self-correcting, consistent, and action-guiding than other philosophical ideas of the time. Or, to phrase it another way, because they're true.

Sandbu instead seems to point to environmental factors as more influential, mostly settling on the fact that the generation EA first took hold in was one disenchanted with the world (through climate change, the financial crash, generational inequality, etc.). He also points to the presence of the philosophers Derek Parfit and John Broome in Oxford as EA's ideas developed, particularly as advisers to MacAskill and Ord. But I feel the ideas are doing the work here: Parfit and Broome passed a set of ideas along to MacAskill and Ord, and there are many different ideas[4] primed to spread amongst those disappointed with the world, so on its own that isn't a strong explanation of EA's spread at all.

Of course, there's an issue of motivated reasoning here. As someone who identifies as an EA, I might think EA spread because its ideas are good, while a critic might reject this perspective and look for external explanations for an ideology they think has little to no merit. But at that point we're back to the object-level question of whether EA's ideas are actually sound, which is the second theme of the article.

 

2) What does EA get wrong?

The other theme Sandbu explores in the article is whether the EA movement, and/or its ideas, are actually a good way to do good at a personal and a societal level.

"There are two ways to characterise EA. One is modest: it says that if you are going to donate to charity, pay attention to what you fund and choose the most effective charities. That is hard to argue with: why not want your charity dollars to do the most good possible?

"But even this modest version leads to some uncomfortable implications: it is wrong to volunteer your time for a cause you can better advance by “earning to give” to it; it is wrong to choose an “inefficient” cause — say, research into an expensive-to-treat disease that killed a loved one."

I think many people do disagree with the premise that Sandbu says is 'hard to argue with'. They might do so from an aversion to charity, from different conceptions of the good, or just from a general intuition against ideas that seem 'weird'. In some sense, this hard-to-argue-with version of EA is just as controversial as any other, because moral issues are puzzles for everyone, not just EAs.

The example Sandbu uses in the second paragraph is, however, a rather good one to consider. Would it be 'better' for a person in a high-paying financial/legal/consulting role to spend a significant amount of their income funding Vitamin A supplementation for children under 5, or to spend their life trying to fund a cure for Fibrodysplasia Ossificans Progressiva because a family member had suffered from it? I can sympathise with the person in the latter case, but I still think the former choice would be the right one to make.

"…if you take Singer-type ideas seriously, the modest version is not where you stop... how can you not apply it [cause prioritisation] to your career choices and how much money you could make to give away....how can you not ask whether you should really focus on the poor in the world, or farmed animals’ suffering, if there is even a small chance that an asteroid or AI could deny trillions of potential future lives their existence, and you could devote your resources to preventing that?

That is the familiar sound of the 'train to crazy town'. I have to imagine, given his background and reputation, that Sandbu has done more than a cursory investigation into EA. But if so, surely he'd be aware that there have been many discussions within EA about exactly these questions and how to resolve them? I'm fairly sure Sandbu is familiar with the secretary problem, too, and that seems fairly analogous to this situation.

You could, for example, spend your entire life researching what the right thing to do is, and then die before taking any action. The same logic applies here. One can ask these questions, but asking and trying to resolve them comes at an (opportunity) cost, and simply noting a small probability that a large-impact event might happen doesn't mean that acting on it is actually the best way to do good. (For more on that last point, see here and here for interesting explorations of the value of x-risk mitigation.)
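To make the analogy concrete, here's a minimal simulation sketch of the secretary problem (my own illustration, not anything from Sandbu's article): looking at roughly the first n/e options and then committing to the next best-so-far finds the best option about 37% of the time, while 'researching' every option before deciding finds it exactly 0% of the time, because you never act.

```python
import random

def secretary_trial(n: int, cutoff: int) -> bool:
    """One trial: reject the first `cutoff` candidates, then commit to the
    first candidate better than all of them. True if we got the overall best."""
    ranks = list(range(n))          # higher rank = better candidate
    random.shuffle(ranks)
    benchmark = max(ranks[:cutoff]) if cutoff > 0 else -1
    for rank in ranks[cutoff:]:
        if rank > benchmark:
            return rank == n - 1    # committed: was it the best overall?
    return False                    # never committed before running out

def success_rate(n: int, cutoff: int, trials: int = 100_000) -> float:
    return sum(secretary_trial(n, cutoff) for _ in range(trials)) / trials

if __name__ == "__main__":
    # The classic ~n/e cutoff succeeds about 37% of the time...
    print(f"look at 37 of 100, then commit: {success_rate(100, 37):.3f}")
    # ...while deliberating over every option means never acting at all.
    print(f"look at all 100 before deciding: {success_rate(100, 100):.3f}")
```

The design point carries over: past some point, more deliberation about cause prioritisation stops buying you better outcomes and starts costing you all of them.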

"Longtermism” and prospective future catastrophes, such as rogue AI, are taking up ever more of EA’s attention... One person familiar with the OpenAI boardroom conflict says that EA ideas did not play any role. If so, that must surely count as a huge missed opportunity to avert potentially devastating future harm, which by EA lights is as morally bad as causing it.

I think the first sentence here is simply stated without evidence. The 'longtermist takeover' of EA is, I think, vastly overstated, at least as usually presented. Not only have ideas around x-risk been around for a long time, but I don't think the majority of EAs actually prioritise x-risk/longtermist causes. Now, there is a claim somewhat like the one Sandbu is making that may be true, perhaps when focusing on the opinions of EA "leaders", but the effort he makes here isn't enough to establish it.

As far as the supposed 'gotcha' about the OpenAI board goes, I think it's really just that: a 'gotcha'. One has to actually evaluate what happened in the case, and argue that different action by the board would have averted potentially devastating future harm, instead of, say, concluding that OpenAI's work is net-positive. Again, I'm not arguing that this is the case, but critics should at least make the case. This frustration came up again for me in the passage below:

"And EA ideas clearly did not discourage the fraud at FTX. It is not implausible to think Bankman-Fried knew he was breaking the law but concluded... that he had good enough odds to make the money back many times over... In other words, he may have thought the expected value of the fraud was distinctly higher than that of honesty. And, if this was the case, who are we to say he was not correct — just unlucky?"

SBF may well have thought what he was doing was high EV, but sometimes I daydream that I'd be able to score a penalty in a World Cup final. Simply thinking things doesn't make them reasonable or true! 

In fact, the question Sandbu poses at the end of this extract points at a crucial part of EA. EAs don't just stop at "who's to say who is correct"; they actually try to investigate the answer, and it turns out that committing one of the largest financial frauds in history is not a very good way to be effective, or altruistic, and that its downside risk probably dwarfed everything else entirely.
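To put the point in arithmetic: an expected-value estimate is only as good as its inputs. Here is a toy sketch (every number below is invented purely for illustration, and is not an estimate of FTX's actual finances) of how the sign flips once you use honest probabilities and count the full downside:

```python
# A toy, entirely made-up EV comparison. All figures are hypothetical.

# The gamble as a motivated gambler might imagine it: an 80% chance of
# turning $8bn into $80bn, a 20% chance of losing the $8bn stake.
naive_ev = 0.8 * 80e9 - 0.2 * 8e9            # +$62.4bn: "high EV"

# A more sober accounting: a lower win probability, and a downside that
# includes customer losses and the harm done to everything around it.
sober_ev = 0.2 * 80e9 - 0.8 * (8e9 + 40e9)   # -$22.4bn: ruinous

print(f"naive EV: {naive_ev / 1e9:+.1f}bn")
print(f"sober EV: {sober_ev / 1e9:+.1f}bn")
```

The sketch isn't an argument about what SBF actually believed; it just shows that "he thought it was high EV" and "it was high EV" are different claims, and the gap between them is exactly where the investigation has to happen.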

"But if I were an effective altruist, what would worry me most is that EA has failed to produce “the most good” in the two public cases where it should have made the biggest difference."

I think plenty of EAs are highly critical of how the movement, or parts of it, or particular individuals behaved in these two cases. Maybe Sandbu really is just thinking out loud, but there's an insinuation that EAs aren't thinking about this, and it would at least be good to mention the internal discussions and disagreements within EA about what happened in both cases.

I also think Sandbu indexes too much on the aim of producing 'the most good' in some abstract sense.[5] The world doesn't have only two binary states, 'the most good' and 'not the most good': nearly the entire moral difference between possible worlds is contained in the latter. I'm less concerned by EA not doing 'the most good' if it instead did 95% of 'the most good' relative to the moral status quo, and I think the issue in these cases isn't that EA failed to produce 'the most good' but that it may have contributed to harms.

"Everyone in the EA community is adamant that Bankman-Fried’s conduct was nonetheless wrong... But that begs the question of what arguments, within effective altruism, could condemn what Bankman-Fried did. When I put this to him, Ord accepted I had a point. There must be constraints, he insisted, but admitted they are “not built into the philosophy”. This implies, it seems to me, that to achieve the most good we can do, we should not take EA too seriously."

I don't really know what to make of how the article concludes, and I'm particularly confused by Ord's response, so much so that I can't help wondering whether there's been some miscommunication. Ord isn't a naïve utilitarian by any stretch of the imagination, so I'm not sure whether 'you have a point' is actually a concession to Sandbu, or Ord granting that the criticism has force against naïve utilitarianism but not against EA.

In any case, while the final line is cute, I don't come away with a clear idea of what Sandbu actually means. Does the injunction to "not take EA too seriously" mean we ought to disregard EA entirely, or just that our moral obligations should be non-totalising? Without further explanation it seems an odd place to stop, and I was disappointed that an article which started promisingly ended with a whimper.

 

Finally, there are a few extra criticisms from people that Sandbu has interviewed for the piece, which I thought would be good to treat separately:

[Broome] adds: “I think these people are naive . . . The focus on philanthropy . . . gives cover to wealth and the increasing inequality there is in the world . . . Where the efforts of these altruists should be directed is towards ensuring that governments behave properly. They are not thinking enough about the political structure that underlies their ability to give money away.”

A big issue with Broome's[6] take here is that its empirical basis just seems to be wrong: worldwide wealth inequality has probably decreased over the last half-century. Second, while I don't (necessarily) want to get drawn into 'another internet debate about Capitalism'™, I'd be remiss if I didn't point out that the political structure that "underlies their ability to give money away" also seems to be the structure that creates the money in the first place.

Third, there's a repeat of the familiar claim that EA needs to look more at political structure. In which case, I assume Broome is actually supportive of EA's move towards policy in the AI space, and a big fan of longtermist policy work as opposed to philanthropy for global health? I wouldn't expect so, and I would ask those who agree or sympathise with Broome's take here, but are sceptical of longtermism, to ask themselves why the evidence for the latter is not robust enough to take seriously, while the evidence for sweeping structural changes that match Broome's politics is.[7]

"As another philosophy professor put it to me, EA suggests to bright undergraduates that “the world is a problem they can solve, not through the difficult work of politics . . . but simply by applying an easy algorithm”. For Strudler, it reflects “a failure of imagination. [EA] is a substitute for hard moral judgment, but it’s a substitute that doesn’t work.”"

I wish this had been presented with more critical commentary by Sandbu. I don't get the impression that either the anonymous professor or Strudler has actually had much to do with EA or individual EAs, or at least not representative ones. Many EAs find thinking about and acting on their moral judgements to be hard, not easy. One of the best pieces on this kind of perspective, or at least one that really resonates with me, is Holly Elmore's analogy of doing good to triage in a crisis. Even ignoring the chaser that EA "doesn't work" (presented without evidence), from what I can tell it is Strudler who displays a singular failure of moral imagination, by not really grappling with what it means to be good in a morally inconvenient universe.[8]

 

3) Factual/Object-Level Mistakes

Sandbu is a good writer, and on the whole I'd agree that he's trying to be balanced and think about effective altruism with an open mind. Nevertheless, there are some points where, instead of engaging in a philosophical debate, the piece simply contains factual mistakes that significantly weaken its integrity:

"Meanwhile, OpenAI’s boardroom drama turned on whether commercial or safety considerations should set the pace of development."

Much of what happened during the OpenAI boardroom drama remains unknown, and unfortunately many of those at the heart of it are unwilling to tell the full story openly, either about the initial fallout or the negotiations afterwards. Zvi has good roundups of events, including here and here, which I'd recommend as summaries of what happened after the dust settled.

It's undeniable that EAs were involved, and perhaps the considerations Sandbu mentions did play an important role, but it seems that the drama 'turned' on trust between the members of the board and Altman, as well as between the employees of OpenAI and the board. This is another claim that Sandbu states as if it were settled fact, but which doesn't stand up once you kick the tyres a bit.

"stronger commitments inspire the fervour seen among EA’s young adherents — many of whom testify to how EA changed their life and gave it purpose."

Sure, I think this is true in individual cases, but no attempt is made to investigate beyond a high-level, vibes-based claim. For example, perhaps the fervour is concentrated among those who were part of EA early on, rather than new entrants? Maybe asking for this kind of analysis is asking too much of a newspaper opinion piece, but given Sandbu's reputation for thoughtful analysis it's still surprising to see these consistent over-claims about EA without evidence to support them.

"These efforts [associated with such groups as GiveWell and Open Philanthropy] too, proclaim a desire to “do as much good as possible”. But while they share the empirical hardheadedness of the group that developed at Oxford, they seem less invested in the philosophical framework."

Again, is this true? It might be true in the sense that nobody is as invested in a philosophical framework as academic philosophers are, but for the claim to have any useful content it needs something more. Sandbu doesn't really go anywhere with it (he switches back to talking about Oxford), and I'm fairly sure the people at GiveWell and Open Philanthropy would report that they take their philosophical frameworks seriously, so again this seems like 'vibes-based' reporting.

"Such “longtermism” is increasingly being adopted by the EA community and its Silicon Valley friends"

Once again, is this actually true? When you dive into the information actually available, the picture is a lot more mixed than many people expect. What has changed most, I think, is people's perception that it is true, and a large part of the EA-critical articles, videos, and tweets is driven by a preference cascade of sorts, rather than by actual analysis of the sociology or history of the EA community.

"The reduction of moral questions to mere technical problems is surely one reason that EA spread in two particularly moneyed techie communities: Silicon Valley and quantitative finance."

This kind of sneaking-in of 'mere' also occurred in Kenan Malik's article on EA, where he rather insidiously misquoted Peter Singer, turning a claim that things are sensitive to numbers into one that they are 'merely sensitive' to them. I point this out because I really don't like this sort of rhetorical device, and think it should be a red flag of sorts when trying to understand issues like this.

It's also an attempt to sneak in two large and controversial claims without argument: 1) that EA or utilitarianism reduces moral questions to technical ones, when EA seems to be full of moral theorising; and 2) that this reduction played a central causal role in EA's success in the two communities mentioned.

"EA “is quickly becoming the ideology of choice for Silicon Valley billionaires”, one sceptical academic philosopher complained to me."

So this isn't Sandbu, but the "sceptical academic philosopher's" claim is presented too uncritically. Why are we only counting Silicon Valley billionaires, and not multi-millionaires? Why only billionaires from the Valley? Less facetiously, when did this philosopher survey Silicon Valley billionaires to actually gather the evidence? None is presented that the claim is actually true.

The kicker is, even if we accept that it is true, what does the insinuation prove? If Silicon Valley billionaires collectively had a road-to-Damascus moment and became Marxists in the hope of bringing about a communist regime, would that discredit Marxism purely by virtue of its rapid uptake amongst Silicon Valley billionaires? I don't think so, and so I can only interpret this quip as another exhortation that 'Silicon Valley people bad'.

 

Final Thoughts

Common Themes in Criticism

I think many people, when they write articles like this, frame them implicitly or explicitly through the template of "EA was once good (global dev), EA is now bad (AI)". This is a nice and simple story to write, but I don't think it's particularly true, or at least it's a fragmentary attempt at reaching the truth. Much of EA (the majority of the money, a plurality of the people?) is still focused on global health, and concern about AI risk was part of EA from the very beginning.

Articles like this also often raise moral questions or dilemmas that EAs engage with, but either ignore the corresponding dilemma that the status quo or common-sense moral view faces, or assume that EAs haven't discussed the difficulties of the case. I like a randomista-vs-systemic debate as much as the next EA, but it would be good to see an article recognise that EA isn't at step 1 on this topic, and consider that the people involved might have thought about it rather than being unaware of it.

Finally, I really wish EA got better at responding to pieces like this. It feels at least de facto true, if not de jure, that EA's policy is not to respond to them at all. This needn't have been a conscious decision by 'big EA'; it can still occur if each individual EA thinks it isn't their responsibility to respond. But the current state of affairs isn't good,[9] and it feels to me that a lot of EA is playing prevent-defence[10] right now, rather than either pushing back on bad criticism or trying to integrate useful and valid critiques.

 

What's next for this sequence?

I was already writing a post for another response before this article popped up on my radar and I switched track. That post was going to respond to a recent episode of Very Bad Wizards that is (mostly) about EA. Some of the same criticisms, as well as the same factual mistakes, are repeated, but there are some interesting perspectives to consider, especially from Tamler, who I think would defend a strong form of moral localism/partiality.[11]

Another public EA criticism I've started to look into is Eli Tyre's viral Twitter thread, which provoked a lot of responses from across EA-adjacent Twitter. To be honest, I was quite surprised to see various people react positively to it at first, though my vague read of the Twitter tea-leaves is that the later reaction was rather less positive. You can probably guess my overall reaction (though hopefully not all my reasons for it).

After that, I want to start writing some more positive posts (as in, making a positive case for something) about what Third-Wave EA could look like, instead of continuously falling behind an ever-growing in-tray of EA criticisms whose authors don't seem to care enough to present robust supporting evidence for their general claims. This isn't to say that EA isn't worth criticising, or doesn't deserve criticism. If people could point me towards higher-quality things to critique, I'd really appreciate it.

  1. ^

    For reference, this thread is why: Giles seems to get a lot wrong about EA, gets called out by Tom Chivers, and never admits he's wrong or replies convincingly to the pushback. That puts him on 'epistemic probation' for me when discussing EA.

  2. ^

    John Stuart Mill, for example, was already developing utilitarianism to be more advanced/less naïve than what he saw as Bentham's approach in the 1860s.

  3. ^

    Source: the Normative Ethics question in the 2020 PhilPapers survey, here.

  4. ^

    Infinitely many, in fact

  5. ^

    Potentially, not necessarily, in a space/time impartial totalising sense, but I think other axiologies are compatible with EA too

  6. ^

    Yes, the same Broome who supervised both MacAskill and Ord 

  7. ^

    My main claim here isn't (necessarily) to argue for one political or economic view over another, but to state that any evidence we can have about which one is 'correct' will be limited

  8. ^

    There is a chance that Strudler gave a more worked-out case against EA, or presented a more nuanced one, but Sandbu cut this from the article.

  9. ^

    See the continuing stream of articles like this

  10. ^

    Prevent-defence is a term from the NFL, generally describing cases where a team in the lead plays an incredibly conservative defensive strategy in order to minimise giving up long-range passes or touchdowns. Anecdotally, this often proves too conservative in practice, and leads to the perception that it ends up harming the team in the lead rather than helping.

  11. ^

    Sorry if I'm wrong about that Tamler!

Comments (2)

I think the following is a good point: "Where the efforts of these altruists should be directed is towards ensuring that governments behave properly. They are not thinking enough about the political structure that underlies their ability to give money away." I have been volunteering in the animal rights space since 2015 and have been reading about social change and social movements for the past few years. From the literature I have read, it does seem that the EA movement as a whole is not as focused on creating positive legislative change as it could be. For example, slaves/serfs were freed and women got the right to vote due to social and legislative activism that demanded these changes. It would be great to see more donors supporting work of a legislative nature.

Executive summary: The post responds to a recent article critiquing effective altruism, arguing that many of the criticisms are unfounded or ignore internal debates within EA.

Key points:

  1. The article questions how EA became so prominent given its philosophical roots, but the ideas themselves explain the growth.
  2. Claims that EA over-prioritizes long-term issues lack evidence and ignore ongoing internal disagreements.
  3. Alleged failures around FTX and OpenAI are overstated and not representative of most EA efforts.
  4. Factually inaccurate statements weaken the article's critique about EA becoming an ideology of choice for technologists.
  5. Responding to critiques can improve public discourse, but playing prevent-defence undermines constructive dialogue.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
