Doing Good Better, an introduction to effective altruism, was recently reviewed by Amia Srinivasan in the London Review of Books, Europe's most successful literary magazine.

The article reads as critical of effective altruism, though it is also careful to point out the movement's advantages and our responses.

Overall, it's some of the most thoughtful criticism of effective altruism from someone outside our community that I've seen in some time. Amia made a real effort to understand our position, and had a lengthy discussion with Will before she published the article. At the same time, Amia clearly has a different worldview from most of us, so it can make for frustrating reading.

Because of this, I think we should make an effort to welcome this feedback and try to learn from it - it'll need to be an active effort because my gut response is defensive!

To help get us going, in the rest of this post I've taken some key extracts from the article and added my comments. 

A new generation of moral philosophers is determined to break with this tradition of ineffectuality. The goal of the ‘effective altruists’ is not only to theorise the world, but to use their theories to leave the world a better place than they found it.

The author focuses heavily on effective altruism's roots in moral philosophy, which makes sense in the context of a review of Will's book, but is a bit unfair on the movement overall. The founders of the other organisations mostly don't have backgrounds in academic philosophy.

MacAskill proposes that ‘good’, here, can be understood roughly in terms of quality-adjusted life-years (Qalys)

This may be a misunderstanding. Will talks at length about QALYs in the book as one way to compare outcomes, but he doesn't propose that's how to understand "good" in general. Rather "good" would be understood as the welfare of sentient beings, of which health is just one component.

[on the negative effects of seeking jobs in finance] Up until recently MacAskill argued that such effects were morally irrelevant

Not that they're irrelevant, just that they're (i) smaller than they look and (ii) permissible in the context of doing a large amount of good through donations.

MacAskill is evidently comfortable with ways of talking that are familiar from the exponents of global capitalism: the will to quantify, the essential comparability of all goods and all evils, the obsession with productivity and efficiency, the conviction that there is a happy convergence between self-interest and morality, the seeming confidence that there is no crisis whose solution is beyond the ingenuity of man. He repeatedly talks about philanthropy as a deal too good to pass up: ‘It’s like a 99 per cent off sale, or buy one, get 99 free. It might be the most amazing deal you’ll see in your life.’ There is a seemingly unanswerable logic, at once natural and magical, simple and totalising, to both global capitalism and effective altruism. That he speaks in the proprietary language of the illness – global inequality – whose symptoms he proposes to mop up is an irony on which he doesn’t comment. Perhaps he senses that his potential followers – privileged, ambitious millennials – don’t want to hear about the iniquities of the system that has shaped their worldview. Or perhaps he thinks there’s no irony here at all: capitalism, as always, produces the means of its own correction, and effective altruism is just the latest instance.

I'm not really sure what to make of this. Maybe the criticism is something like:


  • Our worldview is inherently biased in favor of supporting the existing economic system, because we share many of its key assumptions, which means we'll fail to see potentially better ways of improving the world that involve changing the economic system.

I'm inclined to think you can separate basic ideas in economics (e.g. thinking at the margin), and ideas like the comparability of values, from endorsing modern capitalism.

Yet there is no principled reason why effective altruists should endorse the worldview of the benevolent capitalist. Since effective altruism is committed to whatever would maximise the social good, it might for example turn out to support anti-capitalist revolution....
Indeed one element of the movement is turning its attention towards what members like to call ‘systemic change’, taking up political advocacy on issues ranging from factory farming to immigration reform.



The more uncertain the figures, the less useful the calculation, and the more we end up relying on a commonsense understanding of what’s worth doing. Do we really need a sophisticated model to tell us that we shouldn’t deal in subprime mortgages, or that the American prison system needs fixing, or that it might be worthwhile going into electoral politics if you can be confident you aren’t doing it solely out of self-interest? The more complex the problem effective altruism tries to address – that is, the more deeply it engages with the world as a political entity – the less distinctive its contribution becomes. 

I think this is a good point, which was also made by Dylan Matthews. Nevertheless, even if effective altruism's contribution is smaller than it first looks, it could still be substantial. The changes many effective altruists have made to their lives bear this out.


[Drawing on Dylan Matthews's Vox piece about EAG] Thus the humanitarian logic of effective altruism leads to the conclusion that more money needs to be spent on computers: why invest in anti-malarial nets when there’s a robot apocalypse to halt? It’s no surprise that effective altruism is popular in Silicon Valley: PayPal founder Peter Thiel, Skype developer Jaan Tallinn and Tesla CEO Elon Musk are all major financial supporters of x-risk research. Who doesn’t want to believe that their work is of overwhelming humanitarian significance?

I really don't buy the "we're all rationalising to ourselves that AI is important" line. All the people around CEA and GiveWell started out focused on global poverty (and US education, in GiveWell's case), and hardly any of us have a background in computer science or technology. We came to support AI safety research after engaging with the arguments for it over a period of years.

Effective altruism, so far at least, has been a conservative movement, calling us back to where we already are: the world as it is, our institutions as they are. MacAskill does not address the deep sources of global misery – international trade and finance, debt, nationalism, imperialism, racial and gender-based subordination, war, environmental degradation, corruption, exploitation of labour – or the forces that ensure its reproduction. Effective altruism doesn’t try to understand how power works, except to better align itself with it. In this sense it leaves everything just as it is. This is no doubt comforting to those who enjoy the status quo – and may in part account for the movement’s success.

One clarification (that Amia also makes) is that effective altruism is only conservative in the sense of not questioning the existing political system. It's radical in the sense that members of the movement want to promote concern for all sentient beings, think you should give most of your money to charity, want to shape the future through AI and so on.

What of the criticism? I think there are a couple of ideas here that need different responses:

1. It's a good point that we should be wary of being biased in favour of actions that preserve the status quo politically. That's the direction we're most likely to be biased in.

2. I think it's also a good point that many in the community engage with an overly narrow range of causes, and have not yet considered the full range of options for doing good. Though this is beginning to change.

3. However, I think others in the community have considered working on areas like international trade, debt, nationalism and so on, and trying to cause systemic change, and rejected them.

In some cases, they might reject the claim that these causes are the most important, e.g. many people are highly uncertain what a reformed economic or political system should look like, so think promoting one has unclear value. Other people in the community simply disagree that these are the major causes of suffering in the world; rather, they might say it's lack of concern for animals, or lack of concern for the long-term future.

More commonly, people in the community would concede these are important causes, but think they're not unusually tractable or neglected, so not the top priority for action, e.g. promoting evidence-based international development seems much more tractable than any of these causes, while existential risks are similarly or more important, but far more neglected.

It seems like Amia simply disagrees with this prioritisation, and thinks we're biased for making it. Moreover, to be fair, none of this is discussed in Will's book beyond the general cause framework.

Finally, Amia's criticisms are a long way from a positive argument that there's something better we could be working on. Saying that "nationalism is a problem" is a long way from knowing what to do about it.

MacAskill tells us that effective altruists – like utilitarians – are committed to doing the most good possible, but he also tells us that it’s OK to enjoy a ‘cushy lifestyle’, so long as you’re donating a lot to charity. Either effective altruism, like utilitarianism, demands that we do the most good possible, or it asks merely that we try to make things better. The first thought is genuinely radical, requiring us to overhaul our daily lives in ways unimaginable to most.

I think there may be a misunderstanding here. Effective altruism claims you should try to do the most good *with a significant proportion of your resources* (operationally defined as over 10%). It's not both claiming you should do the most good and merely make things better.

If you’re faced with the choice between spending a few hours consoling a bereaved friend, or earning some money to donate to an effective charity, the utilitarian calculus will tell you to do the latter. 

If effective altruists really are committed to doing the most good, they should say the same.

I doubt it.

Also, EAs are not aiming to be 100% dedicated to helping others, so can deny it even if utilitarians can't.

MacAskill thinks this self-transcendence – or as close as we non-saints can get to it – is essential if we are going to meet the ethical demands of our day. Wittingly or not, he believes, we are all like A&E doctors, forced to perform triage lest more people suffer and die than have to. What is required is impersonal, ruthless decision-making, heart firmly reined in by the head. This is not our everyday sense of the ethical life; such notions as responsibility, kindness, dignity and moral sensitivity will have to be radically reimagined if they are to survive the scrutiny of the universal gaze. But why think this is the right way round? Perhaps it is the universal gaze that cannot withstand our ethical scrutiny. 

Clearly we think the everyday notion of an ethical life has to change. But I think effective altruism strikes a good middle ground between complete transformation and existing common sense ethics. 

There is a small paradox in the growth of effective altruism as a movement when it is so profoundly individualistic. Its utilitarian calculations presuppose that everyone else will continue to conduct business as usual; the world is a given, in which one can make careful, piecemeal interventions. The tacit assumption is that the individual, not the community, class or state, is the proper object of moral theorising. There are benefits to thinking this way. If everything comes down to the marginal individual, then our ethical ambitions can be safely circumscribed;

How I read this paragraph: effective altruists are likely to miss the most important ways to change the world because we're overly focused on individual actions at the margin, and so fail to consider systemic changes.

Here's how I think an EA should think about whether to advocate for systemic change:

1. Value of changed system = V

2. My probability of bringing about a changed system if I try is P

3. Expected value of working on changing the system = V * P

4. Work on changing the system if this value is greater than the next best alternative.
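As a toy illustration of steps 1-4 (every number here is a made-up placeholder, not an actual estimate of anything):

```python
# Toy expected-value comparison: "push for systemic change" vs. the next
# best alternative. All figures below are illustrative assumptions only.

def expected_value(value_if_success, p_success):
    """Step 3: expected value = V * P."""
    return value_if_success * p_success

V = 1_000_000  # step 1: value of the changed system (arbitrary units)
P = 0.0001     # step 2: probability my effort brings it about

systemic = expected_value(V, P)  # roughly 100 in these made-up units
next_best = 150.0                # value of the next best alternative

# Step 4: work on systemic change only if it beats the alternative.
choice = "systemic change" if systemic > next_best else "next best alternative"
print(choice)
```

On these arbitrary numbers the marginal alternative wins, but the point of the framework is that the answer turns entirely on the estimates of V and P, not on any in-principle exclusion of systemic change.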

I think Amia's criticism could be interpreted as one of the following:

1. EAs will fail even to consider that the value of changing the system is large, because they only think about what individuals can do.

2. Because V*P is really hard to calculate, EAs will ignore this possibility in favor of more "common sense" and comfortable ways to do good that don't question the political system. This is especially true because we tend to think from the individual's perspective. Actually, pushing for systemic change is the best thing to do.

3. The value of pushing for systemic change is low, but it's the right thing to do for non-consequentialist reasons.

4. The value of pushing for systemic change from an individual point of view is low, but that's because we're in a prisoner's-dilemma-style situation. If we could find a way to coordinate, then it would be better to bring about systemic change.

What do I think's actually going on in the heads of most effective altruists when they don't work on large-scale systemic change? I think mostly they're just not sure whether the value of a changed system is large or not. The track record of trying to design a new political and economic system seems bad, and it's really hard to avoid unintended consequences. Instead, it seems much more tractable to push for marginal changes. There may be an element of bias in this thinking however.

I don't think (2) is true - EAs have shown themselves far more willing than most to go for low-probability high-value outcomes, such as in the case of existential risk.

If (4) turns out to be true, that's the most worrying for the movement, though I could conceive of a version of effective altruism that's much more focused on coordination problems than we are currently.

* * *

What are your thoughts? What are the best criticisms in the piece? What should we do differently?









Pretty late to the party, but here are some thoughts.

I think one point that Amia might be making is a criticism of EA's culture. Amia seems to think that EA has a pro-political-status-quo culture. While EA people seem to share a number of basic assumptions about the world, an account of 'how power works' (that Amia would find acceptable) is not one of them. There is no prevailing attitude that capitalist political institutions are the root cause of a number of the world's most serious problems. Given Amia's political commitments, I think her view is that a prerequisite to driving morally valuable systemic change is the epistemic task of accepting a worldview that has been advocated by socialist, feminist, and anti-racist scholars. It is not that EA should place a greater focus on systemic change. Rather, EA doesn't seem to take the epistemic task seriously enough.

If this is right, then it represents an opportunity for improvement. A closely related argument has been made by Kissel (2017). He writes: "... I think Effective Altruism will be less effective in realizing its own ends insofar as it fails to recognize that capitalism restricts the good we can do... I first argue that Effective Altruism and anti-capitalism are compatible in principle by looking at similarities between Effective Altruist theory and some Marxist writing. I then go on to show that the theoretic compatibility can be mirrored in practice... I conclude by suggesting that their reconciliation would lead to better outcomes from the perspective of a proponent of either view. In short, an “Anti-Capitalist Effective Altruism” is not just possible, it’s preferable."


Good dissection, but it's a bit hopeless to try to win a debate like this because you can't really argue when the problem is that we have the wrong worldview and the "ways of talking of global capitalism". If effective altruism stopped dealing with empiricism, probabilities and statistics then it just wouldn't be effective anymore. So we can take people's calls for radical change and try to see if they work according to solid criteria, but that will never satisfy them. They will turn away the moment that you try to bring up V*P because they just don't think that way. And given that the current groups who do support radical change of socioeconomic systems are incredibly ideologically fragmented and seem to fail to muster the level of funding and personal commitment that EA does, it doesn't seem to be a very fruitful target demographic for us to win more support.

I do expect that we will be steadily more focused on evaluating various types of political change in the future, but that is a function of the resources of the movement. The time and money that it would take to do a systematic evaluation of "should we overthrow the chains of the bourgeoisie" would take away from other things. Remember that this way of thinking is scarce outside of continental philosophy and similar subsets of academia. I don't expect it to have a significant impact upon the movement's strength.

I wonder if we could gain headway by emphasizing just how awful the track record of radical political change really is. (I'll give you a hint: it's worse than the track record of bad charities.) But that seems likely to just provoke unnecessary controversy and debate without changing anyone's mind. So the tone and measure of your writing is good. This kind of patient response to criticism helps to keep the rhetoric level calm and low while we focus on more important messages.

As a side note, I would caution everyone to think about the potential downsides if we did evaluation of radical political change. Just like the unfortunate but necessary situation with AI research, it would probably put off a large number of people who would begin to see us as radicals, communists, revolutionaries, or something of the sort. And those would be the people who have the funds and influence that we actually need.

(but I am not an expert in movement building - hopefully someone can correct me if I asserted anything dubious.)

Yes, I'd like to clarify I don't think we should think in terms of "winning the debate" but rather "understanding our critics and seeing what we can learn from them".

Yeah, of course. You've got an A+ attitude on all this.

I really want to pull good insights out of this to improve the movement. However, the only thing I'm really getting is that we should think more about systemic change, which a) already seems to be the direction we're moving in and b) doesn't seem amenable to much more focus than we are already liable to give it, i.e., we should devote some resources but not very much. My first reaction was that maybe Doing Good Better should have spent a little bit of time mentioning why this is difficult, but it's a book, and really had to make sacrifices when choosing what to focus on, so I don't think that's even a possible improvement. I think the best thing to come from this is your realization of potential coordination problems.

While I encourage well-thought-out criticism of the movement and different viewpoints for us to build off of, I can't help but echo kbog's sentiment that this seems a bit too continental to learn from. The feeling I get is that this is one of the many critiques I've encountered that find themselves vaguely uncomfortable with our notions and then paint a gestalt that can be slowly and assiduously associated with various negatives. There's a lot of interplay between forest and trees here, but it's really difficult to communicate when one is trying to work with concrete claims and another is trying to work with associations.

In summation, I think on most of these points (individualism, demandingness, systemic change, x-risk) we are pretty aware of the risky edges we walk along, and can't really improve our safety margins much without violating our own tenets.

I think that might be fair. I was thinking more last night about what behaviour I'd actually change in light of this, and wasn't thinking of many concrete actions. The main area would be to improve how we talk about cause selection so people don't think we're ignoring the issues she raises.

Actually I might take this back. The point about "EA hasn't thought about how to solve coordination problems" (criticism (4) at the end) needs more thought put into it.

I actually thought Amia Srinivasan's article was quite thoughtful, and (together with Dylan Matthews's criticism) it echoes some of my own concerns as a beginner/fringe EA.

  1. How do we broaden the movement? As a brown, female, immigrant 40-something mid-career professional who is not a banker or philosopher, has a family (including aging parents who rely on us), and is math literate but not interested in the detailed calculations that populate the EA pages, I don't feel like there are other EAs who share my profile.

  2. How do we work beyond individual donations to talk about systemic change or show how the current donations/recommendations and effective giving actually help reduce inequality in today's world? What other movements should we be supporting (maybe we are and I am just unaware of this) - immigration reform, corporate tax havens, reparations to colonial countries etc. Should we even be looking at this, or do they lose out in the trade-off?

  3. I think there are two parts to EA - giving a significant portion of your income AND ensuring it goes to effective charities. So, for example, donating $100 million of your $200 million income alone may not count as effective altruism. But do we want to focus on the former or the latter, i.e. get everyone giving 10% or more of their income (and giving it, say, to UNICEF), or get people giving even a small proportion of their income to effective charities? As a marketing professional, this is the classic penetration vs. frequency argument: do you get volume by having everyone consume your product, or by having a small but loyal group consume it more frequently? At least in the consumer packaged goods industry where I work, you first need penetration for scale. So, I would urge us to look at getting everyone on board with the idea, donating even 1-2% of their income but donating it effectively.

  4. How do you combine pure reason with emotion? It is not helpful for most people to shift everything from, say, giving a dollar to the homeless person on the street into charities. I find that I tend to give more to effective altruism charities when I give to the person on the street as well (though, living in New York City, arguably, they don't need it as much). In this context, Adam Grant's book "Give and Take" (not about charity) is very helpful. He calls it an "otherish" giving strategy - one that replenishes the giver as well as those given to. The key idea is that giving in concrete, tangible terms (e.g. volunteering with some students and seeing their scores go up) energizes you to give to much more uphill, abstract causes (like EA). I think this could be an interesting idea - it may dilute the overall idea of EA but would help in the long run.

I wrote an article that is relevant to this topic and I want to post it. However, I'm new here and I don't have any karma points. Can you please give me five so I can post my article?

Thanks! :)

Since effective altruism is committed to whatever would maximise the social good, it might for example turn out to support anti-capitalist revolution....

Yes, it might. And it might turn out to support lebensraum for the German speaking peoples. However, we are committed to using empirical data to study the actual results of policies. As previous attempts to implement these policies caused tens of millions of deaths, alongside much other damage, it is extremely unlikely they maximise the good! Supporting anti-capitalist revolution should be the sort of crazy hypothetical philosophers discuss when they tire of trolley problems, not a plausible issue that can be seriously compared with things that save lives, like AMF.

Do we really need a sophisticated model to tell us that we shouldn’t deal in subprime mortgages

No, we don't 'need' a model to tell us this, because it's obviously false. Just because you have a poor credit history doesn't mean you shouldn't be allowed to get a mortgage, if you can find a willing lender and agreeable terms. Indeed, the creation of the subprime market was once viewed as a progressive cause.

X-risks could take many forms – a meteor crash, catastrophic global warming, plague – but the one that effective altruists like to worry about most is the ‘intelligence explosion’: artificial intelligence taking over the world and destroying humanity. Their favoured solution is to invest more money in AI research. Thus the humanitarian logic of effective altruism leads to the conclusion that more money needs to be spent on computers: why invest in anti-malarial nets when there’s a robot apocalypse to halt? It’s no surprise that effective altruism is popular in Silicon Valley: PayPal founder Peter Thiel, Skype developer Jaan Tallinn and Tesla CEO Elon Musk are all major financial supporters of x-risk research. Who doesn’t want to believe that their work is of overwhelming humanitarian significance?

This paragraph was frustrating to read. X-risk-concerned EAs don't want more money invested in AI research per se. Rather, they want to see more money invested in AI safety research in particular. X-risk-concerned EAs are if anything bearish on spending money to advance generic AI research. Also, none of Thiel, Tallinn, or Musk is an AI researcher, so I don't see why we should think they were attracted to x-risks because it's an idea that gave their work "overwhelming humanitarian significance".

And, I'm frustrated that the author seems to think it's adequate to dismiss the idea of AI risk with what ultimately amounts to an ad hominem attack: "AI risk worries these people, but of course they are the sort of people who would be worried, so there's no need to investigate further". I actually think the opposite argument applies. I would expect Musk and Thiel to be if anything techno-utopians, with their history of creating and funding breakthrough technology. So if they are worried about some future tech that seems like it should make us sit up.

That's a very good point. Promoting concern for AI safety is potentially against the interests of many people involved in the technology industry.

Yeah that bit is a totally disingenuous misunderstanding of what these people are doing.

Worth pointing out that the correctness of the "systemic change" critique may vary considerably by cause area. The arguments for focusing on systemic change in the context of poverty (like points about the global institutional arrangement by Thomas Pogge and the importance of institutions by Daron Acemoglu) are very different from the arguments for focusing on systemic change for animal advocacy.

Great second-level review! Just a few tidbits:

I think there may be a misunderstanding here. Effective altruism claims you should try to do the most good with a significant proportion of your resources (operationally defined as over 10%). It's not both claiming you should do the most good and merely make things better.

I’m of the same perception as Amia Srinivasan. The radical “doing the most good” approach is the one that I associate with EA, but I’ve noticed that others (e.g., Nick Cooney) apply yours. I don’t know which is in the majority, but I would’ve expected it’s mine. Really it might look very similar in the end.
