Here's a link to my entry in the Criticism and Red Teaming Contest.

My argument is that EA’s underlying principles default towards a form of totalitarianism. Ultimately, I conclude that we need a reformulated concept of EA to safeguard against this risk. 

Questions, comments and critiques are welcomed. 

 

EDIT 16 JUNE 2022: Just a quick note to thank everyone for their comments. This is my first full post on the forum and it's really rewarding to see people engaging with the post and offering their critiques.

Comments

I wasn't convinced by your argument that basic EA principles have totalitarian implications.

The argument given seems too quick, and relies on premises that seem pretty implausible to me, namely:

(a) that EA "seeks to displace all prior traditions and institutions"

(b) that it is motivated by "the goal of bringing all aspects of society under the control of [its] ideology"

Given that this is the weakest part of the piece, I think the title is unfortunate.

Thanks for your three comments, all of which make excellent points. To briefly comment on each one:

(1) 

The distinction you draw between (a) do the most good (with your entire life) and (b) do the most good (with whatever fraction of resources you've decided to allocate to altruistic ends) is a really good one. I firmly agree with your recommendation that the EA materials make it clearer that EA is recommending (b). If EA could reformulate its objectives in terms of (b), this would be exactly the type of strengthened weak EA I am arguing for in my piece.

(2) 

Thanks for the links here. All of these are good examples of discussions of a form of weak EA as discussed by Michael Nielsen in his notes and built upon in my piece. I note that in each of the linked cases, there is a form of subjective 'ad-hocness' to the use of weak EA to moderate EA's strong tendencies. I therefore have the same concerns as outlined in my piece. 

(3) 

You've touched upon what was actually (and still is) my second largest concern with the piece (see my response to ThomasWoodside above for the first). 

I'm conscious that totalitarianism is a loaded term. I'm also conscious that my piece does not spend much time kicking the tyres of the concept. I deliberated for a while as to whether the piece would be stronger if I found another term, or limited my analysis to totalisation. I expect that the critique you've made is a common one amongst those who did not enjoy the piece.

My rationale for sticking with the term totalitarianism was twofold:

(A) my piece argues that we need to take (what I argue are the) logical outcomes of strong EA seriously, even if such consequences are clearly not the case today. As set out in my piece, my view is that the logical outcomes of an unmitigated form of strong EA would be (i) a totalising framework (i.e. it would have the ability to touch all human life), and (ii) a small number of centralised organisations which are able to determine the moral value of actions. When you put these two outcomes together, there is at least the potential for an ideology which I think fits quite neatly into Dreher's definition of totalitarianism as used in my piece and applied in your comment above. I therefore reached the view that to duck away from use of the term would be unfaithful to my own argument, as it would be turning a blind eye to what I see as a potential strong EA of tomorrow simply because of the state of EA today.

(B) I thought totalitarianism was the best way of capturing and synthesising the two separate strains of my argument (externalisation and totalisation). Totalisation is only one element of this.

Thanks again for your really engaging comments. 

This reminds me of Adorno and Horkheimer's Dialectic of Enlightenment, which argues, for some of the same reasons you do, that "Enlightenment is totalitarian." A passage that feels particularly related:

For the Enlightenment, whatever does not conform to the rule of computation and utility is suspect.

They would probably say "alienation" rather than "externalization," but have some of the same criticisms.

(I don't endorse the Frankfurt School or critical theory. I just wanted to note the similarities.)

One thing to consider is moral and epistemic uncertainty. The EA community already does this to some extent (for instance, MacAskill's Moral Uncertainty, Ord's Moral Parliament, and the unilateralist's curse), but there is an argument that it could be taken more seriously.

This is a really interesting parallel - thank you!  

It ties neatly into one of my major concerns with my piece: whether it can be interpreted as anti-rationality / a critique of empiricism (which is not the intention).

My reflexive reaction to the claim that "enlightenment is totalitarian" is fairly heavy scepticism (whereas, obviously, I lean in the opposite direction as regards EA), so I'm curious what distinctions there are between the arguments made in Dialectic and those made in my piece. I will have a read of Dialectic and think through this further.

Strong EA "doing the most good", which has risks of slipping to "at any cost" and thus totalitarianism as you say, perhaps should be called "optimized altruism."

Thanks for engaging with my piece and for these interesting thoughts - really appreciate it. 

I agree that, on a personal level, turning 'doing the most good' into an instrumental goal towards the terminal goal of 'being happy' sounds like an intuitive and healthy way to approach decision-making. My concern, however, is that this is not EA, or at least not EA as embodied by its fundamental principles as explored in my piece.

The question that comes to my mind as I read your comment is: 'is instrumental EA (A) a personal ad hoc exemption to EA (i.e. a form of weak EA), or (B) a proposed reformulation of EA's principles?'

If the former, then I think this is subject to the same pressures as outlined in my piece. If the latter, then my concern would be that the fundamental objective of this reformulation is so divorced from EA's original intention that the concept of EA becomes meaningless. 

I think J.S. Mill's On Liberty offers a compelling argument for why utilitarians (and, by extension, Strong EAs) ought to favour pluralism, "experiments in living", and significant spheres of personal liberty.

So, as a possible suggestion for the "What should EA do?" section: Read On Liberty, and encourage other EAs to do likewise. (In the coming year I'll be adding a 'study guide' on this to utilitarianism.net, which should be more accessible to a modern audience than the 19th-century original.)

fwiw, my sense is that more EAs already share a Millian ethos than a totalitarian one! But it's certainly important to maintain this.

Thanks for the recommendation. This dovetails nicely with my 4th recommendation (identify a firm philosophical foundation for the weakened form of EA I am proposing). The 'spheres of personal liberty' concept sounds like a decent starting point for a reformulation of the principle. 

Hi, I enjoyed your article. Parts of this remind me of Popper's "Utopia and Violence" in Conjectures and Refutations. Given that (strong) longtermist philosophy leads one to consider the value of an action in light of how much it could help bring about a particular utopia (often a techno-utopia), you might find inspiration to expand your critique in Popper's essay. (I don't want to endorse any specific view here, I just thought this might help you build a better argument).

Some quotes:

That the Utopian method, which chooses an ideal state of society as the aim which all our political actions should serve, is likely to produce violence can be shown thus. Since we cannot determine the ultimate ends of political actions scientifically, or by purely rational methods, differences of opinion concerning what the ideal state should be like cannot always be smoothed out by the method of argument. They will at least partly have the character of religious differences. And there can be no tolerance between these different Utopian religions. Utopian aims are designed to serve as a basis for rational political action and discussion, and such action appears to be possible only if the aim is definitely decided upon. Thus the Utopianist must win over, or else crush, his Utopianist competitors who do not share his own Utopian aims, and who do not profess his own Utopianist religion.

But he has to do more. He has to be very thorough in eliminating and stamping out all heretical competing views. For the way to the Utopian goal is long. Thus the rationality of his political action demands constancy of aim for a long time ahead; and this can only be achieved if he not merely crushes competing Utopian religions, but as far as possible stamps out all memory of them. 

[...]

Work for the elimination of concrete evils rather than for the realization of abstract goods. Do not aim at establishing happiness by political means. Rather aim at the elimination of concrete miseries. Or, in more practical terms: fight for the elimination of poverty by direct means--for example, by making sure that everybody has a minimum income. Or fight against epidemics and disease by erecting hospitals and schools of medicine. Fight illiteracy as you fight criminality. But do all this by direct means. Choose what you consider the most urgent evil of the society in which you live, and try patiently to convince people that we can get rid of it. 

But do not try to realize these aims indirectly by designing and working for a distant ideal of a society which is wholly good. However deeply you may feel indebted to its inspiring vision, do not think that you are obliged to work for its realization, or that it is your mission to open the eyes of others to its beauty. Do not allow your dreams of a beautiful world to lure you away from the claims of men who suffer here and now. Our fellow men have a claim to our help; no generation must be sacrificed for the sake of future generations, for the sake of an ideal of happiness that may never be realized. In brief, it is my thesis that human misery is the most urgent problem of a rational public policy and that happiness is not such a problem. The attainment of happiness should be left to our private endeavours.

Thanks for this, and I can definitely see the parallels here. 

Interestingly, from an initial read of the extracts you helpfully posted above, I can see Popper's argument working for or against mine. 

On one hand, it is not hard to identify a utopian strain in EA thought (particularly in long-termism, as you have pointed out). On the other, I think there is a strong case to be made that EA is doing exactly what Popper suggests when he says: "Work for the elimination of concrete evils rather than for the realization of abstract goods. Do not aim at establishing happiness by political means. Rather aim at the elimination of concrete miseries." I see the EA community's efforts in areas like malaria and direct cash transfers as falling firmly within the 'elimination of concrete evils' camp.

I agree 100% that the EA community's efforts in areas like malaria and direct cash transfers fall quite firmly within the 'elimination of concrete evils' camp. IIRC you differentiate between the philosophical foundations and the actual practice of effective altruism in your essay. So even if most EA work currently falls within the aforementioned camp, the philosophical foundations might not actually imply this.

I'm skeptical of the section of your argument that goes "weak EA doesn't suffer from totalization, but strong EA does, and therefore EA does."

The presence of a weak EA does not undermine the logic of a strong EA. If EA’s fundamental goal is to achieve “as much [good] as possible”, its default position will always point towards totalisation.

Why do you take strong EA as the "default" and weak EA as something that's just "present"? I could equally say

The presence of a strong EA does not undermine the logic of a weak EA. If EA's fundamental goal of achieving as much good as possible is subject to various self-imposed exemptions, its default position does not point towards totalization.

Adjudicating between these boils down to whether strong EA or weak EA is the better "true representation" of EA. And in answering that, I want to emphasize - EA is not a person with goals or positions. EA is what EAs do. This is normally a semantic quibble because we use "EA has the position X" as a useful shorthand for "most EAs believe X, motivated by their EA values and beliefs". But making this distinction is important here, because it distinguishes between weak EA (what EAs do) and strong EA (what EAs mostly do not do). If most EAs believe in and practice weak EA, then I feel like it's the only reasonable "true representation" of EA.

You address this later on by saying that weak EA may be dominant today, but we can't speak to how it might be tomorrow. This doesn't feel very substantial. Suppose someone objects to utilitarianism on the grounds "the utilitarian mindset could lead people to do horrible things in the name of the greater good, like harvesting people's organs." They then clarify, "of course no utilitarian today would do that, but we can't speak to the behavior of utilitarians tomorrow, so this is a reason to be skeptical of utilitarianism today." Does this feel like a useful criticism of utilitarianism? Reasonable people could disagree, but to me it feels like appealing to the future is a way to attribute beliefs to a large group even when almost nobody holds them, because they could hold those views.

Moreover, I think future beliefs and practices are reasonably predictable, because movements experience a lot of path-dependency. The next generation of EAs is unlikely to derive their beliefs just by introspecting towards the most extreme possible conclusions of EA principles. Rather, they are much more likely to derive their beliefs from a) their pre-existing values, and b) the beliefs and practices of their EA peers and other EAs whom they respect. Both of these are likely to be significantly more moderate than the most extreme possible EA positions.

Internalizing this point moderates your argument to a different form, "EA principles support a totalitarian morality". I believe this claim to be true, but the significance of that as "EA criticism" is fairly limited when it is so removed from practice.

I agree with the following statement, which is well put:

EA needs to find a better way to articulate its relationship with the individual and with personal agency.

I think there are some good examples of this, but they're not sufficiently prominent in the introductory materials.

One I saw recently, from Luke Muehlhauser:

  1. I was born into incredible privilege. I can satisfy all of my needs, and many of my wants, and still have plenty of money, time, and energy left over. So what will I do with those extra resources?
  2. I might as well use them to help others, because I wish everyone was as well-off as I am. Plus, figuring out how to help others effectively sounds intellectually interesting.
  3. With whatever portion of my resources I’m devoting to helping others, I want my help to be truly other-focused. In other words, I want to benefit others by their own lights, as much as possible (with whatever portion of resources I’ve devoted to helping others).

In a not-very-prominent article in the Key Ideas series, Ben Todd writes:

One technique that can be helpful is setting a target for how much energy you want to invest in personal vs. altruistic goals. For instance, our co-founder Ben sees making a difference as the top goal for his career and forgoes 10% of his income. However, with the remaining 90% of his income, and most of his remaining non-work time, he does whatever makes him most personally happy. It’s not obvious this is the best tradeoff, but having an explicit decision means he doesn’t have to waste attention and emotional energy reassessing this choice every day, and can focus on the big picture.

There's also You have more than one goal, and that's fine by Julia Wise.

Thanks for the post. I'll post some quick responses, split into separate comments...


I agree that "do the most good" can be understood in a totalising way. One can naturally understand it as either:

(a) do the most good (with your entire life).

(b) do the most good (with whatever fraction of resources you've decided to allocate to altruistic ends).

I read it as (b).

In my experience, people who think there are strong moral arguments for (a) tend to nonetheless think that (b) is a better idea to promote (on pragmatic grounds).

I've long thought it'd be good if introductions to effective altruism would make it clearer that:

(i) EA is compatible with both (a) and (b)

(ii) EA is generally recommending (b)
