
Background

Hello all! This is my first ever post on the EA Forum, though I have been lurking around these parts for a little while. This linkpost is for my very, very tentative attempt to discuss whether EAs should adopt a deeper form of value-pluralism, one that considers utilitarianism to be one possible compatible moral theory among others, partly inspired by Tyler Cowen's recent comments.

I wrote this because I sincerely want effective altruism to reach and resonate with many more people, and I think more overt value pluralism might help with precisely that. So although this post contains bits of criticism here and there, that criticism is only ever intended to be, above all, friendly and constructive. (And I certainly do not wish to castigate those who hold the kind of strong utilitarian beliefs I discuss!)

I should also note that this post is copied across from my blog, which means it suffers from a few peculiarities: it is written in probably unnecessarily flowery language, I have been told it is much too long to generate much engagement, and it assumes much less familiarity with effective altruism than I suspect everyone here actually has. Nevertheless, I thought I'd avoid confusion by leaving it as-is.

Thank you for reading - I'd be very interested to hear what you think!

Introduction

Before we get onto the topic of this post — the case for greater moral pluralism within effective altruism — we ought to work out whether this really needs to be raised at all. Is effective altruism today all that distinctively committed to a particular moral theory? In other words, is it (currently) a utilitarian community?

Oddly, the answer to this question depends on who you ask, and how you ask it.

If you ask most people, they will say yes. When most people think of effective altruism, they think not of all the demonstrable good effective altruists have done, but of Sam Bankman-Fried. And of the many distinctive things about the worldview of Sam Bankman-Fried, one very obvious, widely-known thing, aside from the most obvious, widely-known thing, is that he believes in a full-on, no-holds-barred, zero caveat, bullet-biting, proudly philosophical utilitarianism:

COWEN: Are there implications of Benthamite utilitarianism where you yourself feel like that can't be right; you're not willing to accept them? What are those limits, if any?

BANKMAN-FRIED: I'm not going to quite give you a limit because my answer is somewhere between "I don't believe them" and "if I did, I would want to have a long, hard look at myself."

And though it’s hard to prove this, I reckon that this is what most people think effective altruism necessarily involves in general. That is, the public perception seems to be that you can’t be an effective altruist unless you’re capable of staring the repugnant conclusion in the face and sticking to your guns, like Will MacAskill does in his tremendously widely-publicised and thoughtfully-reviewed book. (Indeed a lot of criticism levelled at effective altruism, as I will discuss in this post, operates under this assumption, self-consciously or not.)

Yet if you asked most prominent effective altruists this question, you would receive the opposite answer. For instance, MacAskill’s “The definition of effective altruism” contains this:

Misunderstandings of effective altruism

Misconception #1: Effective altruism is just utilitarianism

And Benjamin Todd, who co-founded the excellent 80,000 Hours careers advisory service with MacAskill and wrote up the above paper here, says:

"Doing the most good" makes it sound like EA says you're morally obligated to focus 100% on the thing that most helps others (ie utilitarianism). I think the best version of EA doesn't make any moral demands at all.

So that’s what you’d hear if you were to ask community leaders explicitly. Yet if you were more interested in reading between the lines, you would be forgiven for thinking the answer was quite different. Effective altruists’ revealed preferences seem much more in line with the public’s perception.

Effective altruists adopt utilitarianism at much higher rates than the public at large, and prominent effective altruists seem especially committed to this moral vision. In practice, therefore, the community operates with a default, background assumption of strong utilitarian thinking; even notionally acclaimed attempts to challenge this remain, in the end, very much outlier views. Besides that, the public’s perception of effective altruism is not based on nothing: it’s based on the works and ideas of the movement’s most famous figures, and in substance these works are all what we might call ‘Very Clearly Utilitarian’. (I thank Peter McLaughlin for driving this point home to me, among other valuable feedback.)

That is not to say that MacAskill’s ‘not-just-utilitarianism’ definition of effective altruism is misconceived. (Indeed in some ways this post is just an attempt to expand upon that idea.) But it is to say that Tyler Cowen was quite right to have described effective altruism as an offshoot of utilitarianism in some important respects, in spite of messaging that suggests otherwise.

In particular, Cowen’s view is that there are:

two ways in which effective altruism to me seems really quite similar to classical utilitarianism. The first is simply a notion of the great power of philosophy, the notion that philosophy can be a dominant guide in guiding one's decisions[, whilst the other] is a strong emphasis on impartiality.

Moreover, for him this inheritance is a mixed blessing. As he puts it:

At current margins, I'm fully on board with what you might call the EA algorithm. At the same time, I don't accept it as a fully triumphant philosophic principle that can be applied quite generally across the board, or as we economists would say, inframarginally.

Interestingly, what Cowen says here is surprisingly consistent with — albeit much more sympathetic in tone than — a lot of far more overt criticism of effective altruism as of late. As I discuss in this post, a perpetual theme of this body of work is that effective altruism’s deeds (i.e., doing charity effectively) are good; the ‘triumphant philosophic principle’ that comes with it (in the form of an all-explanatory utilitarianism) is, for one reason or another, a serious limitation.

Of course, a contented triumphant utilitarian might naturally respond to this that we simply can’t have one without the other. But: is that really true?

For effective altruists, that is certainly worth asking — whether critics’ distaste for this form of utilitarianism resonates or not. The utilitarian moral theory that presently encloses effective altruist thinking is not only strongly associated with the people who have brought it into disrepute; it is also highly particular: as I try to show in what follows, there are plenty of people who could be amenable to effective altruism but balk at it because they do not share what they (understandably) believe to be its necessary moral theory. Therefore, for big-tent-seeking effective altruists, triumphant utilitarianism incurs a cost.

So, with this in mind, in this post I attempt to do two things.

In the first two sections, I try to show that effective altruism and moral philosophy do not naturally run together. Though the two seem linked, effective altruists are up to something substantively and categorically different to moral theorists, utilitarian or otherwise. This means that the justification of their actions need not depend on utilitarianism at all.

This argument would, however, mean abandoning any attempt to turn effective altruism into an all-explanatory moral theory. Is it worth paying the price?

In the final two parts of this essay, I try to argue so. Not only can utilitarianism be fully separated out from effective altruism, it ought to be. The pay-off would not just be improved public relations. It would open up effective altruism to a genuine methodological pluralism. And for those interested in movement-building, or maximising the amount of charitable work effective altruism gets up to, that methodological pluralism would not only be an intrinsic good: it would make effective altruism an altogether more natural home for many of those who currently believe themselves to be better-suited to a life outside it, too.

In some respects, this is hardly new. The view that there are good reasons for effective altruism to distance itself from utilitarianism is clearly tacitly taken already, or it wouldn’t have made sense for MacAskill and others to make the claims discussed above. Nonetheless, splitting the two apart in practice would require unpicking many more of the ties that demonstrably still exist.

In other words, this is inevitably an annoyingly lengthy post, but do bear with me.

I. The division of labour between effective altruists and philosophers

To begin prising apart effective altruism and moral philosophy, I want to return to a rather thorough critique of effective altruism that Amia Srinivasan wrote, seven-and-a-bit years ago, for the London Review of Books, which ends as follows:

There is a small paradox in the growth of effective altruism as a movement when it is so profoundly individualistic. Its utilitarian calculations presuppose that everyone else will continue to conduct business as usual; the world is a given, in which one can make careful, piecemeal interventions. The tacit assumption is that the individual, not the community, class or state, is the proper object of moral theorising. There are benefits to thinking this way. If everything comes down to the marginal individual, then our ethical ambitions can be safely circumscribed; the philosopher is freed from the burden of trying to understand the mess we're in, or of proposing an alternative vision of how things could be. The philosopher is left to theorise only the autonomous man, the world a mere background for his righteous choices. You wouldn't be blamed for hoping that philosophy has more to give. 

As it happens, I think this criticism is question-begging. Srinivasan has decided in advance that we need ‘an alternative vision of how things could be’ — a vision of systemic reform — and criticises the logic of effective altruism for its failure to reach the same verdict. She doesn’t really want to debate whether that vision is necessary. For her, it’s a given. So effective altruism is bound to fail.

But that says something in itself. Effective altruists have a very different idea of what ‘positive change’ looks like compared to many philosophers and political theorists, like Srinivasan. Incremental action with relatively certain positive consequences has a higher ‘expected value’, in effective altruism’s terms, than the kind of systemic change that Srinivasan has in mind, which is almost unavoidably altogether more morally uncertain. (Srinivasan may think we need to dismantle capitalism, for instance, but some of us believe global capitalism is good; this kind of question ultimately turns on our deeper values, which are too subjective to be ranked as ‘better’ or ‘worse’ with any degree of confidence — or humility.)
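To make that reasoning concrete, here is a deliberately crude sketch of the expected-value comparison at work. Every probability and payoff below is invented purely for illustration; the point is the shape of the comparison, not the numbers.

```python
# Toy expected-value comparison; every figure is invented for illustration.

# An incremental intervention (say, a proven health charity):
# a near-certain, modest benefit.
p_success = 0.95
benefit = 100
ev_incremental = p_success * benefit  # 95.0

# A systemic overhaul: an enormous benefit if your worldview is right,
# a real chance of achieving nothing, and some chance of active harm.
systemic_outcomes = [
    (0.10, 10_000),   # the new system works as hoped
    (0.60, 0),        # nothing much changes
    (0.30, -5_000),   # unintended consequences dominate
]
ev_systemic = sum(p * value for p, value in systemic_outcomes)  # -500.0

print(f"incremental: {ev_incremental}, systemic: {ev_systemic}")
```

On stipulated numbers like these, the modest option wins; a critic like Srinivasan would simply dispute the probabilities, which is precisely the deeper disagreement about worldviews described above.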

To me (though I am not the first to say this), this disagreement about the nature of positive change reflects the fact that effective altruists and philosophers are up to two very different things. Effective altruists are primarily concerned with how best to use time and money for charitable ends: they seek to guide resource allocations in the here-and-now. Philosophers, by contrast, spend their time thinking about value-judgements in much more abstract ways, and, importantly, often over much longer time-frames (leaving aside longtermism for now). There is a clear division of labour between the two.

The difference in timescale matters. The longer you look into the future, the more you will need to confront the irresolvable nature of value conflict. At that scale, there are a vast range of different possible outcomes, and a similarly vast range of theoretical frameworks to consider in deciding which to pursue. For the effective altruist, all this can do is weigh down an expected value calculation: how can we decide whether something delivers value if we face so many choices about what ‘value’ even looks like? Hence effective altruism struggles to have all that much to say about truly big picture philosophical questions — just as Srinivasan argued.

But that’s fine. Insofar as effective altruism is, fundamentally, a way of helping people work out how best to use their resources to do good, it simply doesn’t need to worry about any of these long-term philosophical quandaries.

In the short term, it is perhaps surprisingly easy to avoid having to quibble in big philosophical terms about what constitutes a ‘good deed’. In practice, most moral systems converge on some basic commitments at this time scale, such as that things like ‘generosity’, ‘helping the poor’ and ‘effectiveness’ are all straightforwardly, incontrovertibly good. And such moral commitments are the only ones that effective altruists need in order to start working out how to allocate resources towards charitable ends in the manner that they do today.

Perhaps the easiest way to demonstrate this is to show that even effective altruism’s most implacable and embittered critics don’t actually disagree with effective altruists’ action-guiding recommendations — which implies they really are derived from uncontroversial foundations. For instance, part-way through her LRB critique, Srinivasan reveals she is in fact entirely on board with effective altruism’s view of what might be a ‘good’ way to tackle poverty, here and now:

Halfway through reading the book I set up a regular donation to GiveDirectly, one of the charities MacAskill endorses for its proven efficacy. It gives unconditional direct cash transfers to poor households in Uganda and Kenya.

Or for another example: Jimmy Lenman, in an otherwise scathing, anti-utilitarian attack piece, remains full of praise for what effective altruists actually do:

This movement originated with some young people in the late noughties determined to dedicate themselves to doing good in effective ways, many publicly pledging to give away a large percentage of their income over their lifetimes to charitable causes. It was all, at least at first, about mosquito nets and parasitic worm infections, advocating and promoting cost-effective charities doing good things for public health in some of the poorest countries in the world. 

Which is, of course, wonderful. Generosity is an important virtue and people exercising it on a large scale deserve our admiration and respect.

And somewhat amusingly, even Émile Torres, a strong contender for effective altruism’s worst-faith critic (albeit in a hotly-contested field), still did this:

It is, it seems, really quite hard to find issue with the way effective altruists go about practicing their side of the moral division of labour — resource allocation. Everyone with their head screwed on thinks that donating money to charity in effective ways is good — whether they are deontologists, virtue ethicists, or goodness-knows-whatever-else. And I think most would also recognise that the charitable sector could really, really do with some greater critical thinking.

The objections begin to creep in, instead, when these philosopher-critics start to fear that effective altruists are encroaching on their territory (that is, when they fear they are attempting philosophy), something for which, they imply, effective altruism cannot provide the analytical tools, since those tools were built for a very different enterprise.

Now we might plausibly bicker about whether these critics draw on fair examples to make their argument about the paucity of effective altruism as a philosophy, or whether they have really understood the goals and work of effective altruists correctly before attacking them. (Among various other conceivable objections.) But I think a much neater proposal might be this: to accept the point entirely. Effective altruism isn’t, or at least shouldn’t be, a big picture, ‘triumphant philosophic principle’. And nor, then, is it a cognate of utilitarianism.

II. A truly non-utilitarian effective altruism

This would not cause so many problems as one might think. Granted, the perception that effective altruism derives from triumphant utilitarianism is rather strong. Granted, lots of prominent effective altruists are in point of fact utilitarians. And granted, there is some — apparent — methodological overlap.

But we have already looked at how the action-guiding role of effective altruism is basically uncontroversial. It is uncontroversial because the belief that effective altruism can ‘do good’ is not contingent on holding any particular philosophical framework. Non-utilitarians can (and do) support what effective altruists do.

And in practice there is a relatively clear distinction between the two concepts.

Utilitarianism is a philosophical doctrine of the ‘big picture’, engaging-with-value-conflict type. The utilitarian claim is that the various values that divide people are actually commensurable with one another, via the higher value of ‘utility’. In other words, it doesn’t matter if some cherish truth where others value beauty: we can resolve these tensions by working out where the utility lies beneath. Now, obviously, utilitarians rely on utilitarianism to think about the decisions they make too, so it is action-guiding as well as ‘big picture’ thinking — but what makes ‘utilitarianism’ as a term mean anything (at least in a normative, triumphant-principle sense) is that utilitarians use its logic to reach distinctive conclusions, which other moral frameworks would not advise.

Effective altruists, by contrast, need not think that values are commensurable, nor that utility is in any way an ultimate arbiter of philosophical life. None of this is necessary for their overriding objective of putting resources to good use. If resource allocation was dependent on such ethical positions, the deeds and aims of (classical) effective altruists would be morally controversial in a way they simply are not. (Again, leaving aside longtermism for now.)

In fact, we can go further: effective altruism presupposes that you cannot make values commensurate with one another in the way utilitarianism suggests you can. That is why effective altruism discounts systemic change so heavily in its expected value calculations, to the point of conflict with Srinivasan — its hostility to large-scale philosophical conclusions is, implicitly, predicated on the very inescapability of moral conflict. By contrast, a committed utilitarian ought to be much more confident that it is possible to judge what moral ‘improvement’ looks like.

Another way to put it is this: utilitarianism is a worldview; effective altruism is a theory of resource allocation, which in practice operates under precisely the assumption that one ought not place much confidence in any given worldview.

This might seem to put quite a bit of distance between my conception of effective altruism and that of someone like MacAskill, who is clearly keen to leverage it into a form of utilitarian moral theory. Yet I think it fits quite neatly with the response Todd gave to Srinivasan’s argument at the time:

What do I think's actually going on in the heads of most effective altruists when they don't work on large-scale systemic change? I think mostly they're just not sure whether the value of a changed system is large or not. The track record of trying to design a new political and economic system seems bad, and it's really hard to avoid unintended consequences. Instead, it seems much more tractable to push for marginal changes.

Of course, there is an obvious objection to my argument. Effective altruism looks like what one imagines ‘doing utilitarianism’ looks like. Effective altruists weigh up various competing factors, work out what is most ‘good’ (a single, utility-like metric), and do that. Often they put numbers on subjective values too.

But, again, consider the implications of the above. If I am right that effective altruism is a decision mechanism for allocating charitable resources, then these ostensibly ‘utilitarian’ features are in fact not distinctively utilitarian at all. Whenever we consciously and deliberatively make complex decisions, we have to weigh up competing, non-commensurable factors, and somehow combine them into a single verdict. For instance, if you have two job offers — one that pays £30k, say, and one £40k — and the worse-paying job seems more fun than the better-paying one, you are forced to trade off two values (remuneration and enjoyment) that are not easily or objectively tallied, and come to a single aggregated decision about which is best on net. If you do so, are you a utilitarian? No! Not unless you believe ‘utilitarianism is when you decide things’.
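To spell the job-offer example out, here is a minimal sketch of the kind of aggregation any deliberate decision involves. The £50k normalisation ceiling and the 50/50 weighting are entirely made up for this illustration; nothing in the procedure is distinctively utilitarian, as the single score exists only because a choice forces one.

```python
# Aggregating two non-commensurable factors into one verdict.
# The £50k normalisation ceiling and the 50/50 weighting are arbitrary,
# invented purely for this illustration.

def score_offer(salary_gbp: int, fun: float, salary_weight: float = 0.5) -> float:
    """Combine pay and a subjective 0-1 enjoyment rating into one number."""
    normalised_salary = min(salary_gbp / 50_000, 1.0)
    return salary_weight * normalised_salary + (1 - salary_weight) * fun

offer_a = score_offer(40_000, fun=0.4)  # better paid, less fun  -> 0.60
offer_b = score_offer(30_000, fun=0.9)  # worse paid, more fun   -> 0.75

print("Take offer", "A" if offer_a > offer_b else "B")  # Take offer B
```

Change the weights and a different offer wins; the aggregation itself is just what deciding looks like, whoever is doing it.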

What is unique about effective altruists is not that they are ‘more utilitarian’ in how they think about decisions, but rather the extent to which they think about these decisions, and proactively regulate themselves in order to make the most rational choices possible in the circumstances. In other words, others who have been interested in improving the lives of those in need have often not, historically speaking, thought in the same level of detail — or with the same focus on succeeding in their stated aims — as effective altruists endeavour to. (For a really impressive recent example of this, see this animal welfare charity’s self-evaluation.) That does not mean those other charitable givers were less utilitarian.

Now you might instead argue that the utilitarianism creeps in at the objective-setting level, rather than the resource-allocation level: ‘improving the lives of those in need’, an objective of effective altruism, might be deemed a ‘more utilitarian’ goal than, for example, ‘improving the quality of opera’, as per arts philanthropists. Yet, once again, I think it would be absurd to claim that only utilitarianism offers the tools to allow someone to decide which of ‘opera’ and ‘those in need’ is more important. (Ask Amartya Sen, famed non-utilitarian, what he thinks.) That is, utilitarianism might be why, as a point of historical fact, effective altruists fixate on impact, but it needn’t be why in theory.

The non-necessity of utilitarianism is why effective altruists’ actual charitable endeavours are so uncontroversial, even by Lenman’s or Srinivasan’s lights. Moreover, it reminds us what is truly valuable and special about effective altruism as a whole. Here is Dylan Matthews on Open Philanthropy, from 2015:

What’s radical about GiveWell and Open Phil is their commitment to do substantial empirical research before deciding on causes. [...] "The vast majority of donors aren’t interested in doing any research before making a charitable contribution," Paul Brest, former president of the Hewlett Foundation, wrote in an article praising effective altruism. "Many seem satisfied with the warm glow that comes from giving; indeed, too much analysis may even reduce the charitable impulse." By contrast, effective altruists are obsessed with doing research into cause effectiveness. Open Phil has a literal spreadsheet ranking a number of different causes it might invest in.

That is, what has always made effective altruism valuable is how self-consciously it thinks through its objectives and how to meet them. Not how utilitarian it is.

That said, there is a downside to this. If you accept a non-utilitarian account of effective altruism, you ought to relinquish the view that effective altruism can or should be a form of triumphant moral philosophy, a way of explaining the world’s problems, or a way of adumbrating a future utopia. To find any of those things you will need to believe something else on top.

Equally, you are now free to articulate more or less whatever moral philosophy you like. And if your primary interest is in making an impact now, does the loss of an all-explanatory form of effective altruism really bear on you all that much anyway? Perhaps a non-utilitarian effective altruism could even be quite freeing.

III. The freedom of non-utilitarianism

Why does all of this matter? Well, I reckon this account of effective altruism would give the movement much broader appeal, whilst (as I hope I have shown) fully retaining what it is about effective altruism that makes it so valuable today.

There are a few reasons for this.

First, a less overtly triumphant-principle effective altruism would be far less susceptible to criticism, which seems to be a drag on the movement in various ways, not least in wasting effective altruists’ time. To my mind it is far better to get on with doing good in relative quiet than attract the public attention (and the inevitable, often ill-informed backlash) that comes with broad philosophical ambitions. Narrower efforts to ‘do philanthropy well’ without trying to make large claims in the arena of moral philosophy — think the Gates Foundation — do not get the kind of backlash that effective altruism gets.

Second, this version of effective altruism could be much more explicitly pluralistic in its approach to doing good, which would be good. An excellent recent forum post by Ryan Briggs argues that Sen’s capability approach ought to serve as an important complement to other ways of thinking about wellbeing. Of course effective altruists should be thinking this broadly about how to conceptualise improvements in human welfare — but if you hold that effective altruism is ‘utilitarianism applied’, you will find yourself predisposed against the value of exactly this kind of thing.

Pluralism matters for more reasons than you might think. Lots of people, as McMahan has written, criticise effective altruism because they agree with Bernard Williams’s criticisms of utilitarianism, and they think those criticisms straightforwardly apply to effective altruism. But as someone who broadly shares Williams’s criticisms of utilitarianism myself, it is very important to me that effective altruists are able to show why they aren’t subject to the same claims. It should be clear there is ample space for non-utilitarians in the community, which can only be true if we do not think of effective altruism as applied utilitarianism.

Finally, and most speculatively, this view of effective altruism might help us think about a case for longtermism that doesn’t rely on population ethics, or arguments involving colonising space, and so on.[1] So I will finish by briefly discussing this.

IV. The longtermist turn

Throughout this post I have tried to talk about classic effective altruist activity — that is, global health and wellbeing efforts — in isolation, deferring discussion of longtermism until now. And that is because I think longtermism complicates this story, in some potentially open-ended ways.

It strikes me as clearly true that the longtermist turn is effective altruism’s most controversial intellectual development. It is also, it seems to me, most obviously ‘the clear next step for effective altruism’ for those who view effective altruism in the triumphant-moral-theory sense I have critiqued here. (Eric Schliesser’s multi-part criticism of MacAskill is, for my money, the most interesting stuff that has been written about any of this, if you’re keen.)

So on one hand there are understandable reasons for the widespread inclination to reject longtermism. If longtermism only follows from effective altruism’s pretensions to moral philosophy, then maybe it is one extension too far. Maybe its troubled reception is yet another sign that the attempt to elevate effective altruism into a moral theory is simply doomed.

But in my view longtermism doesn’t only follow from triumphant utilitarianism. That is, you can think effective altruism is doomed to fail as a moral theory, and yet still readily believe that the longtermist turn is a valuable complement to effective altruism’s traditional areas of interest.

In part, that is because the existential risks that longtermist charitable activity has so far concerned itself with are not actually very far away or removed at all. Experts think artificial general intelligence is coming within the next 40 years. Prediction markets think it is coming within sixteen. (Markets are less bullish.)

And artificial general intelligence is obviously a risk. We can argue about how much of a risk it is, and whether Eliezer Yudkowsky is more doomster than booster or whatever, but it would be obviously mad to ignore it completely.

As a recent tweet put it:

It’s perplexing how mentioning AGI within academia gets you weird looks despite the world’s leading labs being very explicitly devoted to it. Its as if physics profs in the 40’s were to shun all speculation of the far-out “nuclear weapons” while the Manhattan project was underway

So perhaps there are two reasons why artificial intelligence risk research is controversial. One is the understandable concern that it might simply be the product of the imaginations of people with indisputably ‘out there’ views about population ethics or space colonisation (not to mention far worse), and, as the critics of these people point out, if this kind of approach comes at the cost of traditional global wellbeing efforts, then that is really very bad indeed.[2]

But as that tweet suggests, perhaps this research is also controversial because people have simply overlooked how significant artificial intelligence actually is. Its consequences are going to be very big, and they are difficult to predict. So although it is perfectly defensible to be deterred by longtermism’s philosophical origins, to write off longtermists’ actual charitable work — their acts, not their philosophy, to use the distinction I have tried to draw across this essay — would be to make a major error indeed.

In other words, just as the non-utilitarian justification of effective altruism is that it seeks to correct for mistakes and irrationality in international development, there is a non-philosophical justification of longtermism, and it’s that it corrects for mistakes and irrationality in how most people perceive the course and scope of technological change. This applies not just to artificial intelligence, but also to longtermism’s other areas of focus, like bioterrorism, pandemics, or nuclear war.

And this justification, it ought to be added, should be similarly uncontroversial. It should be obvious to people of all philosophical inclinations that it is bad if we all die, and good if we try to assess and mitigate the risks of that happening. (Indeed, this easily passes our pre-existing cost-benefit analyses.) Just as it is uncontroversial that GiveDirectly or the Against Malaria Foundation do good.

But that is not what people think longtermism is about, mainly because that is not where longtermism comes from, nor how its most famous adherents justify it. (Perhaps some longtermists’ relatively heterodox methodologies play a part, too.) And with all that in mind, who can blame those new to longtermism for looking at it, believing it to be a form of highly particular moral theory, and coming away with at least as many reservations with it as Srinivasan had of classical effective altruism more than seven years ago?

If anything, the advent of longtermism is perhaps the very clearest reason to abandon the strong utilitarian account of effective altruism as a whole. It is here that, more than anywhere else, effective altruists’ pretensions to moral philosophy have most strongly put off outsiders. This has led many otherwise persuadable people to overlook what could, and should, be a very widely-appealing rationale for longtermist charitable work. And if effective altruists want to bring existential risks to greater public attention, then that is quite the error indeed.

  1. ^

    I don't want to get into a debate about these theories themselves here; my point is just that it would be good if we could get behind longtermist cause areas without necessarily needing to agree with everything else. (As indeed I think we can.)

  2. ^

    This is not to complain that existential risk research is happening; it's to say that if all that justified existential risk research was moral philosophy about population ethics and space colonisation, I would be considerably more sceptical about whether it was a good idea. But plenty else justifies it too!


Comments (10)

Thank you for this, Keir. I agree that some conclusions that EAs have come to are uncontroversial among non-utilitarians. And EAs have tried to appeal to non-utilitarians. Singer’s Drowning Child thought experiment does not appeal to utilitarianism. Ord and MacAskill both (while making clear they are sympathetic to total utilitarianism) try to appeal to non-utilitarians too.

However, there are some important cause prioritisation questions that can’t really be answered without committing to some philosophical framework. It’s plausible that these questions do make a real, practical difference to what we individually prioritise. So, doing EA without philosophy seems a bit like trying to do politics without ideology. Many people may claim to be doing so, but they’re still ultimately harbouring philosophical assumptions.

You bring up the comparison between donating to the opera and donating to global health as one that non-utilitarians like Sen can deal with relatively easily. But Amartya Sen is still a consequentialist and it’s notable that his close colleague (and ideological soulmate) Martha Nussbaum has recently written about wild-animal suffering, a cause which utilitarians have been concerned about for some time. Consequentialists and pluralists (as long as they include some degree of consequentialism in their thinking) can still easily prioritise. It’s less clear that pure deontologists and virtue ethicists can, without ultimately appealing to consequences.

Finally, I don’t think there’s much philosophical difference between Bill Gates and Peter Singer. Gates wrote a blurb praising Singer’s The Most Good You Can Do, and in his recent annual newsletter he said that his goal is to “give my wealth back to society in ways that do the most good for the most people”.

Hello! Thank you for such a thoughtful comment. You're obviously right on the first point that Singer/Ord/MacAskill have tried to appeal to non-utilitarians, and I think that's great - I just wish, I suppose, that this was more deeply culturally embedded, if that's a helpful way to put it. (But the fact this is already happening is why I really don't want to be too critical!)

And I fully, completely agree that you can't do effective altruism without philosophy or making value-judgements. (Peter made a similar point to yours in a comment to my blog). But I think that what I'm trying to get at is something slightly different: I'm trying to say that at a very basic level, most moral theories can get on board with what the EA community wants to do, and while there might be disagreements between utilitarians and other theories down the line, there's no reason they shouldn't be able to share these common goals, nor that non-utilitarians' contributions to EA should be anything other than net-positive by a utilitarian standard. To me that's quite important, because I think a great benefit of effective altruism as a whole is how well it focusses the mind on making a positive marginal impact, and I would really like to see many more people adopt that kind of mindset, even if they ultimately make subjective choices much further down the line of impact-making that a pure utilitarian disagrees with. (And indeed such subjective and contentious moral choices within EA already happen, because utilitarianism doesn't tell you straightforwardly how to e.g. decide how to weight animal welfare, for example. So I suppose I really don't think this kind of more culturally value-plural form of EA would encounter philosophical trouble any more than EAs already do.)

On Gates and Singer's philosophical similarities, I agree! But I think Gates wears his philosophy much more lightly than most effective altruists do, and has escaped some ire because of it, which is what I was trying to get at - although I realise this was probably unhelpfully unclear. 

Thanks for your response. I don’t think we disagree on as much as I thought, then! I suppose I’m less confident than you that those disagreements down the line aren’t going to lead to the same sort of backlash that we currently see. 

If we see EA as a community of individuals who are attempting to do good better (by their own lights), then while I certainly agree that the contributions of non-utilitarians are net-positive from a utilitarian perspective, we utilitarian EAs (including leaders of the movement, who some might say have an obligation to be more neutral for PR purposes) may still think it’s best to try to persuade others that our preferred causes should be prioritised even if it comes at the expense of bad PR and turning away some non-utilitarians. Given that philosophy may cause people to decisively change their views on prioritisation, spreading certain philosophical views may also be important. 

I guess I am somewhat cheekily attempting to shift the burden of responsibility back onto non-utilitarians. As you say, even people like Torres are on board with the core ideas of EA, so in my view they should be engaging in philosophical and cause prioritisation debates from within the movement (as EAs do all the time, as you note) instead of trying to sabotage the entire project. But I do appreciate that this has become more difficult to do. I think it’s true that the ‘official messaging’ has subtly moved away from the idea that there are different ‘wings’ of EA (global health, animal welfare, existential risk) and toward an idea that not everyone will be able to get on board with (though I still think they should be able to, like many existing non-utilitarian EAs). 

Trust seems to be important here. EAs can have philosophical and cause prioritisation disagreements while trusting that people who disagree with them are committed to doing good and are probably doing some amount of good (longtermists can think global health people are doing some good, and vice-versa). Similarly, two utilitarians can as you say disagree empirically about the relative intensity of pleasure and suffering in different species without suspecting that the other isn’t making a good faith attempt to understand how to maximise utility. On the other hand, critics like Torres and possibly some of the others you mentioned may think that EA is actively doing harm (and/or that prominent EAs are actively evil). One way it could be doing harm is by diverting resources away from the causes they think are important (and instead of trying to argue for their causes from within the movement, they may, on consequentialist grounds, think it’s better to try to damage the movement). 

All of this is to say that I think these ‘disagreements down the line’ are mostly to blame for the current state of affairs and can’t really be avoided, while conceding that ‘official EA messaging’ has also played its part (but, as a take-no-prisoners utilitarian, I’m not really sure whether that’s net-negative or not!)

A nit picking (and late) point of order I can’t resist making because it’s a pet peeve of mine, re this part:

“the public perception seems to be that you can’t be an effective altruist unless you’re capable of staring the repugnant conclusion in the face and sticking to your guns, like Will MacAskill does in his tremendously widely-publicised and thoughtfully-reviewed book.”

You don’t say explicitly here that staring at the repugnant conclusion and sticking to your guns is specifically the result of being a bullet biting utilitarian, but it seems heavily implied by your framing. To be clear, this is roughly the argument in this part of the book:

- population ethics provably leads every theory to one or more of a set of highly repulsive conclusions most people don’t want to endorse

- out of these, the least repulsive one (my impression is that this is the most common view among philosophers, though don’t quote me on that) is the repugnant conclusion

- nevertheless the wisest approach is to apply a moral uncertainty framework that balances all of these theories, which roughly adds up to a version of the critical level view, which bites a sandpapered-down version of the repugnant conclusion as well as (editorializing a bit here, I don’t recall MacAskill noting this) a version of the sadistic conclusion more palatable and principled than the averagist one

Note that his argument doesn’t invoke utilitarianism anywhere; it just invokes the relevant impossibility theorems and some vague principled gesturing around semi-related dilemmas for person-affecting ethics. Indeed, many non-utilitarians bite the repugnant conclusion bullet as well; what is arguably the most famous paper in defense of it was written by a deontologist.

I can virtually guarantee you that whatever clever alternative theory you come up with, it will take me all of five minutes to point out the flaws. Either it is in some crucial way insufficiently specific (this is not a virtue of the theory, actual actions are specific so all this does is hide which bullets the theory will wind up biting and when), or winds up biting one or more bullets, possibly different ones at different times (as for instance theories that deny the independence of irrelevant alternatives do). There are other moves in this game, in particular making principled arguments for why different theories lead to these conclusions in more or less acceptable ways, but just pointing to the counterintuitive implication of the repugnant conclusion is not a move in that game, but rather a move that is not obviously worse than any other in the already solved game of “which bullets exist to be bitten”.

Maybe the right approach to this is to just throw up our hands in frustration and say “I don’t know”, but then it’s hard to fault MacAskill, who again, does a more formalized version of essentially this rather than just biting the repugnant conclusion bullet.

Part of my pet peeve here is with discourse around population ethics, but it also feels like discourse around WWOTF is gradually drifting further away from anything I recognize from its contents. There’s plenty to criticize in the book, but judging from a skim of the secondary literature from a few months after its release, you would think it was basically arguing “classical utilitarianism, therefore future”, which is not remotely what the book is actually like.

Thanks so much for this, great reflection!

One small comment I'd make is that Bill Gates has been hammered for the way he does philanthropy, I would argue more severely than effective altruism has been. Most notably by mainstream development orgs and a number of fairly high-profile conspiracy theories.

If the debacles of the last few months continue, we might overtake Bill on the criticism front, but let's hope not.

I think that, like you say, EA AGI doomer longtermists might have performed one of the most botched PR jobs in history. Climate change advocates rightly focus on protecting the world for our grandchildren, and on the fact that the effects of climate change on the poorest people will be far worse. I'm not sure I've ever heard AGI people talking in these kinds of heartstring-pulling, compassionate terms. These same arguments should be made by the AI crowd, realising that the general public has different frames of reference than they do.

Thank you! That's very interesting re: Gates; that wasn't my impression at all, but to be honest I may very well be living in a bubble of my own making, and I'm sure I've missed plenty of criticism. That said, I think I might still suggest that there are two different kinds of criticism here: EA gets quite a bit of high-status criticism from fairly mainstream sources (academics, magazines, etc.); if Bill Gates's criticism comes more from conspiracy loons, then I would suggest it's probably less damaging, even if it's more voluminous. (I think both have got a lot of flak from those development orgs who were quite enjoying being complacent about whether they were actually being successful or not.)

And yes, I completely agree re: longtermism & PR! I wrote something quite similar a couple of months ago. It seems to me that longtermism has an obvious open goal here and yet hasn't (yet) taken it.

I agree that Gates has been heavily criticised too. This is probably because he’s a billionaire and because he’s involved himself so heavily in issues (such as the pandemic) which attract lots of attention. It might not be a coincidence, though, that there’s not much philosophical difference between Bill Gates and, say, Peter Singer. Gates wrote a blurb praising Singer’s The Most Good You Can Do, and in his recent annual newsletter he said that his goal is to “give my wealth back to society in ways that do the most good for the most people”.

If it's true that longtermism is much more controversial than focusing on x-risks as a cause area (which can be justified according to mainstream cost-benefit analysis, as you said), then maybe we should have stuck to promoting mass market books like The Precipice instead of WWOTF! The Precipice has a chapter explicitly arguing that multiple ethical perspectives support reducing x-risk.

While I definitely think it’s correct that EA should distance itself from adopting any one moral philosophy and instead adopt a more pluralistic approach, it might still be useful to have a wing of the movement dedicated to moral philosophy. I don’t see why EA can’t be a haven for moral and political philosophers collaborating with other EA members to do the most good possible, as it might be worthwhile to focus on wide-scale systematic change and more abstract, fundamental questions such as what value is in the first place. In fact, one weakness of EA is precisely that it isn’t pluralistic in terms of the demographic of its members and how they view systematic change; for example, consider Tyler Cowen’s quote about EA’s demographic in the United States:


“But I think the demographics of the EA movement are essentially the US Democratic Party. And that's what the EA movement over time will evolve into. If you think the existential risk is this kind of funny, weird thing, it doesn't quite fit. Well, it will be kind of a branch of Democratic Party thinking that makes philanthropy a bit more global, a bit more effective. I wouldn't say it's a stupider version, but it's a less philosophical version that's a lot easier to sell to non-philosophers.” 

If wide-scale philosophical collaboration was incorporated into EA, then I think it might be a rare opportunity for political philosophers of all stripes (e.g., libertarians, socialists, anarchists, neoliberals, etc.) to collaborate on systematic questions relating to how to do the most good. I think this is especially needed considering how polarised politics has become. Additionally, considering abstract questions relating to the fundamental nature of value would particularly help with expected value calculations that are more vague, trying to compare the value of qualitatively distinct experiences.

Hi there, and thanks for the post. I find myself agreeing a lot with what it says, so probably my biases are aligning with it, and that has to be said. I am still trying to catch up with the main branches of ethical thought and giving them a fair chance, which I think utilitarianism deserves (by instinct and inclination I am probably a very Kantian deontologist), even if it instinctively feels 'wrong' to me.
