Our latest guest essay on utilitarianism.net is 'Buddhism and Utilitarianism', by Calvin Baker.  Here I'll just reproduce the final section comparing Effective Altruism and Engaged Buddhism, which may be of particular interest to Forum readers:

Engaged Buddhism is a somewhat heterogeneous social movement grounded in the conviction that Buddhists ought to bring Buddhist practices and values to bear on contemporary issues. Engaged Buddhists tend to be united in their commitment to addressing the structural, systemic, and institutional causes of suffering in their political, economic, social, and environmental forms, in a way that manifests Buddhist values of compassion and nonviolence. More succinctly, “Engaged Buddhism is characterized by activism to effect social change.” Activities carried out under the banner of Engaged Buddhism have taken a variety of forms, e.g., environmental activism in Thailand, hospice and elder care, participation in the Extinction Rebellion movement, work to alleviate hunger and poverty in Sri Lanka, disaster relief, recycling, and attempts at peaceful conflict resolution in Myanmar.

Effective altruism (EA) is a movement whose goal is to do the greatest possible amount of good, in terms of well-being, given a fixed quantity of resources (money, research hours, political capital, etc.). Given its emphasis on impact maximization, EA is heavily invested in global priorities research: research into which cause areas, and which interventions within those areas, are most effective at promoting well-being. So far, EA has focused the majority of its efforts on global health and development, farm animal welfare, and risks of extinction and civilizational collapse, including risks from transformative artificial intelligence (AI), pandemics, nuclear weapons, great power conflict, and extreme climate change. The EA emphasis on prioritization research marks a significant contrast with Engaged Buddhism, which has not attempted to systematically answer the question of how to bring about the greatest amount of well-being, given a finite quantity of resources. So, whereas EA retains a more analytical, research-heavy orientation that attunes it to problems that are—thankfully—not currently manifest, like engineered pandemics and misaligned, superintelligent AI, Engaged Buddhism is geared more towards social activism and immediately salient social issues.

It is also productive to compare EA efforts to reduce the suffering of farmed animals with the implications of Buddhist philosophy for non-human animal welfare. Buddhists have traditionally regarded all sentient beings as moral patients, holding that, like us, non-human animals are subject to duḥkha. Buddhist ethics, EA, and utilitarianism are therefore similar in assigning greater importance to non-human animal welfare than most other moral approaches.

We can nuance this picture, though, by recalling that Buddhism distinguishes between pain (negative hedonic valence) and duḥkha and maintains that pain is only bad to the extent that we are averse to it. (From a Buddhist perspective, pain is unavoidable, but suffering on account of pain is not.) It is extremely plausible that pain is aversive to many non-human animal species—including all those currently subjected to the horrendous conditions on factory farms, such as cows, chickens, pigs, and fish. However, it is possible that some species—perhaps only a tiny minority—lack the cognitive architecture that is necessary to generate what is, for the Buddhist, the ethically-relevant conjunction of pain and the higher-level attitude of aversion (dveṣa) to pain. It is therefore possible that Buddhists will end up with a slightly less expansive moral circle than many utilitarians and effective altruists, who tend to hold that pain simpliciter is bad and worth alleviating.

Finally, we can inquire into Buddhist and utilitarian perspectives on the future of humanity. Although utilitarianism is compatible with multiple positions in population ethics, a prominent strand in recent utilitarian(-leaning) work embraces totalism, which says, very roughly, that the more happy people there are in a population, the better. By totalist lights, the best-case scenario for humanity is that it develops into an extremely long-lasting interstellar civilization composed of trillions of happy people (or more!). To me, it seems doubtful that Buddhism would go in for a picture like this. As we saw in section 2, Buddhist ethics does not start with a conception of what is good and then say that we should maximize the total quantity of that thing in the universe (as does utilitarianism). Instead, Buddhist ethics starts with the problem of duḥkha and then sets out paths to the solution to that problem. Even on the tentatively optimistic reading of Buddhism, on which attaining the cessation of duḥkha is positively valuable, it seems to me that Buddhists would find the claim that we should bring new beings into existence, so that they too can overcome suffering, to be an alien one. Rather, it seems that Buddhists thinking about the future would wish for us to lead whichever beings currently exist along the path to awakening, and perhaps for the bodhisattvas of the interstellar space age to try to save the aliens too (if doing so turns out to be tractable).

There is one fascinating way in which Buddhist and utilitarian thinking about the future seems to converge, however. Over the past several decades, applied ethicists—alongside the public—have become increasingly interested in human biomedical enhancement, which we can gloss as the project of biomedically intervening on the human organism for the purpose of increasing well-being. Human enhancements would thus include everything from currently existing, relatively mundane procedures such as laser eye surgery to radical possible interventions, such as genetic engineering aimed at dramatically increasing general mental ability (“IQ”). 

I believe that Buddhism and utilitarianism are both committed to in-principle support for human enhancement (if this can be achieved without harmful side-effects or unintended consequences). Utilitarianism says that we should promote the sum-total of well-being. So, if a certain enhancement would make humanity better off, utilitarianism would support it. For its part, unlike many other religious traditions (such as Christianity), Buddhism thoroughly rejects the notion that there is a sacrosanct human essence that we must preserve. Moreover, Buddhism is pragmatic about attaining the cessation of suffering. For instance, if it turned out that stimulating the brain in a certain way during meditation allowed meditators to more efficiently gain insight into the nonexistence of the self, it seems that Buddhists should heartily endorse this practice. So although Buddhists may disagree with totalist utilitarians that our primary objective should be to become a vast interstellar civilization, they may well agree that we should use the tools of modern technology to intervene in our biology and psychology—perhaps radically—to attain a greater level of well-being.


9 comments

Kind of off topic, but I want to throw it out there anyway. If Engaged Buddhism is a social movement that

  1. has an opinion on what the best way to do good is
  2. is actually doing something in practice

then we (EA and EB) should probably be talking. There might be valuable things we can learn from each other.

I don't think it actually has (1).

Engaged Buddhism is, as I see it, best understood as a movement among Western liberals who are also Buddhists, and as such is primarily infused with Western liberal values. These are sometimes incidentally the best way to do good, but unlike EA they don't explicitly target doing the most good; they instead uphold an ideology that values things like racial equality, human dignity, and freedom of religion (including freedom to reject religion).

As for (2), I'm not sure how much there is to learn. There are likely some things, but I also worry that paying too much attention to Engaged Buddhism might be a distraction, because it suffers common failure modes that EA seeks to avoid. For example, people I know who are part of Engaged Buddhism would rather volunteer directly, even if it's ineffective, than earn to give, because they want to be directly engaged. That's fine, but from what I've seen the whole movement is oriented more around satisfying a desire to help than around actually doing the most good.

Would an agent who accepted strong pessimism [i.e. the view that there are no independent goods]—which I absolutely believe we should reject—have most reason to end their own life? Not necessarily. An altruistic agent with this evaluative outlook would have strong instrumental reason to remain alive, in order to alleviate the suffering of others.

I agree that life can be worth living for our positive roles in terms of reducing overall suffering or dukkha. More than that, such a view seems (to me at least) like a perfectly valid view on what constitutes evaluative meaning and positive value.

Indeed, if I knew for a fact that my life were overall (hopelessly) increasing suffering or dukkha, then this would seem to me like a strong reason not to live it, regardless of what I get to experience. So I'm curious how the author has come to believe that we should absolutely reject this view in favor of, presumably, offsetting views.

However, such an agent would be forced to accept the infamous null-bomb implication, which says that the best thing to do would be to permanently destroy all sentient life in the universe. I join almost every other philosopher in taking the fact that an ethical theory accepts the null-bomb implication as a decisive reason to reject the theory (as not merely misguided, but horrifically so).

To properly consider such a theoretical reductio, I trust that most philosophers would agree (on reflection) that we need to account for potential confounders such as status quo bias, omission bias, self-serving bias, and whether alternative views have any less horrific theoretical implications.

In particular, offsetting views theoretically imply things like the “Very Repugnant Conclusion”, “Creating Hell to Please the Blissful”, and “Intense Bliss with Hellish Cessation”, none of which seems to me any less horrific than does the non-creation of an imperfect world (cf. the consequentialist equivalence of cessation and non-creation).

Are these decisive reasons to reject offsetting views? A proponent of such views could still argue that such implications are only theoretical, that we shouldn't let them (mis)guide us in practice, and that the practical implications of impartial consequentialism are a separate question.

Yet the quoted passage neglects to mention that the very same response applies to minimalist consequentialism (whose proponents take pains to practically highlight the importance of cooperation, the avoidance of accidental harm, and the promotion of nonviolence).

I would just generally caution against performing such theoretical reductios so hastily. After all, a more bridge-building and illuminating approach is to consider the confounding factors and intuitions behind our differing perceptions on such questions, which I hope we can all do to better understand each other's views.

I'm concerned that this comment has received so many upvotes.  I just want to flag two major concerns I have with it:

(1)

if I knew for a fact that my life were overall (hopelessly) increasing suffering or dukkha, then this would seem to me like a strong reason not to live it, regardless of what I get to experience. So I'm curious how the author has come to believe that we should absolutely reject this view

This is extremely misleading.  You make it sound like the author favours causing a net increase in suffering for his own personal gain.  But of course that is not remotely fair or accurate. What he absolutely rejects is the idea that there are no positive goods (i.e., positive welfare has no moral value).  The alternative, "Positive Goods" view implies that it can be permissible to do things that include some additional suffering, so long as there are sufficient offsetting gains (possibly to those same individuals -- the author didn't take any stand on the further issue of interpersonal tradeoffs).

For example, suppose you had the option of bringing into existence a (causally isolated) blissful world, with the only exception being that one person will at one point stub their toes (a brief moment of suffering in their otherwise blissful life). Still, every single person on this world would be extremely happy to be alive (for objective list theorists: feel free to add in other positive goods, e.g. knowledge, accomplishment, friendship, etc.). The "No Positive Goods" view implies that it would be wrong to allow such a blissful world to exist. The author -- along with pretty much every expert in moral philosophy -- absolutely rejects this view.

Again, I want to emphasize how misleading I find it to characterize this as endorsing "increasing suffering", since in ordinary use we only describe lives as "suffering" when they have overall negative welfare, and we typically use "increasing suffering" to mean increasing suffering on net, i.e. more than one increases positive well-being. To give someone a blissful life + a stubbed toe is not, as most people use the term, to "increase suffering".  I would urge you in future to be clearer about how you are using this phrase in an unusually literal way (and also please avoid making it sound like your interlocutors have selfish motivations [as in: "regardless of what I get to experience"] when there is no basis for such an incendiary charge).

(2) 

More substantively, I'd dispute the suggestion that there's any sort of parity between the Positive Goods and No Positive Goods views when it comes to "horrific implications".

I don't want to get into a back-and-forth about this, but I'll just report my editorial view that there is something distinctively problematic about propounding a view that implies that destroying the world is literally the best possible outcome.

Speaking as a moral philosopher, my professional opinion is that the No Positive Goods view lacks basic justification in a way that differs markedly from Positive Goods views (even ones, like Totalism, with some troubling implications). And speaking as an editor of a public-facing website, I'm much more concerned that some crazy person might act on the No Positive Goods view in horrific ways than I am that anyone would (or could) do likewise with Positive Goods views like Totalism (for the obvious reason that it's easier to commit mass-murder than to create the positive replacement conditions that would be required for Totalism to permit such an act).

So that's why I whole-heartedly endorse (as both theoretically and practically warranted) our guest author's strong rejection of No Positive Goods views.  I don't expect proponents of the view to agree, and I don't wish to be drawn into further discussion of the matter. This explanation is more for third parties who might otherwise be confused or misled by the above comment.

Interesting essay, thanks for sharing.  Buddhist practice is the central focus of my life & is how I became interested in EA.  I see the two as fairly compatible.  I'm assuming the essay's focus is on Buddhists who have a primarily physicalist ontology (that subjective experience is an epiphenomenon of brain chemistry).  If that is the case, then I think engaged Buddhism, when taken to the highest degree of intensity, converges fairly well with EA.

Things become arguably more interesting if we adopt the traditional Buddhist ontology which includes multiple realms of existence, karma & rebirth.  For instance, the population ethics does change in this case.  In the traditional Buddhist worldview, there are a finite set of sentient beings being reborn in the universe.  The total population of sentient beings can decrease (because sentient beings reach liberation & stop being reborn) but not increase (since Buddhist logic negates a first cause).  

The main thrust of population ethics in this case is to increase the proportion of sentient beings reborn into "fortunate human births" (a traditional Buddhist phrase) which thus allows them the greatest opportunity to generate positive momentum (i.e. by being effective altruists) to eventually reach liberation.  Ordinary sentient beings are not really able to effect this; at most they can encourage other humans to maximize their altruistic efforts & thus build that positive momentum.  To me, this is how traditional Buddhadharma could align with EA.

Where they don't align is around doing more than just practicing altruism.  The traditional Buddhist worldview suggests that some of the most possible good someone can do is to strive to become a Buddha through training in meditative concentration & insight into the nature of reality.  Through this training, it is possible to progress through degrees of liberation which put one in a position to do the most possible good for others from a multi-lifetime perspective.  This would include occupying altruistic worldly functions such as those encouraged by EA, but also encouraging others to spend a large portion of their lives meditating.  In other words, spending a large portion of life meditating is highly recommended by traditional Buddhism but only makes sense from a utilitarian perspective if one takes a multi-lifetime view.

I think there's some case for specialization. That is, some people should dedicate their lives to meditation because it is necessary to carry forward the dharma. Most people probably have other comparative advantages. This is not a typical way of thinking about practice, but I think there's a case to be made that we could look at becoming a monk, for example, as an exercise of comparative advantage within an ecosystem of practitioners who engage in various ways based on their comparative abilities (mostly relative to what they could be doing in the world otherwise).

I use this sort of reasoning myself. Why not become a monk? Because it seems like I can have a larger positive impact on the world as a lay practitioner. Why would I become a monk? If the calculus changed and it was my best course of action to positively impact the world.

Really appreciate that notion.  It is something I've thought a lot about myself.  I also tend to find that my personal spiritual practice benefits from a mix of many short meditation retreats, daily formal meditation sessions & ongoing altruistic efforts in daily life.  I don't feel that I would make a good teacher of meditation if I did that full time or that my practice would reach greater depth faster if I quit my job & practiced full time.  

One point I would like to add: whether you take the lay path of incorporating some Buddhist practices into your ordinary daily life, or the monastic path of dedicating yourself full time to Buddhist practice, it can help build the emotional resources necessary to do good for the world and live a life in service of others.  

Considering how difficult it can be to do good - let alone trying to do the most good - and to make sacrifices on behalf of others, and how common burnout and other challenges are, such tools for building emotional resilience, clarity, and compassion can be extremely helpful. 
