
Introduction

I recently read The Buddhist and the Ethicist – a transcript of conversations between Peter Singer and the Taiwanese Buddhist nun, Shih Chao-Hwei. Most of their conversation is about practical ethics, including the ethics of suicide, abortion, and war, and quite a lengthy discussion of karma/nirvana. It also made me think about how a greater appreciation of Buddhist ideas could be beneficial for the EA community. 

My main points from this are: 

  1. Buddhists practice specific techniques (e.g. metta meditation) for extending our empathy to all sentient beings. The same techniques have helped me empathize with beings like insects or digital minds. 
  2. Principles such as the illusion of self could help us avoid hubris, and suggest that concepts like “high impact individuals” rest on psychological delusions.
  3. The ‘problem of moral obligation’ doesn’t seem to be comprehensible under a Buddhist ethical framework

Prior to this, I had some exposure to Buddhism, and have lived at a Tibetan Buddhist meditation centre. However, I have not engaged at all with the primary Buddhist Pali texts, so I'm uncertain whether my interpretations of Buddhism here are accurate. This isn’t a general overview of the philosophical differences/similarities between EA and Buddhism – for that purpose, I recommend this – or of whether the Buddha would have been an EA (I’m skeptical). I also don’t go much into the (personal) psychological benefits of meditation. Thank you Gabriel for reading through this.

Feeling Radical Empathy 

In Buddhism, Bodhisattvas are enlightened people who choose not to achieve nirvana, because they want to continue to work to relieve the suffering of all sentient beings. As noted elsewhere, the ‘Bodhisattva path' – which is particularly prominent in some Buddhist traditions [1]– has a fair bit of overlap with the ~EA principle of radical empathy. Buddhist teachings give concrete advice for how we can extend our moral circle on an emotional level. 

Buddhists give concrete insights about how to extend our empathy to other sentient beings – including those it is difficult to empathize with. Take Metta Bhavana (loving-kindness) meditation: traditionally, this practice begins with directing metta towards oneself, then a close friend/relative, a neutral person, a difficult person, and eventually all sentient beings. I have found this practice beneficial for my personal wellbeing, as have other EAs I know. But, in addition, it has helped me begin to sense the possibility of feeling empathy for beings I otherwise have a lot of trouble empathizing with, including insects and future digital minds. 

(This is not to suggest that the Bodhisattva path is an ideal for secular people, or should be accepted without question. I have come across Buddhists who have taken Bodhisattva vows and think the best way of improving the world is spending most of their life alone on meditation retreats.) 

Becoming Less Self-Centered 

Another key principle in Buddhism is Śūnyatā or emptiness: the idea that all things are empty of intrinsic existence and nature. The principle that humans have no self is called Anattā. Whilst I don’t claim to have had any special personal experiences with either principle, my impression is that, within Buddhism, the experience of ‘self’ is not an on/off experience – there are degrees of clinging to a sense of self, and through meditation we can become less self-obsessed. 

Why is this principle useful, in particular, for EAs? Most obviously, reduced self-concern makes people happier. In addition, it helps us be kinder people. My personal experience is that, through practicing loving-kindness meditation, I have fewer self-obsessive thoughts and generally feel more positive about other people. 

(It’s important to note there are many pitfalls here – and I remember reading somewhere that spiritual people are more self-obsessed, on average. Buddhists call this problem spiritual materialism. I suspect everyone has met a spiritual narcissist at some point or another). 

By practicing meditation, and hopefully avoiding spiritual materialism, Buddhism can help us become less self-centered in our acts of altruism, as Chao-Hwei describes. [2] Reduced self-concern links to another Buddhist concept, Pratītyasamutpāda or dependent origination – the observation that all things arise in dependence upon other things. Sarah Weiler has recently written about the problem of hubris arising from beliefs that certain individuals are ‘higher impact’ than others. A Buddhist approach might be to recognise that ‘individuals’ don’t exist in a fundamental sense; highly impactful actions depend on countless prior causes; and thus, hubris about “high impact individuals” rests on a deluded sense of personhood. 

How Problematic is the Problem of Moral Obligation?

As far as I can tell, this is the problem of how moral properties could both (I) exist in the natural world, and (II) give normative reasons (‘oughts’) for our actions. From this, J.L. Mackie argued that if moral facts existed, they would be ‘queer’ entities, unlike anything else in the natural world; so a thoroughgoing empiricism should lead us to believe that all moral statements are false. 

Singer sets out this problem using two different framings, neither of which Chao-Hwei seems to understand. In Chapter 2, Singer describes the ‘is-ought’ problem: “Some philosophers argue that you can’t just move from the fact that this is in our nature to the moral judgment that this is what we ought to do ... If we cannot simply appeal to nature, how do we reach the judgment that we endorse compassion for those beings who are suffering, but we refuse to endorse hatred of outsiders?” Singer describes his current position as follows: “we have to say that there is something like a self-evident perception, or perhaps “intuition” (that is the word that Henry Sidgwick uses) that enables us to see that relieving suffering is good and increasing suffering is bad.” Chao-Hwei seems to interpret this question as a psychological one about how emotions and rationality interact: e.g. “among will and reason and passion, which one is more dominant?”

A similar interaction happens later in the chapter. Singer describes Plato’s “Ring of Gyges”, and asks “What do we say to people whose attitude to others in need is “I just don’t care about them; they have nothing to do with me”? How would you respond to that question from a Buddhist perspective?” Again, Chao-Hwei seems to interpret this as a practical question – and recapitulates it as, “how do we persuade people to act altruistically toward strangers?”.

In both cases, Chao-Hwei didn’t seem to grasp what Singer was getting at. 

Perhaps this is because Buddhists don't have a strong sense of moral "obligation": one article suggests that "it’s not clear that Buddhist thinkers have a concept of moral obligation at all." This vibes with my personal experience of Buddhism. One Tibetan lama discouraged me from thinking in terms of "should"s or "ought"s, suggesting that, instead, a better psychological framing for my day-to-day actions is "would like".  

This reminded me of other cases where concepts common to Westerners just failed to land amongst Buddhists – e.g. there is a story of the Dalai Lama failing to understand how one of his students could have the emotion of “self-hatred”; it seems there simply wasn’t a Tibetan translation for it. Perhaps "moral obligation" has a specifically legalistic/Christian etymology. I’d be interested to hear people’s thoughts on this. 

  1. ^

     This ideal is particularly emphasized in Mahayana Buddhism, relative to Theravada Buddhism: as Chao-Hwei explains, practitioners in the latter tradition focus on personal liberation; those in the former tradition also want to attain enlightenment for the sake of others. Singer says, “if I were a Buddhist, I would be a Mahayana Buddhist!”

  2. ^

     Singer describes his Drowning Child thought experiment, and asks how Buddhist teachings can inspire us to help distant strangers to whom one has no emotional connection. Chao-Hwei explains how reducing our self-concern naturally extends our moral concern to others. “When facing a drowning child … the working of the mind will actually extend our own self-love… Normally, we have emotional connections with our family, friends, and community, but if we expand our compassion in this way, eventually there will be a limit on how far we will go; thus, it cannot be called immeasurable … A bodhisattva does not love the drowning child due to a certain connection; instead, they turn the limited love (that we typically only have toward ourselves or those close to us) into a strong concern for this child’s well-being."

Comments



One Tibetan lama encouraged me against thinking in terms of "should" or "ought"s, and that, instead, a better psychological framing for my day-to-day actions is "would like"

This is common advice:

  •  In Nonviolent Communication, they say that there is no right and wrong, and that it's better to reframe everything in terms of needs.
  •  In Radical Honesty, we do exercises to stop being led by "shoulds". Instead of “shoulds”, we simply talk about our sensations in the body, what we feel, and what we want.
  •  In CBT, they see “shoulds”, “musts”, “oughts” as cognitive distortions. They think that these rigid, absolutist self-demands lead to feelings of guilt and frustration and encourage reframing such statements to be more flexible.
  • Acceptance and Commitment Therapy (ACT) encourages values-driven actions rather than actions taken out of obligation or avoidance of guilt. [1]
  • The Replacing Guilt series also talks about replacing shoulds


I still understand what Peter Singer is getting at because I used to think in the same way, but that way doesn't make sense to me anymore. I just don't see what in the real world he is pointing at. E.g., I noticed that when I read Peter saying "If we cannot simply appeal to nature, how do we reach the judgment that we endorse compassion for those beings who are suffering, but we refuse to endorse hatred of outsiders?", I was confused and had to reframe it into "Peter wants everyone to want to reduce suffering." I think that means that I’m an anti-realist in meta-ethics, while Peter Singer is probably a realist.

  1. ^

    I don’t have much experience with CBT and no knowledge of ACT. I took these descriptions from GPT-4.

Interesting! Makes sense that this is common advice. I’ve heard similar stuff from CBT therapists, as you mention.

That point was fairly anecdotal, and I don’t think contributes too much to the argument in this section. I place more weight on the Stanford article/Chao-Hwei responses.

I don’t think that the quote you mention is exactly what Singer believes. He’s setting up the problem for Chao-Hwei to respond to. His own view is that “suffering is bad” is a self-evident perception. Perhaps this is subtly different from Singer disliking suffering, or wanting others to alleviate it. Perhaps it’s self-evident in the same way colour is – I think moral realists lean on this analogy sometimes.

Perhaps "moral obligation" has a specific legalistic/Christian etymology.

This is one of the positions G.E.M. Anscombe defends in her influential essay "Modern Moral Philosophy".  She argues in part that the moral "ought" is a vestige of religious ethics, which doesn't make much sense without a (divine) lawgiver.  Indeed, one of the starting points of many modern virtue theorists is arguing that the specific moral sense of "ought" and moral sense of "good" are spurious and unfounded.  One such view is in Philippa Foot's Natural Goodness, which argues instead that the goodness ethics cares about is natural goodness and defect (e.g., "the wolf who fails to contribute to the hunt is defective" is supposed to be a statement about a natural, rather than moral, defect of the wolf).

Ah nice! I had forgotten about this Anscombe article, which is where this point had come from. Thanks for pointing that out.


Thanks Charlie, enjoyed this. 

Thanks, glad you enjoyed 👍

Executive summary: Engaging with Buddhist ideas and practices, such as extending empathy to all sentient beings and recognizing the illusion of self, could benefit the effective altruism community.

Key points:

  1. Buddhist techniques like metta meditation can help extend empathy to beings that are difficult to empathize with, such as insects or digital minds.
  2. The Buddhist principle of the illusion of self suggests that concepts like "high impact individuals" rest on psychological delusions and could help avoid hubris.
  3. The author's experience suggests that the "problem of moral obligation" may not be comprehensible under a Buddhist ethical framework.
  4. Practicing meditation and avoiding spiritual narcissism can help reduce self-concern and make people kinder.
  5. The Buddhist nun Shih Chao-Hwei seemed to misunderstand Peter Singer's philosophical questions about the is-ought problem and moral motivation, possibly due to differences in Western and Buddhist concepts.


This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
