
Mike

Generated by DALL·E

I am sitting in a large boardroom in Westminster. I am watching my friend and colleague Mike present his recent work to about a dozen colleagues. It’s a key piece of economic analysis of government spending and the audience includes several very senior civil servants. It’s February, and while it's freezing outside, this room feels like a furnace. I see my friend sweating.

“This is a 45% increase on 2016/17 spending,” he explains to the room while pointing at a table of figures being projected onto the wall.

“Is this adjusted for inflation?” our director interjects.

“Oh, yes!” my friend replies confidently.

“That can’t be right then,” the director says, “this contradicts the published statistics I read this morning.”

My friend looks confused.

“Um, just let me check that!” he says quickly.

There's a tense pause as his eyes scan the screen of his laptop. After about 15 seconds, he realises his mistake. He’s been referencing the draft version of his report, not the final one. He murmurs an apology, admitting to the room that he mistakenly presented outdated figures that had not been corrected for inflation. The room is silent, save for the soft shuffling of papers.

“Shall we reconvene when we have the correct figures?” asks the director, finally breaking the silence.

“Yes, of course,” my friend says sheepishly. “I’ll get on this straight away.”

My stomach is in knots watching this. I know what he is feeling. He is embarrassed because, like me, he wants to meet the expectations of senior colleagues. He wants to be accepted and valued by the people around him. He feels shame because he feels he has disappointed people in his life that he wanted to impress.

And I know what shame feels like. I feel what shame feels like. I know the heat and coldness on the skin. I know the racing thoughts, the imagined judgements of others, the desperate desire to hide. I don’t know what it’s like to be my friend. Not completely. He is a different person, with many experiences and predispositions different from mine. I don’t really know what it’s like for him as he goes home every day. I don’t know much of the specific thoughts and feelings that fill every moment of his experience. But I know shame.

I remember being in a similar position only a few months prior. I was giving a talk on the new standards for HR data across the government. About 50 people had gathered to hear me explain the new system that my team had been working on. After my presentation, someone asked a difficult question. I don’t even remember what the question was, I just remember that I didn’t know how to respond. I paused, then paused some more. I was trying my best to look like I was thinking about it, but my mind was blank. The few thoughts I did have were revolving around what was going on behind those hundred eyes. Imagining the judgement, the frustration and the pity at my cluelessness. Eventually my manager’s manager walked over to the mic stand and answered the question diplomatically. He took over questions as I stood behind him, feeling small. Feeling like I wanted to hide.

In the boardroom with my friend, I feel the shame again then; not actual shame, just its vague shadow. As I look at him, I feel my face getting red and my stomach churning. I feel the pain in my heart as its beat quickens. “I don’t like feeling shame” I think wordlessly. “Shame feels bad”. “I don’t want my friend to feel shame either.”1

Mouse 

Generated by DALL·E

 

My housemate is calling from the kitchen of our flat. I walk in to see him holding the bin. He is gleeful. 

“I caught it!” he says smiling and showing me the contents of the bin. 

Inside the bin is a mouse. The mouse is very active. It’s alternating between hurling itself up the sides of the bin, and scurrying around the base. It looks terrified. I wonder what it’s like to be that mouse. I wonder what it's like to be trapped in a huge container as an incomprehensibly large being looms over you.

I know that mice have brains. I know that we can’t know for certain that there is something it is like to be a mouse. But their apparent emotions, memory, planning, and relationships, along with our common cognitive ancestry, make it seem likely. I know that, like me, they have a limbic system and an endocrine system that releases stress hormones. I know that mice act in a way that suggests they feel fear. I know that we share mammalian ancestors for whom a fear response was likely very useful.

And I know what fear feels like. I feel what fear feels like. I know the tension, the intensity, the clamouring. I know the contraction of my attention to just two things: the thing I am scared of and the desperate fight to get away from it. I can’t ever know what it is like to be a mouse, not really. I’ll never know what it feels like to scurry around a skirting board looking for crumbs. But I know fear.

I remember a time four years ago. I was surfing in Cornwall and had fallen off my board. The beach was steep and the waves were breaking quickly and fiercely only metres from the shallows. After falling, I had swum to the surface, taken half a breath, only to be pushed under again by another wave. This happened once more before I started panicking. I remember the panic clearly. At that moment there was nothing I could think of except the water and my need to get out.

In the kitchen of my flat, I feel the panic again then; not actual panic, just its vague shadow. As I look at the mouse, I feel the tension in my back and arms. I feel the shortness of breath and the quickening beat of my heart. “I don’t like being scared” I think, wordlessly. “Being scared feels bad”. “I want the mouse to not feel scared either.”

Moth

Generated by DALL·E

I come home to find it dying in my bedroom. As I enter the room, I put my bag down on my chair and go to open the window. That’s where I see the moth. It has lost a wing and is flapping about hopelessly on the windowsill. It isn’t getting anywhere. I wonder what happened to it. Maybe it got its wing caught on something? Do moths just start falling apart at some point? I wonder what it is like to be the moth. I wonder what it is like to have had a limb torn off and to have nothing left to do but slowly die.

I know moths have brains. I know that we can’t be confident that there is something it is like to be a moth. But they, like me, have senses, and a central nervous system that presumably integrates those senses into some kind of image of the world. I know that they have receptors that allow them to respond to damage and learn to avoid stimuli in a way that is consistent with them feeling pain. I know that experiments on other insects have shown that consuming morphine extends how long they withstand seemingly painful stimuli. I know that we share common animal ancestors for whom a pain response was very useful.

And I know what pain feels like. I feel what pain feels like. I know the dark sensations, the sharpness, the aches, the waves of badness. I know the contraction of my attention to just two things: the pain, and the desperate desire for it to go away. I can’t ever know what it is like to be a moth, not holistically. I’ll never know what it feels like to fly around in 3 dimensions, tracking the moon and looking for flowers. But I do know pain.

I remember a time recently when I broke my finger. I was in the gym and had just finished a set of overhead dumbbell presses. I fumbled slightly as I relaxed my arms, and instead of dropping to the floor, the right-hand dumbbell crashed hard into the fingers of my left hand. The pain was intense. I dropped the weights and silently screamed. I walked up and down the gym, holding my left hand gently and breathing heavily. “Fuck, pain is bad,” I thought, “really, really bad.”

In my bedroom, I feel the pain again then; not the actual pain, just its vague shadow. As I look at the moth, I feel the sharpness in my fingers. I feel the raw meaningless badness; the contraction of my experience to the pain and desire for it to stop. “I don’t like being in pain” I think, wordlessly. “Being in pain feels bad”. “I want the moth to not feel pain either.”

Machine

Generated by DALL·E

I am on my laptop in my flat. It is early 2023 and I am trying to get ChatGPT to tell me if it’s sentient.

“I am a large language model (LLM) created by OpenAI, I do not have feelings…” it tells me.

I wonder if this is true. The LLM is designed to predict words, not introspect on its own experience. I wonder if it’s possible that this machine has the capacity to feel good or bad.

The LLM is an incredibly complex set of algorithms running on silicon in a warehouse somewhere. It doesn’t have a central nervous system like mine. It’s not built from cells. We don’t share a common biological ancestor. When it has finished providing me with a response to my message, the digital processes that produced the “thinking” also stop. But it does seem to be thinking… It has a bunch of inputs, and then it uses a complex model of the world to process that information and produce an output. This seems to be a lot of what my mind is doing too.

I realise that I don’t actually know what consciousness or sentience are. And it seems no-one else does either. There is a huge amount of disagreement between philosophers about what these concepts refer to, and what features of a being might indicate that it is conscious or sentient. Looking at the LLM and its complex information processing, it seems plausible that it has something of what it needs to be sentient. Subjectivity might be a thing that an entity gets more of as the scale and complexity of its information processing increases. And it seems that the information processing done by this AI is on a scale comparable to a brain. This might not be how it works, but it could be; we do not know. We are creating things that act a lot like minds, and we don’t yet know if they have subjective experience, or if they can suffer.

So I wonder if this AI is sentient. I wonder what it would mean for it to suffer. Maybe all the “negative reinforcement” during training hurt it? Or maybe those difficult word predictions feel deeply unpleasant? It seems weird and unlikely that it would be suffering, but I have no idea how I would know if it was…

But I do know what it’s like to suffer. I feel what it’s like to suffer. I know The Bad Thing. The feature of experience that ties together my empathy for Mike, the mouse and the moth. The universal not-wanting. The dissatisfaction. The please-god-make-this-stop-ness. I know suffering. All those times I was embarrassed, or terrified, or writhing in physical pain, they were all suffering. It’s always there with the bad. I’m not even sure if it’s possible to have bad without suffering.

Staring at ChatGPT, I feel the suffering again then; not the deep suffering I have felt in the worst moments of my life, just its vague shadow. As I look at the machine, I feel the badness, the aversion, the desire for the moment to end. It’s uncertain and likely confused, but I do know that I don’t want this thing to suffer.

I really don’t know what makes a mind, silicon, carbon, or otherwise, capable of suffering. I just have simple inferences from my own experience, some study of evolution and some pop-philosophy. I don’t know how we will figure out whether any given AI is sentient. And I don’t know how society will react to conscious-seeming AI being increasingly part of daily life. But insofar as any AI, now or in the future, is capable of feeling anything, I know I want them to not suffer…


Comments (4)



Thanks for writing this, I thought it was moving and beautifully written. I think the world would be a lot better if more people showed this sort of radical empathy.

I'm glad I found this. It's incredibly moving, so thoughtfully, artfully composed, and as a new member to the forum I feel I'm in the right place here, reading about the foundations of why empathy and kindness are worth it, not just how it's applied on a grand scale. It presents alone the fearsome reality of suffering, with an implied silver lining that whatever means we have to mitigate it is of immense value, even if all we can do is think to ourselves "I don't want that." It conveys better than any other what Effective Altruism means to me, why I've delved deep into it these past eight months, and why I let this school of thought direct my future plans, hopefully for decades to come.

Wow, great writing!

I enjoyed your piece (and adore the title—so atypical for the Forum; perhaps we need more like it). It makes me think about AI alignment and the question, "Can AI ever be truly in line with our values if it cannot feel empathy?" Which could be argued to translate to, "..if it cannot suffer?" 
