There is an important moral puzzle to solve here, and it is an ongoing problem. To simplify the problem and push it to an extreme where the issue is easy to see, I am going to frame it as a trolley problem.
Imagine that there is a person in a room with two buttons. They are told that they have two minutes to decide which button to push. If they push the button on the left, a child will be spared from death, but some quantity of invertebrate animals will be killed. If they push the button on the right, the invertebrate animals will be spared but the child will be killed. If they push neither button, both the child and the invertebrate animals will be killed. Accept, for the sake of the hypothetical, that the person believes this to be the truth.
Now we zoom out, and get to you. Turns out, you are watching this scenario through a hidden camera. In front of you is a single button. You are told that if you push this button the person you are watching in the room with two buttons will be killed and the child will be spared. If you do nothing, then you will simply not interfere with their choice.
Under what circumstances would you push the button? Does it matter what number of invertebrate animals' lives are on the line?
For me, it does not matter what number of invertebrate animals' lives are on the line. As soon as I see the person reaching for the button to kill the child, I will hit my kill button to kill them and spare the child. If they reach for the button to kill the invertebrates, great! Both they and the child get to live. I do not think this is a matter of personal taste; I believe it is a moral rule, and that I would be wrong to do otherwise. So wrong, in fact, that if there were a third room with a person watching me and deciding whether to push a button to kill me and the original subject, I would argue that the third person has a moral obligation to kill us both if the original subject reaches for the button to kill the child and I do not stop them.

Try this with any combination of number and type of invertebrate animal, and my answer will stay the same. A billion cockroaches? A trillion squid? A Graham's number of ants? No hesitation.

If there were no child's life on the line, and the choice were simply to spare the invertebrates or kill them? I would not press a kill button to prevent someone from killing the invertebrates. I would be sad if they chose to kill a large number of invertebrates for no reason, but I would not be morally justified in killing them for it. There is no number of invertebrate animals which would make me feel justified in killing the person to save the invertebrates.

Why am I picking on invertebrates here? Because their brains are relatively small and simple. So small and simple that I argue that, wherever the minimum line for 'experiencing meaningful suffering' or 'moral relevance' lies, they are below it. I'm down to discuss whether, and to what degree, great apes or dolphins or elephants deserve moral consideration. I have empirical reasons, from examining their brains and their behaviors, to believe that these creatures may indeed be moral agents. Cockroaches? Absolutely not within the bounds of reasonable argument.

Examine their behaviors. Do they exhibit behaviors we could reasonably interpret as forming empathic social bonds or reciprocal social contracts? No. 
Look at their nervous systems. Do they have complex networks of neurons devoted to social interaction and compassion for self and others? No. Ability to experience sensory stimuli and interpret them as aversive is insufficient. They don't have the capacity for emotional valence on top of those aversive stimuli, they don't have emotions, they don't care for themselves or others. They don't make choices with moral agency, deciding between doing good things or bad things. They are emotionless robots obeying genetically hardcoded instructions with only a little simple learning. They do not build a complex worldview and rich episodic memories. They do not ascribe rich personal meaning to events or experience desire for possible future outcomes. They are not little humans, or even puppies, scaled down to tiny size and simpler brains. They are categorically a different thing, one which lacks all the attributes of experience that give moral worth to a creature. No large number of zeros adds up to anything but zero. Cockroaches are not 0.000001 of a human, they are pure zero. With animals generally, there is a gray area, a gradient of sorts, and we can talk about where to draw what lines. Invertebrates aren't on the scale.

So when people talk about 'Charitable Cause Prioritization', and their discussion includes protecting invertebrates and trades this off against protecting humans, I think they are mistaken. I don't think there is a valid tradeoff there.

So what's the problem? Why don't I just let the invertebrate-lovers go do their thing, while I do mine? The problem is that those arguing for the invertebrate cause as an issue of moral importance have brought bad arguments to the table. And then these arguments were not rejected. I'm fine with people bringing in new possible arguments about novel moral issues to consider. I'm not fine with failing to reject those potential moral issues when the arguments turn out to be flawed.

In practice, these issues do compete. When we host a global conference about doing good and choose whom to include as speakers and attendees, we are making choices. When we choose what posts to allow on our internet forums, we are making choices. When we decide what organizations to include in a list of 'affiliated charities', we are making choices. These all trade off against alternatives, including the alternative of not including them, and thus not diluting a list of organizations pursuing valid causes with organizations pursuing incorrect ones.

I am asking the community to take a stand and draw a line. While there are people dying of hunger, of currently preventable diseases, or of medical problems we could potentially find cures for (e.g. senescence, cancer), we should firmly state that people matter morally in a way that invertebrate animals do not. Until all human problems serious enough to count as moral issues, such as involuntary death, have been completely solved, we should stop validating the expenditure of resources or attention on the welfare of invertebrates.

Comments



Welcome to the Forum!

This post falls into a pretty common Internet failure mode, which is so ubiquitous outside of this forum that it's easy to not realise that any mistake has even been made - after all, everyone talks like this. Specifically, you don't seem to consider whether your argument would convince someone who genuinely believes these views. I am only going to agree with your answer to your trolley problem if I am already convinced invertebrates have no moral value...and in that case, I don't need this post to convince me that invertebrate welfare is counterproductive. There isn't any argument for why someone who does not currently agree with you should change their mind.

It is worth considering what specific reasons people who care about invertebrate welfare have, and trying to answer those views directly. This requires putting yourself in their shoes and trying to understand why they might consider invertebrates to have actual moral worth.

"So what's the problem? Why don't I just let the invertebrate-lovers go do their thing, while I do mine? The problem is that those arguing for the invertebrate cause as an issue of moral importance have brought bad arguments to the table."

This is much more promising, and I'd like to see actual discussion of what these arguments are, and why they're bad.

Wonderfully welcoming comment, @Jay Bailey! :)

(EDITED)

"Ability to experience sensory stimuli and interpret them as aversive is insufficient. They don't have the capacity for emotional valence on top of those aversive stimuli, they don't have emotions, they don't care for themselves or others."

Can you elaborate on what you think is required for emotional valence and emotions? And why you don't think invertebrates have that?

This could be a crux here.

 

I think it's reasonably likely that many invertebrates, including fruit flies, have states worth describing as fear/anxiety and anger (driving aggression), and hence some states worth describing as emotions. Rethink Priorities also cited a study of a depression-like state in fruit flies:

  • Ries, A.-S., Hermanns, T., Poeck, B., & Strauß, R. (2017). Serotonin modulates a depression-like state in Drosophila responsive to lithium treatment. Nature Communications, 8(1). https://doi.org/10.1038/ncomms15738

See also the table here from RP's more recent moral weight work.

I don't agree with your take, but many people in EA (and definitely outside it) feel similarly to you so I appreciate you writing this. It's also a good time to criticize invertebrate welfare, as more attention and funding are directed there.

That said, I found the tone of this post too confrontational to promote useful discussion, so I've decided neither to upvote nor downvote. I know it's a tough ask, but it would also be very useful to directly engage with works in this space (say, recent posts on invertebrate welfare or RP's sequence) and clearly outline the difference in views.

Oh! I see that you've also posted Shrimp Neuroanatomy. Great, I'm looking forward to future posts on this topic to inform the debate :)

I agree with you in the practical sense: the complexity of arthropod brains is clearly very small, and our capability to affect the existence of very small beings is extremely low. In fact, this activism competes with practical animal welfare (see here).

On the theoretical side, my view is that consciousness is epiphenomenal; consequently, it is absolutely real, but its assessment becomes exponentially harder with distance from our own neuroanatomy.
