
Bob Fischer

Bio

I'm a Senior Research Manager at Rethink Priorities, an Associate Professor of Philosophy at Texas State University, and the Director of the Society for the Study of Ethics & Animals.

Sequences

The Moral Weight Project Sequence

Comments

Thanks for all this, Michael. Lots to say here, but I think the key point is that we don't place much weight on these particular numbers and, as you well know and have capably demonstrated, we could get different numbers (and ordinal rankings) with various small changes to the methodology. The main point to keep in mind (which I say not for your sake, but for others, as I know you realize this) is that we'd probably get even smaller differences between welfare ranges with many of those changes. One of the main reasons we get large differences between humans and many invertebrates is the sheer number of proxies and the focus on cognitive proxies. There's an argument to be given for that move, but it doesn't matter here. The point is just that if we were to focus on the hedonic proxies you mention, there would be smaller differences--and it would be more plausible that those would be narrowed further by further research.

If I had more time, I would love to build even more models to aggregate various sets of proxies. But only so many hours in the day!  

Great question, Michael. It's probably fine to use the silkworm estimates for this purpose.

Let me renew my offer to talk. DM me for my Calendly link.

Sorry for the slow reply, Vasco. Here are the means you requested. My vote is that if people are looking for placeholder moral weights, they should use our 50th-percentile numbers, but I don't have very strong feelings on that. And I know you know this, but I do want to stress for any other readers that these numbers are not "moral weights" as that term is often used in EA. Many EAs want one number per species that captures the overall strength of their moral reason to help members of that species relative to all others, accounting for moral uncertainty and a million other things. We aren't offering that. The right interpretation of these numbers is given in the main post as well as in our Intro to the MWP.

Thanks, Vasco!

Short version: I want to discourage people from using these numbers in any context where that level of precision might be relevant. That is, if the sign of someone's analysis turns on three significant digits, then I doubt that their analysis is action-relevant. 

As for medians rather than means, our main concern there was just that means get pulled toward extreme values in skewed distributions, whereas medians are more robust to them. But we can generate the means if it's important!
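To make the worry concrete, here's a toy illustration (the draws below are invented for this comment, not outputs from our models): with a right-skewed set of estimates, a couple of extreme values pull the mean well above the median.

```python
# Toy, invented numbers: a right-skewed set of welfare-range draws.
import statistics

draws = [0.01, 0.02, 0.05, 0.1, 0.2, 0.3, 0.5, 2.0, 5.0]

print(statistics.median(draws))  # 0.2   -- robust to the extreme draws
print(statistics.mean(draws))    # ~0.91 -- pulled upward by 2.0 and 5.0
```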

Finally, I should stress that I'm seeing people use these "moral weights" roughly as follows: "100 chickens = ~33 humans (100*0.332 = ~33)." This is not the way they're intended to be used. Minimally, they should be adjusted by lifespan and average welfare levels, as they are estimates of welfare ranges rather than all-things-considered estimates of the strength of our moral reasons to benefit members of one species rather than another.
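Here's a minimal sketch of the kind of adjustment I have in mind. The function and every number in it are hypothetical illustrations of the arithmetic, not project estimates or a recommended model:

```python
# Hypothetical sketch: converting a welfare change for one species into
# rough human-welfare-equivalent units. All numbers are placeholders.

def human_equivalent_welfare(n_animals, welfare_range, years_affected, avg_welfare_change):
    """Rough human-welfare-equivalent units for a welfare change.

    welfare_range      -- species' welfare range relative to humans (e.g., 0.332)
    years_affected     -- duration of the change, which is where lifespan enters
    avg_welfare_change -- average realized change within that range (-1 to 1)
    """
    return n_animals * welfare_range * years_affected * avg_welfare_change

# The misuse: treating 100 chickens as a flat 100 * 0.332 = ~33 human equivalents.
# The adjusted use: say, a year-long improvement of 0.1 within the welfare range:
print(human_equivalent_welfare(100, 0.332, years_affected=1.0, avg_welfare_change=0.1))  # 3.32
```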

Thanks for all this, Nuno. The upshot of Jason's post on what's wrong with the "holistic" approach to moral weight assignments, my post about theories of welfare, and my post about the appropriate response to animal-friendly results is something like this: you should basically ignore your priors re: animals' welfare ranges as they're probably (a) not really about welfare ranges, (b) uncalibrated, and (c) objectionably biased. 

You can see the posts above for material that's relevant to (b) and (c), but as evidence for (a), notice that your discussion of your prior isn't about the possible intensities of chickens' valenced experiences, but about how much you care about those experiences. I'm not criticizing you personally for this; it happens all the time. In EA, the moral weight of X relative to Y is often understood as an all-things-considered assessment of the relative importance of X relative to Y. I don't think people hear "relative importance" as "how valuable X is relative to Y conditional on a particular theory of value," which is still more than we offered, but is in the right ballpark. Instead, they hear it as something like "how valuable X is relative to Y," "the strength of my moral reasons to prioritize X in real-world situations relative to Y," and "the strength of my concern for X relative to Y" all rolled into one. But if that's what your prior's about, then it isn't particularly relevant to your prior about welfare-ranges-conditional-on-hedonism specifically.

Finally, note that if you do accept that your priors are vulnerable to these kinds of problems, then you either have to abandon or defend them. Otherwise, you don't have any response to the person who uses the same strategy to explain why they assign very low value to other humans, even in the face of evidence that these humans matter just as much as they do.

Great question, Lucas. My hunch is that all the broad conclusions probably apply, though I'd want to think through the details more carefully before standing behind that claim. I suppose one thing that really affects my thinking is whether the organism has to navigate its environment in search of resources. My impression is that the youngest shrimp aren't doing this; they're just being carried along like plankton. So, that lowers my estimation of their capacities to something more like grubs than juvenile crickets. But of course I haven't investigated this at all, so please don't put too much weight on that hot take!

Happy to discuss if it would be helpful; feel free to DM me.

Hi LGS. A few quick points:

  1. You don't know what my intuitions about bees were before we began, nor what they are now. FWIW, I came into this project basically inclined to think of insects as little robots. Reading about them changed what I think I should say. However, my intuitions probably haven't shifted that much. But as we've seen, I place less weight on my intuitions here than you do.
  2. You're ignoring what we say in the post: our actual views, which are informed by the models but not directly determined by them, are that the verts are within one OOM of humans and the inverts are within two OOMs of the verts. The specific values are, as we indicate, just placeholders.
  3. We tried to develop a methodology that makes our estimates depend on the state of empirical knowledge. I'll be the first to point out its limitations. If we're listing criticisms, I'm worried about things like generalizing within taxonomic categories, the difficulty of scoring individual proxies, and the problem of handling missing data--not "hiding our intuitions behind a complex model."
  4. I want to do better going forward. This is the first step in an iterative process. If you have concrete suggestions about how to improve the methodology, please let me know.

Thanks for reading, LGS. As I've argued elsewhere, utilitarianism probably leads us to say equally uncomfortable things with more modest welfare range estimates. I'm assuming you wouldn't be much happier if we'd argued that 10 beehives are worth more than a single human. At some point, though, you have to accept a tradeoff like that if you're committed to impartial welfare aggregation.

For what it's worth, and assuming that you do give animals some weight in your deliberations, my guess is that we might often agree about what to do, though disagree about why we ought to do it. I'm not hostile to giving intuitions a fair amount of weight in moral reasoning. I just don't think that our intuitions tell us anything important about how much other animals can suffer or the heights of their pleasures. If I save humans over beehives, it isn't because I think bees don't feel anything--or barely feel anything compared to humans. Instead, it's because I don't think small harms always aggregate to outweigh large ones, or because I give some weight to partiality, or because I think death is much worse for humans than for bees, or whatever. There are just so many other places to push back.

Appreciate the comment!

Re: further research priorities, there are "within paradigm" priorities and "beyond paradigm" priorities. As for the former, I think the most useful thing would be a more thorough investigation of theories of valence, as I think we could significantly improve the list of proxies and our scoring/aggregation methods if we had a better sense of which theories are most promising. As for the latter, my guess is that the most useful thing would be figuring out whether, given hierarchicalism, there are any limits at all on discounting animal welfare simply because it belongs to animals. My guess is "No," which is one of the problems with hierarchicalism, but it would be good to think this through more carefully.

Re: some animals having larger welfare ranges than humans, we don't want to rule out this possibility, but we don't actually believe it. And it's worth stressing, as we stress here, that this possibility doesn't have any radical implications on its own. It's when you combine it with other moral assumptions that you get those radical implications.
