When you comment on your vote on the debate week banner, your comment will appear on this thread. Use this thread to respond to other people's arguments, and discuss the debate topic.
You should also feel free to leave top-level[1] comments here even if you haven't voted. As a reminder, the statement is "It would be better to spend an extra $100m on animal welfare than on global health".
If you’re browsing this thread, consider sorting by “New” and interacting with posts that haven’t been voted on or commented on yet. There are a lot of comments!
Also, perhaps don’t vote karma below zero on low-effort submissions; we don’t want to discourage low-effort takes on the banner.
[1] The first comment in a thread is a top-level comment.
My sequence might also be helpful. I didn't come up with too many directly useful estimates, but I looked into implications of desire-based and preference-based theories for moral weights and prioritization, and I would probably still prioritize nonhuman animals on such views. I guess most importantly:
- The quantity of attention, in roughly the most extreme case in my view, could scale proportionally with the number of (relevant) neurons, so humans would have, as a first guess, ~400 times as much moral weight as chickens. OTOH, I'd actually guess there are decreasing marginal returns to additional neurons, e.g. it could scale more like the logarithm or the square root of the number of neurons (see the sketch after this list). And it might not really scale with the number of neurons at all.
- People probably just have different beliefs about how much their own suffering matters, and these beliefs are plausibly not interpersonally comparable at all.
- Some people may find it easier to reflectively dismiss or discount their own suffering than others, for various reasons like particular beliefs or greater self-control. If interpersonal comparisons are warranted, this could just mean these people care less about their own suffering in absolute terms on average, not that they care more than average about other things. Other animals probably can't easily dismiss or discount their own suffering much, and their actions follow pretty directly from their suffering and other felt desires, so they might even care more about their own suffering in absolute terms on average.
- We can also imagine moral patients with conscious preferences who can't suffer at all, so we'd have to find something else to normalize by to make interpersonal comparisons with them.
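To make the first point concrete, here's a minimal sketch (Python) of how the human:chicken ratio moves under each scaling assumption. The neuron counts (~86 billion for humans, ~220 million for chickens) are my own rough assumptions, chosen to approximately reproduce the ~400x linear figure above, not something from the debate statement:

```python
# Rough sketch: human:chicken moral-weight ratio under different
# assumptions about how moral weight scales with neuron count.
import math

# Assumed approximate neuron counts (my rough figures, for illustration only).
HUMAN_NEURONS = 86e9
CHICKEN_NEURONS = 2.2e8

# Candidate scaling functions from the point above.
scalings = {
    "linear": lambda n: n,
    "square root": math.sqrt,
    "logarithm": math.log,
}

for name, f in scalings.items():
    ratio = f(HUMAN_NEURONS) / f(CHICKEN_NEURONS)
    print(f"{name:>11}: humans ~{ratio:,.1f}x the moral weight of chickens")
```

Under these assumed counts, linear scaling gives ~390x, square-root scaling ~20x, and logarithmic scaling only ~1.3x, so the choice of scaling does almost all of the work in how much extra weight humans get.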