I live for a high disagree-to-upvote ratio
Small drive-by question for you: in your opinion, if C. elegans is conscious and has some moral significance, and we could hypothetically train artificial neural networks to simulate a C. elegans, would the resulting simulation have moral significance?
If so, what other consequences flow from this—do image recognition networks running on my phone have moral significance? Do LLMs? Are we already torturing billions of digital minds?
If not, what special sauce does C. elegans have that an artificial neural network does not? (If you’re not sure, where do you think the difference might lie?)
(Asking out of genuine curiosity; I haven’t had much time to engage with this stuff)
I guess I don’t find your conclusion intuitive. I’m sure there is a range of preference questions you could ask these extreme sufferers: for example, whether, at a 5/10 life satisfaction, they would trade places with someone in a low-income country who reports a life satisfaction of 2/10 but does not have their condition.
My hunch is that the former is true: there is something you can elicit from these people that isn’t being captured by the Cantril Ladder. (In my work, we’ve found the Cantril Ladder to be unreliable in other ways.) But on the other hand, I do worry about rejecting people’s own accounts of their experiences; it may literally be true that these people are somewhat happy with their lives, and that we should focus our resources on those who report that they aren’t!
Why do you think people who suffer so frequently and deeply rate their life satisfaction relatively highly?
(My best sense is some combination of:
Like, I can’t see a reason why wellbeing measures shouldn’t, in theory, capture these extremely negative states.
I am surprised at this, only because I remember the Gulf states were quite keen on bringing production into their countries, and I would’ve thought they’d have declared it halal sooner!