1) Differing views on, or uncertainty about, the moral relevance of different qualia.
It's unclear that physical pain is the same experience for humans, cats, fish, and worms.
Even if it is the same mental experience, the moral value may differ due to the lack of memory or higher brain function. For example, I think there's a good argument that pain that isn't remembered, for instance because of the use of scopolamine, is less bad (though still morally relevant) than pain that is remembered. Beings incapable of remembering or anticipating pain would have intrinsically less morally relevant experiences - perhaps far less.
2) Higher function as a relevant factor in assessing the moral badness of negative experiences.
I think that physical pain is bad, but considered in isolation, it's not the worst thing that can happen. Suffering includes the anticipation of something bad, the memory of it occurring, the appreciation of time and lack of hope, etc. People would far prefer to have one hour of pain with the knowledge that it would be over at that point than to have one hour of pain without being sure when it would end. They'd also prefer to know when the pain would occur, rather than have it be unexpected. These factors seem to significantly change the moral importance of pain, perhaps by orders of magnitude.
3) Different value due to potential for positive emotion.
If joy and elation are only possible for humans, humans may have higher potential for moral value than animals. This would be true even if their negative potential were the same. In such a case, we might think that the loss of potential was morally important, and say that the death of a human, with the potential for far more positive experience, is more morally important than the death of an animal.
Some more related articles:
Is Brain Size Morally Relevant? by Brian Tomasik
Quantity of experience: brain-duplication and degrees of consciousness by Nick Bostrom
I also wrote an article about minimal instantiations of theories of consciousness: Physical theories of consciousness reduce to panpsychism.
My initial intuition is the same here; it seems like nothing is lost, only things are added. I suppose one could object that the independence of the parts is actually lost, and that this could make the parts less conscious than they would have been if separate (although there is now also a greater whole that's more conscious). But I'm not sure why this should be the case; why shouldn't it make the parts more conscious? One reason to believe it would reduce the consciousness in each part (whether the total amount of consciousness in the system increases or decreases) is that, if the connections are reoptimized, there's new redundancy; the parts could be readjusted to accomp…