It shouldn't be used as a unitary measure, but it could be included in a combined measure, which is likely to correlate better.
AB, I think you're looking at a different stage of analysis than we are here. We're looking at how to weigh the intrinsic value of different organisms so that we know how to count them. It sounds to me like you're discussing a later question: once we have decided on a particular way to count them, what actions should we take in order to produce the greatest amount of value?
It's an interesting thought, although I'd note that quite a few prominent authors would disagree that the cortex is ultimately what matters for valence even in mammals (Jaak Panksepp being a prominent example). I think it'd also raise interesting questions about how to generalize this idea to organisms that don't have cortices. Michael used mushroom bodies in insects as an example, but is there reason to think that mushroom bodies in insects are "like the cortex and pallium" but unlike various subcortical structures in the brain that also play a role in integrating information from different sensory sources? I think there needs to be more of a specification, in a principled way, of which types of neurons are ultimately counted.
I appreciate your interesting point! I would note that, as Erich mentioned, we're interested in moral patiency rather than moral agency, and we ultimately don't endorse the idea of using neuron counts. But in response to your comment, there are different ways of trying to spell out why more neurons would matter. Presumably, on some (or most) of those, the way neurons are connected to other neurons matters, and, as you note, the connections between neurons in babies are very different from the connections in older individuals. So I think a defender of the neuron count hypothesis would still be able to say, in response to your point, that it's not just the number of neurons but rather the number of neurons networked together in a particular way that matters.
Here's the report on conscious subsystems: https://forum.effectivealtruism.org/posts/vbhoFsyQmrntru6Kw/do-brains-contain-many-conscious-subsystems-if-so-should-we
Thanks, yeah, I agree those are better than just raw neuron count, and we discuss them a bit more in the longer report. But the objections are meant to apply even to these measures.
Thanks, I agree on these points. Regarding focusing on neurons involved in pain or other emotions: while I agree this would be the ideal thing to look at, the problem is that there is so much disagreement in the literature about issues that would be relevant for deciding which neurons/brain areas to include. Positions range from thinking that certain emotions can be localized to very specific regions to thinking that almost the whole brain is involved in every type of experience, with lots of positions in between. So for that reason we tried to focus on more general criticisms.
No, it's not a reference to Goodhart's Law.
It's just that one reason for liking neuron counts is that we have relatively easy ways of measuring the number of neurons in a brain (or at least relatively easy ways of coming up with a good estimate). However, as noted, there are a lot of other things that are relevant if what we really care about is information-processing capacity, so neuron count isn't an accurate measure of information-processing capacity.
But if we focus on information-processing capacity itself, we no longer have a good way of easily measuring it (because of all the other factors involved).
This framing comes from Bob Fischer's comment on an earlier draft, btw.