Jason Schukraft

Dr. Jason Schukraft is a Senior Research Manager at Rethink Priorities. He earned his Ph.D. in philosophy from the University of Texas at Austin.

Differences in the Intensity of Valenced Experience across Species

Hey Michael,

Thanks for engaging so deeply with the piece. This is a super complicated subject, and I really appreciate your perspective.

I agree that hidden qualia are possible, but I’m not sure there’s much of an argument on the table suggesting they exist. When possible, I think it’s important to try to ground these philosophical debates in empirical evidence. The split-brain case is interesting precisely because there is empirical evidence for dual seats of consciousness. From the SEP entry on the unity of consciousness:

In these operations, the corpus callosum is cut. The corpus callosum is a large strand of about 200,000,000 neurons running from one hemisphere to the other. When present, it is the chief channel of communication between the hemispheres. These operations, done mainly in the 1960s but recently reintroduced in a somewhat modified form, are a last-ditch effort to control certain kinds of severe epilepsy by stopping the spread of seizures from one lobe of the cerebral cortex to the other. For details, see Sperry (1984), Zaidel et al. (1993), or Gazzaniga (2000).

In normal life, patients show little effect of the operation. In particular, their consciousness of their world and themselves appears to remain as unified as it was prior to the operation. How this can be has puzzled a lot of people (Hurley 1998). Even more interesting for our purposes, however, is that, under certain laboratory conditions, these patients seem to behave as though two ‘centres of consciousness’ have been created in them. The original unity seems to be gone and two centres of unified consciousness seem to have replaced it, each associated with one of the two cerebral hemispheres.

Here are a couple of examples of the kinds of behaviour that prompt that assessment. The human retina is split vertically in such a way that the left half of each retina is primarily hooked up to the left hemisphere of the brain and the right half of each retina is primarily hooked up to the right hemisphere of the brain. Now suppose that we flash the word TAXABLE on a screen in front of a brain bisected patient in such a way that the letters TAX hit the left side of the retina, the letters ABLE the right side, and we put measures in place to ensure that the information hitting each half of the retina goes only to one lobe and is not fed to the other. If such a patient is asked what word is being shown, the mouth, controlled usually by the left hemisphere, will say TAX while the hand controlled by the hemisphere that does not control the mouth (usually the left hand and the right hemisphere) will write ABLE. Or, if the hemisphere that controls a hand (usually the left hand) but not speech is asked to do arithmetic in a way that does not penetrate to the hemisphere that controls speech and the hands are shielded from the eyes, the mouth will insist that it is not doing arithmetic, has not even thought of arithmetic today, and so on—while the appropriate hand is busily doing arithmetic!

So I don’t think it’s implausible to assign split-brain patients 2x moral weight.

I also think it’s possible to find empirical evidence for differences in phenomenal unity across species. There’s some really interesting work concerning octopuses. See, for example, “The Octopus and the Unity of Consciousness”. (I might write more about this topic in a few months, so stay tuned.)

As for the paper, it seems neutral between the view that the raw number of neurons firing is correlated with valence intensity (which is the view I was disputing) and the view that the proportional number of neurons firing (relative to some brain region) is correlated with valence intensity. So I’m not sure the paper really cuts any dialectical ice. (Still a super interesting paper, though, so thanks for alerting me to it!)

Differences in the Intensity of Valenced Experience across Species

Hi Michael,

Thanks for the comment and thanks for prompting me to write about these sorts of thought experiments. I confess I’ve never felt their bite, but perhaps that’s because I’ve never understood them. I’m not sure what the crux of our disagreement is, and I worry that we might talk past each other. So I’m just going to offer some reactions, and I’ll let you tell me what is and isn’t relevant to the sort of objection you’re pursuing.

  1. Big brains are not just collections of little brains. Large brains are incredibly specialized (though somewhat plastic).

  2. At least in humans, consciousness is unified. Even if you could carve out some smallish region of a human brain and put it in a system such that it becomes a seat of consciousness, that doesn’t mean that within the human brain that region is itself a seat of consciousness. (Happy to talk in much more detail about this point if this turns out to be the crux.)

  3. Valence intensity isn’t controlled by the raw number of neurons firing. I didn’t find any neuroscience papers that suggested there might be a correlation between neuron count and valence intensity. As with all things neurological, the actual story is a lot more complicated than a simple metric like neuron count would suggest.

  4. Not sure where this fits in, but if you yoke two brains together, it seems to me you’d have two independent seats of consciousness. There’s probably some way of filling out the thought experiment such that that would not be the case, but I think the details actually matter here, so I’d have to see the filled-out thought experiment.

Research Summary: The Subjective Experience of Time

Cool, thanks Michael, I hadn't seen that. (And thanks to Antonia as well for writing the summary!)

Parenting: Things I wish I could tell my past self

Hey Ruth,

Unfortunately, I don't have an answer, but I just wanted to tell you that you're not alone! My wife and I both struggled with sleep deprivation for a long time. Our two kids didn't consistently sleep through the night until ~21 months. I became pretty good at stealing a 20-minute nap whenever the opportunity presented itself, but other than that, I didn't find a solution...

Parenting: Things I wish I could tell my past self

As the parent of two young children, I was really pleased to see this post on the EA Forum.

I'll echo the bit about the importance of having support networks. Parenting is really hard in unexpected ways, and having other parents with whom to share your strange hardships is really comforting. (I have so many potty training horror stories that only other parents could possibly appreciate.)

That said, I also think it's really important to cultivate a support network of non-parent friends. It's pretty easy (at least for me, especially when I was a stay-at-home dad for 18 months) to let your kids become your whole identity. It's sometimes a relief to talk about anything but my kids, just to remind myself that I'm an independent human with his own thoughts and interests.

In addition to being full of misinformation and pseudo-science, many parenting books also give the false impression that once you reach certain milestones, parenting magically becomes super easy. I remember being convinced that as soon as my kids could sleep through the night, my job was pretty much done. In reality, parenting is a marathon, not a sprint. I don't wake up in the middle of the night anymore, but the sheer willpower that a 3-year-old can display when he doesn't want to get dressed for the day is draining in its own unique way.

Contra Michelle's experience, I did change a bit as a person, sometimes in surprising ways. (For instance, before I had kids I would watch sports for hours on the weekend, and my subjective well-being rose and fell with the fortunes of my favorite teams. For whatever reason, I've now completely lost interest in sports, and for the life of me, I can't remember why I spent all those hours glued to the TV.)

One last thing, in case it's not obvious: parenting can be incredibly rewarding. Earlier this year my 5-year-old daughter donated, of her own volition and without pressure from me, a portion of her allowance to Evidence Action's Deworm the World Initiative. The pride I felt is pretty close to indescribable. (Obviously I helped her pick the charity, based on her goal to "help kids who aren't as lucky as I am.")

Research Summary: The Subjective Experience of Time

Thanks, that’s a great question!

Welfare is constituted by those things that are non-instrumentally good or bad for the creature. Insofar as reflexes are unconscious, they probably are not non-instrumentally good or bad. (They are, of course, often instrumentally good; they help the creature get other things that are good for it.) Conscious experiences, on the other hand, are usually non-instrumentally good or bad. Experiences with a positive valence are non-instrumentally good; experiences with a negative valence are non-instrumentally bad. (Experiences that are perfectly neutral may not be non-instrumentally good or bad; experiences can also be instrumentally useful in a variety of ways.)

Differences in the subjective experience of time—assuming they exist—are relevant to welfare (both realized welfare and capacity for welfare) because they reflect differences in the amount of experience a creature undergoes per unit of objective time. I write about the moral importance of the subjective experience of time in this part of the first post.

You’re right that there are other aspects of temporal perception that may not be directly relevant to welfare. We already know that there are differences in temporal resolution (roughly: the rate at which a perceptual system samples information about its environment) across species. Enhanced temporal resolution may, among other things, enable faster unconscious reflexes. Naturally, the speed of a creature’s reflexes will indirectly contribute to its welfare, but those unconscious reflexes won’t be part of what constitutes the creature’s welfare. Whether or not there is a correlation between temporal resolution and the subjective experience of time is an open question, one that I explore in depth in the second post.

Hope that clarifies things a bit for you, but if not, please ask a follow-up question!

Does Critical Flicker-Fusion Frequency Track the Subjective Experience of Time?

Great, thanks Michael - that clarifies the argument for me.

Premise 1: Any observed conscious temporal resolution frequency for an individual X (within some set of possible conditions C) is a lower bound for the maximum frequency of subjective experience for X (within C).

While I think it's plausible that one's temporal resolution sets some sort of bound on one's rate of subjective experience, I just want to reiterate that I believe this is an empirical claim, not a conceptual claim. I'm open to the possibility that temporal resolution is just totally irrelevant to the subjective experience of time.

(As an aside, I think we have to be a bit careful how we (myself included) use the word 'conscious' in this context. In the post I distinguish behavioral methods for determining CFF from ERG methods for determining CFF. But even bees can be trained on the behavioral paradigm. This of course doesn't settle the question of whether they're conscious.)

Does it make sense to interpret the rate of subjective experience as a frequency, the number of subjective experiences per second? Maybe our conscious experiences are not sufficiently synchronized across our brains for such an interpretation?

This is another good question for which I don't have the answer. A related issue is whether experiences are discrete (countable) in the relevant sense. There are arguments that pull in either direction here. But, just to clarify, even if experiences are countable in the relevant sense, it would be an astounding coincidence if our experience frequency exactly matched our critical flicker-fusion frequency (i.e., 60 experiences per second).

Does Critical Flicker-Fusion Frequency Track the Subjective Experience of Time?

Hi Michael,

Thanks for the interesting argument. Before I can evaluate it, however, I'd need you to clarify your terms a bit for me. In particular, I'd need to know more about what you mean by "frequency of conscious experience." Based on my best reconstruction of the argument, it can't mean temporal resolution or rate of subjective experience.

I'll try to clarify my position a bit, in case it's helpful to you or other readers. I don't think there's an a priori connection between temporal resolution (as measured by CFF or any other method) and rate of subjective experience. If there's a correlation between the two, that's a contingent empirical fact. There is no conceptual tension between the claim that a creature consciously perceives the flicker-to-steady-glow transition at some high threshold (200 Hz vs. 60 Hz for humans, say) and the claim that the creature has the same rate of subjective experience as a typical human. (Similarly, there is no conceptual tension between the claim that some creature consciously perceives the transition at the same threshold as humans and the claim that it has a different rate of subjective experience.) It's tempting to think that temporal resolution is like the frame rate of a video, and that as the temporal resolution goes up or down, so too must the rate of subjective experience. But the mechanisms that govern the intake and processing of perceptual information are a lot more complicated than that, and the mechanisms that govern the subjective experience of time appear to be more complicated still.

One analogy that is sometimes helpful to me is to think of (visual) temporal resolution as a measure of motion blur. As one's temporal resolution improves, motion blur is reduced. But changes in motion blur need not have any connection to temporal experience. When I'm drunk, my motion blur greatly increases, but my rate of subjective experience doesn't change.

(Also, apologies if in elaborating my position I've missed the point of your argument. Like I said, it looks interesting, I just need to understand the terms better to evaluate it.)

Does Critical Flicker-Fusion Frequency Track the Subjective Experience of Time?

Doesn't using behavioural studies based on trained behaviour avoid this concern?

Thanks, this is a good question. The short answer is no, it doesn’t. The longer answer is a bit more complicated.

Nobody denies that differences in CFF generate differences in perceptual experience. But differences in perceptual experience are cheap. As I say in the post, the values I discuss are maximum CFF thresholds (that is, the highest CFF an individual can register in any condition). One’s actual CFF threshold is constantly shifting due to differences in things like background lighting conditions. So a light that an individual perceives as flickering in one situation may be perceived as glowing steadily in a different situation. The question is whether maximum CFF thresholds correlate with differences in subjective temporal experience.

Differences in one’s perceptual experience affect what one’s body can do unconsciously. Balancing on one foot with one’s eyes open is much easier than balancing on one foot with one’s eyes closed. The reason is that your visual system allows your body to make continual microadjustments to stay balanced.

So if differences in visual temporal resolution (as measured by CFF) confer a fitness advantage only in virtue of improvements in unconscious movements, we shouldn’t expect differences in CFF to be correlated with differences in subjective temporal experience. As I explain in the post, the temporal resolution of one’s senses doesn’t directly govern the subjective experience of time. If differences in temporal resolution correlate with differences in subjective temporal experience, it’s probably because improvements in temporal resolution make improvements in the subjective experience of time more useful (and/or vice versa).

Did the CFF estimates in your table come from behavioural studies or ERG studies, or both?