
This post provides a summary of my working paper “Welfare and Felt Duration.” The goal is to make the content of the paper more accessible and to add context and framing for an EA audience, including a more concrete summary of practical implications. It’s also an invitation for you to ask questions about the paper and/or my summary of it, to which I’ll try to reply as best I can below.

What’s the paper about? 

The paper is about how duration affects the goodness and badness of experiences that feel good or bad. For simplicity, I mostly focus on how duration affects the badness of pain. 

In some obvious sense, pains that go on for longer are worse for you. But we can draw some kind of intuitive distinction between how long something really takes and how long it is felt as taking. Suppose you could choose between two pains: one feels longer but is objectively shorter, and the other feels shorter but is objectively longer. Now the choice isn’t quite so obvious. Still, some people are quite confident that you ought to choose the second: the one that feels shorter. They think it’s how long a pain feels that’s important, not how long it is. The goal of the paper is to argue that that confidence isn’t warranted.  

Why is this important? 

This issue affects the moral weights assigned to non-human animals and digital minds. 

The case for thinking that subjective time experience varies across the animal kingdom is summarized in this excellent post by Jason Schukraft, which was a huge inspiration for this paper. One particular line of evidence comes from variation in the critical flicker-fusion frequency (CFF), the frequency at which a light source that’s blinking on and off is perceived as continuously illuminated. Some birds and insects can detect flickering that you and I would completely miss unless we watched a slow-motion recording. That might be taken to indicate that time passes more slowly from their subjective perspective, and so, if felt duration is what matters, that suggests we should give additional weight to the lifetime welfare of those animals. In line with that idea, Jason’s research has motivated using CFFs to inform the assignment of moral weights at Rethink Priorities, as outlined here.

A number of people also argue that digital minds could experience time very differently from us, and here the differences could get really extreme. Because of the speed advantages of digital hardware over neural wetware, a digital mind could conceivably be run at speeds many orders of magnitude higher than the brain’s own processing speed, which might again lead us to expect that time will be felt as passing much more slowly. As above, this may be taken to suggest that we should give those experiences significantly greater moral weight. Among other places, this issue is discussed by Carl Shulman and Nick Bostrom in their paper on digital minds.

What’s the argument? 

You can think of the argument of the paper as having three key parts.

Part 1: What is felt duration?

The first thing I want to do in the paper is emphasize that we don’t really have a very strong idea of what we’re talking about when we talk about the subjective experience of time. That should make us skeptical of our intuitions about the ethical importance of felt duration.

It seems clear that it doesn’t matter in itself how much time you think has passed: e.g., if you think the pain went on for six minutes, but actually it lasted five. If subjective duration is going to matter, it can’t be just a matter of your beliefs about time’s passage. Something about the way the pain is experienced has got to be different. But what exactly? I expect you probably don’t have an obvious answer to that question at your fingertips. I certainly don’t. It’s also worth noting that some psychologists who study time perception claim that we can’t distinguish empirically between judged and felt duration, whereas others who think we can make this distinction also claim that people frequently mix them up, especially when it comes to reported feelings of time passing quickly. 

Part 2: What felt duration could be

At the next stage, I look at theories of what felt duration consists in. The idea is that once we have a theory of what the subjective rate of experience really is, we’ll be in a much better position to say whether it’s the sort of thing we ought to care about for its own sake. I claim it isn’t.

One theory I consider is the cognitivist theory of felt duration, favoured by Valtteri Arstila and Ian Phillips. Very roughly, this says that our experience of the passage of time arises from the fact that we’re aware of external events in relation to our own stream of conscious thoughts. When there’s a big increase in the volume of conscious thought occurring alongside some experienced event, the event feels longer. That seems plausible enough. But it also seems plausible that it doesn’t matter in and of itself how quickly your conscious thoughts move in relation to external events while you’re in pain. If there’s a suitable change in the content of your thoughts and the way they interact with your pain experience, that could potentially make a difference for better or for worse, but the speed of conscious thought relative to external processes surely doesn’t matter in and of itself to pain’s badness.

Another theory I consider is the quantum theory of felt duration, favoured historically by Karl Ernst von Baer and more recently by Carla Merino-Rajme. This theory assumes that experience isn’t continuous. It’s divided up across discrete experiential frames, a bit like the frames in a film reel. The more of these experiential frames that make up your experience of an event, the longer it feels. This also strikes me as plausible. But the only plausible explanation I can think of for why it should matter how many of these frames divide up your experience is something like the following. If your pain experience isn’t continuously ‘on’, but instead made up of lots of little bursts of pain, then it could be that those bursts of pain are packed more densely in time when time feels like it’s passing slowly, as a result of which more time overall could end up being filled with pain as opposed to non-pain. That does sound like it’s got to be worse for you. But this is also extremely speculative. It’s also ultimately a story on which the pain is worse because it fills more objective time, so it doesn’t actually support the view that subjective time experience matters in itself. 
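The burst-packing story can be made concrete with a toy model (my own illustration, not from the paper): suppose each experiential frame is a burst of pain lasting a fixed amount of objective time, so the total pain-filled time depends only on how many bursts occur.

```python
# Toy model (illustrative assumption): each experiential frame is a burst
# of pain lasting a fixed stretch of objective time, here 10 milliseconds.
FRAME_MS = 10

def pain_filled_time_ms(n_frames, frame_ms=FRAME_MS):
    """Total objective time (in ms) filled with pain, given n discrete bursts."""
    return n_frames * frame_ms

# Two one-second episodes: when time "feels slow", frames are packed more
# densely, so more bursts fit into the same objective second.
dense = pain_filled_time_ms(n_frames=90)   # time felt as passing slowly
sparse = pain_filled_time_ms(n_frames=30)  # time felt as passing normally

# The "slow" episode is worse, but only because more objective time is
# filled with pain: the explanation bottoms out in objective duration.
assert dense == 900 and sparse == 300
```

The frame count and frame duration are arbitrary; the point is just that on this story the comparison is ultimately settled by how much objective time is filled with pain.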

Part 3: Rebutting an argument from digital simulations

The final part addresses a thought experiment that a lot of people raised when I was discussing the ideas in the paper. Imagine a digital simulation of someone’s experience. Imagine varying the speed at which the simulation runs by changing the clock speed on the hardware running it. A lot of people have the intuition that that doesn’t make any difference for how good or bad it is for the simulated people we’re creating. After all, they can’t tell the difference: their experiences are subjectively indistinguishable. 

I reject the assumption that subjectively indistinguishable experiences of pleasure or pain are equally good or bad. Suppose, plausibly, that what it is for two experiences to be subjectively indistinguishable is that there exists some one-to-one mapping among the instants that make up those experiences so that you can’t tell apart any instants mapped to one another. Insofar as that’s right, we should reject the idea that subjectively indistinguishable pains are equally good or bad. Note, for example, that f(t) = 2t is a one-to-one mapping between [0, 1] and [0, 2], and so if there is a pain (one that’s continuously ‘on’) lasting exactly one second and another lasting exactly two seconds, and if those pain experiences are qualitatively exactly the same at every instant they occur, then they’re subjectively indistinguishable on this analysis. But the two-second pain is surely worse.
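The structure of this counterexample can be illustrated with a discretized sketch (my own construction; objective time is sampled at one-millisecond instants to keep the arithmetic exact):

```python
# A one-second pain and a two-second pain, each constant in quality,
# represented as lists of instants of objective time (in milliseconds).
def instants(duration_ms, step_ms=1):
    """The instants of objective time making up an experience."""
    return list(range(0, duration_ms, step_ms))

short_pain = instants(1000)  # constant pain, one second
long_pain = instants(2000)   # same quality at every instant, two seconds

# The mapping t -> 2t pairs each instant of the short pain with an instant
# of the long one (a genuine bijection in the continuous case).
mapping = {t: 2 * t for t in short_pain}

# Every mapped pair is subjectively identical, because the quality is
# constant at every instant of both experiences by stipulation ...
def quality(t):
    return "pain"  # the same at every instant

assert all(quality(t) == quality(mapping[t]) for t in short_pain)

# ... yet the long pain fills twice as much objective time.
assert len(long_pain) == 2 * len(short_pain)
```

So per-instant indistinguishability under some one-to-one mapping is compatible with a large difference in total objective duration, which is the crux of the argument.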

The arguments given in the paper itself are obviously more careful and detailed. There are also a bunch of issues covered in the paper that I’ve completely left out of this summary: whether the theory of relativity makes it impossible to assign an objective duration to a valenced experience; whether we really have evidence that conscious experience is discrete as opposed to continuous, and in what sense; whether the 'amount' of conscious thought occurring in a given time period can be meaningfully defined and measured in a way that allows for interspecies comparisons; and much more besides. 

What are the practical upshots? 

I think we should significantly reduce our credence that subjective time experience modulates welfare. As a result, we should give less weight to subjective time experience when assigning moral weights to animals and digital minds in order to set priorities. 

To give some sense of this, in 2020 Jason Schukraft reported a 70% credence that there exist morally relevant differences in the rate of subjective experience across the animal kingdom. I currently think something in the range of 10-30% is more plausible, though I don’t think my views on this are very stable or especially well-considered. What’s important to note is that the lower range I favour isn’t explained by the fact that I think we should be more skeptical than Jason that there is variation in the rate of subjective experience. Instead, I think we should be more skeptical that that kind of variation is morally significant. That means putting less weight than we might have done on the welfare of small, high-metabolism animals, such as birds like the pied flycatcher (CFF: 146 Hz), and more weight than we might have done on the welfare of larger, slower animals, like the leatherback turtle (CFF: 15 Hz). It also means putting significantly less weight on the welfare of fast-paced digital minds than we might have done, and thus potentially significantly reducing our estimate of the contribution of digital minds to total welfare over future time.

Still, it’s important to keep in mind some caveats and limitations of the conclusions I draw. In particular, a lot of what I focus on is the question of whether subjective time experience matters in and of itself, i.e., holding fixed things like objective duration, intensity, etc. It’s compatible with that idea that differences in the rate of subjective time experience tend to bring about other kinds of changes that are morally significant in their own right, like differences in felt intensity or differences in objective duration. I don’t think we currently have good evidence that that’s the case, but it’s also very much an open question. 

Bonus content

Jason gives an argument for thinking that it’s subjectively experienced time that matters, which appeals to an analogy with intensity; John Firth at GPI also pressed me on this in conversation. I didn’t address this argument in the paper, as it’s already longer than many philosophy journals are happy to consider. Instead, I’ll address the argument now. 

Here’s the argument from Jason:

“Two subjects might be exposed to a negative stimulus (an electric shock, say) of the same intensity but differ with respect to the felt badness of the subsequent pain. Such a difference is subjective (in that the painfulness of the shock is relative to the subject being shocked) but no less genuine in virtue of the subjectivity. What matters morally is not the objective intensity of the stimulus, but the subjective badness of the experience. ... [I]f it is the perceived intensity of a stimulus that matters morally, rather than any objective feature of the stimulus, it’s unclear why we shouldn’t apply the same reasoning to duration.”

(The dots here link parts of the original post that are actually somewhat far apart, but that strike me as parts of a single argument.)

In my view, applying the same reasoning to duration does not support using subjective time experience as opposed to objective duration to assign moral weights. When it comes to intensity, we can talk about the intensity of the noxious stimulus - e.g., the temperature of the stove you accidentally touch - or of the painful sensation it evokes. When it comes to duration, we can draw a similar distinction: a distinction between the duration of the noxious stimulus - how long you held your hand on the stove - and the duration of the painful sensation it evokes. As I see it, the analogue of saying that we don’t care about the intensity of the stimulus, but only about the intensity of the sensation, is that we shouldn’t care about the duration of the noxious stimulus but only about the duration of the painful sensation. That seems right: it doesn’t matter how long your hand was actually in contact with the stove; what matters is how long the painful burning sensation endures thereafter. But that’s totally compatible with measuring the duration of a pain in clock time.

The analogue of caring about the subjective time filled by an experience of pain rather than its objective duration would seem to be saying that you shouldn’t care about how intense a sensation is, but how intense it seems. It’s not clear if that’s a meaningful claim, let alone a plausible one. 






Thanks for sharing, Andreas!

I think the computational equivalence argument is quite compelling:

The argument appeals to the fact that the same computation occurs each time. In addition, it relies on the idea that if what goes on in the head and gives rise to the mind is Turing-style computation, then the phenomenal component of lifetime welfare must be the same whenever the same underlying computational processes are reproduced, regardless of elapsed time. A Turing machine model of computation, after all, has nothing in it corresponding to the flow of time. The machine’s computation is defined in terms of the sequence of configurations yielded by the starting configuration. There’s nothing in the model corresponding to the amount of time the machine spends in a given configuration or requires when transitioning from one configuration to another. If the mind is essentially Turing machine-style computation, the time the computation needs in order to complete when physically instantiated ought to be irrelevant to the character of mind, and so to the phenomenal component of lifetime welfare.

You reject the above argument, saying:

Clearly, conscious experience unfolds in time, and any fully adequate account of consciousness as an empirical phenomenon needs to be able to account for this. On some views, temporal properties of conscious experience end up playing an essential role in determining the representational content of consciousness, because experience represents temporal properties in the world based on a mirroring principle: an experience of change requires a change in experience, and, more generally, any experience representing any temporal property must itself instantiate the property it represents (Phillips 2014). However, we need not agree with this controversial position in order to recognize the more basic point that atemporal models of the basis of consciousness should be presumed to be incomplete.

However, I am not convinced conscious experience unfolds in time. Simulations of digital minds would presumably be run in a digital computer, so I would intuitively guess their conscious experience could be the sum of discrete/digital experiences instead of the integral of a continuous experience.

More broadly, "digital physics is a speculative idea that the universe can be conceived of as a vast, digital computation device, or as the output of a deterministic or probabilistic computer program". As an example, mechanics can be described with time as a dependent rather than independent variable.

Relatedly, I liked the discussion between Joscha Bach and Spencer Greenberg in this podcast about whether the universe is discrete or continuous:

[JOSCHA:] So a different perspective that I have on the world is that, for instance, the notion of continuous space as a computer scientist doesn't make a lot of sense. Because it's not actually computable. I'm not able to build some kind of lattice that has an infinite density and compute transitions in it at infinite resolution in a finite number of steps. I can define this in abstract mathematics, but the languages that are required to define it have contradictions of the nature that Gödel has discovered. So the only thing that me and mathematicians can ever do is to work with this finite view of lattices when we want to describe some continuous space. And what we mean by continuous space in physics and mathematics, ultimately, is a space that is composed of too many locations to count. And the objects that are moving through this space might be consisting of too many parts to count. As a result, you need to use operators that converge in the limit, and a set of operators over too many parts to count that converge in the limit is roughly geometry. It's a particular kind of trick of computing things. And some of the stuff in geometry is not computable in the sense that you're able to get to a perfect result. So imagine that you try to do a rotation of an object in your computer with finite resolution. If you don't preserve the original object, and you do this a number of times, then the object will lose its shape. It will fall apart because of rounding errors. And if you want to get rid of the rounding errors, you need to use tricks. You need to store the original shape of the object, and reload it from time to time, or something like this. So in practice, these things matter. They don't matter in this kind of mathematics, where you can perform infinitely many steps in a finite amount of time. But in any practical sense, this doesn't work.
So this is basically the transition that my own mind has made from the mathematical tradition that existed before the last century and the one that was invented in the last one. Actually, constructive mathematics is much older than this. But in classical mathematics, constructive mathematics, I think, was seen as a weird aberration. And from the perspective of computation, it's part of mathematics that actually works.

SPENCER: Okay, I think I'm beginning to home in on our philosophical differences. I think one thing is I'm not that confident that the universe is computable. So when you say, “Well, you can't really have an infinitely fine grid, you can't have continuous space.” I'm not that confident in that. I'm not saying that the universe is definitely not computable. I just feel undecided on that question. I feel like you're more confident it's computable. Is that right?

JOSCHA: So do you think that universe exists?

SPENCER: Sure. And yeah, in some definitions it exists, absolutely.

JOSCHA: So what does exist mean?

SPENCER: That's a tough one. [laughs] It is there... The stuff happening... Yeah, I don't know how to define existence. Do you have a better definition?

JOSCHA: I don't know. From my own perspective, for something to exist, it needs to be implemented. And something exists to the degree that it's implemented. That's also true for highly abstract objects: tables exist “kinda, sorta”. They exist as long as you squint very hard, but there are borderline cases where it's not clear whether it's a table or not. And when you zoom in very hard, it's just all a bunch of atoms. And so at which point does the table start, and other things end? It's not that clear. So the table exists to the degree that it's implemented. It exists in a certain context, in a certain coarse-grained description. And for coarse-grained objects, I think it's completely obvious that they only exist to the degree that they're implemented. So, you could say that the financial system exists to a certain degree of approximation to the degree to which it is actually implemented. There is a part of the financial system that is a fiction, that is not actually implemented. And that is changing under our eyes and is melting away and so on. But there is a part that is rock-hard implemented, and that is not a fiction. But it's an approximation that changes from time to time. And the physical universe, I think, in order to exist — for instance, to say that electrons exist, the electron needs to be implemented in some sense. — So you could say, “I'm not sure if the universe exists, but electrons exist.” So I can talk about them, because I can measure them, I can interact with them, and so on. They exist to the degree that they are implemented. What does it mean for an electron to be implemented? It means that you have to have a type of particle that has a spin like this, and a charge like that. And spin and charge are defined as interactions with other things that play out in this way. So electrons are a particular way to talk about patterns of information.
To say that the universe exists means that a certain causal structure exists that gives rise to the observations that I make in a regular fashion. And this, to me, means there is an implementation of some sort. And I can talk about the existence of the universe to the degree that I'm able to discover a language in which I can talk about its existence. So the inconvenient thing is, if I am unable to describe what existence means, then it could imply that existence doesn't mean anything and the universe doesn't actually exist.

I may go listen to the podcast if you think it settles this more, but on reading it I'm skeptical of Joscha's argument. It seems to skip the important leap from "implemented" to "computable". Why does the fact that our universe takes place in an incomputable continuous setting mean it's not implemented? All it means is that it's not being implemented on a computer, right?

Interesting point.

Why does the fact that our universe takes place in an incomputable continuous setting mean it's not implemented?

I do not think we have any empirical evidence that the universe is:

  • Continuous, because all measurements have a finite sensitivity.
  • Infinite, because all measurements have a finite scale.

Claiming the universe is continuous or infinite requires extrapolating infinitely far from observed data. For example, to conclude that the universe is infinite, people usually extrapolate from the universe being pretty flat locally to it being perfectly flat globally. This is a huge extrapolation:

  • Modelling our knowledge about the local curvature as a continuous symmetrical distribution, even if the best guess is that the universe is perfectly flat locally, there is actually a 0% chance it has zero local curvature, a 50% chance it has negative curvature, and a 50% chance it has positive curvature.
  • We do not know whether the curvature infinitely far away is the same as the local one.

In my mind, claiming the universe is perfectly flat and infinite based on it being pretty flat locally is similar to claiming that the Earth is flat and infinite based on it being pretty flat locally.
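The zero-probability point in the first bullet above can be checked with a quick sketch (the normal distribution and the sigma value are my illustrative choices, not anything from the comment):

```python
from statistics import NormalDist

# Model uncertainty about local curvature K as a continuous, symmetric
# distribution whose best guess is perfect flatness (K = 0).
curvature = NormalDist(mu=0.0, sigma=0.001)

# Under any continuous distribution, a single point carries zero
# probability: P(K = 0) is an integral over a zero-width interval.
eps = 1e-12
p_flat = curvature.cdf(eps) - curvature.cdf(-eps)

# By symmetry, the remaining mass splits evenly between the two signs.
p_negative = curvature.cdf(0.0)
p_positive = 1.0 - curvature.cdf(0.0)

assert p_flat < 1e-6
assert p_negative == 0.5 and p_positive == 0.5
```

Nothing here depends on the normal distribution in particular; any continuous, symmetric distribution centred at zero gives the same three probabilities.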

Sorry, I shouldn’t have used the phrase “the fact that”. Rephrased, the sentence should say “why would the universe taking place in an incomputable continuous setting mean it’s not implemented”. I have no confident stance on whether the universe is continuous or not, just that I find the argument presented unconvincing.

Thanks, Vasco! It's possible that we're just reading different things into the idea that "conscious experience unfolds in time"? For example, there's a sense in which that's fully compatible with thinking that experience is discrete as opposed to continuous if by that we mean that the content of consciousness changes discontinuously or that consciousness proceeds in short-lived bursts against the backdrop of surrounding unconsciousness. Is the view you're proposing that our experiences have no location or extension in time? I think all I'm saying here is that that view is false, so there might otherwise be no disagreement. It's also worth noting that I take the falsity of that sort of view to be a presupposition of the argument I'm criticising in the paper, since it assumes that adjusting the clock speed of the simulation hardware results in experiences that fill different amounts of objective time. 

It's interesting to me that you refer to (CPU) clock speed. If my understanding is correct, when you change the clock speed of a CPU, you don't actually change the speed at which signals propagate through the CPU, you just change the length of the delay between consecutive propagations. (Technically, changes in temperature or voltage could have small side-effects on propagation speed, but let's ignore those for the sake of argument.) It seems to me that the length of the delay is not morally relevant, for the same reason that the length of a period of time during which I am unconscious is not morally relevant, all else being equal. I am curious if you agree, and if so, whether that changes any of your practical conclusions.

For what it's worth, it seems to me that both digital and biological minds are discrete in an important sense, regardless of whether physics is continuous. Indeed, for a digital simulation of a biological mind to even be possible, it has to rely on a discrete approximation being sufficient. But I think I'd have trouble making that argument precise to your satisfaction, so for now the thought experiment will have to do. Also, thank you for the post, I found it quite thought-provoking!

Thanks for following up! Sorry for my lack of clarity. Here is an attempt to explain how I am thinking:

  • Time is discrete, and therefore conscious/unconscious experience is a sequence of discrete conscious/unconscious states.
  • The objective duration of an experience is proportional to the number of states comprising it.
  • For the same reason that it does not make sense to talk about accelerating/decelerating e.g. the sequence of integers, it does not make sense to talk about accelerating/decelerating experiences.
    • So, strictly speaking, it is not possible to have "simulated minds that have the same experiences but run through those same experiences at different objective speeds". If 2 minds have the same experiences, their objective duration will necessarily be the same.
    • However, casually speaking, an experience can be said to be accelerated (decelerated) if it was obtained by running the original n (1/n) times as fast. For example, for a mind of 1 bit where 0 and 1 represent unconsciousness and consciousness, one can have:
      • An original experience comprised of 4 states: o1 = 0; o2 = 1; o3 = 0; o4 = 1.
      • An accelerated experience comprised of 2 states, corresponding to running the original 2 times as fast: a1 = o1 + o2 = 1; a2 = o3 + o4 = 1.
      • A decelerated experience comprised of 8 states, corresponding to running the original 50 % as fast: d1 = o1 = 0; d2 = o1 = 0; d3 = o2 = 1; d4 = o2 = 1; d5 = o3 = 0; d6 = o3 = 0; d7 = o4 = 1; d8 = o4 = 1.
  • The welfare of an experience is the sum of the welfare of the states comprising it. The way I defined accelerated and decelerated experiences above, if states 0 and 1 have welfare of 0 and 1, decelerating the experience would increase welfare:
    • The original and accelerated experiences would each have a welfare of 2.
    • The decelerated experience would have a welfare of 4.
  • The intensity of an experience is the sum of the absolute welfare of the states comprising it. Higher computation rates are associated with greater intensity.
  • The felt duration of an experience is a property of the current state, but felt duration is not independent of past states. Longer felt duration is associated with greater intensity.

Am I making any sense?
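For concreteness, the construction above can be run through in code (my sketch, following the definitions in the comment):

```python
# States: 0 = unconscious (welfare 0), 1 = conscious (welfare 1).
# The welfare of an experience is the sum of the welfare of its states.
def welfare(states):
    return sum(states)

original = [0, 1, 0, 1]

# Accelerated (2x as fast): consecutive pairs of states are merged,
# a1 = o1 + o2 and a2 = o3 + o4, as defined above.
accelerated = [original[i] + original[i + 1] for i in range(0, len(original), 2)]

# Decelerated (half speed): every state is duplicated.
decelerated = [s for s in original for _ in range(2)]

assert accelerated == [1, 1] and welfare(accelerated) == 2
assert decelerated == [0, 0, 1, 1, 0, 0, 1, 1] and welfare(decelerated) == 4
assert welfare(original) == 2
```

This reproduces the numbers above: the original and accelerated experiences each have welfare 2, and the decelerated one has welfare 4.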

Thanks! I think that makes sense. I discuss something slightly similar on pp. 21-22 in the paper (following the page numbers at the bottom), albeit just the idea that you should count discrete pain experiences in measuring the extensive magnitude of a pain experience, without any attempt to anchor this in a deeper theory of how experience unfolds in time.

Maybe one thing I'm still a bit unsure of here is the following. We could have a view on which time is fundamentally discrete, rather than continuous. There are physical atoms of time and how long something goes on for is a matter of how many such atoms it's made up of. But, on its face, those atoms needn't correspond to the 'frames' into which experiences are divided, since that kind of division among experiences may be understood as a high-level psychological fact. Similarly, the basic time atoms needn't correspond to discrete steps in any physical computation, except insofar as we imagine fundamental physics as computational. Thus, experiential frames could be composed of different numbers of fundamental temporal atoms, and varying the hardware clock-speed could lead to the same physical computation being spread over more or fewer time atoms. This seems to give us some sense in which experiences and physical computations unfold in time, albeit in discrete time. However, I took it you wanted to rule that out, and so probably I've misunderstood something about how you're thinking about the relationship between the fundamental time atoms and computations/experiential frames, or I've just got totally the wrong picture?

Thus, experiential frames could be composed of different numbers of fundamental temporal atoms, and varying the hardware clock-speed could lead to the same physical computation being spread over more or fewer time atoms. This seems to give us some sense in which experiences and physical computations unfold in time, albeit in discrete time. However, I took it you wanted to rule that out, and so probably I've misunderstood something about how you're thinking about the relationship between the fundamental time atoms and computations/experiential frames, or I've just got totally the wrong picture?

Interesting! I think you got my picture right, but I am assuming one experiential frame always corresponds to one temporal atom, because one's mind, which is a physical system, will be in a certain state for each temporal atom. However, since temporal atoms are super short (the Planck time is 5.39*10^-44 s), I guess the vast majority of experiential frames are pretty empty, having welfare close to 0. I suppose it would be possible to accelerate/decelerate a given experience by eliminating/adding a bunch of empty experiential frames in an orderly way.

What you described seems analogous to what I have in mind if I interpret your experiential frames as ones with welfare meaningfully different from 0. If these are packed closer together (further apart), the experience will be accelerated (decelerated).

You write "Suppose, plausibly, that what it is for two experiences to be subjectively indistinguishable is that there exists some one-to-one mapping among the instants that make up those experiences so that you can’t tell apart any instants mapped to one another." You note that there is a one-to-one mapping between a continuous one-second-pain and continuous two-second-pain, while the two-second-pain seems obviously worse.

Consider the parody principle "what it is for two ranges of numbers to be mathematically indistinguishable is that there exists some one-to-one mapping among the numbers that make up the two ranges". This principle is of course false (0 to 1 vs 0 to 2).

Many people might consider the parody principle plausible. Do you have a reason in mind for thinking that the mistaken intuition supporting the parody principle isn't also the primary intuition supporting your principle?

Thanks for the question! There's a lot more about how I arrive at this conception of subjective indistinguishability in the paper itself (section 4.2), but in terms of the analogy with your parody principle, notice that your definition of mathematical indistinguishability just says that there has to be a one-to-one mapping, whereas the proposed account of subjective indistinguishability says that there has to be such a mapping and the mapped pairs must always be pairwise indistinguishable to the subject. If I said that two ranges of numbers are mathematically indistinguishable if there's a one-to-one mapping among them such that the numbers we map to one another are indistinguishable, that doesn't sound too implausible and presumably doesn't generate the counter-example you note? (Though it might turn on what we mean by saying that two numbers are 'indistinguishable'!) If that's right, then I don't think my principle is challenged by the analogy with the parody principle you note. 

Another theory I consider is the quantum theory of felt duration, favoured historically by Karl Ernst von Baer and more recently by Carla Merino-Rajme. This theory assumes that experience isn’t continuous. It’s divided up across discrete experiential frames, a bit like the frames in a film reel.


It’s also ultimately a story on which the pain is worse because it fills more objective time, so it doesn’t actually support the view that subjective time experience matters in itself. 

It's probably worth mentioning here that you're assuming the frames and bursts of pain have nonzero durations.

If they instead had 0 duration (instantaneous) but gaps between them, then you'd just be counting them, not measuring and summing their durations, and there would be twice as many, so twice as much pain. Is it psychologically or physically plausible for the frames to be instantaneous with gaps between them? Is there any possible evidence we could collect that would differentiate the two accounts?

One might look to the time it takes for a neuron to fire and put gaps between every firing, but there could be no fact of the matter about where exactly the lines should be drawn. There might not be any one obvious moment to identify with a neuron firing; action potentials aren't instantaneous, for example.


Even if our waking lives are composed of discrete experiential frames, there appears to be nothing in the nature of consciousness itself that requires that kind of discretization. Continuous consciousness seems to be possible, even if in fact our own experiences are discrete. For continuous minds, we seem forced to say either that there is no number of discrete experiences that make up the experience of a minute of pain or that there are uncountably many or that minds like that undergo exactly one discretely demarcated experience during any period of uninterrupted consciousness (compare Tye 2003a: 97). But each of these claims yields absurd results when making welfare comparisons across discrete and continuous minds, as well as among the experiences of continuous minds, if we insist that the right way to measure pain’s extent is in terms of the number of discrete experiences that comprise an experience of pain.

This is an interesting point.

Could it cause problems the other way, too, though? Is it also possible that there could be minds where the frames are in fact instantaneous, with gaps between them? If we measure the total duration, it would be 0, because the gaps take up all the time.

This also reminds me that invertebrates also have graded neuronal potentials, which we might think have dramatically more possible relevant states than just the two in action potentials. We might even imagine brains with a continuum of possible states. If we tried to measure welfare intensity and welfare ranges by counting just-noticeable differences from 0, you might get infinitely many in brains with continuous potentials, but only finitely many in humans. (But maybe quantum mechanics places a limit on this precision.)

Thanks, Michael. Yes, you're right - in the bit you quote from at the start I'm assuming the bursts have some kind of duration rather than being extensionless. I think that probably got mangled in trying to compress everything! 

The zero-duration frame possibility is an interesting one; some of Vasco's comments below point in the same direction, I think. Is your thought that the problem is something like this: if you have these isolated points of experience with zero duration, then, since there's no experience there to which we can assign a non-zero objective duration, measuring duration objectively counts those experiences as nothing, whereas intuitively that's a mistake? There's an experience of pain there, after all. It's got to count for something!

I think that's an interesting objection and one I'll have to think more about. My initial reaction is that perhaps it's bound up with a general weirdness that attaches to things that have zero measure but (in some sense) still aren't nothing. E.g., there's something weird about probability zero events that are nonetheless genuinely possible, and taking account of events like that can lead to some weird interactions with otherwise plausible normative principles: e.g., it suggests a possible conflict between dominance and expected utility maximization (see Hajek, "Unexpected Expectations," pp. 556-7 for discussion).

The moral intuition that we should use the continuous/Lebesgue measure here seems tied up with the intuition that consciousness is continuous and not a bunch of instantaneous frames with gaps between them. If it is in fact instantaneous frames with gaps, then the moral intuition seems unreliable, and you should probably go with the counting measure instead, under which the frames would have nonzero measure.
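For concreteness, here is the contrast between the two measures in standard notation (my illustration, not from the thread): a finite set of instantaneous frames has Lebesgue measure zero but nonzero counting measure.

```latex
% Let E = \{t_1, \dots, t_n\} be the instants at which instantaneous
% experiential frames occur during an interval of pain.
% Under the Lebesgue (duration) measure, isolated points contribute nothing:
\lambda(E) = \sum_{i=1}^{n} \lambda(\{t_i\}) = \sum_{i=1}^{n} 0 = 0 .
% Under the counting measure, each frame contributes one unit:
\mu_{\#}(E) = |E| = n .
```

On the first measure, the framed mind's pain registers as nothing; on the second, doubling the number of frames doubles the pain, regardless of the objective time they span.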

FWIW, I'm not a moral realist, and if it turned out both kinds of beings were possible together, then there could be no fact of the matter about how to weigh them against each other. But maybe I'd want to weight the continuous minds infinitely more anyway. You could use the continuous measure first, and then break ties with the counting measure.

Executive summary: The paper argues we should be skeptical that differences in subjective time experience affect the moral value of good or bad experiences.

Key points:

  1. We don't have a clear understanding of what "subjective time experience" really means.
  2. Theories of what it could mean don't clearly connect it to moral value. For example, a higher rate of conscious thoughts during an experience doesn't inherently make that experience better or worse.
  3. Thought experiments appealing to subjective indistinguishability between digital simulations run at different speeds fail, since longer objective durations still correlate with more total suffering even if experiences are indistinguishable moment-to-moment.
  4. As a result, we should reduce credence that differences in subjective time experience between individuals or species affect their moral weights. For example, smaller faster-living animals may deserve less additional weight than previously thought.
  5. However, subjective time could still indirectly affect moral weights by changing other features like intensity or objective duration. More research is needed.



This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
